4 New Things I Saw at AWE 2024 That Will Make You Want AR and VR in Your Life
Productivity, immersion, creativity, UI options
See the full story here: https://www.cnet.com/tech/computing/4-new-things-i-saw-at-awe-2024-that-will-make-you-want-ar-and-vr-in-your-life/
The Vatican Library Explores Virtual Reality Technology to Enhance Visitor Experience
This innovative approach marks a significant departure from traditional methods of preservation and conservation, signaling a new era in the dissemination of historical knowledge. By embracing technology, the Vatican Library is not only safeguarding its treasures for future generations but also ensuring that they remain relevant and accessible in an increasingly digital world. ...
The Vatican Library was founded in 1475 by Pope Sixtus IV and has since been a cornerstone of preservation and research in the fields of theology, philosophy, history, and culture. ...
See the full story here: https://smartphonemagazine.nl/en/2024/06/18/the-vatican-library-explores-virtual-reality-technology-to-enhance-visitor-experience/
OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’
Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post Wednesday that he's created Safe Superintelligence Inc. with two co-founders. The company's only goal and focus is safely developing “superintelligence” - a reference to AI systems that are smarter than humans.
The company vowed not to be distracted by “management overhead or product cycles,” and under its business model, work on safety and security would be “insulated from short-term commercial pressures,” Sutskever and his co-founders Daniel Gross and Daniel Levy said in a prepared statement.
The three said Safe Superintelligence is an American company with roots in Palo Alto, California, and Tel Aviv, “where we have deep roots and the ability to recruit top technical talent.” ...
See the full story here: https://abcnews.go.com/Technology/wireStory/openai-founder-sutskever-sets-new-ai-company-devoted-111268590
Apple joins the race to find an AI icon that makes sense
... The thing is, no one knows what AI looks like, or even what it is supposed to look like. It does everything but looks like nothing. Yet it needs to be represented in user interfaces so people know they’re interacting with a machine learning model and not just plain old searching, submitting, or whatever else. ...
See the full story here: https://techcrunch.com/2024/06/15/apple-joins-the-race-to-find-an-ai-icon-that-makes-sense
What Apple’s AI Tells Us: Experimental Models
I wanted to give some quick thoughts on the Apple AI (sorry, “Apple Intelligence”) release. I haven’t used it myself, and we don’t know everything about their approach, but I think the release highlights something important happening in AI right now: experimentation with four kinds of models - AI models, models of use, business models, and mental models of the future. What is worth paying attention to is how all the AI giants are trying many different approaches to see what works. ...
AI Models
... You may not have seen that GPT-4 (the old, pre-turbo version with a small context window), without specialized finance training or special tools, beat BloombergGPT on almost all finance tasks. This demonstrates a pattern: the most advanced generalist AI models often outperform specialized models, even in the specific domains those specialized models were designed for. ... That means that if you want a model that can do a lot - reason over massive amounts of text, help you generate ideas, write in a non-robotic way — you want to use one of the three frontier models: GPT-4o, Gemini 1.5, or Claude 3 Opus. ...
But these models are expensive to train and slow and expensive to run, which leaves room for much smaller models that aren’t as good as the frontier models but can run cheaply and easily - even on a PC or phone. ...
Models of Use
Large Language Models are Swiss army knives of the mind - they can help with a wide range of intellectual tasks, though they do some badly (the toothpick in the Swiss army knife), and some not at all. Knowing what they are good or bad at is a process of learning by doing and acquiring expertise. ...
Contrast this with Apple’s narrow focus on making AI get stuff done for you. ... They do a really good job of providing easily understood “it just works” (mostly) integration of AI into work in easy ways. But both the Apple and app-specific Copilot models are constrained, which limits their upside, as well as their downside. ...
Business Models
The best access to an advanced model costs you $20 a month...
Apple sounds like they will start with free service as well, but may decide to charge in the future. The truth is that everyone is exploring this space, and how they make money and cover costs is still unclear...
What every one of these companies needs to succeed, however, is trust. ... But Apple goes many steps further, putting extra work into making sure it could never learn about your data, even if it wanted to. ...
Between the limited use cases and the privacy focus, this is a very “ethical” use of AI (though we still know little about Apple’s training data). We will see if that is enough to get the public to trust AI more.
Models of the Future
... While Apple is building narrow AI systems that can accurately answer questions about your personal data (“tell me when my mother is landing”), OpenAI wants to build autonomous agents that would complete complex tasks for you (“You know those emails about the new business I want to start, could you figure out what I should do to register it so that it is best for my taxes and do that.”). The first is, as Apple demonstrated, science fact, while the second is science fiction, at least for now. ...
Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Read the full post here: https://www.oneusefulthing.org/p/what-apples-ai-tells-us-experimental
Apple Intelligence announcement at this link
https://urldefense.com/v3/https://shellypalmer.us3.list-manage.com/track/click?u=c45bf0ae5539b15b901766ddd&id=a96c5a4a11&e=3ce5196977;!!LIr3w8kk_Xxm!ozO6thz0gHsSrArO14xcIdjcteIlfaISEOYpq7XxHKBUCiaAF8VLdzsIdVi3zn4BWBtvlNdafsJ4ug$
Talentir, the most anticipated Web3 creative project in 2024
... Key Opinion Leaders (KOLs) within the crypto space have swiftly recognized the financial advantages of Talentir. They are delighted to receive their earnings directly in their wallets, bypassing the need for intermediaries. On average, creators earn between 15% and 30% more when they join Talentir, thanks to the platform's efficiency and reduced transaction costs. The unique setup of Talentir helps Creators worldwide with their withholding taxes as Talentir specializes in revenue optimization. ...
See the full PR here: https://www.morningstar.com/news/globe-newswire/9151248/6500-growth-in-one-month-talentir-the-most-anticipated-web3-creative-project-in-2024
Robert Tercek and Peter Csathy: When It Comes to Media and AI, Copyright Law Is Not an Open and Shut Case
... Csathy argues that AI’s reliance on copyrighted content necessitates fair compensation for creators. He said the use of AI to generate content raises concerns about the loss of control and the potential devaluation of creative works. ...
Playing devil’s advocate, Tercek laid out some of the defenses that Big Tech will use to justify training on copyrighted work as “fair use.”
It’s an interesting argument, and goes like this:
“The AI reads all the books, or looks at all the images, or listens to all the music, and then that model begins to build parameters. A big question about this is, is it fair use? Is it okay to look or read or listen? There is no law that prohibits reading a book. There’s no law that says, You can’t learn by looking at a picture.” ...
“It’s not replicating any of Van Gogh’s paintings, but we have millions of incredibly precise measurements about his paintings. Those are facts, and facts can’t be copyrighted. What we can recreate in an LLM is a factual representation of all those different values that go into creating that work.” ...
“What LLMs are doing is transforming those fixed works into something that is participatory, that billions of people can interact with to build new creative things,” he says. ...
See the full story here: https://amplify.nabshow.com/articles/ic-ai-copyright-law-robert-tercek-peter-csathy/?utm_source=substack&utm_medium=email
Mohammad Hosseini: What will happen to generative AI after November’s election?
... Over the past year and a half, federal agencies, among many others, have used GenAI models such as ChatGPT to generate text, images, audio and video, making GenAI a priority concern for the U.S. government. And while we may assume President Joe Biden and former President Donald Trump are at opposite ends of the political spectrum on this issue, as they are on practically every issue, their approach to AI has actually been very similar. They have pampered AI developers with significant funding and deregulation, giving them global leverage, credit and visibility.
While Biden and Trump have expressed concerns about citizens’ privacy, safety and security, the way they have regulated AI shows they’re actually on the developers’ side. ...it is the climate policy of the next president that will affect GenAI developers the most. ...
Last July, the Biden administration announced securing voluntary commitments from seven AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — under the guise of underscoring “safety, security and trust.” But in reality, these so-called commitments were more like gifts because their scope is limited to GenAI tools that are overall more powerful than existing ones. ...
Perhaps the most compelling aspect of the executive order for GenAI developers is in Section 10.1(f)(i), which discourages federal agencies from “imposing broad general bans or blocks on agency use of generative AI.” This means that even in cases when an oversight agency has reasons to believe that using GenAI is harmful, it cannot ban the use. ...
See the full editorial here: https://www.chicagotribune.com/2024/06/10/opinion-artificial-intelligence-ai-joe-biden-donald-trump-regulations/
Exploring Zero-Knowledge Artificial General Intelligence Within The Context Of DePIN
... DePIN empowers a more democratic future for computing power through a vast global network where everyone can participate in a resource-sharing economy. As a peer-to-peer infrastructure, it unlocks efficient distribution of computing resources and allows users to earn rewards for their contributions. ...
The emergence of DePIN has paved the way for innovators to create cost-effective solutions to the inefficiencies and limitations of existing critical infrastructure such as transportation, water supply, energy, communication systems, financial services, healthcare, and defense. In particular, it is set to eliminate centralization and encourage decentralized ownership of these critical services.
As an independent researcher, I find DePIN projects fascinating enough to be studied individually. In this article, we explore Aten Krotos (aka Suraj Venkat) and Arthava’s work on Zero-Knowledge Artificial General Intelligence (ZkAGI), an open-source AI project built on DePIN. Leveraging zero-knowledge proofs and DePIN, the duo proposed ZkAGI to tackle the privacy concerns in AI. ...
See the full story here: https://hackernoon.com/exploring-zero-knowledge-artificial-general-intelligence-within-the-context-of-depin