What Apple’s AI Tells Us: Experimental Models
I wanted to give some quick thoughts on the Apple AI (sorry, “Apple Intelligence”) release. I haven’t used it myself, and we don’t know everything about their approach, but I think the release highlights something important happening in AI right now: experimentation with four kinds of models - AI models, models of use, business models, and mental models of the future. What is worth paying attention to is how all the AI giants are trying many different approaches to see what works. ...
AI Models
... You may not have seen that GPT-4 (the old, pre-turbo version with a small context window), without specialized finance training or special tools, beat BloombergGPT on almost all finance tasks. This demonstrates a pattern: the most advanced generalist AI models often outperform specialized models, even in the specific domains those specialized models were designed for. ... That means that if you want a model that can do a lot - reason over massive amounts of text, help you generate ideas, write in a non-robotic way - you want to use one of the three frontier models: GPT-4o, Gemini 1.5, or Claude 3 Opus. ...
But these models are expensive to train and slow and expensive to run, which leaves room for much smaller models that aren’t as good as the frontier models but can run cheaply and easily - even on a PC or phone. ...
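To make the tradeoff concrete, here is a minimal sketch of the two paths: calling a frontier model through a paid API versus running a small open model locally. It assumes the OpenAI Python SDK and Hugging Face transformers are installed; the local model name is an illustrative choice, not anything from Apple's stack.

```python
# Minimal sketch: the same prompt sent to a frontier model via a paid API,
# then to a small open model running locally. Assumes
# `pip install openai transformers torch`; model names are illustrative.
from openai import OpenAI
from transformers import pipeline

prompt = "In one sentence, what is a large language model?"

# 1) Frontier model via API: most capable, but metered and off-device.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print("Frontier:", resp.choices[0].message.content)

# 2) Small open model run locally: weaker, but cheap to run and keeps
#    data on the device - small enough for a laptop (quantized, a phone).
local = pipeline("text-generation", model="microsoft/Phi-3-mini-4k-instruct")
out = local(prompt, max_new_tokens=60, return_full_text=False)
print("Local:", out[0]["generated_text"])
```

The frontier call is billed per token and sends your data to the provider; the local model is less capable but runs for free once downloaded and keeps data on-device, which is the space Apple is playing in.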
Models of Use
Large Language Models are Swiss army knives of the mind - they can help with a wide range of intellectual tasks, though they do some badly (the toothpick in the Swiss army knife), and some not at all. Knowing what they are good or bad at is a process of learning by doing and acquiring expertise. ...
Contrast this with Apple’s narrow focus on making AI get stuff done for you. ... They do a really good job of providing easily understood, “it just works” (mostly) integration of AI into everyday work. But both the Apple and app-specific Copilot models are constrained, which limits their upside as well as their downside. ...
Business Models
The best access to an advanced model costs you $20 a month...
It sounds like Apple will start with a free service as well, but may decide to charge in the future. The truth is that everyone is exploring this space, and how they will make money and cover costs is still unclear...
What every one of these companies needs to succeed, however, is trust. ... But Apple goes many steps further, putting extra work into making sure it could never learn about your data, even if it wanted to. ...
Between the limited use cases and the privacy focus, this is a very “ethical” use of AI (though we still know little about Apple’s training data). We will see if that is enough to get the public to trust AI more.
Models of the Future
... While Apple is building narrow AI systems that can accurately answer questions about your personal data (“tell me when my mother is landing”), OpenAI wants to build autonomous agents that would complete complex tasks for you (“You know those emails about the new business I want to start, could you figure out what I should do to register it so that it is best for my taxes and do that.”). The first is, as Apple demonstrated, science fact, while the second is science fiction, at least for now. ...
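One way to see the gap: the first is a single, verifiable lookup over data the system already has, while the second requires an open-ended plan-act-observe loop in which the model itself decides what to do next. Here is a toy, self-contained sketch; every name and data item below is invented for illustration and reflects nothing Apple or OpenAI has shipped.

```python
# Toy sketch of the two futures. All data and helpers are invented for
# illustration; nothing here reflects Apple's or OpenAI's actual design.

# --- Narrow assistant: one grounded lookup over personal data ---
PERSONAL_DATA = {"mom_flight": "UA 241, lands 6:40 PM"}  # stand-in for an on-device index

def narrow_assistant(question: str) -> str:
    # A constrained system maps the question onto a known query and answers
    # from verified data; it cannot attempt anything it was not built for.
    if "mother" in question and "landing" in question:
        return PERSONAL_DATA["mom_flight"]
    return "Sorry, I can't help with that."

# --- Autonomous agent: an open-ended plan-act-observe loop ---
def autonomous_agent(goal: str) -> list[str]:
    # The unsolved part: a real agent must invent this plan itself, act on
    # the world, and recover from errors. The plan is hardcoded here just
    # to show the loop's shape.
    plan = ["research entity types", "pick a tax-optimal structure", "file the registration"]
    log = []
    for step in plan:
        log.append(f"did: {step}")  # a real agent would call tools/APIs here
    return log

print(narrow_assistant("tell me when my mother is landing"))
print(autonomous_agent("register my new business for the best tax treatment"))
```

The loop is easy to write and hard to make reliable, which is why the agent version remains science fiction, at least for now.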
Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Read the full post here: https://www.oneusefulthing.org/p/what-apples-ai-tells-us-experimental