AI That Can Invent AI Is Coming. Buckle Up.
Leopold Aschenbrenner’s “Situational Awareness” manifesto made waves when it was published this summer.
In this provocative essay, Aschenbrenner—a 22-year-old wunderkind and former OpenAI researcher—argues that artificial general intelligence (AGI) will be here by 2027, that artificial intelligence will consume 20% of all U.S. electricity by 2029, and that AI will unleash untold powers of destruction that within years will reshape the world geopolitical order.
Aschenbrenner’s startling thesis about exponentially accelerating AI progress rests on one core premise: that AI will soon become powerful enough to carry out AI research itself, leading to recursive self-improvement and runaway superintelligence. ...
At the frontiers of AI science, researchers have begun making tangible progress toward building AI systems that can themselves build better AI systems. ...
If AI systems can do their own AI research, they can come up with superior AI architectures and methods. Via a simple feedback loop, those superior AI architectures can then themselves devise even more powerful architectures—and so on. ...
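To make that feedback loop concrete, here is a minimal toy simulation of the dynamic described above. Everything in it is an illustrative assumption: the `propose_architecture` function and the capability numbers stand in for real research progress and do not model any actual lab's system.

```python
# Toy sketch of a recursive self-improvement loop (illustrative only).
import random

def propose_architecture(researcher_skill: float) -> float:
    """A 'researcher' (human or AI) proposes a new architecture; more capable
    researchers tend to propose better ones. Purely hypothetical dynamics."""
    return researcher_skill * random.uniform(0.9, 1.2)

def run_feedback_loop(generations: int = 5) -> None:
    researcher_skill = 1.0  # stand-in for today's AI-as-researcher capability
    for gen in range(generations):
        # The current system designs a candidate successor...
        candidate = propose_architecture(researcher_skill)
        # ...and the successor is adopted only if it is an improvement,
        # so gains compound from one generation to the next.
        if candidate > researcher_skill:
            researcher_skill = candidate
        print(f"generation {gen}: researcher capability = {researcher_skill:.3f}")

if __name__ == "__main__":
    run_feedback_loop()
```

The point of the sketch is structural: each generation's output becomes the next generation's input, which is why even modest per-step improvements can compound.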
At first blush, this may sound far-fetched. Isn’t fundamental research on artificial intelligence one of the most cognitively complex activities of which humanity is capable? ...
In the words of Leopold Aschenbrenner: “The job of an AI researcher is fairly straightforward, in the grand scheme of things: read ML literature and come up with new questions or ideas, implement experiments to test those ideas, interpret the results, and repeat.” ...
... research on core AI algorithms and methods can be carried out digitally. Contrast this with research in fields like biology or materials science, which (at least today) require the ability to navigate and manipulate the physical world via complex laboratory setups. ...
Consider, too, that the people developing cutting-edge AI systems are precisely the people who most intimately understand how AI research is done. Because they are deeply familiar with their own jobs, they are particularly well positioned to build systems that automate those activities. ...
Sakana’s “AI Scientist” is an AI system that can carry out the entire lifecycle of artificial intelligence research itself: reading the existing literature, generating novel research ideas, designing experiments to test those ideas, carrying out those experiments, writing up a research paper to report its findings, and then conducting a process of peer review on its work. ...
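As a rough illustration of what such a pipeline looks like in code, here is a hypothetical sketch of the stages the article lists. The `ask_llm` helper, the stage prompts, and the data structure are assumptions made for illustration; they do not reflect Sakana's actual implementation.

```python
# Hypothetical end-to-end automated-research pipeline (illustrative sketch).
from dataclasses import dataclass

@dataclass
class ResearchRun:
    idea: str = ""
    experiment_plan: str = ""
    results: str = ""
    paper: str = ""
    review: str = ""

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[LLM output for: {prompt[:40]}...]"

def automated_research(literature: list[str]) -> ResearchRun:
    run = ResearchRun()
    # 1. Read the existing literature and generate a novel research idea.
    run.idea = ask_llm("Given these papers, propose a novel idea: " + "; ".join(literature))
    # 2. Design an experiment to test the idea.
    run.experiment_plan = ask_llm("Design an experiment to test: " + run.idea)
    # 3. Carry out the experiment (here, another model call stands in for
    #    generating and executing training code).
    run.results = ask_llm("Execute and summarize: " + run.experiment_plan)
    # 4. Write up the findings as a research paper.
    run.paper = ask_llm("Write a paper reporting: " + run.results)
    # 5. Peer-review the paper, closing the loop.
    run.review = ask_llm("Peer-review this paper: " + run.paper)
    return run

if __name__ == "__main__":
    print(automated_research(["Paper A on transformers", "Paper B on optimizers"]).review)
```

Each stage consumes the previous stage's output, which is what lets a single system move from literature review to a reviewed write-up without a human in the loop.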
As the Sakana team summarized: “Overall, we judge the performance of The AI Scientist to be about the level of an early-stage ML researcher who can competently execute an idea but may not have the full background knowledge to fully interpret the reasons behind an algorithm’s success. ...
The most important takeaway from Sakana’s AI Scientist work, therefore, is not what the system is capable of today. It is what systems like this might soon be capable of. ...
OpenAI’s GPT-1 paper, published in 2018, was noticed by almost no one. A few short years later, GPT-3 (2020) and then GPT-4 (2023) changed the world. ...
Just last month, Anthropic updated its risk governance framework to emphasize two particular sources of risk from AI: (1) AI models that can assist a human user in creating chemical, biological, radiological or nuclear weapons; and (2) AI models that can “independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way.”
Consider it a sign of things to come. ...
The scarcest and most precious resource in the world of artificial intelligence is talent. Despite the fervor around AI today, no more than a few thousand people in the entire world have the training and skill set to carry out frontier AI research. Imagine if there were a way to multiply that number a thousandfold, or a millionfold, using AI. OpenAI and Anthropic cannot afford not to take this seriously, lest they be left behind. ...
See the full story here: https://www.forbes.com/sites/robtoews/2024/11/03/ai-that-can-invent-ai-is-coming-buckle-up/