AI That Can Invent AI Is Coming. Buckle Up.
Leopold Aschenbrenner’s “Situational Awareness” manifesto made waves when it was published this summer.
In this provocative essay, Aschenbrenner—a 22-year-old wunderkind and former OpenAI researcher—argues that artificial general intelligence (AGI) will be here by 2027, that artificial intelligence will consume 20% of all U.S. electricity by 2029, and that AI will unleash untold powers of destruction that within years will reshape the world geopolitical order.
Aschenbrenner’s startling thesis about exponentially accelerating AI progress rests on one core premise: that AI will soon become powerful enough to carry out AI research itself, leading to recursive self-improvement and runaway superintelligence. ...
At the frontiers of AI science, researchers have begun making tangible progress toward building AI systems that can themselves build better AI systems. ...
If AI systems can do their own AI research, they can come up with superior AI architectures and methods. Via a simple feedback loop, those superior AI architectures can then themselves devise even more powerful architectures—and so on. ...
At first blush, this may sound far-fetched. Isn’t fundamental research on artificial intelligence one of the most cognitively complex activities of which humanity is capable? ...
In the words of Leopold Aschenbrenner: “The job of an AI researcher is fairly straightforward, in the grand scheme of things: read ML literature and come up with new questions or ideas, implement experiments to test those ideas, interpret the results, and repeat.” ...
... research on core AI algorithms and methods can be carried out digitally. Contrast this with research in fields like biology or materials science, which (at least today) require the ability to navigate and manipulate the physical world via complex laboratory setups. ...
Consider, too, that the people developing cutting-edge AI systems are precisely those people who most intimately understand how AI research is done. Because they are deeply familiar with their own jobs, they are particularly well positioned to build systems to automate those activities. ...
Sakana’s “AI Scientist” is an AI system that can carry out the entire lifecycle of artificial intelligence research itself: reading the existing literature, generating novel research ideas, designing experiments to test those ideas, carrying out those experiments, writing up a research paper to report its findings, and then conducting a process of peer review on its work. ...
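The lifecycle described above amounts to a simple loop. The sketch below is a hypothetical illustration with stubbed-out steps (the function names and canned values are ours, not Sakana's); in the real system, each step would be driven by an LLM and actual training runs.

```python
# Hypothetical sketch of an automated AI-research loop; every step here
# is a stub standing in for an LLM call or a real training run.

def read_literature():
    # In a real system: retrieve and summarize recent ML papers.
    return ["paper A on optimizers", "paper B on regularization"]

def generate_idea(literature):
    # In a real system: prompt an LLM to propose a novel research idea.
    return f"Combine {literature[0]} with {literature[1]}"

def run_experiment(idea):
    # In a real system: write code, train models, collect metrics.
    return {"idea": idea, "metric": 0.87}

def write_paper(result):
    # In a real system: draft a full write-up of the findings.
    return f"Report: '{result['idea']}' scored {result['metric']}"

def peer_review(paper):
    # In a real system: a reviewer model scores and critiques the paper.
    return {"paper": paper, "accept": True}

def research_cycle():
    literature = read_literature()
    idea = generate_idea(literature)
    result = run_experiment(idea)
    paper = write_paper(result)
    return peer_review(paper)

review = research_cycle()
```

The recursive feedback loop the essay describes would amount to feeding accepted results back into `read_literature()` on the next cycle.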
As the Sakana team summarized: “Overall, we judge the performance of The AI Scientist to be about the level of an early-stage ML researcher who can competently execute an idea but may not have the full background knowledge to fully interpret the reasons behind an algorithm’s success. ...
The most important takeaway from Sakana’s AI Scientist work, therefore, is not what the system is capable of today. It is what systems like this might soon be capable of. ...
OpenAI’s GPT-1 paper, published in 2018, was noticed by almost no one. A few short years later, GPT-3 (2020) and then GPT-4 (2023) changed the world. ...
Just last month, Anthropic updated its risk governance framework to emphasize two particular sources of risk from AI: (1) AI models that can assist a human user in creating chemical, biological, radiological or nuclear weapons; and (2) AI models that can “independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way.”
Consider it a sign of things to come. ...
The most limited and precious resource in the world of artificial intelligence is talent. Despite the fervor around AI today, there are still no more than a few thousand individuals in the entire world who have the training and skillset to carry out frontier AI research. Imagine if there were a way to multiply that number a thousandfold, or a millionfold, using AI. OpenAI and Anthropic cannot afford not to take this seriously, lest they be left behind. ...
See the full story here: https://www.forbes.com/sites/robtoews/2024/11/03/ai-that-can-invent-ai-is-coming-buckle-up/
Exclusive: Chinese researchers develop AI model for military use on back of Meta’s Llama
- Papers show China reworked Llama model for military tool
- China's top PLA-linked Academy of Military Science involved
- Meta says PLA 'unauthorised' to use Llama model
- Pentagon says it is monitoring competitors' AI capabilities
See the full article here: https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/
What if A.I. Is Actually Good for Hollywood?
... “The difference here is that A.I. has the potential to disrupt many, many places in our pipeline,” says Lori McCreary, the chief executive of Revelations Entertainment, a production company she owns with Morgan Freeman, and a board member of the Producers Guild of America. “This one feels like it could be an entire industry disrupter.” ...
A.I. applications are often divided into two broader categories. The first is generative A.I., which helps artists and studios create things. Then there is “agentic” A.I., which helps them get things done. A new A.I. tool called Callaia, for instance, reads scripts and generates 35-page coverage reports, along with historical comparisons and suggested theatrical release patterns — the core duty of countless junior studio executives’ daily work life, though perhaps not for long. ...
Filmmaking is often described as the most collaborative art form, and Metaphysic was just one among many creative contributors to the trickiest scenes of Hanks and Wright as young lovebirds in “Here.” The actors performed in full period costume, not in green suits covered with Ping-Pong balls. The makeup department taped back the loose skin around Hanks’s neck and pulled up his droopy ears, so Hanks’s A.I.-generated young face would match Hanks’s real-life old head. And, of course, they had award-winning actors to deliver all the lines. “You still need the warmth of the human performance,” Zemeckis told me. “The illusion only works because my actors are using the tool just like they use their wardrobe, just like they’d use a bald skull cap.” It was the future of Hollywood, and it looked uncannily like its past.
See the full story here: https://www.nytimes.com/2024/11/01/magazine/ai-hollywood-movies-cgi.html
Fox News Plans a “Super Desk” and Virtual Set For Election Night Coverage (Exclusive)
... Whereas Hemmer (and King and Kornacki) previously relied on a touchscreen that is more akin to a giant iPhone, Hemmer will now be able to display data in 3D space on set, interacting with the data thanks to infrared sensors that can track him. The company is also using a voice-to-text language model, allowing Hemmer to call up data just by speaking.
Among other bells and whistles are a “path to 270 map,” which will place a 3D map in the studio letting anchors explain how the electoral votes are stacking up, as well as a “top 5 closest races” tool, which will visualize certain key close races.
And Studio M itself is getting an upgrade, including a “Super Desk,” which will be the home base for anchors, correspondents and analysts over the course of election night. Fox also built a two-story-tall accent wall, as well as a 20-foot-long ultra HD video wall that will display photos and video throughout the evening. ...
See the full story here: https://www.hollywoodreporter.com/news/politics-news/fox-news-election-night-super-desk-augmented-reality-1236045655/
Michael Parkinson’s son defends new AI podcast
...
In 2022, the union Equity launched a "Stop AI Stealing the Show" campaign. The use of AI was a major factor in the strikes that brought Hollywood to a standstill last year.
However, as Sir Michael is dead and therefore no longer has a livelihood to protect, the debate in this case is more about whether or not it is ethical to have him say things he never said in real life, and also whether AI versions of real hosts are something listeners even want. ...
Explaining how the podcast would work, Anderson said: "These are brand new interviews, and the AI we’ve created is as close to the late Sir Michael as we could possibly get it.
"He is autonomous, so we let him start the interview and after that it is up to AI Sir Michael, who is trained on Sir Michael’s style and the interview questions."
He added: "We can’t tell you the guests yet, we have a few slots remaining, but they are notable, noteworthy people." ...
See the full story here: https://www.bbc.com/news/articles/cn42x2gxl0jo
Haptic artificial muscle skin for extended reality
Abstract
Existing haptic actuators are often rigid and limited in their ability to replicate real-world tactile sensations. We present a wearable haptic artificial muscle skin (HAMS) based on fully soft, millimeter-scale, multilayer dielectric elastomer actuators (DEAs) capable of significant out-of-plane deformation, a capability that typically requires rigid or liquid biasing. The DEAs use a thickness-varying multilayer structure to achieve large out-of-plane displacement and force, maintaining comfort and wearability. Experimental results demonstrate that HAMS can produce complex tactile feedback with high perception accuracy. Moreover, we show that HAMS can be integrated into extended reality (XR) systems, enhancing immersion and offering potential applications in entertainment, education, and assistive technologies.
See the full paper here: https://www.science.org/doi/10.1126/sciadv.adr1765
CAA Inks Deal With AI Company Veritone To Store Talent’s Digital Assets
In a sign of the times, Creative Artists Agency has inked a deal with an artificial intelligence company to store clients’ digital assets.
CAA and Veritone announced their partnership on Tuesday, leading to the creation of the “CAAVault,” a synthetic media vault that will store all intellectual property related to CAA talent’s names, images and likenesses. This includes digital scans and voice recordings. ...

See the full story here: https://www.msn.com/en-us/money/companies/caa-inks-deal-with-ai-company-veritone-to-store-talent-s-digital-assets/ar-BB1mmFiU
Disney plans AI launch to transform entertainment production
... The multinational mass media and entertainment conglomerate is said to be gearing up to “transform its creative output” with its latest venture into the realm of artificial intelligence and will involve “hundreds” of employees in the move, according to The Wrap. ...
“Don’t fixate on its ability to be disruptive — fixate on [tech’s] ability to make us better and tell better stories. Not only better stories, but to reach more people,” the Disney CEO continued. “You’re never going to get in the way of it. There isn’t a generation of human beings that has ever been able to stand in the way of technological advancement.” ...
See the full story here: https://rollingout.com/2024/10/25/disney-plansr-ai-launch-entertainment/
Why I’m Leaving OpenAI and What I’m Doing Next
The TL;DR is:
- I want to spend more time working on issues that cut across the whole AI industry, to have more freedom to publish, and to be more independent;
- I will be starting a nonprofit and/or joining an existing one and will focus on AI policy research and advocacy, since I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so;
- Some areas of research interest for me include assessment/forecasting of AI progress, regulation of frontier AI safety and security, economic impacts of AI, acceleration of beneficial AI applications, compute governance, and overall “AI grand strategy”;
- I think OpenAI remains an exciting place for many kinds of work to happen, and I’m excited to see the team continue to ramp up investment in safety culture and processes;
- I’m interested in talking to folks who might want to advise or collaborate on my next steps.
See the full post here: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im
Using AI for Political Polling
... Despite these frailties, obsessive interest in polling nonetheless consumes our politics. Headlines more likely tout the latest changes in polling numbers than the policy issues at stake in the campaign. This is a tragedy for a democracy. We should treat elections like choices that have consequences for our lives and well-being, not contests to decide who gets which cushy job. ...
Large language models, the AI foundations behind tools like ChatGPT, are built on top of huge corpuses of data culled from the Internet. These are models trained to recapitulate what millions of real people have written in response to endless topics, contexts, and scenarios. For a decade or more, campaigns have trawled social media, looking for hints and glimmers of how people are reacting to the latest political news. This makes asking questions of an AI chatbot similar in spirit to doing analytics on social media, except that they are generative: you can ask them new questions that no one has ever posted about before, you can generate more data from populations too small to measure robustly, and you can immediately ask clarifying questions of your simulated constituents to better understand their reasoning. ...
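The workflow described here, conditioning a model on a persona and aggregating its answers like survey responses, can be sketched in a few lines. The persona prompts and the `ask_model` stub below are our own illustrative stand-ins for a real LLM API call; a production system would send each prompt to a hosted model.

```python
from collections import Counter

# Illustrative persona prompts (our own wording, not the study's).
PERSONAS = {
    "liberal": "You are a politically liberal US voter.",
    "conservative": "You are a politically conservative US voter.",
}

def ask_model(persona_prompt, question):
    # Stand-in for a real LLM call; returns canned answers so the
    # sketch runs offline. A real implementation would query an API.
    canned = {"liberal": "support", "conservative": "oppose"}
    for name, prompt in PERSONAS.items():
        if prompt == persona_prompt:
            return canned[name]
    return "unsure"

def simulate_poll(question, persona, n_agents=100):
    """Ask n_agents persona-conditioned 'respondents' the same question
    and tabulate their answers as survey-style shares."""
    answers = [ask_model(PERSONAS[persona], question)
               for _ in range(n_agents)]
    counts = Counter(answers)
    return {answer: count / n_agents for answer, count in counts.items()}

result = simulate_poll("Do you support policy X?", "liberal")
```

In practice each simulated respondent would get varied demographic details and sampling temperature, so the answers form a distribution rather than a single canned response.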
Our major systemic failure happened on a question about US intervention in the Ukraine war. In our experiments, the AI agents conditioned to be liberal were predominantly opposed to US intervention in Ukraine and likened it to the Iraq war. Conservative AI agents gave hawkish responses supportive of US intervention. This is pretty much what most political experts would have expected of the political equilibrium in US foreign policy at the start of the decade but was exactly wrong in the politics of today. ...
While AI models are dependent on the data they are trained with, and all the limitations inherent in that, what makes AI agents special is that they can automatically source and incorporate new data at the time they are asked a question. AI models can update the context in which they generate opinions by learning from the same sources that humans do. ...
For all the hand-wringing and consternation over the accuracy of US political polling, national issue surveys still tend to be accurate to within a few percentage points. ...
Where AI will work best is as an augmentation of more traditional human polls. Over time, AI tools will get better at anticipating human responses, and also at knowing when they will be most wrong or uncertain. They will recognize which issues and human communities are in the most flux, where the model’s training data is liable to steer it in the wrong direction. In those cases, AI models can send up a white flag and indicate that they need to engage human respondents to calibrate to real people’s perspectives. ...
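That “white flag” mechanism amounts to a simple confidence gate: use the AI estimate when its uncertainty is low, and fall back to (more expensive) human respondents when it is not. A minimal sketch, with the threshold, function names, and numbers purely illustrative:

```python
def poll_with_fallback(question, ai_estimate, ai_uncertainty,
                       human_poll, threshold=0.15):
    """Return the AI estimate when the model is confident; otherwise
    raise the 'white flag' and defer to a human survey."""
    if ai_uncertainty <= threshold:
        return {"source": "ai", "estimate": ai_estimate}
    return {"source": "human", "estimate": human_poll(question)}

def human_poll(question):
    # Placeholder for fielding a traditional survey.
    return 0.52

# A stable issue stays with the cheap AI estimate; an in-flux issue
# triggers the fallback to human respondents.
confident = poll_with_fallback("Issue A", 0.61, 0.05, human_poll)
uncertain = poll_with_fallback("Issue B", 0.48, 0.30, human_poll)
```

The hard part, which this sketch elides, is estimating `ai_uncertainty` itself, e.g., from disagreement across simulated respondents or known gaps in training data.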
We expect these AI-assisted polls will be initially used internally by campaigns, with news organizations relying on more traditional techniques. It will take a major election where AI is right and humans are wrong to change that.
See the full story here: https://ash.harvard.edu/articles/using-ai-for-political-polling/