Artificial General Intelligence in Competition and War
PhilNote: Unfortunately this is why DOGE is eliminating gov't agencies and positioning the gov't to outsource national policy to the xAI / Palantir / Anduril collective. The byproduct of DOGE is a huge transfer of tax dollars to Elon Musk, Palmer Luckey, and Peter Thiel.
...
In case it escaped anyone’s notice, the "third offset" was a miserable failure. We decisively lost, and our potential enemies won. The United States kept building expensive weapons systems while our enemies built up an asymmetric advantage by focusing on relatively inexpensive missiles, drones, cyber capabilities, and the like. But there is good news: an AGI-powered economy and AGI-powered military forces are going to give us another bite at the apple. If the United States first wins the race to deploy agentic warfare and then couples that with AGI, our past mistakes and failures will become meaningless, as the new AGI paradigm will guarantee freedom, security, and prosperity. This time, failure is not an option.
See the full story here: https://www.realcleardefense.com/articles/2025/05/07/artificial_general_intelligence_in_competition_and_war_1108660.html
OpenAI launches global push for democratic AI
PhilNote: I currently agree with the idea put forth in Chris Dixon's book Read Write Own that as long as you have centralized control of a network (as proposed by Sam Altman here) you face the risk of arbitrary non-democratic changes by the network's leadership. The next generation of networks, including those that incorporate AI, could very well be decentralized and federated.
OpenAI announced a push to help countries build AI infrastructure and promote AI rooted in democratic, rather than authoritarian, values.
Why it matters: Global expansion will be one key to ensuring that OpenAI's massive investments pay off — and the company is arguing that it will help the U.S. counter China's influence, too.
How it works: OpenAI chief global affairs officer Chris Lehane said the new "OpenAI for Countries" effort, announced Wednesday, aims to partner with countries or regions to build and operate data centers that would serve up localized versions of ChatGPT for their citizens, with particular focus on health care and education.
- Countries that take part would help fund infrastructure as part of a broadening of the Project Stargate effort that OpenAI announced with Oracle and SoftBank earlier this year.
- OpenAI will be working closely with the U.S. government, which has export control powers, to determine where OpenAI technology can be deployed.
...
The big picture: OpenAI's announcement comes a day before CEO Sam Altman is set to testify before the Senate Commerce Committee at a hearing on "Winning the AI race."
See the full story here: https://www.axios.com/2025/05/07/openai-democratic-ai-expansion?utm_source=substack&utm_medium=email
Saudi Arabia: Neom climate adviser warns megacity could alter weather systems
A climate scientist working as an adviser on Saudi Arabia's Neom project has warned that the new city could change local environments and weather systems, including the path of wind and sand storms. ...
He said the sustainability advisory committee, which he sits on, was told during a recent meeting that the climate concerns were escalated to a “higher priority” since the abrupt departure of Nadhmi al-Nasr, the former chief of Neom. ...
See the full story here: https://www.middleeasteye.net/news/saudi-arabia-neom-climate-adviser-warns-megacity-alter-weather-system?utm_source=substack&utm_medium=email
AI developers should counter misinformation and protect fact-based news, global media groups say
...
The group says thousands of public and private media in broadcast, print and online formats have joined the “News Integrity in the Age of AI” initiative, whose five core steps were announced Monday at the World News Media Congress in Krakow, Poland.
The initiative is calling for news content to only be used in generative AI models with the authorization of the content originator, and for clarity about attribution and accuracy. It says the original news source behind AI-generated material must be “apparent and accessible."
“Organizations and institutions that see truth and facts as the desirable core of a democracy and the foundation of an empowered society should now come together at one table to shape the next era,” said Ladina Heimgartner, president of the publishers association and CEO of Switzerland’s Ringier Media. ...
See the full story here: https://www.ajc.com/news/nation-world/ai-developers-should-counter-misinformation-and-protect-fact-based-news-global-media-groups-say/72EGCLVDXJAPNKKC6R6GE3MXWM/
A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
... More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information. ...
The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why. ...
“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.” ...
“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.
Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
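The compounding risk described above can be made concrete with a back-of-the-envelope calculation. Assuming (purely for illustration; this model is not from the article) that each reasoning step independently introduces an error with some small probability p, the chance that an n-step chain stays error-free shrinks as (1 − p)^n:

```python
# Illustrative only: assumes each reasoning step independently
# hallucinates with probability p (a deliberate simplification).
def chain_error_probability(p: float, n_steps: int) -> float:
    """Probability that at least one of n steps contains an error."""
    return 1.0 - (1.0 - p) ** n_steps

# Even a modest 5% per-step error rate compounds quickly:
for n in (1, 5, 10, 20):
    print(n, round(chain_error_probability(0.05, n), 3))
```

Under this toy model, a chain 20 steps long is more likely than not to contain at least one error, which is one intuition for why longer "thinking" can make hallucination worse rather than better.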
The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers ...
See the full story here: https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
Anthropic CEO Admits We Have No Idea How AI Works
In an essay published to his personal website, Anthropic CEO Dario Amodei announced plans to create a robust "MRI on AI" within the next decade. The goal is not only to figure out what makes the technology tick, but also to head off any unforeseen dangers associated with what he says remains its currently enigmatic nature.
"When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," the Anthropic CEO admitted. ...
The whole thing is driven by ingested human creative works, not derived from first principles of machine intelligence.
"This lack of understanding," Amodei wrote, "is essentially unprecedented in the history of technology." ...
See the full story here: https://futurism.com/anthropic-ceo-admits-ai-ignorance
UBER Expands Its Robotaxi Network with Momenta to Launch Service in Europe
Uber announced on Friday that it is teaming up with Chinese self-driving startup Momenta to launch robotaxi services outside the U.S. and China. The first rollout is planned for Europe in early 2026 and will start with safety operators inside the vehicles. Uber’s goal is to combine its global ridesharing network with Momenta’s autonomous driving technology to offer safe and efficient robotaxi rides. CEO Dara Khosrowshahi said that this partnership will help make autonomous rides more reliable and affordable worldwide. ...
See the full story here: https://www.theglobeandmail.com/investing/markets/stocks/AMZN/pressreleases/32199578/uber-expands-its-robotaxi-network-with-momenta-to-launch-service-in-europe/
DOGE Is in Its AI Era
...
Wherever DOGE has gone, AI has been in tow. Given the opacity of the organization, a lot remains unknown about how exactly it’s being used and where. But two revelations this week show just how extensive—and potentially misguided—DOGE’s AI aspirations are.
...
If nothing else, it’s the shortest path to a maximalist gutting of a major agency’s authority, with the chance of scattered bullshit thrown in for good measure.
At least it’s an understandable use case. The same can’t be said for another AI effort associated with DOGE. As WIRED reported Friday, an early DOGE recruiter is once again looking for engineers, this time to “design benchmarks and deploy AI agents across live workflows in federal agencies.” His aim is to eliminate tens of thousands of government positions, replacing them with agentic AI and “freeing up” workers for ostensibly “higher impact” duties. ... AI agents are still in the early stages; they’re not nearly cut out for this. They may not ever be. It’s like asking a toddler to operate heavy machinery. ...
The problem isn’t artificial intelligence in and of itself. It’s the full-throttle deployment in contexts where mistakes can have devastating consequences. It’s the lack of clarity around what data is being fed where and with what safeguards. ...
AI is neither a bogeyman nor a panacea. It’s good at some things and bad at others. But DOGE is using it as an imperfect means to destructive ends. It’s prompting its way toward a hollowed-out US government, essential functions of which will almost inevitably have to be assumed by—surprise!—connected Silicon Valley contractors.
See the full story here: https://www.wired.com/story/doge-is-in-its-ai-era/
Nvidia CEO Jensen Huang Sounds Alarm As 50% Of AI Researchers Are Chinese, Urges America To Reskill Amid ‘Infinite Game’
Nvidia CEO Jensen Huang urged American policymakers on Thursday to fully embrace artificial intelligence as a long-term strategic priority that demands national investment in workforce development.
What Happened: Huang, speaking at Hill & Valley Forum in Washington, DC, said, "To lead, the U.S. must embrace the technology, invest in reskilling, and equip every worker to build with it."
Huang stressed the importance of understanding competitive advantages in the AI race, noting that “50% of the world’s AI researchers are Chinese” — a factor he says must “play into how we think about the game.” ...
See the full story here: https://finance.yahoo.com/news/nvidia-ceo-jensen-huang-sounds-035916833.html
Shelly Palmer / Decode – Visa/MC/PayPal Agent
... Technology is meaningless unless it changes the way we behave. AI agents turn intent ("find me running shoes under $120, deliver by Friday") into action – without a human clicking "buy." This collapses the entire purchase funnel into a single prompt. ...
Privacy and Security by Design - Both systems ensure agents act only with user consent, using tokenized credentials and authentication layers. Agents must be verified before making transactions. Real-time fraud protection and dispute tools are built in. ...
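The consent-and-verification flow described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Visa, Mastercard, or PayPal APIs; every name here (AgentCredential, tokenize_card, authorize_purchase, the spend limit) is an assumption made up for the example:

```python
# Hypothetical sketch of the agent-payment flow: the agent acts only with
# explicit user consent, presents a tokenized credential (never the raw
# card number), and is checked against the user's mandate before any
# transaction. None of these names are real payment-network APIs.
import hashlib
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str
    user_consented: bool
    spend_limit: float   # user-set cap, e.g. "under $120"
    token: str           # opaque stand-in for the card number

def tokenize_card(card_number: str) -> str:
    """Replace the raw card number with an opaque token."""
    return hashlib.sha256(card_number.encode()).hexdigest()[:16]

def authorize_purchase(cred: AgentCredential, amount: float) -> bool:
    """Verify the agent and its mandate before allowing the transaction."""
    if not cred.user_consented:
        return False              # no consent, no purchase
    if amount > cred.spend_limit:
        return False              # outside the user's stated intent
    return bool(cred.token)       # a verified credential must be present

cred = AgentCredential("shopper-agent-1", True, 120.0,
                       tokenize_card("4111111111111111"))
print(authorize_purchase(cred, 95.0))    # within the $120 mandate
print(authorize_purchase(cred, 150.0))   # exceeds the mandate
```

The design point this captures is that the agent never holds the card number itself, and the "single prompt" still bottoms out in an explicit authorization check against limits the user set in advance.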
See the full story here: https://decodeai.ghost.io/ai-agents-can-now-shop-for-you/