None of the leading AI companies have adequate guardrails to prevent catastrophic misuse or loss of control of their models, according to the Winter 2025 AI Safety Index, out Wednesday from the Future of Life Institute. ...
The big picture: The Future of Life Institute is a nonprofit that releases regular safety assessments of leading AI companies.
- Anthropic had the highest overall score, but still received a grade of "D" for existential safety, meaning the company lacks an adequate strategy to prevent catastrophic misuse or loss of control. ...
What they're saying: Leaders at many of the companies have spoken about addressing existential risks, per the report.
- This "rhetoric has not yet translated into quantitative safety plans, concrete alignment-failure mitigation strategies, or credible internal monitoring and control interventions," researchers wrote.
See the full story here: https://www.axios.com/2025/12/03/ai-risks-agi-anthropic-google-openai