
14 Dec 2024

Leading AI Companies Get Lousy Grades on Safety 

A new report from the Future of Life Institute gave mostly Ds and Fs

The just-released AI Safety Index graded six leading AI companies on their risk assessment efforts and safety procedures... and the top of the class was Anthropic, with an overall score of C. The other five companies—Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI—received grades of D+ or lower, with Meta flat-out failing.

“The purpose of this is not to shame anybody,” says Max Tegmark, an MIT physics professor and president of the Future of Life Institute, which put out the report. “It’s to provide incentives for companies to improve.” ...

The Future of Life Institute is a nonprofit dedicated to helping humanity ward off truly bad outcomes from powerful technologies, and in recent years it has focused on AI. In 2023, the group put out what came to be known as “the pause letter,” which called on AI labs to pause development of advanced models for six months, and to use that time to develop safety standards. Big names like Elon Musk and Steve Wozniak signed the letter (and to date, a total of 33,707 have signed), but the companies did not pause. ...

All six companies scored particularly badly on their existential safety strategies. The reviewers noted that all of the companies have declared their intention to build artificial general intelligence (AGI), but only Anthropic, Google DeepMind, and OpenAI have articulated any kind of strategy for ensuring that the AGI remains aligned with human values. ...

“I feel that the leaders of these companies are trapped in a race to the bottom that none of them can get out of, no matter how kind-hearted they are,” Tegmark says. ...

https://spectrum.ieee.org/ai-safety
