...
While almost every company developing advanced AI models has its own internal policies and procedures around safety—and most have made voluntary commitments to the U.S. government regarding issues of trust, safety, and allowing third parties to evaluate their models—none of this is backed by the force of law. Tegmark is optimistic that if the U.S. national security establishment accepts the seriousness of the threat, safety standards will follow. “Safety standard number one,” he says, will be requiring companies to demonstrate how they plan to keep their models under control. ...
Mitchell says that AI’s corporate leaders bring “different levels of their own human concerns and thoughts” to these discussions. Tegmark fears, however, that some of these leaders are “falling prey to wishful thinking” in believing they will be able to control superintelligence, and that many are now facing their own “Oppenheimer moment.” ...
See the full story here: https://time.com/7267797/ai-leaders-oppenheimer-moment-musk-altman/