How Do the White House’s A.I. Commitments Stack Up?
Overall, the White House’s deal with A.I. companies seems more symbolic than substantive. There is no enforcement mechanism to make sure companies follow these commitments, and many of them reflect precautions that A.I. companies are already taking.
Still, it’s a reasonable first step. And agreeing to follow these rules shows that the A.I. companies have learned from the failures of earlier tech companies, which waited to engage with the government until they got into trouble. In Washington, at least where tech regulation is concerned, it pays to show up early.
Commitment 1: The companies commit to internal and external security testing of their A.I. systems before their release.
Commitment 2: The companies commit to sharing information across the industry and with governments, civil society and academia on managing A.I. risks.
Commitment 3: The companies commit to investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights.
Commitment 4: The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their A.I. systems.
Commitment 5: The companies commit to developing robust technical mechanisms, such as watermarking systems, to ensure that users know when content is A.I. generated. (A toy sketch of one watermarking approach follows this list.)
Commitment 6: The companies commit to publicly reporting their A.I. systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
Commitment 7: The companies commit to prioritizing research on the societal risks that A.I. systems can pose, including on avoiding harmful bias and discrimination and protecting privacy.
Commitment 8: The companies commit to developing and deploying advanced A.I. systems to help address society’s greatest challenges.
See the full story here: https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html