The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
PhilNote: to win support from China, the EU, and the US, this agreement is all about discussing issues without actually defining them (e.g., "risk") or developing a plan for deployment and enforcement.
... This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. ...
In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:
- identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
...
See the full story here: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023