The Week in Examples #11 [4 November]
Reaching the summit, US executive action, and a syllabus for AI governance
This week has felt like a bit of a turning point for AI policy. Amidst the Safety Summit in the UK, the Executive Order in the US, and a whole bunch of other announcements squeezed in between, the sheer number of reports, documents, proclamations, and communiqués has been dizzying.
I’ve done my best to impose a bit of order on the chaos, with thoughts on a few of the major beats and a whole bunch of links to boot. Once again, please don’t hesitate to write to me at hp464@cam.ac.uk with feedback, ideas for future editions, or to say hello.
Three things
1. Summit to write home about
What happened? Prime Minister Rishi Sunak convened the long-awaited AI Safety Summit, with representatives from nation-states, industry, and (despite a bit of noise in the run-up to the event) civil society. Amidst two days of keynotes, workshops, and demos, the summit culminated in the ‘Bletchley Declaration’, an agreement to work together on safety standards in order to maximise the upside and minimise the risks posed by frontier AI systems. While the US Secretary of Commerce Gina Raimondo used the Summit as an opportunity to highlight new policy interventions from the Biden administration (more on that below), Chinese Vice Minister Wu Zhaohui urged attendees to “ensure AI always remains under human control” and said that governments should work to “build trustworthy AI technologies that can be monitored and traced.”
What’s interesting? Perhaps the most significant output of the Summit (other than, of course, the reality-bending spectacle of King Charles weighing in on AI safety) was an agreement to hold summits on a regular basis. The first of these, which is set to take place in South Korea in six months’ time, will provide another opportunity for states to discuss the risks posed by AI, share information, and – say it quietly – introduce solutions. This, combined with the multilateral backing of a report to identify emerging risks associated with frontier AI, pretty much does away with the idea that the UK event represented a once-in-a-lifetime moment to secure an international agreement on AI governance.
What else? While the Summit was probably more successful than many predicted, it was not without its critics. Civil society groups, for example, argued for a broader definition of AI safety (which was something Vice President Harris alluded to in her speech). While I liked that this communiqué was signed by a fairly broad range of groups, it’s also worth saying that I suspect the tight focus on safety was directly responsible for enabling attendees to throw their weight behind the declaration and align on like-minded initiatives.
2. US AI policy goes into overdrive
What happened? Only a few days before the UK’s AI Safety Summit was set to begin, the US revealed its much-hyped Executive Order on AI. The 63-page document is a bit of a beast, spanning standards for biological synthesis screening, guidance for watermarking to clearly label AI-generated content, cybersecurity measures, and a host of efforts focused on privacy, civil rights, and consumer protection. Perhaps the most eye-catching part of the announcement, though, was news that the US will require companies to report training runs above a certain size (in this case, 1e26 FLOP). To put that figure in context, the threshold is about 5x the compute estimated to have been used to train GPT-4.
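For readers who like to see the arithmetic, here is a minimal sketch of how that ‘5x’ comparison falls out. Note that the ~2e25 FLOP figure for GPT-4 is an assumption drawn from public third-party estimates rather than anything disclosed officially:

```python
# Back-of-the-envelope comparison of the Executive Order's reporting
# threshold (1e26 FLOP, from the Order itself) against an estimated
# GPT-4 training budget. The GPT-4 figure is an unofficial, commonly
# cited third-party estimate, used purely for illustration.

REPORTING_THRESHOLD_FLOP = 1e26  # training runs above this must be reported
GPT4_ESTIMATED_FLOP = 2e25       # assumption: public estimate, not official

ratio = REPORTING_THRESHOLD_FLOP / GPT4_ESTIMATED_FLOP
print(f"Threshold is ~{ratio:.0f}x estimated GPT-4 training compute")  # ~5x
```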
What’s interesting? The Executive Order was just the tip of the iceberg as far as US AI governance news is concerned. The Biden administration released for public comment its first-ever draft policy guidance on the use of AI by the U.S. government, announced that 31 nations have joined the United States in endorsing its declaration on the use of AI in the military, and unveiled $200 million in funding toward public interest efforts to mitigate AI harms and promote responsible use and innovation. Finally, the Executive Order also announced that the Department of Commerce is establishing the United States AI Safety Institute (US AISI) inside NIST, which will mirror the function of the UK AI Safety Institute announced at the Summit. The US AISI will, according to the government, operationalise NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities, and by conducting evaluations (including red-teaming) to identify and mitigate AI risk.
What else? It is a cliché to say that years’ worth of progress can happen in a matter of weeks. Still, clichés exist for a reason, and the US AI policy environment is a good example of this one holding true. The central question here is what comes next: whether we are likely to see primary legislation remains uncertain, and it is possible that some aspects of the Order may overstep the bounds of Executive authority. Nonetheless, I generally believe that there is a lot of opportunity to better support existing regulators as they grapple with AI (after all, they are the ones who tend to know their sector best). I expect that the Executive Order will play an important galvanising role in that regard, though – as Vice President Harris made clear – frontier model developers should expect more binding commitments in the future.
3. A template syllabus for AI governance
What happened? The final item I want to bring your attention to is a bit different (partly because I like to mix things up, and partly because I wanted to give you all a bit of a break from the drama of high diplomacy and the corridors of power). With that in mind: Kevin Frazier, an assistant professor at St. Thomas University College of Law in Miami Gardens, Florida, and a research affiliate at the Legal Priorities Project, published a template syllabus for a course on Legal and Regulatory Means to Mitigate AI Risk.
What’s interesting? The course covers the technical basics of AI, assessments of some of the key risks posed by AI development and deployment, and analysis of the legal and policy levers available to mitigate those risks. Frazier wants this template to serve as the basis for an AI, Emerging Tech, and Risk Reduction Law community that would bring together scholars, researchers, and practitioners dedicated to updating and improving the syllabus.
What else? The plan for the project is for members of the community to record video lectures that can supplement the readings and ease adoption of the course by professors still working on getting the basics of the field down. The syllabus currently features lectures from Juliette Kayyem of Harvard University and Rebecca Crootof of the University of Richmond, with a few more interesting names signed up for the near future. If you’re keen to learn more about the effort or want to lend a hand in developing content and building a community of AI and Law scholars, reach out to Kevin at kevintfrazier@gmail.com.
Best of the rest
Friday 3 November
Elon Musk speaks to Rishi Sunak (X)
Anthropic’s Jack Clark speaks to BBC Radio 4 (BBC Sounds)
Getting aligned on representational alignment (arXiv)
AI Policy Perspectives October 2023 (Substack)
Human participants in AI research: Ethics and transparency in practice (arXiv)
Thursday 2 November
Apollo demos deception research (Apollo)
British deputy PM throws backing behind open source AI (POLITICO)
All-Hazards Policy for Global Catastrophic Risk (GCR)
Propaganda or Science: Open Source AI and Bioterrorism Risk (1A3ORN)
The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis (WIRED)
Wednesday 1 November
Mathematics and modelling are the keys we need to safely unlock transformative AI (ARIA)
Dario Amodei’s prepared remarks from the AI Safety Summit on Anthropic’s Responsible Scaling Policy (Anthropic)
Securing Artificial Intelligence Model Weights (RAND)
YouGov poll: 83% of Brits demand companies prove AI systems are safe before release (AI Safety Communications Centre)
Tuesday 31 October
Joint Statement on AI Safety and Openness (Mozilla)
What the executive order means for openness in AI (AI Snake Oil)
Do Companies’ AI Safety Policies Meet Government Best Practice? (CFI)
Vast Majority of US Voters of All Political Affiliations Support President Biden's Executive Order on AI (AIPI)
Paul Christiano - Preventing an AI Takeover (Dwarkesh Podcast)
Monday 30 October
AI Alignment: A Comprehensive Survey (arXiv)
Will releasing the weights of future large language models grant widespread access to pandemic agents? (arXiv)
Cambridge-Huawei Report (UK-China Transparency)
Evaluating Large Language Models: A Comprehensive Survey (arXiv)
Urging an International AI Treaty: An Open Letter (AI Treaty)