The Week in Examples #1 [19 August 23]
Including 'openwashing', UK AI regulation, and international institutions
I’m trialling a new regular addition to the Learning From Examples roster. Please read on for an end-of-week roundup of news from industry, government and civil society, as well as opinion and analysis of all things AI. We’re going to kick off with three sections:
Three things: My thoughts on the three most interesting pieces I’ve seen this week (determined by how much time I spent thinking about them amidst the regular avalanche of AI news). Usually I’ll go for a mixture of papers, essays and news stories.
Best of the rest: Links to other interesting AI resources that I’ve seen, presented without analysis. The idea here is to go for breadth rather than depth, so I have organised them by date for skimmability. Primarily news stories.
Vignette of the week: This could be anything from a passage I liked to a historical curiosity that I stumbled on (usually in the form of something visually appealing). A bit indulgent, yes, but consider it something to keep me afloat in a sea of links. This week we start small and simple.
I want The Week In Examples to be a useful resource, so I’ll need your help to overcome the cold start problem and prevent it from freezing to death. Please don’t hesitate to write to me at hp464@cam.ac.uk with feedback. (No really, tell me what you think. It helps a tonne!)
Three things
1. Someone is finally trying to stop the overuse of ‘open source’
What happened? Signal president Meredith Whittaker co-authored a new paper arguing that “the terms ‘open’ & ‘open source’ are often more marketing than technical descriptor, and that even the most 'open' systems don't alone democratize AI.”
What’s interesting? The paper examines 'open' AI systems, finding that while some offer transparency and reusability, the resources needed to build AI remain concentrated amongst a handful of major firms. It traces the history of open source software (discussing moves to delineate between ‘free software’ and ‘open source’ in the late 1990s) and its relationship to power, arguing that openness alone does not ensure democratic access to AI. Ultimately, the paper makes the case that the rhetoric of openness is used by tech companies to favourably shape the external environment in service of commercial objectives.
What else? Meta, of course, recently allowed users to download its flagship Llama 2 model, which it described as an ‘open source’ release. This decision sparked criticism from those in favour of spreading the benefits of powerful models, who saw the licence terms as prohibitive. My own view here is that ‘open source’ has an excellent brand, sounds catchy, and has long been entrenched in the AI discourse. Despite the inaccuracies, it’s hard to see that changing any time soon.
2. The UK’s approach to AI regulation is good, actually?
What happened? Alex Chalmers and Nathan Benaich of Air Street Capital (the firm behind the annual State of AI Report) wrote a blog post defending the UK's sector-by-sector approach to AI regulation ahead of the AI Safety Summit later this year.
What’s interesting? The blog makes three key points. First, that risk is context-dependent and regulators have a track record of adapting to new risks and technologies (they cite the Information Commissioner's Office, which was founded in 1984 but today works on social media privacy concerns). Second, that critics of the model often mistake regulation for a cost-free intervention (they touch on the tendency to compare imperfect markets against an idealised, perfect state). Third, that specific moves focused on safety (e.g. evals and red-teaming) are not incompatible with a sector-by-sector approach.
What else? If you had asked me a couple of months ago whether the UK was likely to change its approach to AI regulation, I would have said absolutely not. But given that I can’t seem to remember many other recent defences of the approach (including from the government), I’m not so sure these days. The introduction of the Foundation Model Taskforce demonstrates how seriously the UK is taking the development of frontier AI, so I wouldn’t rule out a ‘horizontal’ style piece of legislation in the future.
3. International institutions to solve the AI ‘power paradox’
What happened? In an opinion piece for Foreign Affairs, Inflection founder Mustafa Suleyman and Ian Bremmer of the Eurasia Group argue that, no, it really is different this time. They write that “It [AI] does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox.” (Personally, I am not sure whether a cycle in which X makes Y increasingly difficult counts as a paradox.)
What’s interesting? They make the case that AI is unique in its ability to diffuse rapidly while remaining in the hands of a few private firms. Broad applications make frontier models unpredictable, while—unlike nuclear weapons—AI systems proliferate easily and require relatively few resources to reuse and remix once created. There’s a lot more to say (do read the whole article if you’re interested) but the final points I’d like to draw out are the three governance recommendations they make: a global scientific body like the IPCC, arms control-inspired approaches to prevent proliferation, and a financial stability-inspired organisation to coordinate responses when AI disruptions occur.
What else? Everyone has their own idea for what the international governance of AI ought to look like. My colleagues at Google DeepMind have done a great job of mapping what the option space looks like, which you should take a look at if this sort of thing interests you. Broadly speaking, I like the Foreign Affairs piece because it is explicit that there is not a one-size-fits-all solution to the international governance question. That being said, one organisation can have different functions—like the IAEA, which aims to prevent the proliferation of nuclear weapons while also promoting peaceful applications of nuclear technology. Creating a web of new institutions poses challenges around resource allocation (both in terms of cash and talent) and coordination (for example, a risk evaluated by a scientific body might also be of concern from a financial stability or arms control perspective).
Best of the rest
Friday 18 August
AI Snake Oil: Does ChatGPT have a liberal bias? (Substack)
Vox: What most Americans want for AI
The Register: 'AI-written history' of Maui wildfire becomes Amazon bestseller, fuels conspiracies
Financial Times: Saudi Arabia and UAE race to buy computer chips
The Wall Street Journal: AI Expert Max Tegmark Warns That Humanity Is Failing the New Technology’s Challenge
CNBC: The scientist behind IBM Watson has raised $60 million for his AI startup in New York
PYMNTS: AI Policy Group Says Promising Self-Regulation Is Same Thing as No Regulation
Thursday 17 August
The Information: Meta’s Next AI Attack on OpenAI: Free Code-Generating Software
MIT Tech Review: The future of open source is still very much in flux
Forbes: U.K. Announces AI Safety Summit
Reuters: AI use rising in influence campaigns online, but impact limited - US cyber firm
Financial Times: Opinion: The sceptical case on generative AI
Financial Times: Top Google AI experts pick Japan to set up on their own
CNBC: Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here’s which is worst
The Wall Street Journal: Even AI Hasn’t Helped Microsoft’s Bing Chip Away at Google’s Search Dominance
Venture Beat: Arthur unveils Bench, an open-source AI model evaluator
The Wall Street Journal: Companies Increasingly Fear Backlash Over Their AI Work
CNBC: This old-school tech name’s consulting business could get a huge boost from AI
Axios: AI threatens the billable hour revenue model
TechMonitor: UK AI safety summit should not just ‘focus on big players’ as first details are revealed
Wednesday 16 August
The Verge: The Associated Press sets AI guidelines for journalists
TechCrunch: OpenAI acquires AI design studio Global Illumination
TechCrunch: Meet Marqo, an open source vector search engine for AI applications
The New York Times: When Hackers Descended to Test A.I., They Found Flaws Aplenty
Bloomberg: AI Won’t Supercharge the US Economy (Opinion)
Variety: ‘This Is an Existential Threat’: Will AI Really Eliminate Actors and Ruin Hollywood? Insiders Sound Off
Axios: AI-generated books are infiltrating online bookstores
The Hill: To regulate AI, start with data privacy (Opinion)
Financial Times: UK to host AI safety summit at start of November
Bloomberg: Sunak Eyes AI Summit as Chance to Reclaim Pioneering Role for UK
Axios: State lawmakers want tougher regulation of AI technology
Tuesday 15 August
TechCrunch: Google’s AI search experience adds AI-powered summaries, definitions and coding improvements
TechCrunch: OpenAI proposes a new way to use GPT-4 for content moderation
The Verge: AI companies must prove their AI is safe, says nonprofit group
Politico: Hackers in Vegas take on AI
The New York Times: A.I. Can’t Build a High-Rise, but It Can Speed Up the Job
Bloomberg: Do Oppenheimer’s Warnings About Nuclear Weapons Apply to AI? (Opinion)
The New York Times: In the Battle Between Bots and Comedians, A.I. Is Killing
Time: China Wants to Regulate Its Artificial Intelligence Sector Without Crushing It
Monday 14 August
The Verge: The New York Times prohibits using its content to train AI models
Bloomberg: Nvidia unveils faster chip aimed at cementing AI dominance
Axios: Hackers explore ways to misuse AI in major security test
TechCrunch: AI startup Anthropic raises $100M from Korean telco giant SK Telecom
Politico: Rules for AI hiring tools begin to take shape
The Guardian: ‘Only AI made it possible’: scientists hail breakthrough in tracking British wildlife