The Week in Examples #22 [20 January 2024]
Open-sourcing AGI, “40% of jobs” affected by AI, and results from democratic AI projects
For the 22nd edition of The Week in Examples, we have results from OpenAI’s Democratic Inputs to AI programme, news from Meta that it plans to build and open-source AGI, and a study from the IMF about the impact of AI on the labour market.
A reminder for new subscribers: every week I run through the most useful papers, stories, and trends for understanding the relationship between AI and the thing we call society. That usually takes the form of an analysis of the three most interesting things I’ve seen, a whole bunch of links to other relevant sources and resources that didn’t quite make the cut, and something aesthetically pleasing to end on.
As always, message me at hp464@cam.ac.uk for comments, ideas for next time, or to say hello.
Three things
1. Meta to move fast and open-source AGI
What happened? In an update shared via Instagram, Meta CEO Mark Zuckerberg announced the merger of the company’s GenAI and FAIR research efforts, confirmed that it had secured 600,000 H100 equivalents of compute, and said that its “long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit.” In a neat bit of choreography, an interview with The Verge published the same day saw the Meta chief say “we’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence.”
What's interesting? Most watchers already thought that Meta was heading towards AGI. After all, the company has said it is working on ‘autonomous machine intelligence’ or ‘human-level AI’ (AGI by another name), so the move doesn’t really strike me as a major change in strategy. Meta’s motivation seems to be about attracting researchers, with Zuckerberg telling The Verge it’s “important to convey [that the company is working on AGI] because the best researchers want to work on the more ambitious problems.” As for getting there, that’s what the compute is for. It’s tough to say how much this compute buys a hypothetical Llama 4 at the capabilities store, but it’s worth saying that the announced 600,000 H100 equivalents could cost up to $20 billion (though this prices everything at the full H100 rate, when the non-H100 chips counted as equivalents make up a bit under half of the 600,000 total). Still, I’d expect some pretty remarkable capabilities with that kind of budget, which, by way of comparison, is 30x the recent investment in compute announced by the UK government to turn the country into an “AI powerhouse”.
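The arithmetic behind that figure is simple enough to sketch. The unit price below is an assumption (a commonly cited list price for a single H100), not a number from Meta or Nvidia, and the 30x multiple comes straight from the comparison above:

```python
# Back-of-envelope sketch of the figures above; the ~$30k unit price is an
# assumed H100 list price, not a figure disclosed by Meta or Nvidia.
H100_EQUIVALENTS = 600_000       # Meta's announced compute, in H100 equivalents
ASSUMED_UNIT_PRICE = 30_000      # USD, treating every unit as a full-price H100

upper_bound_spend = H100_EQUIVALENTS * ASSUMED_UNIT_PRICE
print(f"Upper-bound hardware spend: ~${upper_bound_spend / 1e9:.0f}bn")  # ~$18bn, i.e. "up to $20 billion"

# What the 30x multiple quoted above implies for the UK compute investment
print(f"Implied UK figure: ~${upper_bound_spend / 30 / 1e6:.0f}m")
```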
What else? Meta has long championed open-source approaches for its models, though Llama 2 came in for some stick over whether that label was appropriate given its licensing terms. Nomenclature aside, when Meta says open source it means releasing the model weights so anyone can build on top of a hypothetical AGI. The problem, of course, is that AGI could enable bad actors to do pretty much anything in a way that today’s models cannot (for the avoidance of doubt, my view is that open-sourcing the majority of today’s models is generally a good thing). For an open-source AGI, novel bioweapons, self-replicating intelligent malware, and surveillance programmes on steroids are all on the table. As Dame Wendy Hall told The Guardian in response to the news: “In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it.”
2. International Monetary Fund banks on AI adoption
What happened? The International Monetary Fund said that about 40% of global employment is in occupations that are highly “exposed” to AI, which in this context means that the key tasks required in those occupations have significant overlap with the abilities of AI systems. The report makes a distinction between "high exposure, high complementarity" occupations where AI is likely to play a supporting role, versus "high exposure, low complementarity" occupations where there is a higher risk of AI replacing human roles and tasks. For advanced economies, the IMF reckons that 27% of employment is in the high-exposure, high-complementarity group (e.g. surgeons, lawyers) while a further 33% is in high-exposure, low-complementarity occupations (e.g. telemarketers, clerical workers).
What’s interesting? The report also said that exposure to AI is spread across the income distribution, while potential complementarity is positively correlated with income level. In a nutshell, the authors think high exposure cuts across incomes, but high complementarity skews toward higher earners. So AI could displace a wide range of workers while disproportionately boosting the earning potential of higher-income ones, with the upshot that it may significantly increase inequality.
What else? I would take this reporting with a pinch of salt. To calculate the exposure of various roles, the researchers used an index (Felten, Raj and Seamans, 2021) that measures the overlap between the key tasks and abilities required for various occupations and the capabilities of AI. Next, they incorporated another measure accounting for the societal factors that determine whether AI will likely replace or complement human work in exposed occupations (Pizzinelli et al., 2023). This second step tries to account for the contextual factors around AI usage, which sees the authors argue that judges and doctors, for example, “despite high AI exposure, would still likely be human beings.” This example is probably true, but it brings into focus a broader problem with this style of analysis: away from connecting AI capabilities with tasks in a given role, accounting for the contextual, reputational, bureaucratic, and social elements of AI adoption is more or less a guessing game. It reminds me of one of the core issues with model evaluations, which is that they are generally great at determining whether a model can do X, but not so good at telling us how well a model does X in the real world.
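To make the mechanics concrete, here is a toy sketch of that two-step logic. The occupations, scores, and cutoffs are all made-up placeholders for illustration; this is not the Felten et al. index or the IMF’s data:

```python
# Toy illustration of the two-step logic described above: an AI-exposure score
# per occupation (step one), adjusted by a complementarity score capturing the
# contextual factors around adoption (step two). All values are made up.

def classify(exposure: float, complementarity: float,
             exposure_cutoff: float = 0.5,
             complementarity_cutoff: float = 0.5) -> str:
    """Bucket an occupation the way the report describes."""
    if exposure < exposure_cutoff:
        return "low exposure"
    if complementarity >= complementarity_cutoff:
        return "high exposure, high complementarity"  # AI likely plays a supporting role
    return "high exposure, low complementarity"       # higher risk of displacement

# Illustrative (exposure, complementarity) scores only
occupations = {
    "surgeon": (0.8, 0.9),
    "telemarketer": (0.9, 0.2),
    "agricultural labourer": (0.2, 0.4),
}

for job, (exp, comp) in occupations.items():
    print(f"{job}: {classify(exp, comp)}")
```

The point of the sketch is that the first step is at least measurable, while the second step (where the complementarity cutoffs sit) is where the judgement calls live.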
3. Democratic AI gets vote of confidence
What happened? OpenAI shared early results from its Democratic Inputs to AI grant programme announced in May last year. The ten projects, which were each awarded a grant of $100,000, included work to scale democratic deliberation, elicit values to fine-tune models, and develop AI alignment guidelines with large-scale live participation.
What's interesting? In the blogpost, OpenAI said that the experiments reinforced that finding compromise is (perhaps unsurprisingly) challenging when participants have polarising views, that powerful selection effects can create unrepresentative groups, and that public opinion can change frequently (something I looked at in the context of polling). The company also said that it was forming a Collective Alignment team that will 1) create and implement a system for collecting public input on “model behaviour” and incorporating it into its systems; and 2) work with external advisors and additional grant teams to run pilots with a view to incorporating grant work into model “steering” (i.e. using the work of some of these teams, and possibly others, for product development purposes).
What else? To take a step back, I expect that as more people use large models, as personalisation becomes easier, and as individuals deploy agents that interact with other humans, we’ll see widespread and intense scrutiny of AI systems as inherently political artefacts. The reason is simple: it is one thing to see a developer’s values embedded in its models when I use them in isolation, and quite another to encounter someone else’s values encoded into a model in the wild. Agents are preference amplification machines, yes, but they are value amplification machines too.
Best of the rest
Friday 19 January
Why isn't Multimodality Making Language Models Smarter? (Jake Browning)
Manchester United turn to artificial intelligence in bid to boost performance (The Telegraph)
Cohere in talks to raise as much as $1bn as AI arms race heats up (FT)
The World Health Organization’s AI warning (POLITICO)
Mistral becomes the talk of Davos as business leaders seek AI gains (FT)
Davos 2024: Can – and should – leaders aim to regulate AI directly? (BBC)
Thursday 18 January
Self-Rewarding Language Models (arXiv)
WHO releases AI ethics and governance guidance for large multi-modal models (WHO)
Call for applications to Oxford Group on AI Policy (University of Oxford)
OpenAI announces first partnership with a university (CNBC)
AI heralds the next generation of financial scams (FT)
Wednesday 17 January
Jobs at the AI Safety Institute (AISI)
Misinformation and disinformation are not the top global threats over the next two years (Dan Williams >> Substack)
OpenAI Is Working With US Military on Cybersecurity Tools (Bloomberg)
US FDA clears DermaSensor's AI-powered skin cancer detecting device (Reuters)
Supporting responsible AI: discussion paper (Australian Gov)
UN chief calls for global risk management of AI, warns of ‘serious unintended consequences’ (CNBC)
What does the Post Office scandal teach us about data and AI regulation? (Connected By Data)
Tuesday 16 January
LeftoverLocals: Listening to LLM responses through leaked GPU local memory (Trail of Bits)
Tuning Language Models by Proxy (arXiv)
Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad (Vice)
On AI, von der Leyen sees glass half full in Davos (Politico)
Singapore seeks expanded governance framework for generative AI (ZDNet)
Nvidia Catches Feelings With its Emotion Reading AI Patent (The Daily Upside)
Why everyone’s excited about household robots again (MITTR)
Monday 15 January
How OpenAI is approaching 2024 worldwide elections (OpenAI)
Generative AI first call for evidence: The lawful basis for web scraping to train generative AI models (ICO)
Policy implications of artificial intelligence (UK Parliament)
There’s an app for that: How AI is ploughing a farming revolution (Reuters)
Opinion | It’s Time for the Government to Regulate AI. Here’s How. (Politico)
Conflict, climate change and AI get top billing as leaders converge for elite meeting in Davos (Independent)
OpenAI: ChatGPT company quietly softens ban on using AI for military (Independent)