The Week in Examples #20 [6 January]
The three Rs: Radicalisation, robotics, researcher surveys
And just like that it was 2024, the year that boasts the setting for the 1960 pulp sci-fi flick Beyond the Time Barrier. It's a film that imagines plummeting birth rates caused by a ‘solar plague’, which I suppose makes it a half-accurate prediction of the future.
To kick us off, we have calls for laws to protect people from ‘radicalisation’ when using large models, a survey of over 2,700 AI researchers on the future of the field, and a look at recent robotics research. As always, message me at hp464@cam.ac.uk for comments, ideas for next time, or to say hello.
Three things
1. Persuasive AI sees calls for radical action
What happened? The UK government’s independent advisor on terror legislation wrote about the need for new laws to counteract what he described as models that provide “inspiration to real life attackers”. Jonathan Hall, an independent reviewer tasked with informing the public and political debate on anti-terrorism law, called for the government to introduce new legislation that holds AI firms like CharacterAI (mentioned specifically in his piece) responsible for content that promotes or glorifies terrorism.
What's interesting? Hall said that the UK’s Online Safety Act is unsuited to generative AI because although “the new legislation does refer to content generated by ‘bots’ these appear to be the old-fashioned kind, churning out material that is pre-scripted by humans.” There are specific opt-outs in the Act for so-called limited functionality services, a category that includes “software or an automated tool or algorithm applied by the provider,” though (to add a final degree of uncertainty) this provision is tied to content that is “effected or controlled” rather than generated. The upshot is that generative AI services, which aren’t mentioned by name, probably aren’t in scope.
What else? These calls, which I expect to become more frequent over the next year or so, prove that time really is a flat circle. Who could have guessed that AI platforms with millions of users would provoke the same kind of concerns as good old-fashioned social media? Well, everyone, clearly. But the problem is of course that AI is a very different beast to social media. Major platforms, and regulations like the Online Safety Act, are primarily designed around user-user interaction rather than user-machine interaction, and the issues at hand are ultimately about online content. It’s not clear to me 1) how appropriate or 2) how effective it would be to create regulation that prevents anyone from, say, fine-tuning a model to remove safeguards and using it in their own home.
2. AI researchers predict the future
What happened? AI Impacts, a project that aims to work with researchers, philanthropists, and policymakers to improve “understanding of the likely impacts of human-level artificial intelligence,” released the results of what the group said was the largest ever survey of AI researchers. To do that, the team asked 2,778 researchers who had published at six influential AI conferences (e.g. NeurIPS, ICLR) for “predictions on the pace of AI progress and the nature and impacts of advanced AI systems.”
What’s interesting? The survey found that researchers put the chance of unaided machines outperforming humans in every possible task at 10% by 2027 and 50% by 2047, though it also seems to suggest that (on aggregate) respondents thought AI would only achieve the ‘full automation of labour’ by 2113. Clearly, there is some gap between the pace at which technology moves and the pace at which society moves, but a 50+ year gap seems to be on the large side. Elsewhere, the survey reported that the median prediction for ‘extremely bad’ outcomes such as human extinction was 5%, with a mean of 9%.
What else? I often write about the problems with surveys. Things like question wording and order, the dynamic nature of perceptions, oversimplification of complex ideas, inherent noise in data, and social desirability bias all play a part in making these sorts of efforts the kind of thing we should handle with care. That being said, the headline findings—namely the stuff on the timelines for human level machine intelligence and the associated likelihood of human extinction—are worth taking notice of because they tell us something about where capabilities are right now. That is to say, the future is almost always about the present: describing what it could look like often means describing what it should look like. For that reason, we can read the bullish answers to mean that today's models are going to get significantly better in the short term as existing research gets rolled out to the public.
3. Say ALOHA to general purpose robots
What happened? One of my predictions for 2024 was that robotics would be an area with a lot of progress, primarily thanks to large, generalist models fine-tuned on robotics data. We are obviously in the earliest days of the year on this one, but already there have been a few signs (sparks, if you will) that I wanted to discuss here. The most significant of those was Mobile ALOHA from a group at Stanford, which demonstrated 1) a teleoperated robotics platform and 2) an autonomous version of the same system.
What's interesting? The ALOHA project caused a bit of a stir on X with a video that showed it doing laundry, watering plants, and using a vacuum. Of course, as the authors explained, this was the teleoperated version of the system (though I think that did little to quell the hype given the demo video has 5M views and the clarification 70,000). But the system, which costs $32,000, can also operate autonomously, with the group releasing a video of ALOHA doing things like cleaning up spills, cooking shrimp, and calling an elevator. The interesting thing here is the connection between the teleoperated and autonomous versions, with the researchers using data collected by the former to boost the performance of tasks completed by the latter by up to 90%.
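For readers who want a feel for the mechanism, the rough recipe is imitation learning: record what the human operator does through the teleoperation rig, then train a policy to reproduce those actions from the robot's observations so it can act without the operator. The snippet below is a minimal, hypothetical behaviour-cloning loop in PyTorch, not the Mobile ALOHA authors' actual method; the observation and action sizes, network, and data are all made up, and the real system uses a more sophisticated imitation-learning setup and co-trains on existing datasets.

```python
# Illustrative sketch only: behaviour cloning on teleoperated demonstrations.
# All dimensions, data, and the network architecture here are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 64, 14  # hypothetical observation/action sizes

# Stand-in for recorded demonstrations: (observation, operator action) pairs.
observations = torch.randn(1000, OBS_DIM)
actions = torch.randn(1000, ACT_DIM)
demos = DataLoader(TensorDataset(observations, actions), batch_size=64, shuffle=True)

# A small policy network mapping observations to actions.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Behaviour cloning: regress the policy's output onto the operator's actions.
for epoch in range(10):
    for obs, act in demos:
        loss = nn.functional.mse_loss(policy(obs), act)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

# At deployment, the trained policy stands in for the human operator:
# action = policy(current_observation)
```

The point of the sketch is the division of labour: the teleoperation hardware is how you gather the demonstrations, and the learned policy is what lets the same hardware run on its own afterwards.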
What else? As hardware continues to become more sophisticated (and, as we have seen, cheaper), the main bottleneck for autonomous robotics is likely to be the software running on a given platform. As I mentioned above, the preferred paradigm at the moment is based on the combination of large models and specialised data with which to fine-tune them. We’ve already seen GPT-4 integrated into robotics with varying degrees of success, and if I had to guess, I’d expect this approach to generate some very interesting outputs over the next twelve months.
Best of the rest
Friday 5 January
What Generative AI Reveals About the Human Mind (Time)
Baidu’s bet on AI could make or break China’s fallen tech group (Financial Times)
Character.ai: Young people turning to AI therapist bots (BBC)
Leaked: the names of more than 16,000 non-consenting artists allegedly used to train Midjourney's AI (The Art Newspaper)
Everyone wants better web search – is Perplexity's AI the answer? (The Register)
Where 2024’s “open GPT4” can’t match OpenAI’s (Interconnects >> Substack)
Thursday 4 January
Microsoft Adds AI Key in First Change to PC Keyboard in Decades (Bloomberg)
There's a 5% chance of AI causing humans to go extinct, say scientists (New Scientist)
What’s next for AI in 2024 (MIT Tech Review)
OpenAI Offers Publishers as Little as $1 Million a Year (The Information)
Open-Source AI Isn’t Always ‘Open’ and Free (Bloomberg)
AI’s future could hinge on one thorny legal question (The Washington Post)
Commentary: AI becomes a key audience (Axios)
Wednesday 3 January
It's 2024 and they just want to learn (Substack >> Interconnects)
AI’s regulatory battle lines form (Politico)
Episode 56: Talking about AI Regulation With Harry Law (Spotify >> shameless self-promotion)
California senator files bill prohibiting agencies from working with unethical AI companies (Axios)
AI Safety in China #8 (Substack >> AI Safety in China)
Tuesday 2 January
Category creation & capital in technology (Michael Dempsey)
Congress warns science agency over AI grant to tech-linked think tank (Politico)
EU competition chief defends Artificial Intelligence Act after Macron’s attack (Financial Times)
Copyright law is AI's 2024 battlefield (Axios)
Response to the Interim Report of the UN Secretary-General’s High-Level Advisory Body on Artificial Intelligence (Simon Institute for Longterm Governance)
When Silicon Valley’s AI warriors came to Washington (Politico)
Monday 1 January
My thoughts on AI safety (Substack >> Musings on AI)
Can technology’s ‘zoomers’ outrun the ‘doomers’? (Financial Times)
What does 2024 hold for AI in Europe? Founders and investors’ predictions (Sifted)
Markets, elections and AI in 2024 (Financial Times)
How to think (better) about the 'AI moment' (The Pillar)