Raising a glass to robotics [TWIE]
The Week In Examples #56 | 12 October 2024
Another week, another roundup of things that say something interesting about AI and society. I am battling a particularly stubborn cold as I piece together this edition – so please forgive a slightly less research-focused TWIE just this once. Nonetheless, the usual public service announcement: if you want to send something my way, you can do that by emailing me at hp464@cam.ac.uk.
Three things
1. The State of AI
The fine folks at Air Street Capital dropped what is probably the best-known AI report out there: the State of AI Report 2024. I’m not going to run through the report in any detail (the whole thing is worth reading) but I will dwell on the predictions, which are always my favourite bit of the project. You can see last year’s scorecard above, but here are a couple of interesting predictions that the authors have put together for the year ahead:
A $10B+ investment from a sovereign state into a US large AI lab invokes national security review;
Challengers fail to make any meaningful dent in NVIDIA’s market position;
Early EU AI Act implementation ends up softer than anticipated after lawmakers worry they’ve overreached;
An app or website created solely by someone with no coding ability will go viral (e.g. App Store Top-100);
A video game based around interacting with GenAI-based elements will achieve break-out status.
I think these are all fairly likely to come off, though there are others (e.g. less investment in humanoid robotics) that seem less likely to play out. Either way, fortune favours the brave and all that, so I like to see commentators make predictions about the near future that can (for the most part) clearly be falsified.
2. Raising a glass to robotics
The Tony Blair Institute for Global Change continued its pursuit of the hardest mission of them all: battling the UK’s long-running slide into the economic and technological wilderness. This time around the team is tackling robotics, with a new report grappling with the essential role of robotics in the (near) future of society. There are some interesting policy ideas here, like:
Creating a £100m Robotics Investment Programme through British Patient Capital to support robotics start-ups;
Boosting financial support for regulators where new uses of robotics and embodied AI are likely to be most keenly felt (e.g. transport); and
Exploring the creation of new NHS data sets around the use of robots in health care.
Also this week came a flashy show from Tesla, featuring a new driverless Cybercab (though given details were light, it’s not all that clear to what extent the self-driving capabilities represent an uplift on existing systems). More eye-catching, though, were the demos of the firm’s new Optimus robots that were serving drinks, dancing, and simply hanging out at the event. It seems like there was a degree of teleoperation involved (i.e. the robots were being helped out by the proverbial man behind the curtain) but I do think we’re not too far away from these sorts of capabilities working without too much human intervention.
The reason for that is a new-ish paradigm in the robotics world that is dramatically improving the ability of robots to walk, talk, and, uh, actually do useful things that require a bit of dexterity. Essentially, researchers take a large model (e.g. GPT-4 or Gemini), fine-tune it on robot demonstration data so that it outputs actions as well as text, and run it on an embodied platform (in this case, a humanoid robot). This approach is making a lot of headway in a very short period of time, so expect more impressive demos and, not too long after that, robots that actually work as we’d like them to.
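For the curious, here’s a rough sketch of what that loop looks like in practice. It is illustrative only: the class names, the action format, and the ‘model’ below are stand-ins rather than any particular lab’s stack, with a simple stub sitting where a fine-tuned vision-language model would go.

```python
# Illustrative only: a toy vision-language-action control loop.
# StubVisionLanguageActionModel stands in for a large model fine-tuned on
# robot demonstration data; real systems differ in how they tokenise
# observations and decode actions.

from dataclasses import dataclass
import random

@dataclass
class Action:
    """A minimal action format: joint deltas for a 7-DoF arm plus a gripper state."""
    joint_deltas: list[float]
    gripper_closed: bool

class StubVisionLanguageActionModel:
    """Stand-in for a model that maps (camera image, instruction) -> action."""

    def predict_action(self, image: bytes, instruction: str) -> Action:
        # A real model would encode the image and instruction and decode
        # discretised action tokens; here we just emit small random deltas.
        return Action(
            joint_deltas=[random.uniform(-0.01, 0.01) for _ in range(7)],
            gripper_closed="pick" in instruction.lower(),
        )

def control_loop(model: StubVisionLanguageActionModel, instruction: str, steps: int = 10) -> None:
    """Query the model once per control step and apply whatever it returns."""
    for step in range(steps):
        image = b"camera-frame"  # placeholder for the robot's latest camera frame
        action = model.predict_action(image, instruction)
        # On a real platform this would be sent to the motor controllers.
        print(f"step {step}: gripper_closed={action.gripper_closed}, "
              f"joint_deltas={[round(d, 3) for d in action.joint_deltas]}")

if __name__ == "__main__":
    control_loop(StubVisionLanguageActionModel(), "pick up the glass and pour a drink")
```

The design choice doing the work here, roughly speaking, is that the control policy and the large model are one and the same network, rather than a planner bolted on top of a separate low-level controller.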
3. AIs on the prize
It’s been a big week for AI and science. John Hopfield and Geoffrey Hinton won the Nobel prize for physics (more on that in a moment) while Demis Hassabis and John Jumper (disclosure: colleagues of mine at Google DeepMind) split the Nobel prize for chemistry with David Baker. Cue lots of jokes about whether ChatGPT would get the Nobel prize for literature or whether Eliezer Yudkowsky would win the peace award.
The prizes basically reflect two different ways of reckoning with the relationship between AI and science: as a tool for studying the world and as a worthy pursuit in its own right. In other words, they broadly correspond with the ‘science of AI’ and the ‘use of AI in science’. But, as is often the case in the history of science, the boundaries between epistemic camps aren't always as neat as we might like. With respect to Hopfield, for example, work on the network that bears his name was tightly connected to the study of the famous Ising model (an important mathematical model in statistical mechanics).
Although others had produced similar results years earlier, Hopfield’s popular 1982 paper nonetheless energised the resurgent field of connectionism (the ancestor of systems like ChatGPT). The example encourages us to acknowledge that many scientific concepts—from Hebbian learning to linear discriminant analysis—have long informed the development of AI.
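To make that lineage a little more concrete, here’s a toy sketch (illustrative only, and nothing to do with the prize citation itself): the weights are set with a Hebbian rule, and the energy minimised during recall has the same quadratic form as an Ising Hamiltonian, with the weights playing the role of the couplings.

```python
# Toy Hopfield network: Hebbian storage and recall of binary (+1/-1) patterns.
# The energy E(s) = -0.5 * s^T W s has the same form as an Ising Hamiltonian
# (couplings W, no external field), which is the statistical-mechanics
# connection mentioned above.

import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: w_ij is proportional to the correlation between units i and j."""
    n = patterns.shape[1]
    weights = patterns.T @ patterns / n
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights

def energy(weights: np.ndarray, state: np.ndarray) -> float:
    return -0.5 * state @ weights @ state

def recall(weights: np.ndarray, state: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Asynchronous updates: each unit flips to whichever sign lowers the energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 100))      # three random 100-unit patterns
    weights = store(patterns)

    noisy = patterns[0].copy()
    flipped = rng.choice(100, size=15, replace=False)  # corrupt 15 of the 100 units
    noisy[flipped] *= -1

    recovered = recall(weights, noisy)
    print("energy before:", round(energy(weights, noisy), 2),
          "energy after:", round(energy(weights, recovered), 2))
    print("units matching the stored pattern:", int((recovered == patterns[0]).sum()), "/ 100")
```

Run it and the corrupted pattern settles back into (or very close to) the stored one, which is the associative-memory behaviour that made the 1982 paper so influential.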
As the contribution of AI to scientific practice begins to shift gears, these awards remind us that AI’s relationship with science is not a one-way street.
Best of the rest
Friday 11 October
A Closer Look at Machine Unlearning for Large Language Models (arXiv)
Russia says it is ramping up AI-powered drone deployments in Ukraine (Reuters)
Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic Analysis of Annotators and Targets (arXiv)
ByteDance's TikTok cuts hundreds of jobs in shift towards AI content moderation (Reuters)
Biased AI can Influence Political Decision-Making (arXiv)
Thursday 10 October
MLE-bench (OpenAI)
I Want to Break Free! Anti-Social Behavior and Persuasion Ability of LLMs in Multi-Agent Settings with Social Hierarchy (arXiv)
State of AI Report (X)
A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge (arXiv)
Wednesday 9 October
Pixtral 12B (arXiv)
From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis (arXiv)
Thread on compute capacity by Epoch (X)
Strategic AI Governance: Insights from Leading Nations (arXiv)
Tuesday 8 October
Hacked ‘AI Girlfriend’ Data Shows Prompts Describing Child Sexual Abuse (404 Media)
OpenAI and Hearst Content Partnership (OpenAI)
OpenAI Leaders Say Microsoft Isn’t Moving Fast Enough to Supply Servers (The Information)
Scaling Core Earnings Measurement with Large Language Models (SSRN)
Scrap REF to ‘save £450m’, says think tank (RPN)
Monday 7 October (and things I missed)
SI’s Post Summit of the Future Plans (SI)
Are you smarter than an LLM (web)
Why I joined AISI by Geoffrey Irving (UK AISI)
Introducing canvas (OpenAI)
Job picks
Some of the interesting (mostly) AI governance roles that I’ve seen advertised in the last week. As usual, it only includes new positions that have been posted since the last TWIE (but lots of the jobs from the previous edition are still open).
Head of Information Security, UK Government, AI Safety Institute (London)
Research Scientist, Frontier Safety and Governance, Google DeepMind (London or New York)
Communications Contractor, Centre for Long-Term Resilience (London)
Operational Ethics and Safety Manager, Google DeepMind (London)
News Editor, UK, The Verge (London)