The UK expects AGI in four years. Why doesn’t it act like it?
Trust me bro Westminster edition
Peter Kyle is the UK Secretary of State for Science, Innovation and Technology. Responsible for directing the machinery of government to foster and react to important developments in these industries, Kyle is a senior politician whose voice carries weight in Westminster.
He also thinks that artificial general intelligence (AGI) is coming. And soon. On a recent podcast, Kyle said: “I think by the end of this parliament we're going to be knocking on artificial general intelligence.” For those not familiar with the timetable of the British political system, that puts the arrival of AGI in 2029.
It’s a strange statement that can be read in a few ways. Maybe Kyle doesn’t really believe what he’s saying or maybe he’s parroting lines that he’s heard from American politicians. I take Kyle to be intelligent and I don’t know what he has to gain by putting these remarks out there if he doesn’t believe them, so I’d be minded to give him the benefit of the doubt and grant that he believes his own forecast.
Another reading might stress that Kyle has a very specific view of what ‘AGI’ means that doesn’t really correspond to how people generally think about the technology. He followed up his statement with the suitably cryptic: “I think in certain areas, it will have been achieved,” which suggests he’s talking about systems with human-level performance in some circumstances but not in others. The problem with this interpretation is that it glosses over the ‘general’ in artificial general intelligence (not to mention that today’s systems already exceed humans in some domains).
Option number three is that Kyle thinks of AGI in broadly the same way as most other people in the field — a system capable of performing the majority of cognitive tasks that a human can — and does indeed think we should expect a system like this in just a few years.
For the purposes of this post, I am going to assume that this interpretation is correct: Kyle knows what AGI is and he believes what he’s saying. And since he is the person responsible for government technology policy, we should probably treat these remarks with the respect they deserve.
One way to do that is to ask an obvious but important question. If the UK government thinks AGI is coming within the next five years, is it behaving with the seriousness we should expect to prepare for its arrival?
Of course not.
That’s not to dismiss the good work done by the AI Security Institute (AISI) or those behind the AI Opportunities Action Plan, but rather to point out that even efforts that move faster than the glacial speed of government aren’t enough if the Secretary of State’s timelines are correct.
Five tests
What follows are some loose thoughts about how a UK government would behave if it actually believed that AGI was about to arrive. I’ve structured these as ‘tests’ around a few of the policy areas that I expect to matter for the successful deployment of AGI: compute to make the models tick, a national security posture that reflects the reality of a world with AGI, efforts to harden the country against economic and social shocks, moves to bolster state capacity, and regulations mandating various governance requirements.
It should go without saying that pretty much all of these have an extremely low chance of happening. The point of this exercise isn’t to blackpill anyone but to show the extent of the disconnect between (a) believing AGI is just around the corner and (b) the policies adopted by the government.
Compute
If AGI appears anytime soon, it’s going to be based on a version of the large model paradigm. Whatever specific form it takes, at the very least we’re talking about a massive connectionist model that needs a great deal of compute to develop and serve to users.
These both matter for the UK if AGI is as close as Kyle thinks. While there are no national champions like Mistral in France to compete with American or Chinese labs, we should expect that ‘BritGPT’ will make an almighty comeback if the government feels confident it could eventually become BritAGI.
But the main ticket is the juice for usage, which is sometimes referred to as ‘test time’ or ‘inference’ compute. If we have systems that can do pretty much anything a remote worker can, the main factor constraining their use is access to compute. Some of that will come from overseas, but any government that really believed compute was about to become king would want to have a supply at home for a bunch of economic and political reasons.
How’s the UK doing on that front? Somewhere between terrible and bad. Despite the fact that the powers that be have accepted the AI Opportunities Action Plan’s proposal to increase compute by a factor of 20 by 2030, a quick back-of-the-envelope calculation suggests that would still leave Britain with well under 4% of the total raw horsepower the American public and private sectors already have on the books. Even the Compute Roadmap, which sounds impressive at first blush, talks up total investment that is about half of what Microsoft plans to spend independently in 2025.
If the government really thought AGI was five years away, it would be looking to increase compute by 100x from the current floor. This would still be on the light side, but might take us from ‘bad’ to ‘somewhat bad’ in the context of the UK’s size relative to Uncle Sam. Clearly that’s easier said than done, but one way forward would probably include a combination of tax breaks, accelerating the rollout of proposed AI Growth Zones, and good old-fashioned state investment.
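To make the arithmetic explicit, here is a minimal sketch using only the figures above. It assumes the ‘well under 4%’ figure is taken at face value as a 4% ceiling and holds American compute fixed, which flatters the UK since US capacity will keep growing.

```python
# Rough back-of-the-envelope sketch using only the figures quoted above.
# Assumption: treat "well under 4%" as a 4% ceiling and hold US compute fixed,
# which flatters the UK because American capacity will keep growing too.

us_share_after_20x = 0.04                  # 20x the current UK base still leaves < 4% of US compute
uk_base_share = us_share_after_20x / 20    # implied current UK share: < 0.2% of US compute

for multiplier in (20, 100):
    share = uk_base_share * multiplier
    print(f"{multiplier}x current UK compute -> under {share:.0%} of today's US total")

# 20x  -> under 4%  (the Action Plan target)
# 100x -> under 20% (still light, but closer to the UK's size relative to the US)
```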
Defence
If AGI really was just around the corner, today’s geopolitical settlement may be in for a shock. As I recently wrote for Time, complexity is the strategic currency of war in the information age — and AGI is a complexity accelerator. But how this shakes out in practice depends on who makes the technology and where it lives.
A recent report from the RAND Corporation explores this idea in more detail. The authors sketch eight different geopolitical futures including one where the United States uses AGI to usher in a moment of unipolar power, one where China does the same, one where AGI is shared amongst liberal democratic powers, and one where the machine goes loco and takes over.
These scenarios are based on the idea that AGI could be a powerful tool for organising warfare, marshalling materiel and controlling robotic hardware, and decoding enemy plans. If you honestly believed AGI was just around the corner, it is safe to say the 2025 Strategic Defence Review is already a shade out of date. Concepts like a ‘digital targeting web’ or a ‘Digital Warfighter group’ all presuppose that humans remain the ultimate decision-makers, strategists, and actors, an assumption the age of AGI would overturn.
Preparedness
A government that really believed AGI was set to arrive before the decade is out would be treating it like a national emergency. The first order of business would be to pin AGI to the mast of state in a way that survives elections. Westminster has done this before (e.g. granting the Bank of England its independence), so we’re not exactly in uncharted waters. We might imagine an AGI Commission that would:
Regularly report to Parliament on how close we are to certain thresholds by assessing capability evaluations, lab disclosures, and macro trends.
License and inspect anything above a defined compute or capability threshold, probably working in combination with an AISI with teeth (more below).
Plan for the downstream consequences — labour shocks, social changes, threat acceleration — and hand government a menu of possible responses.
Every serious scenario in which AGI can do ‘the majority of cognitive tasks a human can’ ends with a labour market that looks as if a neutron bomb has gone off. At the very least, the UK should probably be trialling regional universal basic income schemes and looking long and hard at corporate tax so some income from frontier models flows to the exchequer.
This is to say nothing of the many other unexpected ways that a world with AGI could spell trouble for the state. Maybe it’s cyber or bio attacks or maybe it’s an algorithmic arbitrage engine that shorts the pound into free-fall. I don’t know. No one does for sure. The point is that once you build AGI, the menu of unpleasant surprises may multiply faster than any Whitehall risk register can keep up.
Planning is already happening inside government, but right now no-one really cares or is paying attention. My understanding from those near the action is that the civil service has concluded that if any of these violent risks materialise there’s nothing the state can do.
State capacity
But in some ways, the UK is doing more than most. The AI Security Institute hands out grants, tests models, and has spun up independent safety and interpretability programmes staffed by some impressive CVs. Alas, it’s still small change. We are talking about sums that are similar to what Google spends on catering over the same period.
If we buy Kyle’s timelines, we should be increasing AISI’s budget by between a factor of 10 and 100. That kind of jump for a group with a £240M budget sounds crazy until you remember the frontier labs burn through billions of dollars of cash every year. If ministers are serious about peering inside a system that may shortly outsmart them, billions, not millions, have to be the unit of account.
Likewise, the UK has its Advanced Research and Invention Agency (ARIA), based on the American DARPA model. The agency has about £1bn to spend over a couple of years, some of which already goes towards promising approaches for making AI safe. Scale that up by the same order of magnitude and you are in exactly the territory you need to be in if you want the state to steer the terms on which AGI arrives.
Could the Treasury stomach numbers like that? Not really, unless they pulled the old trick of treating the cash as defence spending. But that won’t happen, because the headline figures are really a referendum on belief. If you keep budgets in the tens of millions, you tacitly confess you do not in fact expect AGI by 2029.
Regulation
The AI Security Institute is doing good work. Its budget is bigger than that of its American counterpart, with which it has agreements to co-test models, and it has inspired the formation of AISIs the world over. But we should remember that while labs let AISI test their models, they aren’t compelled to by law.
They don’t give AISI access to every model, and they don’t have any responsibility to make changes even if testing finds something wrong. If ministers really believe an AGI is about to walk through the door, we might expect them to do something other than leaving safety to goodwill.
If they were treating governance seriously, they would give AISI a Royal Charter with powers to match. A charter makes it harder for a future government to quietly trim its wings; inspector powers let it enter labs uninvited, run its own scripts, and — if the model fails certain tests — issue a stop order. Of course, there is simply no way these steps will be taken without American backing, but a charter would still be a signal of intent that backs up short timelines.
Ministers had drafted a ‘frontier-model’ safety bill intended for introduction earlier this year, but they shelved it in February to better align with the new US administration. Officials now talk about a broader ‘AI Bill’ to be introduced in the future, but no one really knows what it is likely to include.
Honesty is the best policy
For the record, my own timelines for AGI are longer than four years. But if I were running the country, and if my timelines were as short as Kyle’s, you can bet I’d be implementing policies that reflected the logical consequences of my beliefs.
The UK government is forecasting a technological event on the scale of the steam engine, then responding as if it were a smartphone upgrade. Either the state’s machinery must accelerate to match the timetable, or the timetable is made up.
If ministers truly expect to be ‘knocking on AGI’ within one parliamentary term, then the compute, safety science, resilience measures, and legal guardrails have to scale accordingly. If they can’t or won’t do that, maybe they should admit that four years is a fantasy.
> I take Kyle to be intelligent and I don’t know what he has to gain by putting these remarks out there if he doesn’t believe them
I think the mundane reality is that the government is in a mad scramble to promise economic growth, though they have absolutely no ability to deliver it within the system as it stands. In this state of desperation, AGI is a lottery ticket: they know they're not going to win, but they say they will because they've simply got nothing else to offer.
In the name of contrarianism:
- What if AGI is distillable to a small model, and thus you don't need much compute to automate an economy? What if it doesn't matter that much from a sovereignty perspective whether you are inference-autarkic? Maybe you just want some critical uses (defense, healthcare, etc.) to be doing inference on sovereign territory (counterargument: demand might be such that it's hard to get compute abroad).
- Agreed directionally on defense but I don't think AGI changes the dynamics of warfare as much as the nuclear bomb did. It's not a decisive advantage militarily.
- Directionally seems right but I don't think threshold-based governance works because there are in fact no clear thresholds (both under evaluation and as ideal decision points); they are ephemeral.
- I don't think AISI can do much more with 5x more budget, let alone 10x (or if it had that, it would probably get spent mostly on grants). Money isn't really the bottleneck for most state capacity. ARIA is focused on long-term fundamental research; for a <5y AGI timeline you need the thing after DARPA in the pipeline, perhaps a "British In-Q-Tel". Coincidentally: https://www.politico.eu/article/britain-secretive-spy-fund-out-shadows-nssif-gchq-mi5-mi6/
- Not sure regulation works even under <5y AGI timelines, but regardless, giving AISI a Royal Charter entails spinning it out into an arm's-length body. But under such short AGI timelines, perhaps we just want AISI to stay inside DSIT, where it can more easily connect to Ministers?