8 Comments
Callum Hackett:

> I take Kyle to be intelligent and I don’t know what he has to gain by putting these remarks out there if he doesn’t believe them

I think the mundane reality is that the government is in a mad scramble to promise economic growth, though they have absolutely no ability to deliver it within the system as it stands. In this state of desperation, AGI is a lottery ticket - they know they're not going to win but the reason they say they will is that they've simply got nothing else to offer.

Herbie Bradley:

In the name of contrarianism:

- What if AGI is distillable to a small model, and thus you don't need much compute to automate an economy? What if it doesn't matter that much from a sovereignty perspective if you are inference-autarkic—maybe you just want some critical uses (defense, healthcare, etc) to be doing inference on sovereign territory (counterargument: demand might be such that it's hard to get compute abroad).

- Agreed directionally on defense but I don't think AGI changes the dynamics of warfare as much as the nuclear bomb did. It's not a decisive advantage militarily.

- Directionally seems right but I don't think threshold-based governance works because there are in fact no clear thresholds (both under evaluation and as ideal decision points); they are ephemeral.

- I don't think AISI can do much more with 5x more budget, let alone 10x (or if it had that, it probably gets spent mostly on grants). Money isn't really the bottleneck for most state capacity. ARIA is focused on long-term fundamental research, for a <5y AGI timeline you need the thing after DARPA in the pipeline, perhaps "British In-Q-Tel". Coincidentally: https://www.politico.eu/article/britain-secretive-spy-fund-out-shadows-nssif-gchq-mi5-mi6/

- Not sure regulation works even under <5y AGI timelines, but regardless giving AISI a Royal Charter entails a spin-out into an Arms-Length Body. But under such short AGI timelines, perhaps we just want AISI to be in DSIT where it can more easily connect to Ministers?

Sam Hogg:

Very good piece. The Foreign Secretary has also made some comments regarding AGI and the arrival of AI in the foreign policy space, though they were also a little vague for my liking. https://x.com/Discoplomacy/status/1902841053931078024

Thanks for crediting my tweet in the intro too.

Loic Fremond:

Don't forget planning reform. All of this is predicated on us also being able to meet the exponentially growing energy needs of all this compute. Given the recently accepted amendments to the Planning and Infrastructure Bill, I'm not quite sure how that will happen.

Andrew Sabisky:

I think it's fairly plausible Kyle thinks we will have some kind of AGI by 2030, but is very skeptical about the mass labour market and societal impacts that some people like Dario predict. This is a perfectly coherent worldview, if so. After all, Old Man GPT-4 was very plausibly "AGI": it's very very general, and somewhat intelligent (obviously much less so than O3, Kimi2, R1, but that's as you would expect). But GPT-4 was and is basically economically useless, for reasons that are perfectly obvious to everyone (that's why everyone is pouring so much time and money into making smarter models!).

Jack Wiseman:

great piece

Lawrence Lundy-Bryan:

Labour market dislocation is the real question for the 2029 election. If a large section of the population feels underemployed (especially graduates in white-collar work), or feels like their job is always under threat, guess who they will turn to?

John Giudice:

Harry Law - When discussing AGI, it would be helpful for you to share your definition of the milestone that says we have AGI. I have come to believe that using the term AGI is counterproductive to meaningful dialogue and planning. Rather, I think of it as AI implementations continually getting smarter and more capable on an incremental basis. There will never be a "light switch" moment where AGI suddenly appears.

For me that means we continually need to work out approaches and plans to benefit from smarter AI implementations, as well as continually enhancing their safety. It is an ongoing and forever situation.

Also, it is very unlikely there will be a moment when global societies reach uniform agreement on anything related to AI implementations and safety. This too is a critical challenge all societies and countries need to be prepared for.

Happy to discuss with others if this is an interesting topic...
