One day, hopefully in the not-too-distant future, I’d like to work on this newsletter full time. I want to keep writing about cultural and intellectual history, and I’d love to complete fifty entries in the AI Histories series (47 more to go). To help me do that, all you have to do is pledge $5 to support the project before I turn on paid subscriptions sometime later this year. It’s free now, but your pledge will be hugely important to the future of Learning From Examples.
In 2018, Henry Kissinger wrote about AI in The Atlantic.
He was concerned about our place in a world filled with machines smarter than we are, and wondered whether their coming heralded the end of days for the long age of rationalism.
‘What,’ he asked, ‘will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?’
Kissinger is describing a kind of extreme deskilling, one that eventually atrophies the human capacity for reason. In a maximalist version of this scenario, most science is conducted by AI and human researchers become a sort of priestly class charged with interpreting the machine.
The idea is that AI is the final product of the Enlightenment, one that eventually succeeds so thoroughly in harnessing its ideals that it eats the project from the inside out. I’ll write about the ‘new Enlightenment’ another time, but in this post I want to focus on AI (by which I mean frontier LLMs) as the heir to the age of reason.
When I think about AI, I don’t think about science. I certainly don’t think about rationality. I think about the weirdness of systems that work for reasons we don’t quite understand. I think about odd exchanges with Claude 3.5 Sonnet and the struggle to figure out what is happening inside the models.
I see AI as a clever but erratic organism, one much better at making connections between philosophical ideas or winging film criticism than at assisting in experimental research. You can see this in the places where AI is used most: writing, art, coding (that’s creative, isn’t it?) and companionship.
Everyone knows the models are good fun and useful for the right things, but even the most confident user is aware that they aren’t the most rational constructs. They give you different answers every time, are liable to go loco, and occasionally make stuff up.
That doesn’t sound like a successor to the Enlightenment to me. No, AI is the descendant of another great tradition. One that prized intuition over logic, mystery over method, and the sublime over the systematic.
AI is a Romantic project.
Deep yearning
The Romantic movement emerged in Europe in the late 18th century. It began in literature and philosophy before eventually finding a home in music, painting, and politics. A reaction to the hard-headed rationalism of the Enlightenment, at its core was a renewed attention to imagination and individual spirit.
Romantic thinkers saw the world as alive. Nature, with its own moods and meanings, became a source of wisdom. Poets like Wordsworth thought there was an underlying truth in the natural world that rationalism could not account for. Philosophers like Schelling believed the cosmos itself was an expression of living thought.
These images were the product of what the historian Eric Hobsbawm called the dual revolution: the political upheaval of the French Revolution and the economic transformation of the Industrial Revolution. Romanticism emerged in their wake. It looked for coherence in a world that felt unstable amidst the flowering of new forms of cultural, political, and economic life.
The Romantics turned to ruins and folklore, to distant lands and ancient texts. It was a project about longing, about what it feels like to look for something that you can’t quite find. The Germans might call it sehnsucht: a yearning for the indefinite.
In painting, the search looked like stormy landscapes, crumbling abbeys, and wandering figures made small by nature. In music, it took the form of swelling emotion heard in the symphonies of Beethoven or the operas of Wagner.
Romanticism lives in the residue of the past. But where other approaches to cultural remembrance, like Classicism, sought to rehabilitate what was lost, inherent in Romantic thought is a weary acceptance that revival is impossible.
These are the ideas that animate the AI project. Neural networks are, of course, both very large and very difficult to understand. They are systems whose inner worlds resist comprehension, digital simulacra bludgeoned into submission by a conditioning regime that would make Pavlov blush.
They are the product of planetary-scale logistics and magnets for trillions of dollars in capital. The systems are digital reservoirs filled to the brim by half a century of information deluge.
We’re drawn to AI for what it says about our place in the world: we are small.
This is what is happening when you hear AI people talk about building a ‘digital god’ or terrifying ‘shoggoths’. It’s not about religion or transcendence in any formal sense. But it is a reaction against intelligence as optimisation, one that gives voice to the urge to find meaning in that which we can’t understand. It’s a Romantic impulse in that it insists on the unknowable.
A version of the same instinct sits behind how people respond to models like Claude 3.5 Sonnet. The model was pretty good, sure, but the vibes were just…better than the competition. That’s probably the result of clever post-training, but it’s curious that Claude 3.7 Sonnet doesn’t quite strike the same chord.
Or take GPT-2. When it was released all the way back in 2019, it wasn’t especially coherent. And it certainly wasn’t particularly useful by today’s standards. What it did have was what I can only describe as a kind of lyricism. It didn’t sound like a person, but it didn’t always sound like a machine either.
It was Romantic. I say that with a little sentimentality, but I do think GPT-2 provoked an emotional resonance that newer models struggle to match despite (or because of?) their polish. That sense of intrigue — not knowing what exactly you like about a half-cooked language model — is the same impulse that made the Romantics tick.
Chasing the sublime
This essay opens with Sadak in Search of the Waters of Oblivion. Painted by John Martin in 1812, the piece shows a man scrambling through a mountainous inferno. He’s alone and desperate, set against a backdrop that reminds us what small things humans are.
It captures what Edmund Burke would call the sublime: something massive, obscure, and overwhelming. A thing that is terrifying and awe-inspiring in equal measure. Size is part of the equation, but the sublime is really about the mind confronting something it can’t fully process.
Towering mountains, violent storms, and dark nights all overwhelmed the senses to produce a kind of thrilling terror. Later thinkers like Kant suggested that the sublime came from the mind grasping its own failure to comprehend the infinite.
It’s a useful concept for reckoning with AI at the limit. Trained on more words than any single person could hope to read in a thousand lifetimes, today’s giant neural networks are systems that wrinkle the brain. To create one, you use ultraviolet light to fabricate chips with billions of features. Then you drop thousands of them in a data centre somewhere and ask them to multiply matrices until the sand starts to think.
The networks themselves exist as mathematical constructs on top of the material world. They contain trillions of parameters adjusted via gradients that reflect patterns in high-dimensional space. If you printed them out, they would take you years to read. Researchers generally don’t know what function or piece of information certain parameters correspond to.
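For scale, here is a rough back-of-envelope sketch of the printing claim. The parameter count and reading rate are assumptions, not figures from any particular model, and the rate is generous:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~31.5 million seconds

params = 1_000_000_000_000  # assume a trillion parameters
rate = 10                   # assume you can skim ten printed values per second

# Total reading time in years at that pace
years = params / rate / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # roughly 3,171 years
```

Even under these charitable assumptions, "years" turns out to be an understatement by three orders of magnitude.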
Interpretability research tries to close this gap by mapping individual neurons or circuits to linguistic, visual, or conceptual patterns. Some progress has been made, with Anthropic recently showing that certain features inside its models activate in response to concepts like cities or characters. But this is the exception, not the rule.
These systems write poetry, draft emails, and explain quantum mechanics, yet their internal structure resists human inspection. They reflect us in their language, but their information processing mechanisms are not our own. We know everything about AI, except how it works.
Romanticism was a critique of modernity. As factories rose and cities swelled, its thinkers questioned the idea that technological progress always led to flourishing. Despite the marvels of technology, they saw a world in which each day made it harder to be human.
It’s a concern you are probably familiar with, one that has been reimagined and repackaged by critics of modernity for some time. Robert Putnam’s Bowling Alone famously charted the collapse of community in late 20th-century America, showing how people had grown increasingly isolated from neighbours, churches, unions, and civic groups.
A little later, Daniel Rodgers’ Age of Fracture argued that the collective frameworks that once structured cultural thought (stuff like shared social norms and stable points of reference) were losing out to the irresistible power of the cult of the individual.
Technology didn’t cause these problems, but it did give them form. As AI models stand in for conversation, companionship, and other things we lack, it is hardly surprising that they generate such animosity. They are part of the long story of what it means to be human in a world increasingly built by machines.
This anxiety is most keenly felt in the disconnect between what AI is and what it does.
We’re talking about systems trained on massive datasets that are capable of superhuman feats of pattern matching. But anyone who’s spent time in a circular conversation with ChatGPT is unlikely to describe it as rational. Not to mention that the models can be easily jailbroken, much to the delight of prompt engineers and the chagrin of developers.
Romantic technology
This is why AI is a Romantic technology. Because it is vast and resists understanding. Because it is emotionally and aesthetically resonant. Because it replays our fantasies about creation and asks us to reckon with our place in the world.
These qualities are responsible for the allergic reaction that some people have to AI. When you see someone write off the entire LLM project as ‘bullshit generators’, it’s because they were looking for a product of the age of reason but got one from the age of romance.
To get the most out of AI, we need to manage our expectations. Treat large models as dilettantes rather than librarians. Better to accept that their value lies in surprise and estrangement and pair them with the drier stuff that keeps us honest.