In Either/Or, Søren Kierkegaard tells us about life’s great enemy. It’s not pain or suffering, sin or despair. It’s not even failure or death. As it turns out, our true nemesis is boredom. He calls it ‘the root of all evil’ because it names a mode of living in which the habits that once formed character cease to do so. Past actions stop lending significance to the same actions in the present, stripping away the sense of progress or growth that previously defined them.
Soon enough, you don’t recognise yourself in your own decisions. You might counter that maybe that’s okay, as you can always find new routines and habits. But not so fast. Kierkegaard also thinks boredom applies to the new as well as the old. Fresh experiences are meant to unsettle, to introduce friction, and to make us reconsider what we thought we knew. Under boredom, novelty is stripped of that power. It becomes a distraction, something briefly stimulating but quickly assimilated without any change in how you see yourself or the world.
The aesthetic life craves stimulation, but diversion has an annoying tendency to harden into the repetitive or mundane; the ethical life depends on habit, but habit without true renewal likes to decay into tedium. Kierkegaard’s salve is the ‘rotation method’: cultivating the capacity to let the old appear new and the new acquire depth.
In practice, that means learning to vary our perspective rather than our circumstances, to approach the same experience from new angles, and to linger with new experiences long enough for them to take root. It can be as simple as rereading a favourite book and noticing what strikes you differently, or taking a familiar walk with a new locus of attention. It can mean resisting the impulse to scroll for something ‘new’ and giving time for the novel thing you just discovered to mature into a deeper form of understanding.
What is at stake here is something like the freedom to truly know who you are and how to live. After all, freedom is not only the power to choose but the power to recognise yourself in your choices. By this reading, we might say a person is only practising true autonomy — the cultivated capacity to deliberate well about how to live — if their judgments are the sort they would continue to endorse after putting them to the question.
Philosophers call this ‘erotetic equilibrium’, the idea that a judgment counts as autonomous only if it can withstand the twin forces of reflection and deliberation. As Kierkegaard sees it, the threat of boredom is that it compromises this equilibrium: the familiar no longer feels grounded and the novel no longer feels renewing. Put in other terms: autonomy requires a rhythm between familiarity (in the form of stable, habituated judgment) and novelty (through disruptive experiences that reopen old encounters).
Today, our lives are governed by technology. We spend our time listening to music or watching videos served to us by algorithms, with the average person logging roughly seven hours a day looking at screens of various sizes. Not all of our time spent on a phone or a laptop is shaped by AI, but social media alone accounts for something like two hours of every day spent at the mercy of recommender systems that govern what we experience.
Algorithmic recommendations are a boon for Kierkegaardian boredom. They interrupt the rhythm between old and new by systematically skewing novelty in favour of the already-known and familiarity towards the already-consumed. In doing so, they erode the cycle of disruption and renewal that autonomy requires, leaving us with choices that are neither truly tested nor truly sustained.
Boredom machines
Most recommender systems optimise for adjacency. Spotify’s ‘Discover Weekly’, YouTube’s ‘Up Next’, and TikTok’s ‘For You’ feed are all built to keep you coming back for more. The next item in the carousel is chosen because it is maximally like something you already enjoyed.
What looks like freshness is too often a very narrow kind of variation. These systems work by mapping your past behaviour into a dense space of similarities, then drawing from the tight cluster around it. The result is that the ‘new’ is different enough to feel like discovery, but close enough never to rock the boat.
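To make the mechanics concrete, here is a minimal sketch of that adjacency logic in Python. The item names, embeddings, and scoring are all invented for illustration; production systems use learned representations and far more elaborate ranking, but the underlying move is the same: score candidates by similarity to what was already consumed, and serve the closest.

```python
import numpy as np

# Toy item embeddings. In a real system these would be learned from
# listening or viewing behaviour; the values here are made up.
items = {
    "indie_folk_a": np.array([0.90, 0.10, 0.00]),
    "indie_folk_b": np.array([0.85, 0.15, 0.05]),
    "ambient_c":    np.array([0.20, 0.90, 0.10]),
    "noise_d":      np.array([0.05, 0.10, 0.95]),
}

def cosine(u, v):
    """Similarity between two embeddings: 1.0 means 'practically identical'."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(history, k=2):
    """Average the user's history into a taste centroid, then return the
    k unseen items closest to it: 'new' items drawn from the tight
    cluster around what was already consumed."""
    centroid = np.mean([items[name] for name in history], axis=0)
    candidates = [(name, cosine(centroid, vec))
                  for name, vec in items.items() if name not in history]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:k]

# A listener of indie folk is served more indie folk; the genuinely
# alien item ('noise_d') ranks last and rarely surfaces.
print(recommend(["indie_folk_a"]))
```

Run on a history of indie folk, the top recommendation is more indie folk, while the genuinely unfamiliar item sits at the bottom of the ranking. That is the narrow variation described above: discovery that never strays far from the centroid.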
That’s fine in some instances. Nobody needs their dinner playlist to feel like a challenge, and most people welcome a gentle learning curve when picking up a new app or tool. But when this becomes a dominant pattern of cultural life, the cost is that novelty never truly unsettles us and familiarity never deepens into mastery. We stay entertained, yes, but we do not grow.
Humans need what we might call the shock that opens the question. For Kant, this takes the form of the sublime, when the mind confronts something that does not fit its existing categories. The sublime is unsettling because it resists assimilation, yet in straining to make sense of it we discover that our capacity for reason and judgment exceeds what can be taken in by the senses. Without these disturbances, our judgments may never find themselves truly tested against what lies on the other side of what we know.
Heidegger describes Unheimlichkeit, the uncanny moment when the familiar suddenly feels strange. Such moments matter because they interrupt the everyday flow and disclose possibilities we had previously ignored. Human potential requires this estrangement because we only know that our routines and preferences are genuinely ours when they face this sort of interruption.
We might comfort ourselves with the thought that, even if algorithmic novelty isn’t always new, these systems at least allow us to deepen our familiarity with the things we already love. But that type of familiarity rarely matures into depth. Instagram recirculates the same handful of recipes, fitness routines, or travel spots, but the repetition doesn’t necessarily make you a better cook, athlete, or traveller. Familiarity here is broad but shallow, a surface-level exposure rather than the patient discipline that ripens into something more meaningful.
Aristotle argues that virtue flows from habituation, the repeated practice of good actions until they become second nature. True familiarity approaches stability and depth only when repetition is combined with the good work of attention and discipline. The kind of algorithmic repetition we live with today looks like habit because it gives us the same patterns over and over, but it lacks habit’s substance, because the residue of experience is rarely incorporated into our character.
That’s because the logic of retention rewards what is easiest to consume again, not what is hardest to master. The system is designed to serve us repetition calibrated to hold attention rather than to cultivate depth. Where Aristotle thought habit was the slow conversion of action into character, platforms like to hold us in place rather than carry us forward as persons.
One way to think about this idea is the relationship between first and second order preferences, the difference between ‘man, I would love a cigarette’ and ‘I wish I could stop smoking’. First order desires are immediate and situational, while second order desires are reflective, the stance you take on your own wants. In this framing, autonomy is not just acting on a first order preference but being able to align yourself with the second order judgments you endorse about the life you want to live.
Habituation is a bridge between these levels, with repeated actions gradually harmonising impulses and sustained reflection building character. The problem with algorithmic culture is that it breaks this connection. It gratifies first order preferences without giving them the friction that might force second order reflection. Of course, the connection doesn’t break every time. Even within algorithmic culture one can pull away and use the same tools to pursue depth, like the person who studies guitar through YouTube tutorials or who joins an online community and learns to cook.
But these are acts of resistance, not dominant kinds of engagement.
Algorithmic culture dampens novelty and familiarity on their own terms, but what matters most is the negotiation of these two forces. It is in the back-and-forth between disruption and stability that our choices become truly our own. When novelty tests our habits and familiarity steadies our responses, we gain the chance to endorse our lives rather than simply living through them.
John Dewey, writing on education, put forward the idea that growth depends on the dual conditions of continuity and interaction. We develop when new experiences disrupt us, but only if they can also be tied back into what we already know. In this view, knowing oneself is about learning to weave these two conditions into a life that we can call our own.
We need encounters that unsettle and habits that hold, moments that throw our judgments into question and practices that let them take root. To lose that balance is to risk the boredom Kierkegaard feared: a life rich in stimulation yet poor in meaning.
A different beast
Not all AI is made equal. We might interact with recommender systems on a daily basis, but many of us aren’t even aware that we’re doing so. Large language models feel different because they confront us more directly. They speak to us, take instructions, and generate responses to our feedback in the moment. These systems aren’t necessarily more fluid, but we are more aware of how these artefacts respond to human interaction.
We consult with them by treating the model as an oracle for information or advice, and we collaborate with them by enlisting them as a partner in drafting, editing, or brainstorming. But we also let LLMs play the parts we assign them — as tutor, friend, or opponent — and even hand off some tasks to them altogether. These modes of interaction promise control (we set the terms of the interaction) and companionship (we enter into a dialogue with the machine).
At first blush, they look like autonomy-preserving custodians of algorithmic culture because we manage inputs and decide whether to accept or reject what is offered. Yet we know that the range of responses is bounded by training data, defaults, and hidden constraints that shape the choices we appear to make. I have lost count of the number of times models surface the same idea in different contexts, and we are all too aware of the linguistic tics that make LLM-generated text stand out.
Large models can in principle explore a giant space of possibilities, but in practice they tend to organise around the same set of patterns. At the point of use, these systems gratify first order preferences in a way that doesn’t guarantee the type of self-reflection we need to grow. A student may turn to a model in order to ‘learn,’ but the act of outsourcing the work can shortcut the struggle through which understanding emerges. The immediate preference is satisfied, but the deeper preference — to know, to master, or to grow — may not get a look-in.
More deliberate modes of use can help. We might say that a student who asks for a sequence of questions to work through, or a writer who uses the model to expose weaknesses in their own draft, is using the tool to test and refine their deeper commitments. In these cases the model becomes a means of holding ourselves to account, of forcing our immediate desires to answer to the kind of person we aspire to be.
At its best, AI returns our words in unfamiliar shapes and forces us to clarify what we mean. In those moments the familiar is unsettled and the novel anchored. At its worst, the same system may take us closer to ‘autocomplete for life’, gratifying first order desires while leaving our deeper commitments untouched. Instead of testing our judgments, they tell us what we want to hear.
Sam Altman acknowledged these two types of usage in a recent podcast: “There are some people who are clearly using ChatGPT not to think. And there are some people using it to think more than they ever have before. I am hopeful that we'll be able to build the tool in a way that encourages them to stretch their bandwidth a little more”.
The idea of rotation is useful here. Language models help us grow when they take something we already know and reframe it in a way that brings forth a new perspective. When you get a response, it’s better to ask the LLM to defend, refine, or contradict itself than to take it as gospel. That might seem obvious to some, but last year’s ‘Claude Boys’ phenomenon reminds us that people don’t always like to do that kind of work.
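As a rough sketch of what that rotation might look like in practice, here is an illustrative Python loop. The ask function is a placeholder for whichever chat client you actually use, and the prompts are assumptions, not a prescribed method; the point is the shape of the exchange, not any particular provider.

```python
def ask(messages):
    """Placeholder for a real chat API call. Swap in your client of
    choice; here we just echo so the sketch runs end to end."""
    return f"[model reply to: {messages[-1]['content']!r}]"

def rotate(question):
    """Instead of accepting the first answer, put it to the question:
    make the model defend, then contradict, its own response."""
    history = [{"role": "user", "content": question}]
    answer = ask(history)

    history += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Defend that answer against the strongest objection you can think of."},
    ]
    defence = ask(history)

    history += [
        {"role": "assistant", "content": defence},
        {"role": "user", "content": "Now argue the opposite position as persuasively as you can."},
    ]
    contradiction = ask(history)

    # The model never gets the last word: the reader is left holding
    # three conflicting outputs and has to adjudicate between them.
    return {"answer": answer, "defence": defence, "contradiction": contradiction}

print(rotate("Is boredom really the root of all evil?"))
```

The design choice worth noticing is that the loop ends with unresolved tension rather than a verdict, which is precisely the friction the essay argues first order gratification tends to remove.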
These ideas encourage us to remember that autonomy depends on the rhythm between novelty and familiarity. We need habits that hold and shocks that unsettle, practices that shape character and encounters that open the question. Large language models are compatible with that account of autonomy, but only if we use them judiciously. They can gratify our first order desires in ways that leave us unchanged, or they can be turned into partners that test our ideas, sharpen our commitments, and force us to see ourselves anew.
The difference lies partly in us, but also in the technology itself. Some systems make renewal harder, though not impossible, while others open the space more readily. To live well is still to weave the known and the unknown into a life we can endorse, and that remains our task, whatever tools we choose to do it.