AI developers have a style. Badge logos and blocky text are a given, but even promotional materials have converged on the same type of vibe. The typical advert uses some combination of a smiling person talking to a phone, knowing replies by the assistant delivered in a friendly patter, and the dum-tsh-dum of a snare drum as the camera skips from bike-shed to bistro.
Anthropic’s recent effort is better than most, jettisoning the happy-go-lucky aesthetic for a mish-mash of falling pianos, country road-trips, neolithic cave paintings, deep sea exploration, and stellar phenomena. The idea is that AI is useful in the places you can imagine and in many others you can’t, that despite the doom and gloom so pervasive in Western societies “there has never been a better time to have a problem”.
That’s certainly true in some respects. Today’s models are more or less the stuff of pulp sci-fi dreams, friendly golems that have something to say about whatever question you pose (so long as you are careful to abide by the usage policy). Notwithstanding the protests of those who think the entire AI project is made of smoke and mirrors, millions of people seem to agree that they have problems that thinking machines can solve.
But in another sense, there has never been a worse time to have a problem. After all, the point of problems is to work them out ourselves rather than have someone solve them for us. As the old saying goes: “If you give a man a fish, he may eat for a day. If you teach a man to fish, he eats for a lifetime”.
Just doing stuff
The aphorism of the moment holds that “you can just do stuff”. A favourite of Silicon Valley, the phrase embodies a strain of thinking that distinguishes between “high agency” people and those who don’t have the right stuff. On one level this is a truism. You can of course do many things, some of which are meaningful and some of which are not. It’s a great time to have a problem, you just need to roll up your sleeves and get stuck in.
Agency here is about freedom, the leeway to do what you want and act on your desires. Freedom, though, has a funny habit of decaying into licence, the doing of whatever comes first to hand without reflection on its worth. If you spend your time “just doing” pointless things, that doesn’t strike me as particularly agentic.
What makes the mantra seductive is that it spares us the labour of asking which things are worth doing. In a culture obsessed with output, action becomes its own reward. Better to do something than nothing, to be in the arena than sitting in the crowd.
The cult of agency is a counterweight to a world of bureaucracies and stasis, a place where “nothing ever happens”. The ability to act at all can feel like a triumph, and so to take action is to stick one in the eye of institutional inertia. The language of agency thrives in certain corners of the internet because it assures us that we have what it takes to strike out somewhere, anywhere, on our own.
A more generous reading of the “just do stuff” meme is that its logic implies choice alone is not enough, that you do have to pick and choose wisely. A supermarket aisle may present a hundred brands of the same product without making us wiser about what we want or why we want it. The high-agency move would be to pick the perfect item, or better yet start your own grocery chain from first principles.
We might say that action should lead somewhere worth going, that movement is a means not an end. If this is true, then we know why the ideal of agency feels incomplete: it describes the ability to act but not the standard by which action is judged. Without that end, it’s easy to mistake momentum for direction, novelty for growth, and busyness for a life well-lived.
So agency needs direction, but how do we know where to focus our efforts? You figure it out by knowing the kind of person you are today and the kind of person you want to become tomorrow. This is better, but now we’re no longer talking about agency in a strict sense. We’re in the land of autonomy, the cultivated capacity to live well by reflecting on the type of person we want to be.
Autonomy is about deciding which things are worth doing and then binding yourself to that decision when appetite, novelty, or fatigue threaten to take you somewhere else. It’s about not-doing as much as it is doing. To live with autonomy is to set the rules by which competing desires are brought to order so a person can act for the better.
You practise autonomy by noticing your impulses and testing them against a standard you chose. One way to imagine the split is to think about first and second order preferences, where the former concerns what you want right now and the latter describes the kind of person you want to be. If you “just do stuff” in service of your proximate wants, don’t be surprised when you feel something is still missing even after you’ve founded that company or written that book.
The shape of problems
Figuring stuff out for yourself has a practical element (in that it is the condition of knowledge) and a moral element (in that it trains you to become the kind of person you want to be). Plato’s Apology famously gives us Socrates’ claim that “the unexamined life is not worth living”. In this framing, virtue is a product of questioning because it forces us to test our assumptions and to reform our character.
The mathematician George Pólya said solving a problem using “your own means” trains the habits of reason and allows the doer to become more than they were. What he meant was that the value of problem-solving lies in the struggle, that each attempt at reasoning leaves behind the residue of skill. It gives you a sharper sense of what counts as a good solution and a clearer picture of what kind of thinker you are, so that the next problem, and the one after that, gets easier.
When we ask an LLM to solve our problems, we get a serviceable answer at the cost of truly understanding how we got there. Knowing, in other words, is not the same as growing. Every time we use ChatGPT to work something out for us we deprive ourselves of the opportunity to become a little bit wiser. People are already outsourcing cognitive labour to large language models with little regard for debates about whether AI can “think” or not.
The rub is that AI doesn’t only help us do things. Clearly in some instances it does, like teaching us a new skill or surfacing sources of information that we might not have seen. But it also proposes what to do, how to do it, and why we should care. This shift moves us from assistance (a tool serving chosen ends) toward deference (something that proposes ends we adopt without thinking).
Models choose what you see first, how options are ordered, which interpretations are offered as “reasonable”, and which are not even offered for consideration in the first place. A recommended route, a suggested reply, or a pre-filled summary frame the terms of engagement by providing the architecture under which we make choices. They don’t always pick what you eat, but they forever set the menu.
Systems infer objectives from us and optimise toward them. Often that takes the form of maximising engagement, even though large language models are not explicitly designed with this goal in mind. Their stickiness in part flows from the post-training procedures designed to turn the base model into a chat assistant. It’s pretty easy for “helpful, honest, and harmless” to become “the kind of thing I quite like talking to all day”.
You might say that this problem is something that all technologies face, that we’ve been here before and the worries were overblown. The pen doesn’t tell us what to write any more than the calculator tells us what to add or subtract, right? The difference is that while all technologies in some sense structure our actions (the wheel made certain journeys possible and cartography influenced patterns of trade), we don’t outsource the habit of thinking to these artefacts.
It’s also the case that some off-loading is beneficial. Humans have limited cognitive bandwidth, and spending it on memorising every route or re-deriving calculus is probably not the best use of that mental currency. The trick is to distinguish between delegation that clears space for higher forms of judgment and delegation that spells trouble for the work of judgment in the first place.
The classic justified true belief (JTB) theory of knowledge describes its subject as a mental representation that corresponds to reality, one that is underwritten by a justification. It’s essentially a mental mirror of the world that is true and warranted. Assuming that JTB tells us something useful about how knowledge is made, the problem that AI poses is clear enough.
AI can deliver a proposition that happens to be true, but if you have not traced the steps, weighed the reasons, and ruled out the alternatives yourself, then that knowledge isn’t really yours. I’m not so worried about machines making mistakes, but I do wonder whether the act of deference erodes the habits that let us truly say we know.
We might even say that autonomy reveals itself most clearly when tested against the temptation of deference. AI endangers self-rule but it also provides the conditions under which it can be tested, offering each of us a chance to practise rejecting the easy answer and favouring the harder work of thinking.
I use ChatGPT or Claude most days, and I’m probably as guilty as anyone of asking the robot about things I could have figured out for myself. I don’t try to police my use, but I do try to think deliberately about it. One difference lies between letting it clear space and letting it fill space for me. The temptation is always toward the latter, because it’s easier to accept answers than to wrestle with problems.
But to live well with machines is to insist that they serve our efforts at growth rather than replace them, that they enlarge the field for judgment instead of shrinking it. The task of becoming the person you want to be, the kind who can judge, discern, and act, cannot be outsourced. It has to be practised by each of us, with all the false starts and frustrations that practice entails.