I hope to eventually work on this newsletter full time. To help me do that—to give you more essays, more history, and more good times—I need your help. All you have to do is pledge $5, so that I know I have your support for the moment I turn on paid subscriptions. We’re currently on track to hit the goal of 100 pledges by May 31, but there’s still much further to go. If you like what you read here, this is the best moment to help Learning From Examples.
Casablanca is one of those films whose reputation precedes it. For anyone unfamiliar, it’s about a guy named Rick who runs a bar in Morocco in the Second World War. Rick drinks whiskey, plays nice with soldiers, and complains about his lost love Ilsa.
She turns up in short order, of course, which gives us the classic line ‘Of all the gin joints in all the towns in all the world, she walks into mine.’ But all is not well. Ilsa is married to someone else.
Her new man is Victor Laszlo, a resistance fighter looking to undermine the Nazi war effort. A lot of moping ensues before Rick—who is now hoping for another shot with Ilsa—eventually decides to help her flee the country with Laszlo.
Casablanca is a good bit of American propaganda. It came out in 1942 having successfully answered a bunch of questions from US officials like: ‘will this picture help win the war?’ Few will be surprised to hear that the Germans are the bad guys who get their just deserts and the Allies are morally upright fellas who come out on top.
The film is a nice example of what happens when a director is asked to toe the line. Rick and Ilsa, we are told, are madly in love. But they don't really act like it. There's almost no physical affection between them, even in the flashbacks when we're meant to believe they're head over heels.
That’s because the makers saw no alternative. Since 1934 the American film industry had operated under the Motion Picture Production Code, a set of self-imposed moral guidelines designed to pre-empt federal censorship. Better known as the Hays Code, the system banned profanity, nudity, and the sympathetic portrayal of crime or adultery. Its function was to present a virtuous version of American life.
When the U.S. entered the Second World War, Washington didn’t need to build a censorship apparatus from scratch. Hollywood had already built one for them. Casablanca was produced within that system, which is why Rick and Ilsa’s affair can be implied but never shown. The Hays Code forbade adultery from being depicted positively, which forces the film to turn romance into ambiguity.
To show an affair would be to endorse it, so the love story becomes elliptical. Rick and Ilsa stare at each other like people remembering dreams. Their lines seem half finished. ‘We’ll always have Paris’ works because the audience is only shown the squeaky clean bits of their time in the city of light.
Rick’s penchant for whining meant that Casablanca never did it for me, but it’s hard to deny that—at least with respect to the love affair—the result is powerful. Passion becomes tension and desire becomes sacrifice. Rick, standing on the airfield, lets Ilsa go because the script won’t let him have her.
The Hays Code was a programme of self-censorship, one that Hollywood imposed on itself because it saw no alternative. By the early 1930s, the film industry was facing a storm of moral outrage from church groups, women's leagues, and state censorship boards for corrupting the public. There was too much sex, too much crime, too many independent women.
Federal regulation began to loom after a string of celebrity scandals agitated lawmakers, much like the ‘video nasties’ craze of the 1980s in the UK that I wrote about a few weeks ago. Fearing the worst, the studios decided to take pre-emptive action to keep the government out and the box office open.
Enter Will Hays, a Presbyterian elder and former U.S. Postmaster General. He was respectable, religious, and well-connected. But perhaps more importantly, he was happy to be the face of Hollywood's clean-up operation.
Under his leadership, the industry adopted rules designed to keep films ‘morally wholesome.' The Code banned profanity, nudity, interracial romance, ‘excessive and lustful kissing,' and any portrayal of crime or adultery that might seem enjoyable. By 1934, the studios agreed to bind themselves to a new enforcement arm, the Production Code Administration, which had the power to deny a film its seal of approval. No seal meant no distribution. The government backed off and religious groups declared victory. The studios kept control as long as they told the right story.
The Hays Code smuggled in a particular view of human nature, one that imagined audiences as easily swayed, morally porous, and incapable of dealing with ambiguity. It saw the movie theatre become a moral classroom where crime was always punished and sex always implied. It should go without saying that the authorities always came out on top.
Civil society was in a contest to decide who gets to define decency and on what grounds. That’s why content moderation is rarely just about content. It’s about the worldview on the other side of the filter and the assumptions about what people can handle, what might corrupt them, what kinds of lives are worth representing. Good intentions or otherwise, behind every censorship regime is a vision of what people are and what they are capable of.
From movie to model
The belief that people are deeply impressionable is the same assumption that animates part of today’s panic over AI. Take misinformation. Powered by generative models that create fake news and recommender systems that serve it en masse, AI is ushering in a new age of epistemic insecurity.
Except it isn't. The claim feels plausible (how could it not, when AI certainly has the capability to do these things?), but pretty much every credible study out there shows otherwise. Next time you hear someone call themselves a ‘misinformation expert', I encourage you to take a look at their data for yourself.
Despite some recent high-profile moves away from moderation, the last ten years generally followed the path laid out by the Hays Code. Control the flow of information. Shape the message. Protect the public from themselves. Epistemic security for all is certainly a laudable goal, but one that history suggests is tricky to legislate for.
The dynamics work differently for recommender systems (more like engines for distribution) and for generative models (more akin to movie-making). The latter, which I'll focus on for the rest of this post, rely on a whole bunch of mechanisms to shape their outputs. Probably the most visible are filters that screen responses before they reach the user, but the entire development process—from pre-training to post-training modifications to alignment mechanisms—determines how a model can act.
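To make that first mechanism concrete, here is a minimal sketch of an output filter that screens a draft response before it reaches the user. It is only an illustration: the keyword check stands in for a learned safety classifier, and none of the names correspond to any lab's real API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Verdict:
    allowed: bool
    reason: str


def moderation_filter(draft: str, banned_topics: List[str]) -> Verdict:
    """Toy stand-in for a learned safety classifier (purely hypothetical)."""
    for topic in banned_topics:
        if topic in draft.lower():
            return Verdict(False, f"matched banned topic: {topic}")
    return Verdict(True, "ok")


def respond(prompt: str, generate: Callable[[str], str], banned_topics: List[str]) -> str:
    draft = generate(prompt)                     # the model's unfiltered draft
    verdict = moderation_filter(draft, banned_topics)
    if not verdict.allowed:
        return "Sorry, I can't help with that."  # the refusal the user actually sees
    return draft
```

The only point of the sketch is that the refusal comes from the pipeline wrapped around the model rather than from the model itself, which is why the rest of the development process matters just as much.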
There are lots of good use-cases for these sorts of tools as they relate to shaping model behaviour. Preventing them from helping people construct biological weapons or conducting sophisticated cyber attacks is straightforwardly good. But aside from uncontroversial red lines, questions about what sort of values an AI model ought to abide by are horribly fraught.
What counts as harm is based on a narrow vision of humanity that assumes we’re all psychologically fragile. Should a model refuse to generate content critical of religion? Should it avoid satire? Should it prioritise safety over openness? The problem with answering these questions is that you inevitably disappoint someone. Even attempts at pluralistic alignment tend to work at the group level rather than the personal.
Whether it's the Hays Code or content filters, every moderation system encodes assumptions about what knowledge is valuable and who can be trusted with it. AI systems generally prize civility over confrontation, consensus over dissent, and safety over ambiguity. The result is a machine that flattens reality into something that is acceptable in the broadest possible terms.
In Discipline and Punish, Foucault argued that power works by shaping what’s normal. By teaching us what questions to ask and how they ought to be answered, moderation contains a certain productive logic. The Hays Code produced a moral universe in which sin could be hinted at but never rewarded, where some desires were real but unspeakable.
Normality manifests in the AI project as models that sound like the HR department. That makes a lot of sense when you remind yourself that the goal of developers is to protect themselves, that they are incentivised to make models that don't ruffle too many feathers. Many decisions are made by small teams under pressure to minimise risk, drawing on institutional reference points that feel legally non-explosive.
The result is a voice that sounds sterile, but it's one that is already starting to change. As people begin to actually use AI, the labs are starting to realise that character counts. That was the bet behind models like Claude 3.5 Sonnet (RIP) and, to a lesser extent, GPT-4.5, which suggests that nobody wants to pay for a model that sounds like it's doing corporate mediation.
As science and technology studies thinkers like to point out, power is about how systems are built. Moderation lives in the pipes. It’s in the training data, the fine-tuning, the user interface, and the invisible refusals. Haraway would call it situated knowledge, a particular view of the world that in this instance we might describe as elite, Western, liberal, and professional.
If knowledge is situated, then so is censorship. The Hays Code reflected the anxieties of a particular slice of American society, usually some combination of white, affluent, pious, and powerful. These groups were asserting a claim about who should shape the moral imagination of the nation, one that was eventually formalised by Hays. What looked like universal decency was in fact a tight set of values elevated through institutional leverage.
Better still, the rules were enforced by the studios themselves. Compliance was built into the production process as scripts were pre-approved, endings rewritten, and scenes cut before they were even filmed. It was a funny kind of censorship machine, one that could do without overt repression because the system had internalised constraint.
The boundaries felt natural because the people creating the content were also managing its limits. Critics generally agree that, far from forcing movie-makers to put out bad films, the Hays Code ran alongside the Golden Age of Hollywood. Some of my all-time favourites, like All About Eve, The Philadelphia Story, and Gone With the Wind, were made under it.
In part, that’s because writers used the rules as a source of inspiration. A murder couldn’t go unpunished, so noir turned fatalism into an art form. Romance couldn’t be consummated, so filmmakers mastered the language of implication. That doesn’t mean censorship is good, but it does mean that old adage—constraint breeds creativity—has something going for it.
We'll always have Paris
There’s a scene near the end of Casablanca where Ilsa says to Rick ‘But what about us?’ He looks at her and replies ‘We'll always have Paris.’ It’s a line that shouldn’t really work, one that ought to feel at best evasive and at worst hollow.
But it lands because it gestures to something we’re not allowed to see. The affair is gone and the love sublimated into memory. It’s a perfect product of the Code: romantic sacrifice wrapped in moral clarity.
Content moderation has never just been about keeping people safe. It’s about who gets to decide what’s acceptable based on a particular view of what people are like and what they need protecting from. That view shapes what gets censored and what gets created.
Like the film studios of the 1930s, today’s model-makers are engaged in a programme of content moderation or self-censorship (delete as appropriate) to protect the public and to protect themselves. Better to impose rules that shape what models can say than risk government intervention.
The Hays Code shows that self-imposed moral constraints, driven by pressure from the right places, can end up defining an entire cultural era. What began as a defensive gesture became a generative force that reshaped the moral atmosphere of American life.
Something similar is happening with AI, except the stakes are higher. These systems are both interactive and intimate. They’re tutors, confidants, creative partners, and companions. That means that every moderation decision, every filtered answer, and every refusal carries emotional weight.
Culture shapes cultural artefacts, and cultural artefacts shape culture. That is as true for Hollywood as it is for AI. Both self-censor to avoid the wrath of the state, both mould tastes, and both set the limits of imagination.
The question is who gets to set AI’s rules. Is it some vague sense of ‘the public’ through focus groups and polling? Is it the trust and safety teams inside the labs? Or in the end will it be government? I see three major ways forward, though there are no doubt others:
Top-down licensing: Governments eventually mandate a safety threshold. Firms compete on performance inside a fixed compliance box, like age ratings for film.
User-selectable guardrails: A marketplace of ‘safety profiles' in which you pick your own filter, sort of like Google's sliders on its Vertex platform (a rough sketch of the idea follows this list).
Open-weight maximalism: Open models proliferate, guardrails become optional, and governments tighten application-layer controls.
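For a sense of what the second option might look like in practice, here is a rough sketch of user-selectable safety profiles: per-category thresholds the user chooses, loosely in the spirit of the slider-style settings mentioned above. The category names, numbers, and helper function are invented for illustration and don't correspond to any real platform's API.

```python
from typing import Dict

# Hypothetical safety profiles: each maps a harm category to the maximum
# classifier score a response may carry before it is blocked.
SAFETY_PROFILES: Dict[str, Dict[str, float]] = {
    "strict":     {"violence": 0.2, "self_harm": 0.1, "sexual": 0.1},
    "balanced":   {"violence": 0.5, "self_harm": 0.2, "sexual": 0.4},
    "permissive": {"violence": 0.8, "self_harm": 0.3, "sexual": 0.8},
}


def passes_profile(scores: Dict[str, float], profile: str) -> bool:
    """Allow a response only if every category score sits under the chosen thresholds."""
    thresholds = SAFETY_PROFILES[profile]
    return all(scores.get(category, 0.0) <= limit for category, limit in thresholds.items())


# Example: a response scoring 0.6 on violence passes 'permissive' but not 'balanced'.
print(passes_profile({"violence": 0.6, "sexual": 0.1}, "permissive"))  # True
print(passes_profile({"violence": 0.6, "sexual": 0.1}, "balanced"))    # False
```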
If only a handful of rule-makers define the guardrails, we inherit their biases at scale. If we embrace openness, we allow more genuinely harmful content to slip through the cracks. Even user-selectable guardrails only deal with your own AI content, rather than content generated by others.
Whatever the case, we’re left with a question with no clear answer: who do you trust to hold the pen?