I recently argued that cultivating a sense of taste will get you through the age of AI slop and then some. The era of creative automation will be caked in sludge, but there’s no reason it shouldn’t also bring with it more good stuff than ever before. If you can reliably sort the better from the bitter then you’re onto a winner.
I’m deeply uncertain about the specific ways that AI will reconfigure the world of mass culture, but — assuming the models maintain their current rate of improvement — any analysis has to begin with a supply side shock that produces new kinds of filtering mechanisms.
One of these is deeper personalisation, where algorithmic feeds shape what you see based on your stated and revealed preferences. The other is to rage against the machine by seeking out human work, curated selections, or even high quality AI content discovered through effort and taste.
To make this case, I’ve cobbled together a simple representation of the entertainment ecosystem and the points at which generative models are likely to exert pressure. I want to give you a map of what’s changing right now (and what could change in the future) to make better sense of the coming flood.
This post mainly deals with video content production because that’s where we spend a great deal of our entertainment hours, though some of these ideas could in principle be applied to any other type of visual media.
Not just an opportunity to sharpen my rusty Google Slides skills, the above toy model helps us think about the production and consumption of visual culture. Its layers are organised around two elements: participants (the individuals or institutions operating in that space) and enablers (the underlying contingencies that shape what they can do).
Each section of this post explores how AI may reshape these areas by altering who creates, how it circulates, where the money flows, which rights get recognised, and how we decide what's worth our time. It’s not exhaustive, but I found this exercise useful for puzzling out what a sloppier future looks like in practice.
Creation
Visual content production is made up of a pipeline of specialists. Writers pen scripts, artists storyboard frames, designers construct sets and make costumes, actors read lines, directors of photography mock up scenes, editors assemble rough cuts, VFX teams add effects, and composers score the soundtrack.
If you squint a little, these roles can be separated into four groups that each deal with a piece of the creative process: development (finding the right ideas), pre-production (preparing the ground to shoot), production (the actual making of the thing), and post-production (turning raw materials into the final product).
LLMs are already being used for scriptwriting. Strange as it sounds, I know people in the industry lamenting that some producers are openly saying they ‘prefer’ AI scripts. Likewise, tools like Runway or Pika can rustle up moving or animatic storyboards from textual inputs, and image generation tools can be used to mock up design references.
Once production is complete, VFX teams perform upscaling and de-aging using AI, with studios feeding plates to tools that render a face before a human compositor finesses the final frame. On the audio side, AI voices fill dialogue replacement gaps or generate foreign language dubs.
But AI’s role on the shoot itself is still narrow. There are models that flag capture errors and some experiments with face replacement tools, but these systems aren’t dramatically changing what it means to make a movie.
As for what comes next, there are two possible futures: (a) the models get marginally better and continue to plug in across each of these stages, or (b) the models get much better and collapse the whole stack.
My money is on the latter. I don’t see any reason to expect the pace of progress to slow (just look at recent video models from Google, HailuoAI, and Midjourney). In goes a script fragment and out comes a photorealistic video clip with camera motion, environmental dynamics, and synced sound design.
If that kind of output becomes reliable, then suddenly you don’t need such a big team to make a flick. That might put serious pressure on jobs in the short term, though over a slightly longer time horizon I wonder whether we might actually see more roles in the creative industries as barriers fall and big players reorientate themselves towards prestige projects.
The maximalist version of this dynamic leads to something like the rise of the bedroom studio where small creator groups operate like established production houses. Their creation speed is faster, their iteration cycles are tighter, and their overheads are a fraction of what they might otherwise be (with wages and compute accounting for the biggest expenditures).
Ok, but we’ve been hearing about the bedroom studio for a while now. When exactly is this going to happen?
I think we’re looking at five years until a team of three produces something broadly comparable with a major studio release. An AI generated animated film will probably be the first to enjoy popular acclaim (though there’s no chance it will receive an award for its trouble any time soon).
But we’re not there yet. Google’s Veo 3 is stunning, but it can only generate about eight seconds of content at a time. Perhaps more difficult is the question of character and world consistency across shots. There is lots of progress being made, but right now it’s tricky to replicate scenes with uniformity over time.
If video generation models remain roughly where they are now, then we can say goodbye to the bedroom studio. Hollywood will breathe a sigh of relief. After all, they get to use AI to slash costs while avoiding having to compete with huge numbers of new aspiring film shops.
The alternative isn’t so rosy for them. Should the models keep getting better — say, maintaining the current pace of improvement for three years — then the big boys are in for a shock.
Distribution
But let’s not get carried away here. The studios are giant machines that distribute as well as create. Marketing budgets stretch into the millions, and it isn’t exactly cheap to show your film at the box office.
The creation element is simple to sum up: more content is coming, we just don’t know how much more and how likely it is to be any good. The question that flows from this observation is how to wade through the slop until we make our way to the good stuff.
In the streaming era, distribution was already algorithmically driven (about three quarters of what people watch on Netflix is driven by its much-revered recommendation engine). TikTok’s feed can catapult an unknown creator to millions of views overnight or bury a video in obscurity based on a few signals.
In one sense we’re in for more of the same. More YouTube channels that churn out a steady drip of AI brainrot. Generative music streams pumping out audio 24/7. Thousands of AI e-books flooding self-publishing platforms. These dynamics already put enormous strain on the systems that act as gatekeepers between creators and audiences.
But the amount of content that exists today is nothing compared to what’s coming. A supply side shock will make it even harder for distributors to select content for viewers, which will in turn produce new kinds of filtering mechanisms:
Streaming services get ruthless: Every major platform will accept more titles as acquisition costs plummet, but their front pages can’t grow to keep pace with their expanding catalogues. Expect anything that doesn’t hit the right metrics to disappear from view after a short probation window.
Creator platforms become serious: If you can make a good looking 90 minute film with three people and $10K of compute, a Netflix deal that takes months and comes with a bunch of notes looks less attractive. Platforms like YouTube, which will now get high quality films to complement the slop, are the beneficiaries here (though many creators will still want the prestige that comes with a deal).
Cinema becomes elite: It costs a lot to run a cinema, and screening cheap AI films doesn’t help proprietors break even. I see an ‘operafication’ of cinema on the horizon, where a trip to the silver screen is for those who want to watch ‘human made’ flicks that justify the ticket price.
Supply balloons but the prime real estate (home pages, top ten rows, and theatre screens) doesn’t keep pace. Distribution therefore tilts toward (a) algorithmic triage for the masses or (b) human filtered corridors for viewers who equate artisanal with meaningful.
No matter what happens, more content is coming. That content needs to be filtered, and the primary way that will happen is through increasingly sophisticated layers of personalisation. That might sound great on one level (more stuff that I like) but hellish on another (no one likes the same stuff as me).
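To make the triage idea concrete, here’s a deliberately crude sketch of preference-weighted ranking. The scoring function, tags, and weights are all invented for illustration; real recommendation engines are vastly more sophisticated, but the core dynamic is the same: everything outside the top slots effectively disappears from view.

```python
def rank_for_viewer(catalogue, preferences, top_n=3):
    """Score each title by overlap between its tags and the viewer's
    stated/revealed preference weights, then keep only the top slots."""
    def score(title):
        return sum(preferences.get(tag, 0.0) for tag in title["tags"])
    return sorted(catalogue, key=score, reverse=True)[:top_n]

# Hypothetical catalogue and preference weights, for illustration only
catalogue = [
    {"name": "AI brainrot #4821", "tags": ["shortform", "meme"]},
    {"name": "Indie sci-fi feature", "tags": ["scifi", "longform"]},
    {"name": "Generative lo-fi stream", "tags": ["music", "ambient"]},
    {"name": "Prestige drama", "tags": ["drama", "longform"]},
]
preferences = {"scifi": 0.9, "longform": 0.6, "meme": 0.1}

for title in rank_for_viewer(catalogue, preferences):
    print(title["name"])
```

The point of the toy model is the cutoff: with a fixed `top_n` and an exploding catalogue, the fraction of titles any viewer ever sees trends towards zero.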
The benefit of this dynamic is that it’s likely to produce a reaction: human tastemakers injecting a much-needed dose of authenticity into proceedings. Newsletters, curated communities, forums, and group chats will flourish to help people figure out where they should spend their time. Critics, long deemed irrelevant, could become more popular than ever.
For creators, the options are to chase mass exposure through algorithmic optimisation or cultivate loyal niches who actively resist the feed. This is often what happens today with small musicians, who tend to make more from merch than they do from streaming royalties.
Economy
Historically, the way cash flows through the entertainment ecosystem has been relatively linear. Studios pay talent to produce content, content is distributed via cinema or television and sold to audiences, and advertisers pay for eyeballs. Some of that revenue trickles back to studios as royalties or profit shares, which pays for new projects.
The Netflix era complicated these flows. Money now moves through subscription pools, and payouts are tied to opaque viewership metrics rather than direct sales. The creator economy added new branches to the tree, with individual creators earning through ads, Patreon, merchandise, and a slew of other direct-to-fan channels.
Generative models push this logic further. Let’s begin with the big one: the cost of making stuff. If an indie film that once cost $5 million can be made with AI help for $500k, that changes the break-even geometry for its backers. More projects might get made for less, but also potentially earn less if the market gets saturated.
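The break-even shift is simple arithmetic. Here’s a toy calculation using the figures from the example above; the revenue-split assumption (`backer_share`) is my own invented placeholder, not industry data.

```python
def breakeven_gross(budget, marketing=0.0, backer_share=0.5):
    """Gross revenue needed for backers to recoup, assuming they keep
    `backer_share` of every dollar after distributors take their cut.
    The 50% share is an illustrative assumption."""
    return (budget + marketing) / backer_share

traditional = breakeven_gross(5_000_000)  # the $5M indie film
ai_assisted = breakeven_gross(500_000)    # the same film with AI help

print(f"Traditional break-even: ${traditional:,.0f}")
print(f"AI-assisted break-even: ${ai_assisted:,.0f}")
```

Under these (made-up) assumptions, the AI-assisted film recoups at a tenth of the gross, which means the same pool of capital can bankroll roughly ten times as many shots on goal.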
One way to make sense of this is through the ‘cost-to-quality tradeoff’ that deals with how much richness a creative artefact has relative to the value of resources that went into its making.
On the above Thoughtful Graphic™, we have one axis for production cost and another for creative quality. In the bottom left quadrant, there’s commodity content (e.g. clickbait videos). In the top-right, we’ve got prestige spectacles that match budget with discernment.
Bottom-right might be formulaic blockbusters. These are expensive but creatively safe, often relying on finding and reconstituting existing IP. Finally, the top-left is where I’m imagining indie creations, stuff made on a shoestring budget but with a recognisably creative texture.
Commodity content (low-cost, low-quality): What most people mean by slop, this segment becomes saturated as generative models collapse the cost curve in the quadrant where friction was already lowest. Expect it to overwhelm search, pull down standards, and inflate volume to a ludicrous scale.
Formulaic blockbusters (high-cost, low-quality): From one type of sludge to another, this section deals with what we might call box office bait. AI will lower costs without necessarily lowering quality, which will either free up budget for risky bets or allow studios to put out more derivative stuff.
Indie artisanal (low-cost, high-quality): This is the place that I expect AI to have the highest relative impact because it amplifies individual vision at near zero cost. If things go well, the bedroom studios release more high quality films than we know what to do with.
Prestige spectacle (high-cost, high-quality): Today, these are expensive but somewhat risky flicks like Dune. Tomorrow, generative models bring a bit of risk deflation to the table. If your passion project can be produced at a fraction of the cost, then you can put more of your budget into pushing boundaries. And even if you have a ‘no AI’ policy, these films will still get made because they represent authenticity.
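The four quadrants above can be expressed as a trivial classifier. The thresholds, scale, and example inputs are arbitrary placeholders of my own, there only to make the mapping explicit:

```python
def quadrant(cost, quality, cost_threshold=1_000_000, quality_threshold=0.5):
    """Map a (cost, quality) pair to one of the four quadrants.
    Both thresholds are arbitrary illustrative cutoffs; quality is
    imagined as a 0-1 score, which is of course the hard part."""
    if cost < cost_threshold:
        return "indie artisanal" if quality >= quality_threshold else "commodity content"
    return "prestige spectacle" if quality >= quality_threshold else "formulaic blockbuster"

print(quadrant(50_000, 0.2))        # clickbait video
print(quadrant(200_000_000, 0.9))   # big-budget passion project
```

The thesis of this section, restated in these terms: generative models drag the `cost` axis down for everyone, so the interesting question is which quadrant each creator lands in once cost stops being the constraint.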
If this plays out, AI actually increases the size of the creative middle class. We’ll see more budding creators who can sustain themselves because they can produce content efficiently for a modest but loyal audience. This scenario is one of decentralisation, in which many creators each earn a decent living serving specific audience tastes, predicated on low costs and direct fan engagement.
Less enticing is a winner-take-all situation where algorithms amplify those who already have the cash to spend on promotion. Maybe you can drop $10 million marketing a $500,000 project and have it make financial sense. That changes the shape of the media economy because it encourages studios and streamers to put more weight behind cheap content with high upside and low risk.
But efficiency comes at a price. Traditional paid reach still drives global hits, but it’s losing ground relative to trust-based discovery (a dynamic I expect to become more pronounced over time should we see the emergence of new forms of curation).
The studios won’t vanish, but they may be forced into a kind of strategic bifurcation that mediates between slop at scale on one side and rarefied artisanal work on the other. This feels natural to me when cheap content can now be made cheaper, and prestige content can now be made with less risk.
Model providers and GPU makers will profit, but I’m optimistic that the structural advantage shifts away from creative incumbents. If production becomes cheaper and distribution more personal, then the centre of gravity in the media economy starts to tilt towards the best work. In the world of the bedroom studio, that work can come from anywhere.
Rights
Entertainment rights have always been complicated, but they’ve mostly relied on clear roles and processes. Writers write, actors act, and editors edit. Each of those actions gets tracked, attributed, and remunerated through extremely long-in-the-tooth frameworks.
Alas, when the pipeline concertinas, those structures start to look unstable.
We’ve already seen strikes organised around the use of AI in Hollywood, back in 2023 when models were positively primitive compared to today. Studios are experimenting with scanning background actors and generating digital extras, and voice actors worry about contracts that could allow AI clones of their voice to be used without proper compensation. As the SAG-AFTRA union put it:
This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.
Since then, California enacted SAG-AFTRA–backed laws that force producers to secure performers’ informed consent before creating a digital replica of a living or deceased actor’s likeness. And earlier this year, the group struck a deal allowing actors to license digital replicas of their voice for use in ads on the condition that actors get paid and retain control over how their likeness is used.
These are important developments, but they only deal with rights within the existing system. They don't grapple with the brave new world, the place where jobs shuffle around, new hybrid roles emerge, and small creators work across the stack. A video might be made by one person using a dozen different models. Is that person the sole creator? What if they’ve built using AI outputs trained on other people’s work?
These questions represent some of the most hotly contested battlegrounds in the legal landscape. Getty is suing Stability AI for scraping its stock image archive, and artists have challenged AI model training on copyrighted material without consent.
In the courts, nobody seems entirely sure if training a model on public data counts as fair use or a breach of copyright. At stake here is the question of whether ‘exposure’ to a work constitutes a kind of replication, a question the law isn’t yet equipped to answer.
But if I ask a model steeped in copyrighted scripts for a screenplay, the infringement question turns on the output itself. Does it lift dialogue or story beats in a ‘substantially similar’ way to existing material?
This is clearer territory in that the systems to distinguish whether an output infringes on copyright are well established. That being said, it’s unclear whether training on copyrighted material increases the likelihood of infringement at all.
Where the courts land will reshape the creative economy. If the legal risks are too high, the biggest model providers will hesitate to release generative tools at all. Platforms might deprioritise AI flicks to avoid reputational damage and consumers might shun content deemed to be made unethically.
One path is that the headline lawsuits break the tech companies’ way. Training counts as fair use, outputs are judged only on similarity, and the only requirements are opt-out registries or provenance tags. In that scenario the models keep getting better, datasets stay fat, and the legal skirmishes become manageable for developers. This looks likely given recent judgements.
A second path sees judges decide that wholesale scraping crosses the line, halting the flywheel and forcing a new licensing regime into existence. It could be something like blanket licences for text and image corpora, mandatory opt-ins or maybe even a new ‘training right’ to sit alongside copyright.
If I had to guess, I don’t see a future where lawsuits prevent training entirely. Courts may raise hurdles, but the hunger for national competitiveness all but guarantees those hurdles won’t remain in place indefinitely. If aggressive rulings threaten domestic AI firms, legislators might decide to rewrite the rules by redefining fair use or carving out exceptions.
Preference
Sooner than you think, we’ll have high quality interactive stories made based on what you say you like or what the platform thinks you like. Endless mixes of references and styles designed specifically for the viewer. Narratives that shift in real time to optimise for some metric that the streamer decided was most valuable.
This isn’t exactly a new idea, but I like the ‘audience of one’ framing that emphasises the unique nature of experiential storytelling. It’s the logical extreme of the trend described in the distribution section, where stories unfold based on our stated and revealed preferences.
The idea of interactive stories tailored to viewers might sound out there, but firms like Disney are already heading in that direction. It’s really just the next step from blockbuster homogeneity to algorithmic curation. Today’s streaming services shape what we see based on what we’ve seen, so it’s no giant leap to apply that logic to creation itself.
The setting is anything you can imagine (or is created to scratch an itch you never knew you had). Characters react to your choices and the plot lines adjust to themes you care about. Many will prefer spending time here to spending it in the real world.
Of course, there are a few things wrong with this picture.
Even ignoring technical constraints, part of the joy of entertainment is shared experience. If we all disappear into bubbles of customised content, there’ll be no one left to talk about your favourite show with. You might tell them about your AI adventure, but that sounds about as fun as filling someone in about your dreams.
The more AI slop we see, the more authenticity we crave. This is a human reaction. When confronted with an abundance of cheap goods, some people turn to artisanal alternatives (like the revival of vinyl records in the age of music streaming).
This idea reminds me of William Morris, the Victorian designer I wrote about earlier this year. Morris rebelled against the industrial mass production of his time by championing handmade craftsmanship and designs that had the imprint of human artistry.
In the AI era, ‘handmade’ might mean works that emphasise human presence. Perhaps theatre performances, live events, or analogue arts aren’t going anywhere. This is why I think the cinema, which will embrace human made authenticity, will eventually become a culturally elite institution like ballet or the opera.
As for the consumer, the curation of taste will become an identity statement. When choices are infinite, choosing to follow a specific human creator or a certain aesthetic movement becomes a way to say something about who you are.
But for any of this to happen, we need enough people to buy what model makers are selling. We’re still waiting for AI’s breakout hit that confers legitimacy on the whole category, attracting more talent and investment into AI content, which in turn yields better works.
History suggests that the doom and gloom may be overblown. As early as 1855, observers were lamenting that photography ‘would be the death of art’. They were proved wrong: painters didn’t disappear, but they did reinvent painting as movements like Cubism turned to what cameras couldn’t capture. In the same vein, human creators in entertainment will likely gravitate to what AI can’t do. Maybe that’s providing a deeper sense of authenticity, or live presence, or simply the unique weirdness of individual imagination.
As a result, the future of entertainment will split along two lines. Some will chase the sugar rush of infinite customisation and give themselves to content that knows them better than they know themselves. Others will react by seeking out the human to build shared experiences and find a sense of meaning in their creative diet.
Scrolling alone?
What happens next will depend on a handful of breakthroughs, a few legal judgments, and a million little decisions made by creators and audiences. Despite that, I have a (low confidence) view about the basic shape of things to come.
The models probably aren’t going to remake entertainment by replacing Hollywood. But they will flood the supply side with more content than we can possibly process and reshape demand according to increasingly important selection mechanisms.
Some of those filters will be computational but others will be social. That’s the split I keep coming back to, the one between deeper personalisation and a reaction against it. We call it the age of slop, but we might just as well call it the age of extremes.
If you cultivate a sense of taste and follow good curators, the coming years could be extraordinarily rich. But if you let the feed satisfy your preferences, there will be no escape from the average, the meaningless, and the unintelligible.