<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Learning From Examples: Essays]]></title><description><![CDATA[Writing about knowledge and its dependents.]]></description><link>https://www.learningfromexamples.com/s/essays</link><image><url>https://substackcdn.com/image/fetch/$s_!S1Kl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04935df3-9e78-4564-b881-67a57b0ad87f_1024x1024.png</url><title>Learning From Examples: Essays</title><link>https://www.learningfromexamples.com/s/essays</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 17:46:29 GMT</lastBuildDate><atom:link href="https://www.learningfromexamples.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Harry Law]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[learningfromexamples@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[learningfromexamples@substack.com]]></itunes:email><itunes:name><![CDATA[Harry Law]]></itunes:name></itunes:owner><itunes:author><![CDATA[Harry Law]]></itunes:author><googleplay:owner><![CDATA[learningfromexamples@substack.com]]></googleplay:owner><googleplay:email><![CDATA[learningfromexamples@substack.com]]></googleplay:email><googleplay:author><![CDATA[Harry Law]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Love thy robot ]]></title><description><![CDATA[Robotics slop and character rot]]></description><link>https://www.learningfromexamples.com/p/love-thy-robot</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/love-thy-robot</guid><dc:creator><![CDATA[Harry 
Law]]></dc:creator><pubDate>Fri, 31 Oct 2025 11:25:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VUc5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I&#8217;ve been a little lax with my personal writing recently. In part that&#8217;s because I&#8217;m spending most of my time researching at the Cosmos Institute, but it&#8217;s also because my wife and I are expecting our first child any day now. I&#8217;ll keep writing as often as I can, but for the foreseeable future my posting schedule may be more irregular than usual. </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VUc5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VUc5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VUc5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VUc5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!VUc5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VUc5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg" width="960" height="544" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:544,&quot;width&quot;:960,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:275031,&quot;alt&quot;:&quot;File:Jacob Jordaens - The Four Latin Church Fathers.jpg&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="File:Jacob Jordaens - The Four Latin Church Fathers.jpg" title="File:Jacob Jordaens - The Four Latin Church Fathers.jpg" srcset="https://substackcdn.com/image/fetch/$s_!VUc5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VUc5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VUc5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VUc5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc59b4f89-f205-42b0-93f5-2455263635f5_960x544.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Jacob Jordaens, The four Latin doctors of the church (1620-1625)</figcaption></figure></div><p>Humanoid robots
are cool. Like driverless cars, they are one of those rare pieces of modern technology that feel appropriately futuristic. Should the makers be able to <a href="https://x.com/VraserX/status/1983397732480958877">judo flip teleoperation</a> into full automation, we can expect the stuff of sci-fi serials to be used enthusiastically by anyone who can get their hands on them.</p><p>This week, 1X made its NEO model available for <a href="https://www.1x.tech/order">pre-order</a>. For $500 a month and a deposit, you can get one of these 5&#8217;6&#8221; guys delivered to your house in 2026 (so long as you live in the United States). By all accounts NEO, which looks a bit like a walking 2000s PC speaker, can do a pretty good job at helping you around the house. 1X CEO Bernt &#216;yvind B&#248;rnich <a href="https://www.youtube.com/watch?v=f3c4mQty_so&amp;t=355s">called</a> it &#8220;robotics slop&#8221; insofar as the robot (or for now, its human pilot) can perform basic household chores to a good-but-not-great level. </p><p>I don&#8217;t live in the United States, so I don&#8217;t hold out hope for seeing one in action any time soon. I&#8217;m also not exactly sure when teleoperation will become full automation. Maybe a few years. Maybe sooner. But even if I could get my hands on a completely autonomous bot, I wonder what it would be like to have an electronic footman that lives in my house and does all the stuff I don&#8217;t want to do. Sure, a humanoid robot would (probably) save me more time than not, but I suspect it might be a strange experience to boss around a thing cosplaying as a human every day. </p><p>If our character is shaped by our habits, then it seems to me that interacting with a bipedal robot on a daily basis would be pretty relevant for the type of person I am and would like to be in the future. 
If I start mistreating my new guest, or get used to commanding a human-like thing that always obeys, I might find that style of interaction rubs off on my personality. I&#8217;m not saying it will make me evil, but on some level practising this kind of domination strikes me as Not Good for the soul. </p><p>Of course, it might be fine. Maybe I&#8217;d get used to it quickly, and it wouldn&#8217;t have too much of an impact on the sort of person I am learning to become. In either case, I won&#8217;t know what it means for me or anyone else until we give it a go. </p><h3>Objectification </h3><p>Is it wrong to treat an inanimate object badly? In some respects no. A rock doesn&#8217;t have a sense of interiority, so you don&#8217;t need to worry about hurting its feelings. If there is no subject of experience on the receiving end, then there is no moral patient to fret over. There&#8217;s nothing to wrong, nothing to injure, and no duty owed. At that level, treating an inanimate object badly is simply not a moral act. It is value-neutral, like clearing a fallen branch from a path or dismantling a broken chair. </p><p>This is all well and good, but it doesn&#8217;t tell us much about the person treating an object badly. From this perspective, we might still worry about what kind of person we become by taking pleasure in destruction. Throwing a rock into an empty patch of dirt may not exactly be a morally troublesome act, but what if you threw it somewhere more interesting, say in the direction of a gravestone? 
</p><p>Even if it does little damage, almost everyone recognises that this would still be a kind of desecration. Clearly the stone still feels nothing, but the act signals contempt toward the human world the stone belongs to. It violates a norm of care that flows outward from the living and the dead alike. </p><p>These kinds of acts reveal something about who we already are, but they also shape our growth by accustoming us to certain ways of being. When we rehearse indifference toward something that carries significance, we become the kind of person for whom indifference comes naturally. In that sense there&#8217;s a harm done to the self that becomes habituated to treating the world as something beneath them.</p><p>Then we have more sophisticated artefacts, such as a common household vacuum cleaner. You probably wouldn&#8217;t destroy your own, partly because you might need to hoover something up later but also because doing so would feel petty and self-corroding. Just like flinging a stone in a cemetery, the act changes who we are for the worse. Maybe not by much, but enough to matter given sufficient repetition.  </p><p>But humanoid robots aren&#8217;t vacuum cleaners. These are things that will live alongside us, proxies for real people that we interact with as if they were social partners. This relationship strikes me as different in kind to the vast majority of tools and technologies that we have at our disposal. </p><p>Even if you know your new companion is a machine, it&#8217;s still a person-shaped thing that elicits scripts of command, deference, status, greeting, blame, and praise. That means your mind treats it as a social partner by default, but it also means that every interaction is coloured by a posture of mastery. You issue orders without negotiation, expect compliance without comment, and correct behaviour without apology. 
All the while, you are practising being a person who takes compliance to be the natural order of things.</p><p>To be clear, my concern here isn&#8217;t that the robot feels anything, but that we respond as if it belongs in the moral space normally occupied by persons. I am not thinking about people forming sentimental attachments that lead to over-reliance (that is one scenario, but not the most interesting one). The deeper issue is the kind of stance we learn to inhabit. </p><p>So, we might distinguish two different forms of relation:</p><ul><li><p><strong>The instrumental</strong>, wherein the human form is used to secure trust and ease of interaction. The robot is still treated as a tool, but one that works better because it feels familiar. This provides a kind of psychological leverage in that the design nudges us into a cooperative stance.</p></li><li><p><strong>The moral</strong>, wherein we begin to treat it as a quasi-subject that sits inside the space we normally reserve for persons. Once it occupies that zone, our behaviour towards it becomes expressive. In engaging with it, we are practising a way of relating that, over time, changes who we are. </p></li></ul><p>The latter dimension has animated discussions about our relationship with technology for the better part of two thousand years. Its roots go at least as far back as Plato, who described technology as a form of craft knowledge that shaped both product and practitioner. The cobbler&#8217;s <em>t&#233;chn&#275;</em> produced shoes, but it also cultivated habits of judgment about fit, durability, and beauty; the navigator&#8217;s <em>t&#233;chn&#275;</em> guided ships, but it also demanded an attunement to winds, stars, and currents. </p><p>In this framing, technology is something like a &#8220;training ground&#8221; or a set of practices that form the character of those who wield it. 
Technology externalises human capacity, but it also bends those faculties back towards us by fostering new dispositions and habits. </p><p>Aristotle argues that character is moulded by habit, that over time your actions in the world form the essence of who you are. For the man the medievals called simply The Philosopher, the self is built one act at a time. He famously reminds us that we become just by doing just acts, wise by doing wise acts, and brave by doing brave acts. </p><p>But what about when people look like tools and tools look like people?</p><p>Humans are, after all, predisposed to treat anything with eyes and a voice as a social partner. We respond to appearance as if it indicates personhood, we extend the grammar of interaction to anything we can, and we adopt the stance that normally accompanies a face-to-face encounter if the situation allows for it. Some people soften their tone and say &#8220;thank you&#8221; to a voice assistant on their phone, even though they know perfectly well there&#8217;s no one on the other end. Others do the same with ChatGPT, though there is some logic here insofar as politeness often produces better responses. </p><p>The point is that human-like cues pull us into patterns of social behaviour. Given that these robots take about as human-looking a form as we can imagine, we should expect them to stimulate our social reflexes and modify our expectations accordingly. Given enough time, expectations become habits and habits become character.</p><p>There is a second concern here, adjacent to character but distinct from it. This issue concerns the nature of shared life, which requires us to encounter other wills and adjust to them. Freedom is in one sense the skill of navigating a world full of other agents, each with claims and desires of their own. </p><p>If we spend enough time commanding a thing that always does what we ask, we may come to see effortful negotiation as an irritation and other minds as obstacles. 
A life without friction may feel pleasant, but it also risks dampening our sense of what freedom really is: the discipline of sharing a world with other beings.</p><h3>Benevolent authority </h3><p>When I&#8217;m writing about AI and philosophy I often find myself circling something that one could call the &#8220;skill issue&#8221; objection. This basically holds that people are pretty good at figuring stuff out for themselves, and that concerns about waning autonomy in the era of AI are overplayed. It&#8217;s not that deep, buddy. </p><p>In some ways, I have a soft spot for this idea. It&#8217;s true that most people can separate play-acting from real life and that we don&#8217;t instantly absorb every influence in our environment like sponges. This comes down to the nature of the self, which needs to be both flexible enough to accommodate change when experiencing new things and stable enough to avoid an about-face at the drop of a hat. </p><p>We&#8217;ve been here before insofar as servant societies of the past also supported civic virtue. Comments on the obvious shortcomings of these particular social relations aside, the butler didn&#8217;t corrupt the statesman and the aristocrat had a thing for civil society. This tells us that hierarchy and assistance do not <em>automatically</em> corrupt character, that you can maintain a semblance of virtuousness so long as authority is exercised with restraint and dignity.</p><p>Nor is it obvious that delegation is always bad news for becoming good people. Much of human achievement rests on being relieved of drudgery so we can spend time on the good stuff of judgement, creativity, and civic engagement. In a world already full of service relationships (e.g. 
apps, assistants, and actual people who help us) most of us somehow manage not to become petty tyrants.</p><p>The question with humanoid robots is not &#8220;will they deform the self by default?&#8221; but rather &#8220;how do we govern them in a way that makes us better?&#8221; In the best case, owning a humanoid robot and treating it well could actually allow us to grow by cultivating a kind of benevolent authority. </p><p>Aristotelian ethics describes this dynamic as <em>oikonomia</em> or &#8220;proper rule&#8221;. It suggests that some types of virtue are expressed through right use of power, that the point is not to renounce authority but to wield it in a way that disciplines the self as much as it directs others. Augustine argues something similar by insisting that power is only just when guided by &#8220;rightly ordered love&#8221;. If we are to rule over others, we must rule the self first. </p><p>Humanoid robots are eventually going to stand in for people. Maybe not right now, but likely one day in the not-too-distant future. When that moment arrives, many of us will have a thing that walks and talks like a human that we can command to do our bidding. If we treat them with respect, we will become better for it; if we treat them with contempt, we will be the ones who suffer. </p>]]></content:encoded></item><item><title><![CDATA[The worst time to have a problem]]></title><description><![CDATA[You shouldn't "just do things"]]></description><link>https://www.learningfromexamples.com/p/the-worst-time-to-have-a-problem</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-worst-time-to-have-a-problem</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 30 Sep 2025 10:25:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pYUe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pYUe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pYUe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 424w, https://substackcdn.com/image/fetch/$s_!pYUe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 
848w, https://substackcdn.com/image/fetch/$s_!pYUe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 1272w, https://substackcdn.com/image/fetch/$s_!pYUe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pYUe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png" width="1456" height="792" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:792,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8033370,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/174534147?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pYUe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 424w, 
https://substackcdn.com/image/fetch/$s_!pYUe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 848w, https://substackcdn.com/image/fetch/$s_!pYUe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 1272w, https://substackcdn.com/image/fetch/$s_!pYUe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28df0238-220a-4916-9f46-4ed7cb4b8c7e_2894x1574.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Nicholas Roerich &#8216;Legend&#8217;, from the series &#8216;Messiah&#8217; (1923)</figcaption></figure></div><p>AI developers have a style. Badge logos and blocky text are a given, but even promotional materials have converged on the same type of vibe. The typical advert uses some combination of a smiling person talking to a phone, knowing replies by the assistant delivered in a friendly patter, and the dum-tsh-dum of a snare drum as the camera skips from bike-shed to bistro. </p><p>Anthropic&#8217;s <a href="https://www.youtube.com/watch?v=FDNkDBNR7AM">recent effort</a> is better than most, jettisoning the happy-go-lucky aesthetic for a mish-mash of falling pianos, country road-trips, neolithic cave paintings, deep sea exploration, and stellar phenomena. The idea is that AI is useful in the places you can imagine and in many others you can&#8217;t, that despite the doom and gloom so pervasive in Western societies &#8220;there has never been a better time to have a problem&#8221;. </p><p>That&#8217;s certainly true in some respects. Today&#8217;s models are more or less the stuff of pulp sci-fi dreams, friendly golems that have something to say about whatever question you pose (so long as you are careful to abide by the usage policy). Notwithstanding the protests of those who think the entire AI project is made of smoke and mirrors, millions of people seem to agree that they have problems that thinking machines can solve. </p><p>But in another sense, there has never been a <em>worse</em> time to have a problem. After all, the point of problems is to work them out ourselves rather than have someone solve them for us. As the old saying goes: &#8220;If you give a man a fish, he may eat for a day. 
If you teach a man to fish, he eats for a lifetime&#8221;.</p><h3>Just doing stuff</h3><p>The <a href="https://jasmi.news/p/dictionary">aphorism of the moment</a> holds that &#8220;you can just do stuff&#8221;. A favourite of Silicon Valley, the phrase embodies a strain of thinking that distinguishes between &#8220;high agency&#8221; people and those who don&#8217;t have the right stuff. On one level this is a truism. You can of course do many things, some of which are meaningful and some of which are not. It&#8217;s a great time to have a problem; you just need to roll up your sleeves and get stuck in. </p><p>Agency here is about freedom, the leeway to do what you want and act on your desires. Freedom, though, has a funny habit of decaying into licence, the doing of whatever comes first to hand without reflection on its worth. If you spend your time &#8220;just doing&#8221; pointless things, that doesn&#8217;t strike me as particularly agentic.</p><p>What makes the mantra seductive is that it spares us the labour of asking which things are worth doing. In a culture obsessed with output, action becomes its own reward. Better to do something than nothing, to be in the arena than sitting in the crowd. </p><p>The cult of agency is a counterweight to a world of bureaucracies and stasis, a place where &#8220;<a href="https://www.reddit.com/r/nothingeverhappens/">nothing ever happens</a>&#8221;. The ability to act at all can feel like a triumph, and so to take action is to stick one in the eye of institutional inertia. The language of agency thrives in certain corners of the internet because it assures us that we have what it takes to strike out somewhere, anywhere, on our own. </p><p>A more generous reading of the &#8220;just do stuff&#8221; meme is that its logic carries an implicit caveat: choice alone is not enough, and you do have to pick and choose wisely. 
A supermarket aisle may present a hundred brands of the same product without making us wiser about what we want or why we want it. The high-agency move would be to pick the perfect item, or better yet start your own grocery chain from first principles. </p><p>We might say that action should lead somewhere worth going, that movement is a means not an end. If this is true, then we know why the ideal of agency feels incomplete: it describes the ability to act but not the standard by which action is judged. Without that end, it&#8217;s easy to mistake momentum for direction, novelty for growth, and busyness for a life well-lived.</p><p>So agency needs direction, but how do we know where to focus our efforts? You figure it out by knowing the kind of person you are today and the kind of person you want to become tomorrow. This is better, but now we&#8217;re no longer talking about agency in a strict sense. We&#8217;re in the land of <em>autonomy</em>, the cultivated capacity to live well by reflecting on the type of person we want to be. </p><p>Autonomy is about deciding which things are worth doing and then binding yourself to that decision when appetite, novelty, or fatigue threaten to take you somewhere else. It&#8217;s about not-doing as much as it is doing. To live with autonomy is to set the rules by which competing desires are brought to order so a person can act for the better. </p><p>You practise autonomy by noticing your impulses and testing them against a standard you chose. One way to imagine the split is to think about first- and second-order preferences, where the former concerns what you want right now and the latter describes the kind of person you want to be. If you &#8220;just do stuff&#8221; in service of your proximate wants, don&#8217;t be surprised when you feel something is still missing even after you&#8217;ve founded that company or written that book. 
</p><h3>The shape of problems</h3><p>Figuring stuff out for yourself has a practical element (in that it is the condition of knowledge) and a moral element (in that it trains you to become the kind of person you want to be). Plato&#8217;s <em>Apology</em> famously gives us Socrates&#8217; claim that &#8220;the unexamined life is not worth living&#8221;. In this framing, virtue is a product of questioning because it forces us to test our assumptions and to reform our character. </p><p>The mathematician George P&#243;lya said solving a problem using &#8220;your own means&#8221; trains the habits of reason and allows the doer to become more than they were. What he meant was that the value of problem-solving lies in the struggle, that each attempt at reasoning leaves behind the residue of skill. It gives you a sharper sense of what counts as a good solution and a clearer picture of what kind of thinker you are, so that the next problem &#8212; and the one after that &#8212; gets easier.</p><p>When we ask an LLM to solve our problems, we get a serviceable answer at the cost of truly understanding how we got there. Knowing, in other words, is not the same as growing. Every time we use ChatGPT to work something out for us we deprive ourselves of the opportunity to become a little bit wiser. People are already outsourcing cognitive labour to large language models with little regard for debates about whether AI can &#8220;think&#8221; or not.</p><p>The rub is that AI doesn&#8217;t <em>only</em> help us do things. Clearly in some instances it does, like teaching us a new skill or surfacing sources of information that we might not have seen. But it also proposes what to do, how to do it, and why we should care. This shift moves us from assistance (a tool serving chosen ends) toward deference (something that proposes ends we adopt without thinking). 
</p><p>Models choose what you see first, how options are ordered, which interpretations are offered as &#8220;reasonable&#8221;, and which are not even offered for consideration in the first place. A recommended route, a suggested reply, or a pre-filled summary frames the terms of engagement by providing the architecture under which we make choices. They don&#8217;t always pick what you eat, but they forever set the menu. </p><p>Systems infer objectives from us and optimise toward them. Often that takes the form of maximising engagement, even though large language models are not explicitly designed with this goal in mind. Their stickiness in part flows from the post-training procedures designed to turn the base model into a chat assistant. It&#8217;s pretty easy for &#8220;<a href="https://ecorner.stanford.edu/wp-content/uploads/sites/2/2024/02/helpful-honest-harmless-ai-entire-talk-transcript.pdf">helpful, honest, and harmless</a>&#8221; to become &#8220;the kind of thing I quite like talking to all day&#8221;. </p><p>You might say that this problem is something that all technologies face, that we&#8217;ve been here before and the worries were overblown. The pen doesn&#8217;t tell us what to write any more than the calculator tells us what to add or subtract, right? The difference is that while all technologies in some sense structure our actions &#8212; the wheel made certain journeys possible and cartography influenced patterns of trade &#8212; we don&#8217;t outsource the habit of thinking to these artefacts. </p><p>It&#8217;s also the case that some off-loading is beneficial. Humans have limited cognitive bandwidth, and spending it on memorising every route or re-deriving calculus is probably not the best use of that mental currency. The trick is to distinguish between delegation that clears space for higher forms of judgment and delegation that spells trouble for the work of judgment in the first place. 
</p><p>The classic justified true belief (JTB) theory of knowledge <a href="https://spot.colorado.edu/~heathwoo/Phil100/jtb.html">describes</a> its subject as a mental representation that corresponds to reality, one that is underwritten by a justification. It&#8217;s essentially a mental mirror of the world that is true and warranted. If JTB tells us something useful about how knowledge is made, then the problem that AI poses is clear enough. </p><p>AI can deliver a proposition that happens to be true, but if you have not traced the steps, weighed the reasons, and ruled out the alternatives yourself, then that knowledge isn&#8217;t really yours. I&#8217;m not so worried about machines making mistakes, but I do wonder whether the act of deference erodes the habits that let us truly say we know. </p><p>We might even say that autonomy reveals itself most clearly when tested against the temptation of deference. AI endangers self-rule but it also provides the conditions under which it can be tested, offering each of us a chance to practise rejecting the easy answer and favouring the harder work of thinking. </p><p>I use ChatGPT or Claude most days, and I&#8217;m probably as guilty as anyone of asking the robot about things I could have figured out for myself. I don&#8217;t try to police my use, but I do try to think deliberately about it. The difference lies between letting it clear space for me and letting it fill that space. The temptation is always toward the latter, because it&#8217;s easier to accept answers than to wrestle with problems. </p><p>But to live well with machines is to insist that they serve our efforts at growth rather than replace them, that they enlarge the field for judgment instead of shrinking it. The task of becoming the person you want to be &#8212; the kind who can judge, discern, and act &#8212; cannot be outsourced. It has to be practised by each of us, with all the false starts and frustrations that practice entails. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Enlightenment Aliens]]></title><description><![CDATA[Revisiting the plurality of worlds]]></description><link>https://www.learningfromexamples.com/p/enlightenment-aliens</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/enlightenment-aliens</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 16 Sep 2025 10:25:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rBPr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rBPr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rBPr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rBPr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!rBPr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rBPr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rBPr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg" width="1456" height="1223" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1223,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Heritage Images, Getty Images&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Heritage Images, Getty Images" title="Heritage Images, Getty Images" srcset="https://substackcdn.com/image/fetch/$s_!rBPr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rBPr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!rBPr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rBPr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6476a78d-f950-41cb-a9ee-6727d5726e9b_2048x1720.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Bernard Le Bovier de Fontenelle (1657-1757) Meditating on the Proliferation of Worlds</em> by Jean 
Baptiste Morret (1791)</figcaption></figure></div><blockquote><p>&#8220;You see that white part of the sky, called the milky-way. Can you guess what it is? An infinity of little stars, invisible to our eyes on account of their smallness, and placed so close to each other that they seem but a stream of light. I wish I had a telescope here to shew you this cluster of worlds.&#8221; </p><p><strong>Bernard Le Bouyer de Fontenelle, </strong><em><strong>Conversations on the Plurality of Worlds </strong></em><strong>(1686)</strong></p></blockquote><p>Bernard Le Bouyer de Fontenelle was a French author and philosopher. He wrote history and theatre, and even tried his hand as a lawyer before quickly deciding the legal profession wasn&#8217;t for him. Fontenelle was the model of an Enlightenment man. He believed in the renewal of human will and reason, and argued with gusto against academic colleagues who thought the great works of the past could never be equaled. </p><p>In 1686, the Frenchman <a href="https://www.gutenberg.org/files/66559/66559-h/66559-h.htm">published</a> <em>Conversations on the Plurality of Worlds</em>. Written as a string of exchanges between a philosopher (a thinly veiled stand-in for Fontenelle) and an intelligent woman called the Marchioness, the work established the template for Enlightenment extraterrestrial discourse by recasting astronomy as a question of humanity&#8217;s place in the cosmos.</p><p>The first evening starts with the Marchioness and the philosopher on an evening stroll. As they watch the Moon and stars, Fontenelle&#8217;s philosopher bashfully admits that &#8220;I have taken it in my head that every star may be a world&#8221;. His trepidation flows from the recognition that radical ideas often run against the grain of human nature, that people like to cling to that which flatters their pride. 
</p><p>Our philosopher explains that astronomers held on to the old Ptolemaic model of the heavens because they wanted to put themselves at the centre of the universe, much like the courtier who tries to place himself in the most prominent position at court. Copernicus won out, he tells us, because the Ptolemaic system buckled under the weight of its own complexity. When Mars appeared to move backwards in the sky, astronomers explained it by saying the planet circled on a little ring, which in turn circled on a bigger ring, and so on. These epicycles multiplied until the model eventually looked like a funhouse mirror version of Ptolemy&#8217;s original scheme. Still, we were hesitant to accept the Copernican alternative for fear of what a heliocentric account meant for our place in the universe. </p><p>The Marchioness isn&#8217;t moved by his argument. She asks: &#8220;Do you suppose I feel humbler for knowing that the earth goes round the sun? I assure you I esteem myself just as highly as I did before.&#8221; This is the essential question of the book, one born of the Enlightenment confidence in reason and nature&#8217;s order. We only believed Copernicus, he says, because the system of nature compelled us to. Yet in doing so we learned to accept something that cuts against the human instinct to put ourselves at the centre of the cosmos. </p><p><em>Conversations on the Plurality of Worlds</em> is a negotiation between science and the human condition. If the stars are other worlds, then the universe is richer than we ever imagined. What seemed threatening &#8212; humanity&#8217;s downward movement in the celestial hierarchy &#8212; demonstrated to Fontenelle that reason could accommodate dislocation. 
He saw this upheaval as proof that reason could bear uncomfortable truths, that humans could (and should) draw dignity from their new place in the pecking order.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>Is there anyone out there?  </h3><p>Today, speculation about aliens is old hat. We see it in popular media, in research efforts like <a href="https://breakthroughinitiatives.org/initiative/1">Breakthrough Listen</a>, and in academia where studies of extrasolar beings are finally approaching something like scholarly respectability. Whether you have a strong opinion on the matter or not, all of us are well aware of the possibility of life out there. </p><p>It&#8217;s easy to forget that wasn&#8217;t always the case. We take discussion about little green men for granted, but our ancestors probably wouldn&#8217;t understand the question. Then again, the preoccupation with ET certainly feels like a very modern phenomenon. It&#8217;s the stuff of Hollywood, pulp sci-fi magazines, and turn of the century novelists, isn&#8217;t it? </p><p>Aliens surely loom larger in our collective cultural imagination than ever before, but serious consideration of extraterrestrial life as we might understand it has been hundreds of years in the making. The roots of our moment begin with the convulsions of early modern science, when the telescope made the stars look like other suns and the planets look like other earths. </p><p>For most of human history the heavens were the realm of gods, spirits, and influences; only in the seventeenth century did people begin to ask, in a recognisably modern way, whether those distant worlds might harbour other kinds of life. 
Bernard de Fontenelle was one of the first to popularise that shift. In <em>Conversations on the Plurality of Worlds</em>, he wondered whether the planets of the solar system contained life, and even speculated that every star in the night sky contained a solar system like our own.  </p><p>But he also knew that alien life may not look like us. On possible lunar inhabitants, he wrote: &#8220;I say there are inhabitants, and I likewise say they may not at all resemble us,&#8221; and that any alien life must adapt to its own planetary conditions: &#8220;when I affirm that the moon is not peopled by men; you will see that according to the idea I entertain of the endless diversity of the works of nature, it is impossible such beings as we, should be placed there.&#8221; </p><p>His modesty masked confident assertions about extraterrestrial existence based on the principle of plenitude, the belief that nature abhors waste and fills all possible spaces with life. What made Fontenelle&#8217;s case so effective was both the boldness of his claim and the elegance of its presentation. He wrapped unsettling ideas in polite conversation, choosing as his interlocutor a witty, curious woman. In doing so, he signalled that reason was not the preserve of scholars alone, but something that anyone with curiosity could exercise. As he <a href="https://www.gutenberg.org/files/66559/66559-h/66559-h.htm">explained</a>:</p><blockquote><p>&#8220;In these Conversations I have represented a woman receiving information on things with which she was entirely unacquainted. I thought this fiction would enable me to give the subject more ornament, and would encourage the female sex in the pursuit of knowledge, by the example of a woman who though ignorant of the sciences, is capable of understanding all she is told, and arranging in her ideas the worlds and vortices. 
Why should any woman allow the superiority of this imaginary Marchioness, who only believes what she could not avoid understanding?&#8221; </p></blockquote><h3>Plurality of words </h3><p>Fontenelle wasn&#8217;t the only Enlightenment thinker wondering about alien life. The &#8216;plurality of worlds&#8217; debate became a live issue as scientists continued to spy new solar objects down the end of their telescopes. Naturally, Enlightenment thinkers moved to grapple with the questions that flowed from these observations. Were humans unique in the cosmos? How would extraterrestrial life affect Christian salvation doctrine? What moral obligations might exist toward rational beings on distant worlds? </p><p>The Dutch polymath Christiaan Huygens' <em>Cosmotheoros</em>, posthumously published in 1698, provided a systematic scientific treatment of extraterrestrial life by applying Newtonian physics and observational astronomy. Unlike Fontenelle's accessible dialogues, Huygens wrote a dense tome that established methodological principles that would influence subsequent astronomical speculation. </p><p>His fundamental <a href="https://www.gutenberg.org/files/71191/71191-0.txt">argument</a> rested on the Copernican principle: &#8220;A Man that is of Copernicus's Opinion, that this Earth of ours is a Planet, carry'd round and enlighten'd by the Sun, like the rest of the Planets, cannot but sometimes think that it's not improbable that the rest of the Planets have their Dress and Furniture, and perhaps their Inhabitants too as well as this Earth of ours.&#8221; This style of inquiry, one that connected reason and analogy, became the dominant approach in Enlightenment extraterrestrial discourse.</p><p>Huygens provided remarkably detailed <a href="https://publicdomainreview.org/essay/the-uncertain-heavens/">speculation</a> about &#8216;Planetarians&#8217; based on functional reasoning about intelligence and technology. 
He argued they must possess manipulative organs because &#8220;without their help and assistance men could never arrive to the improvement of their Minds in natural Knowledge.&#8221; Perhaps his most famous idea was that the inhabitants of Jupiter must cultivate something like hemp for rope-making in their sailing ships, an assumption that demonstrated the period's confidence in analogical reasoning and its assumption that technological development followed universal patterns. </p><p>Others discussed extraterrestrial visitors as a form of social criticism. Voltaire's <em>Microm&#233;gas</em> from 1752 <a href="https://publicdomainreview.org/collection/micromegas-by-voltaire-1752/">features</a> a giant from Sirius (Microm&#233;gas, 120,000 feet tall) who visits Earth with a Saturnian companion (a puny 6,000 feet tall) in a text that sought to provide a cosmic perspective on human vanity. When Earth's inhabitants <a href="https://www.themarginalian.org/2015/08/14/micromegas-voltaire-elizabeth-hall/">claim</a> the universe was created for their benefit, &#8220;the two travelers fell on each other, choking with laughter&#8221;. </p><p>Of course, no treatment of life amongst the stars would be complete without religion. American founding father Thomas Paine deployed extraterrestrial life as a central argument against traditional Christianity in <em>The Age of Reason,</em> published in three volumes between 1794 and 1807. 
Paine's core argument targeted Christianity's cosmic exclusivity:</p><blockquote><p>&#8220;Though it is not a direct article of the Christian system, that this world that we inhabit is the whole of the habitable creation, yet it is so worked up therewith, from what is called the Mosaic account of the Creation, the story of Eve and the apple, and the counterpart of that story, the death of the Son of God, that to believe otherwise, that is, to believe that <strong>God created a plurality of worlds, at least as numerous as what we call stars</strong>, renders the Christian system of faith at once little and ridiculous, and scatters it in the mind like feathers in the air.&#8221; </p></blockquote><p>Paine was not out to abolish belief in God, but he was out to reform it. If revelation on Earth was the only path to salvation, what of the innumerable other worlds? To posit a separate incarnation for each, he argued, was absurd; to limit salvation to Earth was parochial. Faith in the creator must reflect the immensity of creation, and Christian doctrine ought to accommodate the true scale of the universe.   </p><p>Finally, Immanuel Kant integrated extraterrestrial speculation into his comprehensive cosmological system. His <em>Universal Natural History and Theory of the Heavens</em> from 1755 examined solar system formation, arguing that the same processes that produced life here would operate elsewhere throughout the universe.</p><p>In <em>Critique of Pure Reason</em>, he <a href="https://ui.adsabs.harvard.edu/abs/2016IJAsB..15..261L/abstract">wrote</a>: &#8220;if it were possible to settle by any sort of experience whether there are inhabitants of at least some of the planets that we see, I might well bet everything that I have on it. 
Hence I say that it is not merely an opinion but a strong belief (on the correctness of which I would wager many advantages in life) that there are also inhabitants of other worlds.&#8221; </p><p>His scheme posited a hierarchical arrangement of planetary inhabitants based on distance from the Sun. Beings on planets closer to the Sun would be of a denser, coarser nature, while those on distant planets would be made of lighter, more refined stuff. All would possess reason, but their physical forms and capabilities would vary according to planetary environments. </p><p>Kant used the possibility of extraterrestrials less to describe aliens themselves than to clarify what it meant to be human. For Fontenelle it was a way to charm readers into accepting displacement, for Huygens to prove the universality of nature&#8217;s laws, for Voltaire to puncture vanity, and for Paine to expose the limits of revelation. In each case, other worlds served as proxies for disputes over knowledge, power, and salvation.</p><h3>New horizons </h3><p>In <em>Conversations on the Plurality of Worlds, </em>the Marchioness tells the philosopher &#8220;You are making the universe so unbounded that I feel lost in it; I don't know where I am&#8221;. The proper response, he insists, is to feel the opposite: </p><blockquote><p>&#8220;For my part, said I, I think it very pleasing. Were the sky only a blue arch to which the stars were fixed, the universe would seem narrow and confined; there would not be room to breathe: now that we attribute an infinitely greater extent and depth to this blue firmament, by dividing it into thousands of vortices, I seem to be more at liberty; to live in a freer air&#8221;. </p></blockquote><p>Extraterrestrials were a rhetorical instrument. They allowed Enlightenment writers to weaken the idea of divine privilege and to argue for the universality of reason, law, and moral order. 
Speculation about other worlds was a way of imagining a universe without exemptions, a politics without ecclesiastical hierarchies, and a humanity defined by its participation in a community of rational beings. </p><p>In embracing the plurality of worlds, Enlightenment thinkers completed the Copernican revolution in the cultural imagination. Its &#8216;principle of mediocrity&#8217; &#8212; the claim that Earth is not special, that what happens here is likely to happen elsewhere &#8212; was the scientific manifestation of the Enlightenment&#8217;s organising principle. Once you accept that the same laws of nature apply throughout the cosmos, you undercut the idea that anyone ought to benefit from a pre-ordained position on Earth. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The shock that opens the question ]]></title><description><![CDATA[Rotation and renewal in algorithmic culture]]></description><link>https://www.learningfromexamples.com/p/the-shock-that-opens-the-question</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-shock-that-opens-the-question</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 02 Sep 2025 10:25:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QK8b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!QK8b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QK8b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 424w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 848w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 1272w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QK8b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png" width="2702" height="1520" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1520,&quot;width&quot;:2702,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5637847,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/172158347?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a26461b-23d6-484b-877c-da5ecb28afc5_3400x1520.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QK8b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 424w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 848w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 1272w, https://substackcdn.com/image/fetch/$s_!QK8b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7212fed5-4b04-4e21-bd34-4a8641aa8af5_2702x1520.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Veronese&#8217;s <em>Christ among the Doctors in the Temple</em> (1560). </figcaption></figure></div><p>In <em>Either/Or</em>, S&#248;ren Kierkegaard tells us about life&#8217;s great enemy. It&#8217;s not pain or suffering, sin or despair. It&#8217;s not even failure or death. As it turns out, our true nemesis is boredom. He calls it &#8216;the root of all evil&#8217; because it describes a mode of living where the habits that once formed character no longer continue to do so. Past actions stop making those same actions in the present feel significant, shaking off the sense of progress or growth that previously defined them. </p><p>Soon enough, you don&#8217;t recognise yourself in your own decisions. You might counter that maybe that&#8217;s okay, as you can always find new routines and habits. 
But not so fast. Kierkegaard also thinks boredom applies to the new as well as the old. Fresh experiences are meant to unsettle, to introduce friction, and to make us reconsider what we thought we knew. Under boredom, novelty is stripped of that power. It becomes a distraction, something briefly stimulating but quickly assimilated without any change in how you see yourself or the world.</p><p>The aesthetic life craves stimulation, but diversion has an annoying tendency to harden into the repetitive or mundane; the ethical life depends on habit, but habit without true renewal likes to decay into tedium. His salve is a &#8216;rotation method&#8217; where we learn the capacity to let the old appear new and the new acquire depth. </p><p>In practice, that means learning to vary our perspective rather than our circumstances, to approach the same experience from new angles, and to linger with new experiences long enough for them to take root. It can be as simple as rereading a favourite book and noticing what strikes you differently, or taking a familiar walk with a new locus of <a href="https://www.learningfromexamples.com/p/the-fly-and-the-filter">attention</a>. It can mean resisting the impulse to scroll for something &#8216;new&#8217; and giving time for the novel thing you just discovered to mature into a deeper form of understanding. </p><p>What is at stake here is something like the freedom to truly know who you are and how to live. After all, freedom is not only the power to choose but the power to recognise yourself in your choices. By this reading, we might say a person is only practising true <a href="https://blog.cosmos-institute.org/p/is-algorithmic-mediation-always-bad">autonomy</a> &#8212; the cultivated capacity to deliberate well about how to live &#8212; if their judgments are the sort they would continue to endorse after putting them to the question. 
</p><p>Philosophers call this &#8216;<a href="https://academic.oup.com/book/45443/chapter-abstract/389465395?redirectedFrom=fulltext">erotetic equilibrium</a>&#8217;, the idea that a judgment counts as autonomous only if it can withstand the twin forces of reflection and deliberation. As Kierkegaard sees it, the threat of boredom is that it compromises this settlement as the familiar no longer feels grounded and the novel no longer feels renewing. Put in other terms: autonomy requires a rhythm between familiarity (in the form of stable, habituated judgment) and novelty (through disruptive experiences that reopen old encounters). </p><p>Today, our lives are governed by technology. We spend our time listening to music or watching videos served to us by algorithms, with the average person logging roughly <a href="https://www.demandsage.com/screen-time-statistics/#:~:text=On%20average%2C%20people%20worldwide%20now,Let's%20explore.">seven hours</a> a day looking at screens of various sizes. Not all of our time spent on a phone or a laptop is shaped by AI, but even just accounting for social media that takes us to something like <a href="https://explodingtopics.com/blog/social-media-usage">two hours</a> of every day at the mercy of recommender systems that govern what we experience. </p><p>Algorithmic recommendations are a boon for Kierkegaardian boredom. They interrupt the rhythm between old and new by systematically skewing novelty in favour of the already-known and familiarity towards the already-consumed. In doing so, they erode the cycle of disruption and renewal that autonomy requires, leaving us with choices that are neither truly tested nor truly sustained. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>Boredom machines</h3><p>Most recommender systems optimise for <em>adjacency</em>. Spotify&#8217;s &#8216;Discover Weekly,&#8217; YouTube&#8217;s &#8216;Up Next,&#8217; and TikTok&#8217;s &#8216;For You&#8217; feeds are all built to keep you coming back for more. The next item in the carousel is chosen because it is maximally like something you already enjoyed. </p><p>What looks like freshness is too often a very narrow kind of variation. These systems work by mapping your past behaviour into a dense space of similarities, then drawing from the tight cluster around it. The result is that the &#8216;new&#8217; is different enough to feel like discovery, but close enough never to rock the boat. </p><p>That&#8217;s fine in some instances. Nobody needs their dinner playlist to feel like a challenge, and most people welcome a gentle learning curve when picking up a new app or tool. But when this becomes a dominant pattern of cultural life, the cost is that novelty never truly unsettles us and familiarity never deepens into mastery. We stay entertained, yes, but we do not grow. </p><p>Humans need what we might call the shock that opens the question. For Kant, this takes the form of the sublime, when the mind confronts something that does not fit its existing categories. The sublime is unsettling because it resists assimilation, yet in straining to make sense of it we discover that our capacity for reason and judgment exceeds what can be taken in by the senses. 
Without these disturbances, our judgments may never find themselves truly tested against what lies on the other side of what we know.</p><p>Heidegger describes <em>Unheimlichkeit</em>, the uncanny moment when the familiar suddenly feels strange. Such moments matter because they interrupt the everyday flow and disclose possibilities we had previously ignored. Human potential requires this estrangement because we only know that our routines and preferences are genuinely ours when they face this sort of interruption.  </p><p>We might comfort ourselves by urging that while algorithmic novelty isn&#8217;t always new, the systems at least allow us to become familiar with the things we already love. But that type of familiarity rarely matures into depth. Instagram recirculates the same handful of recipes, fitness routines, or travel spots, but the repetition doesn&#8217;t necessarily make you a better cook, athlete, or traveller. Familiarity here is broad but shallow, a surface-level exposure rather than patient discipline that ripens into something more meaningful.</p><p>Aristotle argues that virtue flows from habituation, the repeated practice of good actions until they become second nature. True familiarity approaches stability and depth only when repetition is combined with the good work of attention and discipline. The kind of algorithmic repetition we live with today looks like habit because it gives us the same patterns over and over, but it lacks habit&#8217;s substance because the residue of experience is rarely incorporated into our character.</p><p>That&#8217;s because the logic of retention rewards what is easiest to consume again, not what is hardest to master. The system is designed to serve us repetition calibrated to hold attention rather than to cultivate depth. Where Aristotle thought habit was the slow conversion of action into character, platforms like to hold us in place rather than carry us forward as persons. 
</p><p>One way to think about this idea is the relationship between first and second order preferences, the difference between &#8216;man, I would love a cigarette&#8217; and &#8216;I wish I could stop smoking&#8217;. First order desires are immediate and situational but second order desires are reflective, the stance you take on your own wants. In this framing, autonomy is not just acting on a first order preference but being able to align oneself with the second order judgments you endorse about the life you want to live. </p><p>Habituation is a bridge between these levels, with repeated actions gradually harmonising impulses and sustained reflection building character. The problem with algorithmic culture is that it breaks this connection. It gratifies first order preferences without giving them the friction that might force second order reflection. Clearly, that doesn&#8217;t happen all the time. Even within algorithmic culture one can pull away and use the same tools to pursue depth, like the person who studies guitar through YouTube tutorials or who joins an online community and learns to cook. </p><p>But these are acts of resistance, not dominant kinds of engagement. </p><p>Algorithmic culture dampens novelty and familiarity on their own terms, but what matters most is the negotiation of these two forces. It is in the back-and-forth between disruption and stability that our choices become truly our own. When novelty tests our habits and familiarity steadies our responses, we gain the chance to endorse our lives rather than simply living through them. </p><p>John Dewey, writing on education, <a href="https://www.schoolofeducators.com/wp-content/uploads/2011/12/EXPERIENCE-EDUCATION-JOHN-DEWEY.pdf">put forward</a> the idea that growth depends on the dual conditions of continuity and interaction. We develop when new experiences disrupt us, but only if they can also be tied back into what we already know. 
In this view, knowing oneself is about learning to weave these ideals into a life that we can call our own. </p><p>We need encounters that unsettle and habits that hold, moments that throw our judgments into question and practices that let them take root. To lose that balance is to risk the boredom Kierkegaard feared: a life rich in stimulation yet poor in meaning. </p><h3>A different beast</h3><p>Not all AI is created equal. We might interact with recommender systems on a daily basis, but many of us aren&#8217;t even aware that we&#8217;re doing so. Large language models feel different because they confront us more directly. They speak to us, take instructions, and generate responses to our feedback in the moment. These systems aren&#8217;t necessarily more fluid, but we are more aware of how these artefacts respond to human interaction. </p><p>We consult with them by treating the model as an oracle for information or advice, and we collaborate with them by enlisting them as a partner in drafting, editing, or brainstorming. But we also let LLMs play the parts we assign them &#8212; as tutor, friend, or opponent &#8212; and even hand off some tasks to them altogether. These modes of interaction promise control (we set the terms of the interaction) and companionship (we enter into a dialogue with the machine). </p><p>At first blush, they look like autonomy-preserving custodians of algorithmic culture because we manage inputs and decide whether to accept or reject what is offered. Yet we know that the range of responses is bounded by training data, defaults, and hidden constraints that shape the choices we appear to make. I have lost count of the number of times models surface the same idea in different contexts, and we are all too aware of the linguistic tics that make LLM-generated text stand out. </p><p>Large models can in principle explore a giant space of possibilities, but in practice they tend to organise around the same set of patterns. 
At the point of use, these systems gratify first order preferences in a way that doesn&#8217;t guarantee the type of self-reflection we need to grow. A student may turn to a model in order to &#8216;learn,&#8217; but the act of outsourcing the work can shortcut the struggle through which understanding emerges. The immediate preference is satisfied, but the deeper preference &#8212; to know, to master, or to grow &#8212; may not get a look-in. </p><p>More deliberate modes of use can help. We might say that a student who asks for a sequence of questions to work through, or a writer who uses the model to expose weaknesses in their own draft, is using the tool to test and refine their deeper commitments. In these cases the model becomes a means of holding ourselves to account, of forcing our immediate desires to answer to the kind of person we aspire to be.</p><p>At its best, AI returns our words in unfamiliar shapes and forces us to clarify what we mean. In those moments the familiar is unsettled and the novel anchored. At its worst, the same system may take us closer to &#8216;<a href="http://youtube.com/watch?v=H_g0RSSo0ho&amp;ab_channel=BigThink">autocomplete for life</a>&#8217;, gratifying first order desires while leaving our deeper commitments untouched. Instead of testing our judgments, it tells us what we want to hear. </p><p>Sam Altman acknowledged these two types of usage in a <a href="https://www.youtube.com/watch?v=hmtuvNfytjM">recent</a> podcast: &#8220;There are some people who are clearly using ChatGPT not to think. And there are some people using it to think more than they ever have before. I am hopeful that we'll be able to build the tool in a way that encourages them to stretch their bandwidth a little more&#8221;. </p><p>The idea of rotation is useful here. Language models help us grow when they take something we already know and reframe it in a way that brings forth a new perspective. 
When you get a response, it is better to ask the LLM to defend, refine, or contradict itself rather than taking it as gospel. That might seem obvious to some, but last year&#8217;s &#8216;<a href="https://blog.cosmos-institute.org/p/the-claude-boys">Claude Boys</a>&#8217; phenomenon reminds us that people don&#8217;t always like to do that kind of work.    </p><p>These ideas encourage us to remember that autonomy depends on the rhythm between novelty and familiarity. We need habits that hold and shocks that unsettle, practices that shape character and encounters that open the question. So long as we use them judiciously, large language models are compatible with that account of autonomy. They can either gratify our first order desires in ways that leave us unchanged, or they can be turned into partners that test our ideas, sharpen our commitments, and force us to see ourselves anew. </p><p>The difference lies partly in us, but also in the technology itself. Some systems make renewal harder, though not impossible, while others open the space more readily. To live well is still to weave the known and the unknown into a life we can endorse, and that remains our task, whatever tools we choose to do it. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The beast in the woods]]></title><description><![CDATA[On progress, prophecy, and determinism]]></description><link>https://www.learningfromexamples.com/p/the-beast-in-the-jungle</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-beast-in-the-jungle</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 12 Aug 2025 10:25:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WXAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WXAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WXAV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WXAV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!WXAV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WXAV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WXAV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg" width="3792" height="2378" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2378,&quot;width&quot;:3792,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4167208,&quot;alt&quot;:&quot;File:Giovanni di Paolo (Giovanni di Paolo di Grazia) (Italian, Udine  1487&#8211;1564 Rome) - The Creation of the World and the Expulsion from Paradise  - Google Art Project.jpg - Wikimedia Commons&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="File:Giovanni di Paolo (Giovanni di Paolo di Grazia) (Italian, Udine  1487&#8211;1564 Rome) - The Creation of the World and the Expulsion from Paradise  - Google Art Project.jpg - Wikimedia Commons" title="File:Giovanni di Paolo (Giovanni di Paolo di Grazia) (Italian, Udine  1487&#8211;1564 Rome) - The Creation of the World and the Expulsion from Paradise  - Google Art Project.jpg - Wikimedia Commons" 
srcset="https://substackcdn.com/image/fetch/$s_!WXAV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WXAV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WXAV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WXAV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11c5f59-e96f-4799-921e-0c4956a5d79b_3792x2378.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Giovanni di Paolo&#8217;s <em>The Creation of the World and the Expulsion from Paradise</em> (1445). </figcaption></figure></div><p>Henry James&#8217; <em>The Beast in the Jungle</em> follows a well-heeled American drifter in London called John Marcher. Over the course of the novella, Marcher agitates about the coming of a &#8216;beast&#8217; poised to emerge from the undergrowth and destroy him and all that he holds dear.</p><p>In the closing moments we find the beast is a fiction; or rather, the anticipation of the monster&#8217;s coming is beastly in that Marcher lets it consume his life. He ignores his love, his friends, and his career in preparation for the creature, only for those actions to provoke the monster into being.</p><p>James&#8217; little book is in that sense a self-fulfilling prophecy, a cautionary tale about foresight and control. The essence of the story is common enough to be inaugurated as the Eighth Basic Plot: Chaucer&#8217;s <em>Pardoner&#8217;s Tale </em>sees those hunting death become murderers; <em>The Appointment in Samarra</em> finds a man running from death only to meet him at his destination; and in Boccaccio&#8217;s<em> Decameron</em> a man&#8217;s jealousy over losing his lover causes her untimely demise. </p><p>These are all Oedipal tragedies of sorts, though in each case the doom is more literal than Marcher&#8217;s fate in <em>The Beast in the Jungle.</em> In the classic version of prophecy gone sour, our hero&#8217;s folly is a desire to take action. In James&#8217; book, stasis is the malady. 
We are shown a man afraid to travel, who fears love and intimacy, and who can&#8217;t bring himself to live life in case something goes wrong. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>Future shock </h3><p>The things we call &#8216;technologies&#8217; are ways of imposing order on an unruly world. They are artefacts, devices, and systems that contain possibilities for structuring human activity. Deliberately or by chance, consciously or unconsciously, societies select certain frameworks that determine how people work, communicate, travel, and play. </p><p>These structures are technological, and they in turn shape our ability to live in the world and produce new kinds of structures. Recognising that fact is not the same as advocating for a kind of technological determinism, the idea that technological development is the basic currency of change and that humans have no choice but to sit back and let it happen. Determinism is simply too all-encompassing a theory of progress, one that does scant justice to the choices that arise as we design, build, deploy, and configure our technologies.</p><p>We know that technologies do not materialise whole, that they are assembled inside labs, garages, parliaments, and patent offices. A technology&#8217;s function is a standing preference manifested in the world. When an engineer decides to design a safety guard to prevent a saw from touching a workman&#8217;s hand, the preference is &#8216;saw stops before contact&#8217; over &#8216;saw cuts hand&#8217;.</p><p>The parts of our woodcutting machine continue to enforce this preference long after the thing has been designed and put to use in lumber yards. 
It keeps happening every time the machine is used, whether the person slicing logs knows it or not. We might say that preferences or &#8216;<a href="https://www.learningfromexamples.com/p/weighed-measured-and-found-wanting">values</a>&#8217; live in technology, which is one reason that technological determinism falls apart under just a little bit of scrutiny. QWERTY was not the fastest keyboard but the one whose layout <a href="https://www.smithsonianmag.com/history/the-qwerty-keyboard-will-never-die-where-did-the-150-year-old-design-come-from-49863249/">seemed to prevent</a> typewriters from jamming. The choice stuck because salesmen, schools, and secretaries made it stick.</p><p>When engineers wrote the GSM mobile phone standard in the late 1980s, they famously added a 160-character Short Message Service (SMS) as a low-priority maintenance channel so network staff could ping each other with status alerts. It wasn&#8217;t marketed, priced, or even imagined as a consumer feature. A <a href="https://www.telefonica.com/en/communication-room/blog/origin-history-sms/">contractor sent</a> &#8216;Merry Christmas&#8217; from his desktop to a colleague&#8217;s handset, and not long after curious users began trading notes.</p><p>What began as a backstage diagnostic tool morphed, through unplanned tinkering and uptake, into what was once the world&#8217;s favourite chat medium. The point here is not only that technology can change, but that social logics are often the drivers of that change. Taken in the round, it reminds us that any time you use a technology you are entering into a social negotiation with all those who had a say in its making and usage.</p><p>But let&#8217;s not get carried away. Technological determinism may be inadequate, yet so too is the view that technical things do not matter at all. 
It is deeply misguided to assume that once we locate the social origins behind a particular technology, we will have explained everything of importance.</p><p>By parcelling the day into equal and audible hours, the mechanical clock let factory owners synchronise shifts. Nobody decreed that the bell <em>must</em> rule the worker, yet once the hours of the day could be precisely tracked the temptation to regiment wasn&#8217;t too far behind. Gears didn&#8217;t force obedience, but they created affordances and invited patterns of use that can be hard to resist. You can, in theory, ignore a seatbelt reminder that asks you to buckle up, but in practice most of us would rather give in.</p><p>Design choices at T&#8320; become path dependencies at T&#8321; and common sense at T&#8322;. If we stop at the backstory &#8212; who funded the clock, who invented the seatbelt &#8212; we close our eyes to the way artefacts shape and are shaped by the movements of everyday life. Conversely, if we fetishise the thing, we overlook the social climate that birthed it and our capacity to reroute it if we choose to do so.</p><h3>Directions of travel </h3><p>In the<em> Republic,</em> Plato tells us about <em>techne, </em>or expert know-how of a craft like carpentry or surgery. For the Greek, <em>techne </em>brings with it a kind of automatic authority that flows from expertise. We submit to the surgeon on how to set a broken bone because she has the skills and knowledge to align the fragments and keep infection at bay.</p><p>Plato leans on that prestige in his masterwork, arguing by analogy that the city should likewise be steered by those with the requisite political <em>techne. </em>These are the philosopher kings who understand the true nature of things, so their expertise ought to ground the state&#8217;s right to rule.</p><p>To make his case, Plato asks us to think about a ship on the high seas. 
Large sailing vessels need to be steered with a firm hand, so sailors must yield to their captain&#8217;s commands. We don&#8217;t expect ships to be run democratically because a vessel&#8217;s survival hinges on technically informed decisions, like how to trim sail in a squall or plot a safe course through shoals.</p><p>Plato goes on to suggest that governing a state is much the same. It is something rather like captaining a ship or practising medicine in that it demands specialised knowledge and the wisdom to apply it judiciously. He returns to this idea in the<em> Laws,</em> where he compares his own work to that of a well-established craft. </p><blockquote><p>&#8220;The shipwright, you know, begins his work by laying down the keel of the vessel and indicating her outlines, and I feel myself to be doing the same thing in my attempt to present you with outlines of human lives.... I am really laying the keels of the vessels by due consideration of the question by what means or manner of life we shall make our voyage over the sea of time to the best purpose.&#8221;</p></blockquote><p>Philosophers have a reputation for having their heads in the clouds, but there is some evidence that Plato did in fact seek to put his skills as a designer of political societies into practical effect. He famously travelled from Athens to the court of Dionysius the Elder, hoping to transform his host into a philosopher king willing to put the principles of political <em>techne</em> to work. </p><p>Plato treats <em>techne</em> as a model for political rule, honouring the shipwright&#8217;s expertise only insofar as it buttresses his claim that the city should be steered by those who know. In the <em>Laws,</em> he bars actual artisans from citizenship on the ground that their craft absorbs them wholly, and leaves no room for the higher labour of deliberating about justice.</p><p>In this model, hierarchy precedes technology. 
Authority is granted from above on the basis of wisdom, while the makers are banished from citizenship so they can focus on their craft and avoid upsetting the political applecart. </p><p>Of course, the opposite is also true. Technology constitutes political order just as surely as political order constitutes technology. In <em>The Visible Hand</em>, the historian Alfred Chandler argues that the expansion of the railroad in the 19th century shows how certain crafts grow their own hierarchies.</p><p>Railroads, he writes, could move freight across the continent in any weather, but speed was useless without an army of schedulers, track gangs, clerks, and district superintendents to choreograph arrivals, inspect boilers, and bill customers. Out of that practice emerged the first modern managerial pyramid, with rungs as rigid as any military&#8217;s. </p><p>The telegraph needs repeaters and time-zone standards; the power grid needs load balancers and dispatch centres; a cloud platform needs site reliability engineers, compliance teams, and a legal department. Each new layer of machinery widens the gap between operator and outcome, and that expansion calls forth coordinators to put the pieces back together.</p><p>Seen this way, <em>techne</em> is an engine that manufactures new politics in situ. The timetable does as much governing as the governor, and X dot com can settle arguments faster than a senate debate. Yes, we make technology &#8212; but technology makes us too. </p><h3>Lost in the woods </h3><p>There&#8217;s a meme about our current place in the &#8216;tech tree&#8217;, one that asks how it is that trillions of dollars of capital came to be expended in one of the <a href="https://www.investmentresearchpartners.com/post/chart-of-the-week-8-3-2025">largest programmes</a> of investment in history. This story, the AI story, involves a combination of repurposed hardware, extremely rich companies, and mountains of data created by the growth of the internet. 
</p><p>Graphics processing units were originally designed for rendering virtual environments in video games. How fortunate that this architecture, created for performing thousands of operations in parallel, was exactly what deep learning systems needed in order to chew through data fast enough to make the magic happen. </p><p>Of course, none of that matters if you have nothing to feed the networks. Garbage in, garbage out may be true, but quantity seems to have a quality of its own. Still, to keep those loss functions down you need access to huge amounts of data. This is possible because the commercial internet, especially social media platforms, persuaded billions of people to publish text, images, and video as a side effect of trying to entertain friends or sell products. </p><p>Growing fat on targeted advertising, internet infrastructure, and consumer goods, those same firms piled up extraordinary cash reserves. When the time came, they could plough mountains of dollars into data centre construction, specialised chips, and research laboratories. A single US firm can now spend more on AI hardware <a href="https://x.com/robertwiblin/status/1951248197881393235">than the UK does on defence</a>.</p><p>Political orders embed themselves in the design and allocation of tools, while some tools push back by generating new political structures. We might say the <em>techne</em> of deep learning was socially selected, just as we might acknowledge that our place in the tech tree means the basic shape of frontier AI systems is unlikely to change much in the medium term. That isn&#8217;t to say that progress is sure to slow, but instead that we already know what AGI will look like if it&#8217;s built in the next five years (assuming the <a href="https://www.learningfromexamples.com/p/the-uk-expects-agi-in-four-years">predictions</a> of the US and UK governments are correct).    </p><p>Whether by taking action or staying still, in the Oedipal tragedy fate always wins in the end. 
The weight of the future is simply too great to contend with, its power overwhelming for mere mortals. Determinists think something similar. They rightly point out that technology has a life of its own, but they are quick to forget that it is also enmeshed with the lives of others. </p><p>In James&#8217; book, the future is paralysing. Our protagonist cannot see the wood for the trees, the ways his life is already changing as he frets over the coming beast. In our moment, we wonder whether the monster will arrive in 5 years or 50. Whatever happens &#8212; and whenever it happens &#8212; like John Marcher we&#8217;ll only recognise it with the benefit of hindsight. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Weighed, measured, and found wanting ]]></title><description><![CDATA[You're telling me an AI aligned these values?]]></description><link>https://www.learningfromexamples.com/p/weighed-measured-and-found-wanting</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/weighed-measured-and-found-wanting</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 05 Aug 2025 10:25:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4xxb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!4xxb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4xxb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4xxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg" width="1095" height="899" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:899,&quot;width&quot;:1095,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:460476,&quot;alt&quot;:&quot;Luminarium Encyclopedia: Medieval Cosmology and 
Worldview&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Luminarium Encyclopedia: Medieval Cosmology and Worldview" title="Luminarium Encyclopedia: Medieval Cosmology and Worldview" srcset="https://substackcdn.com/image/fetch/$s_!4xxb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4xxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cd0f83f-ef8e-4c7b-a14a-152659820b54_1095x899.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 
6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Ptolemaic Planisphere by Andreas Cellarius, Harmonia Macrocosmica, 1661 (later reprint).</figcaption></figure></div><p>John Wilkins was a mover and shaker in the early years of the Royal Society. He was a clergyman and an experimenter whose passion project was &#8216;philosophical language&#8217;, a universal written system that could directly correspond with the structure of things in the world.</p><p>Wilkins wanted to turn the bones of language into an ontological framework for making sense of reality. His efforts remind me of noted postmodern linguist Plato, whose <em>Cratylus </em>put forward the idea that words must have intrinsic meanings. We are told that the Homeric hero Hector, for example, gets his name from the Greek verb &#8216;&#233;chein&#8217; or &#8216;to hold&#8217; because he was said to &#8216;hold&#8217; the city of Troy as its great protector. </p><p>In his 1668 <em>An Essay towards a Real Character</em>, Wilkins <a href="https://languagelog.ldc.upenn.edu/nll/?p=49359">introduced</a> descriptive tables that showed how components of language could be used to classify certain animals. 
The word for &#8216;elephant&#8217; turns up as &#8216;<strong>zibi</strong>&#8217;, made up of &#8216;<strong>zi</strong>&#8217; (the two-letter root for every <em>beast</em>), followed by a <strong>&#8216;b&#8217;</strong> (the consonant marking whole-footed mammals), before finishing with <strong>&#8216;i&#8217;</strong> (the vowel assigned to the corresponding species in that row). </p><p>Like so many neat ideas, Wilkins&#8217; philosophical language dissolved on contact with reality. It was too clever and too clumsy. Once you try to sort things into boxes, you soon find that the world has an annoying habit of contorting to avoid easy classification. </p><p>The Argentine writer Jorge Luis Borges <a href="https://languagelog.ldc.upenn.edu/myl/ldc/wilkins.html">found</a> Wilkins&#8217; work in the 1940s, then famously sent it up by describing a &#8216;Celestial Emporium&#8217; whose animal classes include &#8216;those belonging to the Emperor,&#8217; &#8216;frenzied ones,&#8217; and &#8216;those included in this classification.&#8217; Borges&#8217; point was that taxonomies are as arbitrary as they are brittle, that they tend to break the moment they are faced with a creature, a culture, or a contradiction that doesn&#8217;t fit the scheme. </p><p>Wilkins thought his work could be a cabinet of cabinets, a <em>scala naturae</em> for the age of microscopes and coffee house empiricism. He lived as science wrestled with its Scholastic inheritance, a drive to fix the natures of things by figuring out what they were and how they were related to one another. Our man pushed that logic to its obvious conclusion. 
If the world is orderly, then a language that mirrors that order must also be orderly.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>One of these things is not like the others </h3><p>On 29 March 1823, a package from Sir Thomas Brisbane, the Governor of New South Wales, arrived at Edinburgh College Museum. Inside were two platypus carcasses, their &#8216;rostrum half dissolved, and the pile loose,&#8217; as the curator&#8217;s assistant William MacGillivray <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5062051/?utm_source=chatgpt.com">grumbled</a> in his log. </p><p>One went to the display case, the other to the Scottish anatomist Robert Knox&#8217;s dissecting table, where its curious mix of qualities proved inconvenient for every classification schema of the day. Knox found fur but no nipples. A keratinous beak and a cloaca, but no feathers to match. And a venomous spur without a cold-blooded body temperature.</p><p>There were lots of ways to cut the taxonomical cake, but the knife of choice came when Carl Linnaeus laid down the rules around a century earlier. In the 1758 <em>Systema Naturae</em> he offered a key composed of classes, orders, genera, and species, each demarcated by a handful of traits. Hair and teats? Mammal. Feathers and a beak? Bird. Scales and cold blood? Reptile. The attraction was its promise of mutual exclusivity: once a creature could be placed within one class, every other stayed off limits. For a while it worked a charm, letting European naturalists sort the spoils of empire into their preferred locations.</p><p>By the early nineteenth century, however, the anomalies were coming thick and fast. 
Marsupials that suckled young yet carried their offspring in a pouch; microscopic euglena that prowled for food like an animal but carried chloroplasts like a plant; and the duck-billed creature that arrived in Scotland to confound the biologists. Each exception forced addenda, sub-orders, and awkward footnotes until the Linnaean grid was overrun by a patchwork of special cases. </p><p>To some extent the Victorians were alive to these anxieties. In <em>A System of Logic</em>, published two decades after the incident in Edinburgh, John Stuart Mill argued that some groupings track real causal similarities while others are categories of convenience. After <em>On the Origin of Species</em> in 1859, the taxonomical project shifted away from fixed essences and towards a genealogical map of shared descent. In the post-Darwinian order, classification was explicitly about relations and overlaps instead of Platonic blueprints. </p><p>As Ferdinand de Saussure took much glee in pointing out, a sign has no natural bond to its referent. Tree is not tree-ness in sound-form; it&#8217;s the noise we agree on because it isn&#8217;t three, free, or shrub. Meaning is the friction produced by contrast among signs. Vocabulary triangulates between differences, and the essence of the thing is only stable insofar as it exists as what Ludwig Wittgenstein called a &#8216;family resemblance&#8217; between things with overlapping similarities.</p><h3>Moral philosophy by checklist</h3><p>Values are the structure we impose on the messiness of the moral universe. They are <a href="https://arxiv.org/pdf/2404.10636v2">meant</a> to &#8220;capture collective wisdom about what is important in human life, in various contexts and at various scales&#8221; and help us sort the better from the bitter. </p><p>For large language models as in technology more generally, we appeal to &#8216;values&#8217; as a source of illumination to help us puzzle through the most difficult questions and choices. 
Alas, the concept of &#8216;values&#8217; is better seen as a symptom of confusion. We retreat behind values when we can&#8217;t find the right words for talking precisely about the most basic aspects of the human condition. </p><p>As Langdon Winner put it in <em>The Whale and the Reactor</em> almost forty years ago: </p><blockquote><p>In a seemingly endless array of books, articles, and scholarly meetings, the hollow discourse about "values" usurps much of the space formerly occupied by much richer, more expressive categories of moral and political language. The longer such talk continues, the more vacuous it becomes, the further removed from any solid ground. </p></blockquote><p>We are minded to believe that there have always been &#8216;values&#8217; just as surely as there has been a long history of spirited discussion about them. Except that isn&#8217;t really true. People have always had commitments, responsibilities, preferences, tastes, aspirations, convictions and cares. But only in the last century or so has anyone bundled these things together as &#8216;values&#8217; as we might understand them today. </p><p>Used as a noun, the word &#8216;value&#8217; is an old term that has throughout most of its history meant &#8216;the worth of something&#8217;. Commonly the worth of an object in material exchange, or the status or worthiness of a person in the eyes of others. The word properly enters social and political thought in the writings of eighteenth and nineteenth century political economists, most consequentially via Adam Smith, David Ricardo, and Karl Marx. </p><p>For them &#8216;value&#8217; meant the worth of a thing in a commercial sense, which is why a theory of value first appears wearing the clothes of economics. Later in the nineteenth century Friedrich Nietzsche commandeered the term to signify the sum of principles, ideals, and desires that make up the basic motivational structure of a person or people. 
</p><p>Nietzsche wrote about the need for <em>Umwertung aller Werte</em> or the &#8216;revaluation of all values&#8217;, a kind of controlled demolition of Christian morality. He wanted to tear down the moral order, sift through the rubble for anything still moving, and then rebuild a more life-affirming house from the ground up.</p><p>Later, Ralph Barton Perry proposed a &#8216;general theory of value&#8217; that tried to give a reasonable account of the full range of human interests. Value is in this setting any object of interest, whether that interest is aesthetic, moral, economic, or religious. He grounded these concerns in the life of instinct or desire, then cast ethics as a social technology for reconciling the inevitable clashes among them. </p><p>Even towards the middle of the twentieth century, talk of &#8216;value&#8217; was generally taken to be about some attribute of a given object. One might use or keep safe a thing because it had a certain value. Economic or sentimental, value was still value. We still accept this meaning, as in the &#8216;value of&#8217; intellectual property or spending time with one&#8217;s family. </p><p>Today, you are just as likely to hear &#8216;value&#8217; used to describe wholly subjective phenomena. People, groups, cultures, and even whole countries (British Values&#8482; or American Values&#8482;) apparently have values that influence how they show up in the world. </p><p>These kinds of values are basically general dispositions, a semi-conscious filter of taste or conduct that resides in us rather than in the world. We do not cherish charity because charity is good; charity is good because our internal value set fires a positive signal when we see some philanthropy that we approve of. </p><p>All such things are personal sentiments don&#8217;t you know, despite the fact they can also be stretched across the full width of the nation state. You have your values just as I have mine. 
One community exalts self-reliance, another solidarity, a third ritual purity. </p><p>Our world is a values shop (not to be confused with a value supermarket full of discount deals), where we fill up the trolley with the values commensurate with internally held sentiments. Prices are strictly personal &#8212; your courage may be on two for one, my justice a luxury import &#8212; so haggling is futile. </p><p>The problem with this state of affairs is that it prevents us from thinking critically about the moral world. In the ethics of technology, things are rarely named outright as good, prudent, or admirable, and courses of action are seldom defended as fair or necessary. The winning move is to mumble about &#8216;values,&#8217; as though the label itself ought to carry the day.</p><h3>Keep off the grass</h3><p>Values are a moral taxonomy, a set of friendly labels that lets corporations, governments, or individuals signal virtue without wrestling with the particulars. A list of values feels tidy, mutually exclusive, and reassuringly universal. But we know better, don&#8217;t we? Courage can too easily become recklessness, loyalty can clash with justice, and patience can take the edge off excellence.</p><p>For AI, critics and boosters both retreat behind &#8216;value alignment&#8217; programmes that assume moral life can be rendered as a checklist &#8212; fairness, privacy, autonomy, and so on &#8212; and that the machine&#8217;s task is simply to occupy as many boxes as possible. You don&#8217;t need to say much about which values are preferable, you just need to cram as many as possible into your taxonomy of virtues. In fact, if you just make sure one of them is pluralism you can call it a day. </p><p>The most basic facets of the human condition are easily swallowed by the value alignment project. Don&#8217;t think too hard about it. Better to concede that moral life has no rough edges and that the work of judgement is secondary. 
Who cares to ask what courage demands or whom justice serves when you can list pleasant-sounding labels and pat yourself on the back for a job well done?</p><p>Behind the lists are ideals of good and harm, duty and power, claim and consequence. Those words bite because they force us to take sides and give reasons. They make the trade-offs real by reminding us that value alignment cannot in fact be all things to all people. Better yet, it forces us to concede that moral philosophy is more than ticking boxes.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI will make personality hires of us all]]></title><description><![CDATA[Introducing the vibes premium]]></description><link>https://www.learningfromexamples.com/p/the-vibes-premium</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-vibes-premium</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 29 Jul 2025 10:25:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!R11J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R11J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!R11J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 424w, https://substackcdn.com/image/fetch/$s_!R11J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 848w, https://substackcdn.com/image/fetch/$s_!R11J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!R11J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!R11J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg" width="728" height="364" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Ben and Elaine on the bus in The Graduate&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Ben and Elaine on the bus in The Graduate" title="Ben and Elaine on the bus in The Graduate" 
srcset="https://substackcdn.com/image/fetch/$s_!R11J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 424w, https://substackcdn.com/image/fetch/$s_!R11J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 848w, https://substackcdn.com/image/fetch/$s_!R11J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!R11J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f52ae78-d50e-4807-a94f-fbf7c4d454f1_1628x814.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The final scene from <em>The Graduate</em> from 1967</figcaption></figure></div><p>Teach First is a UK charity that drops graduates into education jobs. It&#8217;s generally considered to be a good organisation, one that I know from experience has helped lots of smart people build a career in teaching. The group has historically selected candidates based on written assessments, but earlier this month it said it was accelerating a plan to switch towards face-to-face interviews. </p><p>The reason? University graduates are using AI in applications, which predominantly take the form of written assignments where anyone can put ChatGPT to work. Patrick Dempsey from Teach First <a href="https://www.theguardian.com/technology/2025/jul/13/graduates-teach-first-in-person-interviews-ai">said</a> the charity had seen around a 30% increase in applications so far this year on the same period in 2024, a development he primarily attributes to large language models: </p><blockquote><p>&#8220;The shift from written assessment to task-based assessment is something we feel the need to accelerate&#8230;there are instances where people are leaving the tail end of a ChatGPT message in an application answer, and of course they get rejected.&#8221;</p></blockquote><p>Other accounts paint a similar picture, with graduate employment specialist Bright Network <a href="https://www.theguardian.com/technology/2025/jul/13/graduates-teach-first-in-person-interviews-ai">reporting</a> that the number of people using AI for job applications has risen from 38% last year to 50% this year. And why wouldn&#8217;t they? 
Applications are tedious at the best of times, never mind when the odds of success are long.  </p><p>If everyone can write well enough to pass a hiring round or two, it follows that employers will change tack to focus on assessments where LLMs aren&#8217;t much use. While that might seem straightforward enough, we ought to remember that in-person tests are not exactly a direct replacement. </p><p>Those running the hiring process are no doubt aware that teachers <a href="https://www.bbc.com/news/articles/c1kvyj7dkp0o">are encouraged to use AI</a> in the classroom, so in-person interviews already proceed on the basis that successful candidates will use LLMs when they get the job. </p><p>A pivot to face-to-face tests might simply ignore that reality, or it might just reflect the lack of compelling alternatives. But whether or not the move aims to approximate written tests via in-person assessment doesn&#8217;t really matter. After all, the nature of face-to-face tests means they also probe for verbal and social competencies.   </p><p>That&#8217;s because the people who succeed will be the ones who interview best, who get on well with those asking the questions in a way that makes them think they&#8217;ll be suited to the job. I think about this as the <strong>vibes premium:</strong> the increase in value placed on subjective traits &#8212; charisma, manner, confidence, aesthetic, speech, and presence &#8212; as it becomes tougher to use technical measures to distinguish between candidates. </p><p>The idea is related to but different from &#8216;interpersonal skills&#8217; that are also likely to <a href="https://open.substack.com/pub/benjamintodd/p/how-not-to-lose-your-job-to-ai?r=6bggh&amp;utm_campaign=post&amp;utm_medium=web">appreciate</a> in value in the age of AI. However, our vibes premium is less &#8216;problem-solving&#8217; or &#8216;leadership&#8217; and more simply &#8216;making people want me around&#8217;. 
There are lots of ways to make that happen, but they generally deal with personality traits and behaviours even softer than these &#8216;soft skills&#8217;. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>Applying for everything </h3><p>It&#8217;s rough out there for graduates. Clearly good jobs have always been difficult to land, but I suspect competition has been stiffening for quite some time as barriers to entry have collapsed. More people apply to graduate opportunities thanks to an expansion in higher education, online application portals, social media job openings, and international mobility. Now candidates can use AI tools that let anyone sound polished, which in turn raises the standards of applications and makes it harder for any one person to land a job. </p><p>On the other side of the line, a few friends have told me they no longer plan to hire because their respective leadership teams are convinced AI can do the tasks that they might want a junior person to take on (though it is worth saying the picture is <a href="http://t.co/h2aRQIY3OD">more complicated</a> than that). </p><p>Whatever the case, a <a href="https://www.independent.co.uk/news/business/jobs-chatgpt-ai-automation-adzuna-b2779656.html">recent survey</a> reported that vacancies for graduate jobs, apprenticeships, internships and junior jobs with no degree requirement dropped by 32% since the launch of ChatGPT in November 2022. The same poll says that entry level jobs now account for 25% of the market in the UK, down from 28.9% in 2022. 
</p><p>We have two forces at play, both of which may be driven at least in part by the emergence of powerful AI systems: </p><ul><li><p><strong>Graduate job availability is falling</strong>. This might be because managers are automating the work or doing it themselves with AI, though it is also possible that other <a href="https://www.thetimes.co.uk/article/why-the-odds-are-stacked-against-todays-university-graduates-6h5q39m27?utm_source=chatgpt.com">labour market effects</a> are at play. </p></li><li><p><strong>Graduates are using AI in applications, </strong>so writing samples converge around the same level of quality. This compression may happen elsewhere, but it&#8217;s especially influential for entry-level roles where differentiating factors are harder to come by.</p></li></ul><p>In some ways the emergence of powerful AI systems is deeply humanistic. In entertainment, I expect good taste and human-led curation to <a href="https://www.learningfromexamples.com/p/inside-the-slop-factory">become more valuable</a> in a world overflowing with slop. I also imagine the same is true for the workplace, where technical skills become less relevant versus the qualities that only humans possess. </p><p>To be clear, my view is that AI raises the floor for technical skills but doesn&#8217;t necessarily eliminate the ceiling. A sharp candidate who knows the domain, has original ideas, and uses LLMs well can still outpace someone who just pastes in a prompt. The best-performing candidates might even find themselves <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">slowed down</a> by the technology. </p><p>Even so, we&#8217;re interested in entry-level jobs and graduate roles. These are the places where more people now get passable technical skills than ever before, which means employers need to look elsewhere to distinguish between candidates. 
</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="48" height="15.652173913043478" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:48,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Regardless of how good the models get in the near future, I don&#8217;t see them fully displacing humans across the board. 
Part of that is about allowing employers to maintain a clear locus of responsibility at work, so if an AI makes a mistake firms can point to someone holding the bag. </p><p>But it&#8217;s also because there are some roles where we <em>want</em> a human in the loop, even if we don&#8217;t strictly need one. People go to a doctor for reassurance as well as care. They want empathy, explanation, and the sense that someone is emotionally attuned to their issue. The same goes for teachers, therapists, lawyers, and countless other jobs whose value partly flows from the human touch.</p><p>Alas, that is probably little comfort to graduates finding it hard to land a job. Even if today&#8217;s impact is overblown, young people will feel the economic effects of LLMs first because it&#8217;s easier for the technology to help low-skilled people become average than to help the excellent become brilliant. </p><p>This sounds worrisome, but it does mean that they are likely to be the first to respond to the new demands of employers: things that centre the human, things that let them play well with others, and things that mean their bosses want them around. </p><h3>New signals </h3><p>That leaves us in an interesting place, one where technical proficiencies like writing are no longer reliable indicators of skill or effort. Firms of all stripes are hungry for new kinds of signals that they can use to sort people, and so we get a hard pivot to face-to-face interviews. </p><p>This is a type of assessment less interested in output than in character, a trend that is likely to become more popular as firms look specifically for traits that only human beings can provide. It&#8217;s a manifestation of a kind of neo-humanism premised on the idea that the more technically proficient machine outputs become, the more we value the ineffable. 
</p><p>Neo-humanism returns hiring to something closer to the old apprenticeship model, where being able to work alongside someone mattered more than being highly credentialed. And at the other end of the spectrum, we should remember that elite institutions like Oxbridge never stopped interviewing for some courses. </p><p>There&#8217;s something appealing about jobs going to people who can actually collaborate and communicate rather than those who simply know their way around a marking scheme. But a world where success depends on being likeable in a fifteen-minute conversation is going to be a brutal one for people to navigate, where the premium on hard work may not be what it once was.</p><p>Maybe the future pans out differently, though I wouldn&#8217;t be so sure. As technical skills become table stakes, knowledge work will be increasingly defined by the stuff that makes us human. Put another way: given enough time, AI will make personality hires of us all. </p>]]></content:encoded></item><item><title><![CDATA[The UK expects AGI in four years. 
Why doesn’t it act like it?]]></title><description><![CDATA[Trust me bro Westminster edition]]></description><link>https://www.learningfromexamples.com/p/the-uk-expects-agi-in-four-years</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-uk-expects-agi-in-four-years</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 22 Jul 2025 10:25:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AMNb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AMNb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AMNb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AMNb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AMNb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!AMNb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AMNb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg" width="1456" height="968" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:968,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AMNb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AMNb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AMNb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!AMNb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d9f6a2-d0e0-4a0b-ba42-cb2c2efaacec_1536x1021.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption"><em>Clarence Gardens</em> by William Ratcliffe from 1912. </figcaption></figure></div><p>Peter Kyle is the UK Secretary of State for Science, Innovation and Technology. 
Responsible for asking the machinery of government to foster, develop, and react to important developments in these industries, Kyle is a senior politician whose voice carries weight in Westminster. </p><p>He also thinks that artificial general intelligence (AGI) is coming. And soon. On a recent <a href="https://x.com/Discoplomacy/status/1942577923938029654">podcast</a>, Kyle said: <strong>&#8220;I think by the end of this parliament we're going to be knocking on artificial general intelligence.&#8221;</strong> For those not familiar with the timetable of the British political system, that puts the arrival of AGI in 2029.</p><p>It&#8217;s a strange statement that can be read in a few ways. Maybe Kyle doesn&#8217;t really believe what he&#8217;s saying or maybe he&#8217;s parroting lines that he&#8217;s <a href="https://www.transformernews.ai/p/congress-ccp-agi-hearing">heard from</a> American politicians. I take Kyle to be intelligent and I don&#8217;t know what he has to gain by putting these remarks out there if he doesn&#8217;t believe them, so I&#8217;d be minded to give him the benefit of the doubt and grant that he believes his own forecast. </p><p>Another reading might stress that Kyle has a very specific view of what &#8216;AGI&#8217; means that doesn&#8217;t really correspond to how people generally think about the technology. He followed up his statement with the suitably cryptic: &#8220;I think in certain areas, it will have been achieved,&#8221; which suggests he&#8217;s talking about systems with human-level performance in some circumstances but not in others. The problem with this interpretation is that it glosses over the &#8216;general&#8217; in artificial general intelligence (not to mention that today&#8217;s systems <em>already</em> exceed humans in some domains). 
</p><p>Option number three is that Kyle thinks of AGI broadly like most other people in the field &#8212; a system capable of conducting the majority of cognitive tasks that a human can &#8212; and does indeed think we should expect a system like this in just a few years. </p><p>For the purposes of this post, I am going to assume that this interpretation is correct. Kyle knows what AGI is and he believes in what he&#8217;s saying. And since he is the person responsible for government technology policy, we should probably treat these remarks with the respect they deserve. </p><p>One way to do that is to ask an obvious but important question. If the UK government thinks AGI is coming within the next five years, is it behaving with the seriousness we should expect to prepare for its arrival? </p><p>Of course not. </p><p>That&#8217;s not to dismiss the good work done by the AI Security Institute (AISI) or those behind the <a href="https://assets.publishing.service.gov.uk/media/678639913a9388161c5d2376/ai_opportunities_action_plan_government_repsonse.pdf">AI Opportunities Action Plan</a>, but rather to point out that even efforts that move faster than the glacial speed of government aren&#8217;t enough if the Secretary of State&#8217;s timelines are correct. </p><h2>Five tests </h2><p>What follows are some loose thoughts about how a UK government would behave if it <em>actually</em> believed that AGI was about to arrive. I&#8217;ve structured these as &#8216;tests&#8217; around a few of the policy areas that I expect to matter for the successful deployment of AGI. 
Compute to make the models tick, a national security posture that reflects the reality of a world with AGI, efforts to harden the country against economic and social shocks, moves to bolster state capacity, and regulations mandating various governance requirements. </p><p>It should go without saying that pretty much all of these have an extremely low chance of happening. The point of this exercise isn&#8217;t to blackpill anyone but to show the extent of the disconnect between (a) believing AGI is just around the corner and (b) the policies adopted by the government. </p><h3>Compute </h3><p>If AGI appears anytime soon, it&#8217;s going to be based on a version of the large model paradigm. Whatever specific form it takes, at the very least we&#8217;re talking about a massive connectionist model that needs a great deal of compute to develop and serve to users. </p><p>These both matter for the UK if AGI is as close as Kyle thinks. While there are no national champions like Mistral in France to compete with American or Chinese labs, we should expect that &#8216;<a href="https://www.theguardian.com/technology/2023/mar/15/uk-to-invest-900m-in-supercomputer-in-bid-to-build-own-britgpt">BritGPT</a>&#8217; will make an almighty comeback if the government feels confident it could eventually become BritAGI. </p><p>But the main ticket is the juice for usage, which is sometimes referred to as &#8216;test time&#8217; or &#8216;inference&#8217; compute. If we have systems that can do pretty much anything a remote worker can, the main factor constraining their use is access to compute. Some of that will come from overseas, but any government that really believed compute was about to become king would want to have a supply at home for a bunch of <a href="https://writing.antonleicht.me/p/datacenter-delusions">economic and political reasons</a>. </p><p>How&#8217;s the UK doing on that front? Somewhere between terribly and badly. 
Despite the fact that the powers that be have accepted the proposal of the <a href="https://assets.publishing.service.gov.uk/media/678639913a9388161c5d2376/ai_opportunities_action_plan_government_repsonse.pdf">AI Opportunities Action Plan</a> to increase compute by a factor of 20 by 2030, a quick back of the envelope calculation suggests that would still leave Britain with well under 4% of the total raw horsepower the American public and private sector <em>already</em> has on the books. Even the <a href="https://www.gov.uk/government/publications/uk-compute-roadmap/uk-compute-roadmap">Compute Roadmap</a>, which sounds impressive at first blush, talks up total investment that is about half of what Microsoft plans to spend independently in 2025. </p><p>If the government really thought AGI was five years away, it would be looking to increase compute by 100x from the current floor. This would still be on the light side, but might take us from &#8216;bad&#8217; to &#8216;somewhat bad&#8217; in the context of the UK&#8217;s size relative to Uncle Sam. Clearly that&#8217;s easier said than done, but one way forward would probably include a combination of tax breaks, accelerating the rollout of proposed <a href="https://www.gov.uk/government/publications/ai-opportunities-action-plan-government-response/ai-opportunities-action-plan-government-response#ai-growth-zones">AI Growth Zones</a>, and good old-fashioned state investment. </p><h3>Defence </h3><p>If AGI really were just around the corner, today&#8217;s geopolitical settlement may be in for a shock. As I recently <a href="https://time.com/7291455/ukraine-demonstrated-agi-war/">wrote about</a> for Time, complexity is the strategic currency of war in the information age &#8212; and AGI is a complexity accelerator. But how this shakes out in practice depends on who makes the technology and where it lives. 
</p><p>A recent <a href="https://www.rand.org/pubs/research_reports/RRA3034-2.html">report</a> from the RAND Corporation explores this idea in more detail. The authors sketch eight different geopolitical futures including one where the United States uses AGI to usher in a moment of unipolar power, one where China does the same, one where AGI is shared amongst liberal democratic powers, and one where the machine goes loco and takes over. </p><p>These scenarios are based on the idea that AGI could be a powerful tool for organising warfare, organising material and controlling robotic hardware, and decoding enemy plans. If you honestly believed AGI was just around the corner, it is safe to say the 2025 <a href="https://assets.publishing.service.gov.uk/media/683d89f181deb72cce2680a5/The_Strategic_Defence_Review_2025_-_Making_Britain_Safer_-_secure_at_home__strong_abroad.pdf">Strategic Defence Review</a> is already a shade out of date. Concepts like &#8216;digital targeting web&#8217; or a &#8216;Digital Warfighter group&#8217; all incorrectly presuppose that humans remain the ultimate decision-makers, strategists, and actors in the age of AGI.  </p><h3>Preparedness </h3><p>A government that really believed AGI was set to arrive before the decade is out would be treating it like a national emergency. The first order of business would be to pin AGI to the mast of state in a way that survives elections. Westminster has done this before (e.g. granting the Bank of England its independence), so we&#8217;re not exactly in uncharted waters. We might imagine an AGI Commission that would:</p><ul><li><p>Regularly report to Parliament on how close we are to certain thresholds by assessing capability evaluations, lab disclosures, and macro trends.  </p></li><li><p>Licence and inspect anything above a defined compute or capability threshold, probably working in combination with an AISI with teeth (more below). 
</p></li><li><p>Plan for the downstream consequences &#8212; labour shocks, social changes, threat acceleration &#8212; and hand government a menu of possible responses. </p></li></ul><p>Every serious scenario in which AGI can do &#8216;the majority of cognitive tasks a human can&#8217; ends with a labour market that looks as if a neutron bomb has gone off. At the very least, the UK should probably be trialling regional universal basic income schemes and looking long and hard at corporate tax so some income from frontier models flows to the exchequer. </p><p>This is to say nothing of the many other unexpected ways that a world with AGI could spell trouble for the state. Maybe it&#8217;s cyber or bio attacks or maybe it&#8217;s an algorithmic arbitrage engine that shorts the pound into free-fall. I don&#8217;t know. No one does for sure. The point is that once you build AGI, the menu of unpleasant surprises may multiply faster than any Whitehall risk register can keep up.</p><p>Planning is already happening inside government, but right now no-one really cares or is paying attention. My understanding from those near the action is that the civil service has concluded that if any of these violent risks materialise there&#8217;s nothing the state can do. </p><h3>State capacity   </h3><p>But in some ways, the UK is doing more than most. The AI Security Institute hands out grants, tests models, and has spun up independent safety and interpretability programmes staffed by some impressive CVs. Alas, it&#8217;s still small change. We are talking about sums that are similar to what Google spends on catering over the same period.</p><p>If we buy Kyle&#8217;s timelines, we should be increasing AISI&#8217;s budget by between a factor of 10 and 100. 
That kind of jump for a group with a &#163;240M <a href="https://www.techuk.org/resource/spending-review-2025-what-s-in-it-for-tech.html">budget</a> sounds crazy until you remember the frontier labs burn through billions of dollars every year. If ministers are serious about peering inside a system that may shortly outsmart them, billions, not millions, have to be the unit of account.</p><p>Likewise, the UK has its Advanced Research and Invention Agency (ARIA) based on the American DARPA model. The agency <a href="https://www.timeshighereducation.com/news/reeves-uplifts-aria-budget-ps1-billion-and-funds-ai-courses?utm_source=chatgpt.com">has about</a> &#163;1bn to spend over a couple of years, some of which already goes towards promising approaches for making AI safe. Multiply that by the same order of magnitude and that is exactly the territory you need to be in if you want the state to steer the terms on which AGI arrives. </p><p>Could the Treasury stomach numbers like that? Not really, unless they pulled the old trick of treating the cash as defence spending. But that won&#8217;t happen, because the headline figures are really a referendum on belief. If you keep budgets in the tens of millions, you tacitly confess you do not in fact expect AGI by 2029. </p><h3>Regulation </h3><p>The AI Security Institute is doing good work. Its budget is bigger than that of its American counterpart, with which it has agreements to co-test models, and it has inspired the formation of AISIs the world over. But we should remember that while labs let AISI test their models, they aren&#8217;t compelled to by law. </p><p>They don&#8217;t give AISI access to <em>every</em> model, and they don&#8217;t have any responsibility to make changes even if testing finds something wrong. If ministers really believe an AGI is about to walk through the door, we might expect them to do something other than leaving safety to goodwill. 
</p><p>If they were treating governance seriously, they would give AISI a Royal Charter with powers to match. A charter makes it harder for a future government to quietly trim its wings; inspector powers let it enter labs uninvited, run its own scripts, and &#8212; if the model fails certain tests &#8212; issue a stop order. Of course, there is simply no way these actions will happen without American backing, but a charter would still be a signal of intent to back up short timelines. </p><p>Ministers had drafted a &#8216;frontier-model&#8217; safety bill for introduction earlier this year, but they <a href="https://www.theguardian.com/technology/2025/feb/24/uk-delays-plans-to-regulate-ai-as-ministers-seek-to-align-with-trump-administration">shelved it</a> in February to better align with the new US administration. Officials now talk about a broader &#8216;AI Bill&#8217; to be introduced in the future, but no one really knows what that is likely to include. </p><h2>Honesty is the best policy </h2><p>For the record, my own timelines for AGI are longer than four years. But if I were running the country, and if my timelines were as short as Kyle&#8217;s, you can bet I&#8217;d be implementing some policies that reflected the logical consequences of my beliefs. </p><p>The UK government is forecasting a technological event on the scale of the steam engine, then responding as if it were a smartphone upgrade. Either the state&#8217;s machinery must accelerate to match the timetable, or the timetable is made up. </p><p>If ministers truly expect to be &#8216;knocking on AGI&#8217; within one parliamentary term, then the compute, safety science, resilience measures, and legal guardrails have to scale accordingly. 
If they can&#8217;t or won&#8217;t do that, maybe they should admit that four years is a fantasy.</p>]]></content:encoded></item><item><title><![CDATA[Smells like machine spirit ]]></title><description><![CDATA[Animism and ambient intelligence]]></description><link>https://www.learningfromexamples.com/p/smells-like-machine-spirit</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/smells-like-machine-spirit</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 15 Jul 2025 10:25:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MJfX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MJfX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MJfX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 424w, 
https://substackcdn.com/image/fetch/$s_!MJfX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 848w, https://substackcdn.com/image/fetch/$s_!MJfX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 1272w, https://substackcdn.com/image/fetch/$s_!MJfX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MJfX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png" width="1456" height="860" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:860,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7740649,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/167202965?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!MJfX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 424w, https://substackcdn.com/image/fetch/$s_!MJfX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 848w, https://substackcdn.com/image/fetch/$s_!MJfX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 1272w, https://substackcdn.com/image/fetch/$s_!MJfX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7221d7df-3212-4fa4-ba44-d623cf181280_2604x1538.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Christ Pantocrator surrounded by the Tetramorph, altar frontal, Solanllong (Ripoll) 1200 - 1210. </figcaption></figure></div><p>A few years ago you probably read about the &#8216;internet of things,&#8217; a comfortable but baggy way of describing a network of interconnected electronic devices. The idea occupied a central place in that other forgotten project, the &#8216;fourth industrial revolution,&#8217; which jammed together everything from genomics to virtual reality.</p><p>The goal of the internet of things was to make everyday machines &#8216;smart,&#8217; a mission that I suppose has been accomplished. I allegedly have a smart speaker, a smart TV, and even a smart oven that I tell myself I will one day connect to the wifi network. It&#8217;s a fittingly dull task for a technology that doesn&#8217;t get the blood pumping, one that in some ways follows the path laid down by electricity or telephony. Like the smart-everything, these are remarkable things that became first mundane and then invisible.</p><p>If you can gloss over the horrible phraseology, the connected device is a useful thread to pull for making sense of assumptions about what good machines do. When an old appliance breaks, it&#8217;s replaced by a new model that insists on attaching itself to your wifi. The physical layer gets wider as more sensors, communication nodes, and platforms are folded into the network. 
But it also gets deeper as the connected devices become a little more lively.</p><p>Boosters <a href="https://www.neuco-group.com/the-future-impact-of-intelligent-machines/?utm_source=chatgpt.com">describe</a> these devices as manifestations of &#8216;artificial intelligence&#8217; because they identify changes in the environment and change state accordingly. Weighing a smart washing machine against a large language model seems a bit overwrought, but it does raise an important point about the nature of digital intelligence: it doesn&#8217;t care about the shape of its container.</p><p>ChatGPT may animate your computer or phone, but the real magic is happening in an Arizona data centre. Even without access to the internet, compression techniques <a href="https://astrobiology.com/2024/07/tricorder-tech-a-highly-capable-language-model-locally-on-your-phone.html">now let</a> a three billion parameter language model run on a phone&#8217;s battery without cooking it. Researchers are <a href="https://www.mdpi.com/2227-7390/13/11/1878?utm_source=chatgpt.com">doing the same</a> for the hardware layer, proving that models can live next to the signal rather than a continent away.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p>This points us towards a curious observation about AI in the popular imagination. Large models can be anywhere with enough processing power or with sufficient connectivity, but we only tend to picture them residing in a small set of physical platforms. Humanoid robots loom especially large in the public psyche, giving the impression that digital intelligence obeys the same rules as our own.</p><p>But AI doesn&#8217;t work like that. 
It will populate the world around us and turn appliances, devices, and computers into talking (and in some instances, walking) machines that interact with us from whatever vantage point they can cling to. </p><p>For the purposes of this post, I&#8217;m going to make a few assumptions about the future. These are (a) the models will broadly maintain their current rate of improvement for the foreseeable future; (b) the best models of any given moment will shrink to allow for local deployments; (c) different models will be able to communicate with each other; and (d) lots of physical platforms are capable of hosting the models in one form or another. </p><p>If each of these assumptions holds, there&#8217;s no reason to think we won&#8217;t have one AI model (or a handful of models) that live across many substrates within the next couple of years. After all, we already have Claude <a href="https://www.anthropic.com/research/project-vend-1">in a vending machine</a>, Grok <a href="https://www.tesla.com/support/articles/grok">inside the newest Teslas</a>, and LLM-powered assistants in <a href="https://www.cdomagazine.tech/aiml/genai-in-your-fridge-samsung-to-launch-home-appliances-with-llm-powered-bixby">fridges, ovens, and dishwashers</a>. </p><h3>Animism through the ages </h3><p>Animism is the conviction that spirit inhabits matter, one that shows up far earlier than the word itself. Palaeolithic hunters painted animals on rock walls and carefully arranged bear skulls in a way that archaeologists <a href="https://www.penn.museum/sites/expedition/the-cult-of-the-cave-bear/?">interpret</a> as negotiations with animal persons.
When Edward Burnett Tylor coined the term animism in his 1871 book <em>Primitive Culture</em>, he <a href="https://darwin-online.org.uk/converted/pdf/1871_Tylor_PrimitiveCulture_CUL-DAR.LIB.635.pdf">formalised</a> that observation by defining early religion as &#8216;the doctrine of souls and other spiritual beings in general&#8217; that could reside in the natural world. </p><p>By the Bronze Age, the impulse to see spirit in matter hardened into liturgy. In Mesopotamia a newly carved cult statue underwent the <em>m&#238;s-p&#238;</em> (&#8216;washing of the mouth&#8217;) procession. Craftsmen led the image to a riverbank where its lips were ritually cleansed, then &#8216;opened&#8217; with cedar oil. From that moment on, the wood and precious metal counted as the god&#8217;s living presence and were capable of eating offerings, signing treaties, and punishing neglect. </p><p>A similar performance took place in the Egyptian Old Kingdom. Priests put <em>peseshkaf</em> blades to the mouth and eyes of statues in the &#8216;opening of the mouth&#8217; ritual <a href="https://www.ucl.ac.uk/museums-static/digitalegypt/religion/wpr.html?">that enabled</a> a figure to enjoy food and speak in the afterlife. Eberhard Otto&#8217;s <em>Das &#228;gyptische Mund&#246;ffnungsritual</em> from 1960 identified 75 examples of the practice, an effort that emphasises the institutional heft underpinning these rites.  </p><p>Centuries later, objects did God&#8217;s work in the churches of the Byzantine Empire. Painted boards, splinters of bone, weapons, and other artefacts were thought to contain divine energy (<em>energeia</em>).
John of Damascus made the theology explicit when he <a href="https://christianhistoryinstitute.org/study/module/john-of-damascus">said</a>: &#8216;I do not worship matter; I worship the God of matter, who became matter for my sake.&#8217;</p><p>But probably the most famous example of animism comes from Japan&#8217;s Shinto, a religious tradition whose roots lie in early agrarian rites that personified the forces sustaining rice cultivation. At its centre are <em>kami</em>, the packets of vitality saturating both nature and human-made objects that invite reverence through craftsmanship or long use. </p><p>Everyday life in a Shinto frame assumes a world that listens back. A shrine gate marks a threshold where rock and tree possess their own intentions and household rituals treat the cooking fire or well as moral participants. That habitual attribution of inner life is what scholars describe as animism. </p><p>When the electric telegraph connected Europe to America in the nineteenth century, the Victorians seized the new medium as proof that voices could travel between planes of existence. S&#233;ances were framed as &#8216;circuits&#8217; and mediums styled themselves human telegraph stations. A New York weekly even titled itself <em>The Spiritual Telegraph</em>, <a href="https://iapsop.com/archive/materials/spiritual_telegraph/index.html?">reporting</a> on dispatches from the afterlife in the form of a newspaper. As Desmond G. Fitzgerald, the editor of the <em>Electrician</em>, <a href="https://www.cambridge.org/core/journals/british-journal-for-the-history-of-science/article/abs/telegraphy-is-an-occult-art-cromwell-fleetwood-varley-and-the-diffusion-of-electricity-to-the-other-world/95A66EC53BF82CF62F8C6E2F7E1F4DF7?utm_source=chatgpt.com">put it</a> in May 1862:</p><blockquote><p>&#8220;Telegraphy has been until lately an art occult even to many of the votaries of electrical science. 
Submarine telegraphy, initiated by a bold and tentative process &#8211; the laying of the Dover cable in the year 1850 &#8211; opened out a vast field of opportunity both to merit and competency, and to unscrupulous determination. For the purposes of the latter, the field was to be kept close [<em>sic</em>], and science, which can alone be secured by merit, more or less ignored.&#8221;</p></blockquote><p>The author Jeffrey Sconce notes that popular magazines styled the s&#233;ance room as a kind of domestic telegraph station. In <em>Haunted Media</em>, he describes the rise of spiritualism as a utopian response to the electronic powers presented by telegraphy and connects the emergence of the radio with an &#8216;atomized vision of the afterlife.&#8217; </p><p>The upshot is that animism has a knack for rerouting through the technologies of the day, whether that&#8217;s burial artefacts, swords, or telegraph wires. People may not worship microwave ovens in the future, but I wouldn&#8217;t rule out patterns of animistic living that stress interconnectedness, vibrancy, and agency. </p><h3>Intelligence in the pipes </h3><p>Animism is more habit than philosophy, a set of reflexes we use when faced with unexpected forms of mediation. Its long life reminds us that we&#8217;ve done this before, and that what feels new &#8212; doorbells and ovens and thermostats that listen &#8212; is the latest chapter in a much older story about how we relate to our surroundings. </p><p>The diffusion of AI into the world is a well-trodden cultural negotiation, one that should make us wary of the image of an embodied intelligence that only exists within humanoid robots or specialist AI hardware. That will surely happen, but these kinds of deployments will only represent the most visible form of physical manifestation. </p><p>Just as a smart thermostat slips into invisibility, large models may come to occupy our surroundings in ways that feel almost as unremarkable.
Animism has some utility here. It encourages us to notice intelligence in the places we aren&#8217;t used to looking, to imagine the world as a little more vibrant. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Inside the slop factory ]]></title><description><![CDATA[Cautious sloptimism about the future of entertainment]]></description><link>https://www.learningfromexamples.com/p/inside-the-slop-factory</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/inside-the-slop-factory</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 08 Jul 2025 10:25:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dwwV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dwwV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dwwV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 424w, 
https://substackcdn.com/image/fetch/$s_!dwwV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 848w, https://substackcdn.com/image/fetch/$s_!dwwV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 1272w, https://substackcdn.com/image/fetch/$s_!dwwV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dwwV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png" width="1456" height="809" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:809,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11448037,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/164407008?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!dwwV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 424w, https://substackcdn.com/image/fetch/$s_!dwwV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 848w, https://substackcdn.com/image/fetch/$s_!dwwV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 1272w, https://substackcdn.com/image/fetch/$s_!dwwV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6ab0cfc-6ed8-46b4-9663-7c89ed2d2f90_3276x1820.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">The Sciences and the Arts by Adriaen van Stalbent (1650)</figcaption></figure></div><p>I recently argued that <a href="https://www.learningfromexamples.com/p/taste-is-all-you-need">cultivating</a> a sense of taste will get you through the age of AI slop and then some. The era of creative automation will be caked in sludge, but there&#8217;s no reason it shouldn&#8217;t also bring with it more good stuff than ever before. If you can reliably sort the better from the bitter then you&#8217;re onto a winner. </p><p>I&#8217;m deeply uncertain about the specific ways that AI will reconfigure the world of mass culture, but &#8212; assuming the models maintain their current rate of improvement &#8212; any analysis has to begin with a supply side shock that produces new kinds of filtering mechanisms.  </p><p>One of these is deeper personalisation, where algorithmic feeds shape what you see based on your stated and revealed preferences. The other is to rage against the machine by seeking out human work, curated selections, or even high quality AI content discovered through effort and taste. </p><p>To make this case, I&#8217;ve cobbled together a simple representation of the entertainment ecosystem and the points at which generative models are likely to exert pressure. I want to give you a map of what&#8217;s changing right now (and what could change in the future) to make better sense of the coming flood.
</p><p>This post mainly deals with video content production because that&#8217;s where we spend a <a href="https://www.statista.com/statistics/611707/online-video-time-spent/">great deal</a> of our entertainment hours, though some of these ideas could in principle be applied to any other type of visual media. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LMPu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LMPu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 424w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 848w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 1272w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LMPu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png" width="1456" height="690" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:690,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:160760,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/164407008?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LMPu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 424w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 848w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 1272w, https://substackcdn.com/image/fetch/$s_!LMPu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79cbaac5-5428-4742-950f-170920bc6f92_1498x710.png 1456w" sizes="100vw"></picture></div></a></figure></div><p>Not just an opportunity to sharpen my rusty Google Slides skills, the above toy model helps us think about the production and consumption of visual culture. Its layers are organised around two elements: participants (the individuals or institutions operating in that space) and enablers (the underlying contingencies that shape what they can do). </p><p>Each section of this post explores how AI may reshape these areas by altering who creates, how it circulates, where the money flows, which rights get recognised, and how we decide what's worth our time. It&#8217;s not exhaustive, but I found this exercise useful for puzzling out what a sloppier future looks like in practice.   
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h2>Creation</h2><p>Visual content production is made up of a pipeline of specialists. Writers pen scripts, artists storyboard frames, designers construct sets and make costumes, actors read lines, directors of photography mock up scenes, editors assemble rough cuts, VFX teams add effects, and composers score sounds. </p><p>If you squint a little, these roles can be separated into four groups that each deal with a piece of the creative process: development (finding the right ideas), pre-production (preparing the ground to shoot), production (the actual making of the thing), and post-production (turning raw materials into the final product). </p><p>LLMs are <a href="https://hasgeek.com/fifthelephant/2024/sub/fine-tuning-llms-for-script-writing-a-journey-into-GWjAkAtakJUdgfhGkAqtdZ">already</a> being used for scriptwriting. Strange as it sounds, I know people in the industry lamenting that some producers are openly saying they &#8216;prefer&#8217; AI scripts. Likewise, tools like Runway or Pika can rustle up moving or animatic storyboards from textual inputs, and image generation tools can be used to mock up design references. </p><p>Once production is complete, VFX teams <a href="https://www.thewrap.com/ai-vfx-production-labor/?utm_source=chatgpt.com">perform</a> upscaling and de-aging using AI, with studios feeding plates to tools that render a face before a human compositor finesses the final frame. 
On the audio side, AI voices <a href="https://www.respeecher.com/blog/ai-voices-adr-voiceover-indie-films?utm_source=chatgpt.com">fill</a> dialogue replacement gaps or generate foreign-language dubs.</p><p>But AI&#8217;s role on the shoot itself is still narrow. There are models that flag capture errors and some experiments with face replacement tools, but these <a href="https://www.robertcmorton.com/best-ai-tools-for-filmmakers/?utm_source=chatgpt.com">systems</a> aren&#8217;t dramatically changing what it means to make a movie.  </p><p>As for what comes next, there are two possible futures: (a) the models get <em>marginally better</em> and continue to plug in across each of these stages, or (b) the models get <em>much better</em> and collapse the whole stack. </p><p>My money is on the latter. I don&#8217;t see any reason to expect the pace of progress to slow (just look at recent video models from <a href="https://veo-3.ai/?utm_campaign=veo-01&amp;utm_term=veo%203&amp;gclid=Cj0KCQjwsNnCBhDRARIsAEzia4CnYuPeyQtuou-Fk3BKeBYw9rKqyBnC905t8r1pmu68bi8CXtN42mEaAoLFEALw_wcB&amp;source=google_ads&amp;gad_source=1&amp;gad_campaignid=22691415519&amp;gbraid=0AAAABALpdHxxi_dlQM5nUPPrHDLYWcSpt">Google</a>, <a href="https://hailuoai.video/">HailuoAI</a>, and <a href="https://updates.midjourney.com/introducing-our-v1-video-model/">Midjourney</a>). In goes a script fragment and out comes a photorealistic video clip with camera motion, environmental dynamics, and synced sound design.</p><p>If that kind of output becomes reliable, then suddenly you don&#8217;t need such a big team to make a flick. That might put serious pressure on jobs in the short term, though over a slightly longer time horizon I wonder whether we might actually see more roles in the creative industries as barriers fall and big players reorientate themselves towards prestige projects.   
</p><p>The maximalist version of this dynamic leads to something like the rise of the bedroom studio where small creator groups operate like established production houses. Their creation speed is faster, their iteration cycles are tighter, and their overheads are a fraction of what they might otherwise be (with wages and compute accounting for the biggest expenditures). </p><p>Ok, but we&#8217;ve been hearing about the bedroom studio for a while now. When exactly is this going to happen? </p><p>I think we&#8217;re looking at five years until a team of three produces something broadly comparable with the output of a major studio. An AI-generated animated film will probably be the first to enjoy popular acclaim (though there&#8217;s no chance it will receive an award for its trouble any time soon). </p><p>But we&#8217;re not there yet. Google&#8217;s Veo 3 is stunning, but it can only generate about <a href="https://www.youtube.com/watch?v=WLDARkKs-T4">eight seconds</a> of content at any given moment. Perhaps trickier is the question of character and world consistency across shots. There is lots of progress <a href="https://www.theverge.com/news/640821/runway-gen-4-artificial-intelligence-video-generator-filmmaking?utm_source=chatgpt.com">being made</a>, but right now it&#8217;s hard to replicate scenes with uniformity over time.  </p><p>If video generation models remain roughly where they are now, then we can say goodbye to the bedroom studio. Hollywood will breathe a sigh of relief. After all, they get to use AI to slash costs while avoiding having to compete with huge numbers of new aspiring film shops. </p><p>The alternative isn&#8217;t so rosy for them. Should the models keep getting better &#8212; say, maintaining the current pace of improvement for three years &#8212; then the big boys are in for a shock. </p><h2>Distribution</h2><p>But let&#8217;s not get carried away here. The studios are giant machines that distribute as well as create.
Marketing budgets stretch into the millions, and it isn&#8217;t exactly cheap to show your film at the box office.  </p><p>The creation element is simple to sum up: more content is coming; we just don&#8217;t know how much more and how likely it is to be any good. The question that flows from this observation is how to wade through the slop until we make our way to the good stuff. </p><p>In the streaming era, distribution was already algorithmically driven (about three-quarters of what people watch on Netflix is <a href="https://www.businessinsider.com/netflixs-recommendation-engine-drives-75-of-viewership-2012-4">driven</a> by its much revered recommendation engine). TikTok&#8217;s feed can catapult an unknown creator to millions of views overnight or bury a video in obscurity based on a few signals.  </p><p>In one sense we&#8217;re in for more of the same. More YouTube channels that churn out a steady drip of AI brainrot. Generative music streams pumping out audio 24/7. Thousands of AI e-books flooding self-publishing platforms. These dynamics already put enormous strain on the systems that act as gatekeepers between creators and audiences. </p><p>But the amount of content that exists today is nothing compared to what&#8217;s coming. A supply-side shock will make it even harder for distributors to select content for viewers, which will in turn produce new kinds of filtering mechanisms: </p><ul><li><p><strong>Streaming services get ruthless:</strong> Every major platform will accept more titles as acquisition costs plummet, but their front pages can&#8217;t grow to keep pace with their expanding catalogues. Expect anything that doesn&#8217;t hit the right metrics to disappear from view after a short probation window. </p></li><li><p><strong>Creator platforms become serious:</strong> If you can make a good-looking 90-minute film with three people and $10K of compute, a Netflix deal that takes months and comes with a bunch of notes looks less attractive.
Platforms like YouTube, which will now get high-quality films to complement the slop, are the beneficiaries here (though many creators will still want the prestige that comes with a deal). </p></li><li><p><strong>Cinema becomes elite:</strong> It costs a lot to run a cinema, and screening cheap AI films doesn&#8217;t help proprietors break even. I see an &#8216;operafication&#8217; of cinema on the horizon, where a trip to the silver screen is for those who want to watch &#8216;human made&#8217; flicks that justify the ticket price. </p></li></ul><p>Supply balloons but the prime real estate (home pages, top ten rows, and theatre screens) doesn&#8217;t keep pace. Distribution therefore tilts toward (a) algorithmic triage for the masses or (b) human-filtered corridors for viewers who equate artisanal with meaningful.</p><p>No matter what happens, more content is coming. That content needs to be filtered, and the primary way that will happen is through increasingly sophisticated layers of personalisation. That might sound great on one level (more stuff that I like) but hellish on another (no one likes the same stuff as me).</p><p>The benefit of this dynamic is that it&#8217;s likely to produce a reaction in the form of human tastemakers to inject a much-needed dose of authenticity into proceedings. Newsletters, curated communities, forums, and group chats will flourish to help people figure out where they should spend their time. I expect critics, long deemed irrelevant, to become more popular than ever. </p><p>For creators, the options are to chase mass exposure through algorithmic optimisation or cultivate loyal niche audiences who actively resist the feed. This is often what <a href="https://www.atvenu.com/post/how-much-money-artists-make-in-streaming-vs-merchandise-sales?utm_source=chatgpt.com">happens today</a> with small musicians, who tend to make more from merch than they do from streaming royalties. 
</p><h2>Economy</h2><p>Historically, the way cash flows through the entertainment ecosystem has been relatively linear. Studios pay talent to produce content; content is distributed via cinema or television and sold to audiences as advertisers pay for eyeballs. Some of that revenue trickles back to studios as royalties or profit shares, which pays for new projects. </p><p>The Netflix era complicated these flows. Money now moves through subscription pools, and payouts are tied to <a href="https://www.latimes.com/entertainment-arts/business/story/2023-09-14/wga-writers-strike-sag-aftra-actors-strike-netflix-ratings-data-transparency?utm_source=chatgpt.com">opaque viewership metrics</a> rather than direct sales. The creator economy added new branches to the tree, with individual creators earning through ads, Patreon, merchandise, and a slew of other direct-to-fan channels.</p><p>Generative models push this logic further. Let&#8217;s begin with the big one: the cost of making stuff. If an indie film that once cost $5 million can be made with AI help for $500k, that changes the break-even geometry for its backers. More projects might get made for less, but also potentially earn less if the market gets saturated. </p><p>One way to make sense of this is through the &#8216;cost-to-quality tradeoff&#8217;, which deals with how much richness a creative artefact has relative to the value of the resources that went into its making. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oUNR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oUNR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 424w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 848w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 1272w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oUNR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png" width="1442" height="595" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:595,&quot;width&quot;:1442,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71315,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/164407008?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbce91a3b-8e4a-48ea-99e0-6b8c6af1833e_1442x676.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oUNR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 424w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 848w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 1272w, https://substackcdn.com/image/fetch/$s_!oUNR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271eb2b8-90ff-4766-a5f0-2b70858f5cfd_1442x595.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>On the above Thoughtful Graphic&#8482;, we have one axis for production cost and another for creative quality. In the bottom-left quadrant, there&#8217;s commodity content (e.g. clickbait videos). In the top-right, we&#8217;ve got prestige spectacles that match budget with discernment. </p><p>Bottom-right might be formulaic blockbusters. These are expensive but creatively safe, often relying on finding and reconstituting existing IP. Finally, the top-left is where I&#8217;m imagining indie creations: stuff made on a shoestring budget but with a recognisably creative texture. </p><ul><li><p><strong>Commodity content (low-cost, low-quality): </strong>This is what most people mean by slop; the segment becomes saturated as generative models collapse the cost curve in the quadrant where friction was already lowest. 
Expect it to overwhelm search, pull down standards, and inflate volume to a ludicrous scale. </p></li><li><p><strong>Formulaic blockbusters (high-cost, low-quality):</strong> From one type of sludge to another, this segment deals with what we might call box office bait. AI will lower costs without necessarily lowering quality, which will either free up budget for risky bets or allow studios to put out more derivative stuff. </p></li><li><p><strong>Indie artisanal (low-cost, high-quality):</strong> This is where I expect AI to have the highest relative impact, because it amplifies individual vision at near-zero cost. If things go well, the bedroom studios release more high-quality films than we know what to do with. </p></li><li><p><strong>Prestige spectacle (high-cost, high-quality): </strong>Today, these are expensive but somewhat risky flicks like <em>Dune</em>. Tomorrow, generative models bring a bit of risk deflation to the table. If your passion project can be produced at a fraction of the cost, then you can put more of your budget into pushing boundaries. And even if you have a &#8216;no AI&#8217; policy, these films will still get made because they represent authenticity. </p></li></ul><p>If this plays out, AI actually increases the size of the creative middle class. We&#8217;ll see more budding creators who can sustain themselves because they can produce content efficiently for a modest but loyal audience. This scenario is one of decentralisation, in which many creators each earn a decent living serving specific audience tastes, predicated on low costs and direct fan engagement. </p><p>Less enticing is a winner-take-all situation where algorithms amplify those who already have the cash to spend on promotion. Maybe you can drop $10 million marketing a $500,000 project and have it make financial sense. 
That changes the shape of the media economy because it encourages studios and streamers to put more weight behind cheap content with high upside and low risk. </p><p>But efficiency comes at a price. Traditional paid reach still drives global hits, but it&#8217;s losing ground relative to trust-based discovery (a dynamic I expect to become more pronounced over time should we see the emergence of new forms of curation). </p><p>The studios won&#8217;t vanish, but they may be forced into a kind of strategic bifurcation that mediates between slop at scale on one side and rarefied artisanal work on the other. This feels natural to me given that cheap content can now be made cheaper, and prestige content can be made with less risk. </p><p>Model providers and GPU makers will profit, but I&#8217;m optimistic that the structural advantage shifts away from creative incumbents. If production becomes cheaper and distribution more personal, then the centre of gravity in the media economy starts to tilt towards the best work. In the world of the bedroom studio, that work can come from anywhere. </p><h2>Rights</h2><p>Entertainment rights have always been complicated, but they&#8217;ve mostly relied on clear roles and processes. Writers write, actors act, and editors edit. Each of those actions gets tracked, attributed, and remunerated through extremely long-in-the-tooth frameworks. </p><p>Alas, when the pipeline concertinas, those structures start to look unstable. </p><p>We&#8217;ve already seen strikes organised around the use of AI in Hollywood, back in 2023 when models were positively primitive compared to today. Studios are experimenting with scanning background actors and generating digital extras, and voice actors worry about contracts that could allow AI clones of their voices to be used without proper compensation. 
As the SAG-AFTRA union <a href="https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights">put</a> it: </p><blockquote><p>This &#8216;groundbreaking&#8217; AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day&#8217;s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that&#8217;s a groundbreaking proposal, I suggest you think again. </p></blockquote><p>Since then, California has <a href="https://www.theverge.com/2024/9/17/24247583/california-governor-newsom-signs-ai-digital-replica-bills?utm_source=chatgpt.com">enacted</a> SAG-AFTRA&#8211;backed laws that force producers to secure performers&#8217; informed consent before creating a digital replica of a living or deceased actor&#8217;s likeness. And earlier this year, the group <a href="https://www.sagaftra.org/sag-aftra-and-replica-studios-introduce-groundbreaking-ai-voice-agreement-ces?utm_source=chatgpt.com">struck a deal</a> allowing actors to license digital replicas of their voices for use in ads, on the condition that actors get paid and retain control over how their likeness is used. </p><p>These are important developments, but they only deal with rights within the existing system. They don&#8217;t grapple with the brave new world, the place where jobs shuffle around, new hybrid roles emerge, and small creators work across the stack. A video might be made by one person using a dozen different models. Is that person the sole creator? What if they&#8217;ve built it using the outputs of AI models trained on other people&#8217;s work?</p><p>These questions represent some of the most hotly contested battlegrounds in the legal landscape. 
Getty is <a href="https://www.reuters.com/sustainability/boards-policy-regulation/gettys-landmark-uk-lawsuit-copyright-ai-set-begin-2025-06-09/">suing</a> Stability AI for scraping its stock image archive, and artists have <a href="https://www.techpolicy.press/ai-lawsuits-worth-watching-a-curated-guide/">challenged</a> AI model training on copyrighted material without consent. </p><p>In the courts, nobody seems <a href="https://nysba.org/copyright-law-in-the-age-of-ai-navigating-authorship-infringement-and-creative-rights/?srsltid=AfmBOornYjeE5OTcs-wXDB08NJwiDcCx6QCYEHRFfK0bVoGweY0PXKSo">entirely sure</a> if training a model on public data counts as fair use or a breach of copyright. At stake here is the question of whether &#8216;exposure&#8217; to a work constitutes a kind of replication, a question the law isn&#8217;t yet equipped to answer. </p><p>But if I ask a model steeped in copyrighted scripts for a screenplay, the infringement question turns on the output itself. Does it lift dialogue or story beats in a &#8216;substantially similar&#8217; way to existing material? </p><p>This is clearer territory in that the systems to distinguish whether an output infringes on copyright are well established. That being said, it&#8217;s <a href="https://www.mayerbrown.com/en/insights/publications/2025/05/united-states-copyright-office-weighs-in-on-fair-use-defense-for-generative-ai-training?utm_source=chatgpt.com">unclear</a> whether training on copyrighted material increases the likelihood of infringement at all. </p><p>Where the courts land will reshape the creative economy. If the legal risks are too high, the biggest model providers will hesitate to release generative tools at all. Platforms might deprioritise AI flicks to avoid reputational damage, and consumers might shun content deemed to be made unethically. </p><p>One path is that the headline lawsuits break the tech companies&#8217; way. 
Training counts as fair use, outputs are judged only on similarity, and the only requirement is opt-out registries or provenance tags. In that scenario the models keep getting better, datasets stay fat, and the legal skirmishes become manageable for developers. This looks likely given <a href="https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/">recent</a> judgements. </p><p>A second path sees judges decide that wholesale scraping crosses the line, halting the flywheel and forcing a new licensing regime into existence. It could be something like blanket licences for text and image corpora, mandatory opt-ins, or maybe even a new &#8216;training right&#8217; to sit alongside copyright. </p><p>If I had to guess, I&#8217;d say lawsuits won&#8217;t prevent training entirely. Courts may raise hurdles, but the hunger for national competitiveness all but guarantees those hurdles won&#8217;t remain in place indefinitely. If aggressive rulings threaten domestic AI firms, legislators might decide to rewrite the rules by redefining fair use or carving out exceptions. </p><h2>Preference</h2><p>Sooner than you think, we&#8217;ll have high-quality interactive stories based on what you say you like or what the platform thinks you like. Endless mixes of references and styles designed specifically for the viewer. Narratives that shift in real time to optimise for some metric that the streamer decided was most valuable. </p><p>This isn&#8217;t exactly a new idea, but I like the &#8216;audience of one&#8217; <a href="https://www.generalist.com/p/audience-of-one?utm_source=chatgpt.com">framing</a> that emphasises the unique nature of experiential storytelling. It&#8217;s the logical extreme of the trend described in the distribution section, where stories unfold based on our stated and revealed preferences. 
</p><p>The idea of interactive stories tailored to viewers might sound out there, but firms like Disney are <a href="https://www.thedailyupside.com/technology/artificial-intelligence/disney-generative-ai-patent-taps-into-users-memories/">already heading</a> in that direction. It&#8217;s really just the next step in the move from blockbuster homogeneity to algorithmic curation. Today&#8217;s streaming services shape what we see based on what we&#8217;ve seen, so it&#8217;s no giant leap to apply that logic to creation itself. </p><p>The setting is anything you can imagine (or is created to scratch an itch you never knew you had). Characters react to your choices and the plot lines adjust to themes you care about. Many will prefer to spend time there rather than in the real world. </p><p>Of course, there are a few things wrong with this picture. </p><p>Even ignoring technical constraints, part of the joy of entertainment is shared experience. If we all disappear into bubbles of customised content, there&#8217;ll be no-one left to talk to about your favourite show. You might tell them about your AI adventure, but that sounds about as fun as filling someone in on your dreams. </p><p>The more AI slop we see, the more authenticity we crave. This is a human reaction. When confronted with an abundance of cheap goods, some people turn to artisanal alternatives (like the revival of vinyl records in the age of music streaming).</p><p>This idea reminds me of William Morris, the Victorian designer I <a href="https://www.learningfromexamples.com/p/the-slop-must-flow">wrote about</a> earlier this year. Morris rebelled against the industrial mass production of his time by championing handmade craftsmanship and designs that had the imprint of human artistry. </p><p>In the AI era, &#8216;handmade&#8217; might mean works that emphasise human presence. Perhaps theatre performances, live events, or analogue arts aren&#8217;t going anywhere. 
This is why I think the cinema, which will embrace human-made authenticity, will eventually become a culturally elite institution like ballet or the opera. </p><p>As for the consumer, the curation of taste will <a href="https://www.learningfromexamples.com/p/taste-is-all-you-need">become</a> an identity statement. When choices are infinite, choosing to follow a specific human creator or a certain aesthetic movement becomes a way to say something about who you are. </p><p>But for any of this to happen, we need enough people to buy what model makers are selling. We&#8217;re still waiting for AI&#8217;s breakout hit that confers legitimacy on the whole category, attracting more talent and investment into AI content, which in turn yields better works. </p><p>History suggests that the doom and gloom may be overblown. Observers who, as early as 1855, were <a href="https://daily.jstor.org/did-photography-really-kill-portrait-painting/#:~:text=As%20early%20as%201855%2C%20one,outcome%20each%20day%2C%E2%80%9D%20he%20added">lamenting</a> that photography &#8216;would be the death of art&#8217; have been proved wrong. Painters didn&#8217;t disappear with photography, but they did reinvent painting as movements like Cubism turned to what cameras couldn&#8217;t capture. In the same vein, human creators in entertainment will likely gravitate to what AI can&#8217;t do. Maybe that&#8217;s providing a deeper sense of authenticity, or live presence, or simply the unique weirdness of individual imagination.</p><p>As a result, the future of entertainment will split along two lines. Some will chase the sugar rush of infinite customisation and give themselves to content that knows them better than they know themselves. Others will react by seeking out the human to build shared experiences and find a sense of meaning in their creative diet. 
</p><h2>Scrolling alone?</h2><p>What happens next will depend on a handful of breakthroughs, a few legal judgments, and a million little decisions made by creators and audiences. Despite that, I have a (low-confidence) view about the basic shape of things to come. </p><p>The models probably aren&#8217;t going to remake entertainment by replacing Hollywood. But they will flood the supply side with more content than we can possibly process and reshape demand according to increasingly important selection mechanisms. </p><p>Some of those filters will be computational, but others will be social. That&#8217;s the split I keep coming back to, the one between deeper personalisation and a reaction against it. We call it the age of slop, but we could just as well call it the age of extremes. </p><p>If you cultivate a sense of taste and follow good curators, the coming years could be extraordinarily rich. But if you let the feed satisfy your preferences, there will be no escape from the average, the meaningless, and the unintelligible. </p>]]></content:encoded></item><item><title><![CDATA[Against cultural alignment]]></title><description><![CDATA[AI with local values sounds great. 
So what's the problem?]]></description><link>https://www.learningfromexamples.com/p/against-cultural-alignment</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/against-cultural-alignment</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 01 Jul 2025 10:25:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nncj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nncj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nncj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nncj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nncj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nncj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nncj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg" width="1456" height="769" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/de82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:769,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nncj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nncj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nncj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nncj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde82c207-e67b-442d-a86f-a732ebf3e73e_5295x2795.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Fresco of constellations in Palazzo Farnese by Giovanni de' Vecchi 1574</figcaption></figure></div><p>Whenever someone mentions AI alignment you can bet that someone else isn&#8217;t too far away from asking &#8216;<em>alignment to</em> <em>what</em>?&#8217; with a certain degree of satisfaction. I&#8217;m thinking about coining a new law of internet discourse to describe this phenomenon. Something like Godwin&#8217;s law but for posts about AI. </p><p>For those scratching their heads, it&#8217;s funny and frustrating in equal measure because it muddles two types of alignment. 
There are lots of different ways to describe these groups, but for our purposes we can think of them as <strong>technical alignment</strong> and <strong>value alignment</strong>. </p><p>The former deals with &#8216;getting AI to do what you want&#8217;. This is the problem that labs try to solve with gigantic sticking plasters like reinforcement learning from human feedback (RLHF), where the model is steered to interpret instructions, avoid jailbreaks, and generally steer clear of the <a href="https://www.reddit.com/r/google/comments/1gt6uvq/google_gemini_freaks_out_after_the_user_keeps/">spectacle</a> of crashing out. </p><p>Our second species of alignment asks whether an AI&#8217;s actions are ethically appropriate, and wants to know whose values they reflect. We can think about value alignment as the fuzzy process of ensuring the system conforms to some externally defined moral standard. </p><p>The &#8216;<em>alignment to</em> <em>what</em>?&#8217; bit assumes few have thought about the issue, but there&#8217;s a deep body of research on value pluralism and moral alignment stretching back before ChatGPT was a twinkle in Sam Altman&#8217;s eye. Not just from interested third parties, but <a href="https://link.springer.com/article/10.1007/s11023-020-09539-2">from the people</a> actually building the models. </p><p>As for a preferred approach to value alignment, everyone has their own idea about what works best. The fashionable solution is sometimes called <strong>cultural alignment</strong>. It <a href="https://www.adalovelaceinstitute.org/blog/cultural-misalignment-llms/">emphasises</a> shunting the question away from developers and towards groups of people who use the models. </p><p>This post argues that the proposal is well-meaning but troublesome. It cautions against cultural alignment and advocates for alternatives that maximise personal choice and minimise pressure to conform to local norms. 
</p><h2>What&#8217;s wrong with cultural alignment?</h2><p>There&#8217;s a lot of work from the <a href="https://openai.com/index/democratic-inputs-to-ai/">labs</a>, <a href="https://hai.stanford.edu/news/the-digitalist-papers-a-vision-for-ai-and-democracy">academia</a> and <a href="https://www.cip.org/research/ai-roadmap">elsewhere</a> that wrestles with questions about how to elicit and codify values from different publics. There&#8217;s too much to deal with for this post, so I&#8217;m going to instead concentrate my efforts on a slightly higher level of abstraction. </p><p>I like to think about value alignment using a simple three-part model, which you can think of as a continuum from decentralised to concentrated:</p><ul><li><p><strong>Individual alignment:</strong> The AI adapts to the user&#8217;s own values, preferences, and moral intuitions. This model maximises agency and adaptability but risks echo chambers and moral inconsistency. </p></li><li><p><strong>Cultural alignment:</strong> AI aligns to the norms of a community, nation, or cultural group. Here, we get contextual sensitivity and local legitimacy but risk reifying power and calcifying tradition.</p></li><li><p><strong>Universal alignment:</strong> AI reflects abstract principles taken to apply to all humans everywhere. 
It aspires to impartiality and rights-based stability, but it can&#8217;t escape the problem of who defines those universals.</p></li></ul><p>None of these solutions are perfect, but recently the middle layer of cultural alignment has been having something of a <a href="https://www.adalovelaceinstitute.org/blog/cultural-misalignment-llms/#:~:text=Research%20seems%20to%20suggest%20that,WEIRD%29">moment</a> in the sun. In practice, this is the idea behind <a href="https://openai.com/index/democratic-inputs-to-ai/">work</a> to explore the use of democratic processes for deciding what rules AI systems should follow.</p><p>The idea starts with the observation that the majority of powerful models are American, but they are used by millions of people outside the USA. That American character stems in part from specific decisions made by developers during the model-making process, but also from the fact that the likes of GPT 4.5 and Claude 4 are trained on largely English-language data that capture a Western view of the world. </p><p>Developers bake in basic protections against violence, hate speech, and the active promotion of discrimination based on <a href="https://www.anthropic.com/news/claudes-constitution">common</a> ethical principles. But they also go further by trying to encode more substantive moral visions through <a href="https://www.bbc.com/news/technology-68412620">product decisions</a> or the guidelines given to human raters. </p><p>Research seems to back up the idea that today&#8217;s LLMs <a href="https://arxiv.org/abs/2504.08863">skew</a> toward U.S. and European perspectives while diverging from those in, say, the Middle East or Asia. Onlookers worry this is troublesome because such models <a href="https://news.cornell.edu/stories/2024/09/reducing-cultural-bias-ai-one-sentence#:~:text=%E2%80%9CWe%20don%E2%80%99t%20want%20these%20models,%E2%80%9D">risk</a> promoting &#8216;just one cultural perspective&#8217; instead of reflecting local values. 
</p><p>From this vantage point, rebalancing values to reflect local norms seems like a good idea. Values <a href="https://www.worldvaluessurvey.org/WVSContents.jsp">certainly differ</a> around the world, so doesn&#8217;t it make sense to align models in a way that reflects this reality? </p><p>But there are some problems with this picture. </p><p>Cultural alignment assumes that cultures have coherent, stable value systems that can meaningfully guide model behaviour. But as we all know, culture is a tricky thing to put your finger on. It&#8217;s an aggregation of viewpoints that are often at odds, of people who see the world in different ways and behave accordingly. Because every culture has its nonconformists, aligning a model in this way produces systems that exclude those who hold heterodox views. </p><p>Proponents counter that more <a href="https://arxiv.org/abs/2406.07814">sophisticated</a> schemes &#8212; meta-norm frameworks, collective constitutional fine-tuning, weighted deliberative panels &#8212; can surface a richer spread of voices than a blunt statistical average. Yet even these designs must freeze a snapshot of contested norms into rules for the model to follow, so the risk of silencing outliers never fully disappears.</p><p>But hold on, you might say: <em>&#8216;Even if imprecise, a programme of cultural alignment is still better than accepting American values. It might not be perfect, but it&#8217;s a step in the right direction.&#8217;</em></p><p>There is some truth to this in principle, but we have to weigh that payoff against the challenges that flow from supporting local orthodoxies and the new problems that doing so brings with it. </p><p>Plenty of people are already alienated from their dominant local values, whether due to age, gender, class, religion, politics or something else. 
Even if you try to sample widely to capture these edge cases, you still end up creating a system with a set of local beliefs that represents some idealised version of a given culture, one that rarely exists in reality. </p><p>In the rest of this piece, I take stock of five problems for cultural alignment. I argue that (a) the paradigm is unsuited to acting as the primary mechanism through which value alignment takes place, and (b) that any value alignment programme is better served by prioritising steerable systems that exist within a permissive but finite moral universe. </p><h3>Exclusion </h3><p>When we talk about &#8216;local cultures&#8217;, we&#8217;re using a neat shorthand that takes millions of messy, contradictory, and idiosyncratic lives and squashes them into a set of labels we can make sense of. Statements like &#8216;Spanish people value family&#8217; can be helpful heuristics, but they are not a blueprint for how any one person actually thinks or behaves. </p><p>Any programme of cultural alignment runs headfirst into this reality when it tries to measure group-level values accurately and in a usable form. Popular cultural metrics (e.g. <a href="https://geerthofstede.com/culture-geert-hofstede-gert-jan-hofstede/6d-model-of-national-culture/">Hofstede&#8217;s dimensions</a> or <a href="https://www.worldvaluessurvey.org/wvs.jsp">World Values Survey</a> scores) often <a href="https://www.researchgate.net/publication/311295617_Beyond_Hofstede_Challenging_the_ten_commandments_of_cross-cultural_research">oversimplify</a> cultural expression by reducing it to data points. </p><p>The problem is that even the best tools for capturing culture weren&#8217;t designed for alignment. A language model has to answer the question in front of it, but when it&#8217;s drawing on averages you get a system that&#8217;s allergic to nuance (read: real people). 
In practice, this means people who don&#8217;t look like the average person get written out. </p><p>Cultural alignment is a way of sanding down the weird, marginal, and dissident under well-meaning but flawed attempts to localise values. If your model takes cultural alignment as its organising principle, it&#8217;s possible that the people most at risk of being ignored &#8212; religious minorities, political dissenters, women in highly conservative societies &#8212; are those who slip through the cracks. </p><p>To get ahead of the problem, we might try to re-weight the training data or the reward model so under-represented voices get extra influence. That softens the edge cases, but every extra point of weight you give to one subgroup must come from somewhere else. Tilt the dials far enough and the median user no longer sees themselves; keep the dials in place and the nonconformists stay invisible. </p><h3>Paternalism </h3><p>Cultural alignment is both decentralising (some authority leaves the lab) and centralising (one sanctioned canon flows back to everyone). Once you define cultural norms and encode them into a model, you&#8217;re telling millions of people &#8216;<em>this is what people like you believe</em>.&#8217; If your approach doesn&#8217;t include plurality as one of its essential tenets, you&#8217;re stuck with a model that behaves according to some cluster of beliefs that many don&#8217;t agree with. </p><p>The problem here is that we have some third party deciding on behalf of the culture it&#8217;s seeking to represent. If it&#8217;s the labs, then we&#8217;ve outsourced moral representation to a handful of Californian companies. If it&#8217;s the state, we&#8217;ve handed governments the keys to mainline ideology into infrastructure. 
Either way, the moral franchise is exercised by a tiny property-owning electorate while everyone else is cast as a subject.</p><p>Of course, US labs aren&#8217;t going to train a whole model from scratch for every culture around the world. If they want to embark on a programme of cultural alignment, they&#8217;re likely to use a technique like Anthropic&#8217;s &#8216;<a href="https://www.anthropic.com/research/collective-constitutional-ai-aligning-a-language-model-with-public-input">collective constitutional AI</a>&#8217; method. But as the write-up itself shows, there are several instances in which the project&#8217;s respondents disagree. Those in the minority lose out and see their views take a back seat. </p><p>One common response is <em>&#8216;just spin up a separate instance for every major worldview and let people pick.&#8217;</em> But a problem with, say, LiberalGPT or ConservativeGPT, is that their provision would still be dependent on some third party. And even if we get to pick from a menu, we are talking about rough worldviews that don&#8217;t necessarily correspond to personal values (I don&#8217;t think that all liberals or all conservatives have precisely the same beliefs). Not to mention that this approach basically gives us echo chambers without the benefit of personal liberty. </p><h3>Reinforcement </h3><p>So far, we&#8217;ve talked about how cultural alignment can marginalise people, misrepresent values, and enforce consensus. But there&#8217;s a deeper structural risk worth dwelling on. When you embed cultural norms into a model and then deploy that model at scale, you are <a href="https://journals.sagepub.com/doi/10.1177/29768640251323147?int.sj-full-text.similar-articles.6=&amp;utm_source=chatgpt.com">actively shaping</a> the broader cultural context in which the model exists. 
</p><p>To be fair, this isn&#8217;t a <a href="https://academic.oup.com/pnasnexus/article/3/9/pgae346/7756548?utm_source=chatgpt.com&amp;login=false">problem</a> unique to cultural alignment. Any value alignment approach that tries to steer behaviour will inevitably mould the culture it&#8217;s dropped into. Personalisation mitigates the effect because it echoes each user rather than a single orthodoxy, but even there, the system is still reinforcing certain dispositions over time.</p><p>Like individual alignment, cultural alignment slips under the radar; but where individual alignment is directionally agnostic at scale, cultural alignment guides users down a single path. </p><p>Whatever the model produces already looks familiar, so users accept it without noticing the nudge. Dissenting views receive less airtime, novel ideas sound eccentric, and taboo-breaking arguments never get to surface. Over time the model helps pin culture in place by delegitimising anything outside the frame. This makes cultural alignment a risky middle ground: almost as persuasive as personalisation, but without the scrutiny that comes with universal alignment.</p><h3>Stasis </h3><p>One tricky problem with the cultural alignment project is that it claims to reflect what a society already believes, but in doing so risks arresting the processes by which beliefs change. Unlike universal alignment, which seeks to drive us towards certain fixed ideas, cultural alignment looks at how we behaved in the past and updates the models accordingly. </p><p>The rub is that change lives at the margin. You don&#8217;t have to believe society is getting better to believe that the ability to shift your stance is worth protecting. Cultural alignment threatens that by mistaking the average for the ideal, and the present for the permanent. </p><p>Take same-sex marriage. In 1950, the dominant view in most Western countries was that it was wrong. 
A culturally aligned model, trained on that consensus, would have affirmed that position. You can patch the model, but it takes time to figure out that something has changed and push out an update. </p><p>Someone has to decide when opinion has moved enough to gather new data or commission fresh surveys. Then they need to re-deploy, audit for regressions, and push the update out across all downstream products. That will happen with all the speed of government bureaucracy, which means that the updates could trail real-world change by years (especially when the new view is still contested). </p><h3>Relativism</h3><p>There are basically two ways of thinking about the diversity of human values: <strong>value pluralism</strong> and <strong>value relativism. </strong></p><p>Value pluralism holds that there are multiple, sometimes incompatible, goods that people can reasonably pursue (e.g. freedom, equality, or security) and that these values can&#8217;t always be reduced to a single master principle. It suggests that conflict between genuine moral values is tragic but real, and that choosing between them sometimes involves real loss. </p><p>On the other hand, value relativism claims that there is no objective way to evaluate values and that right and wrong are just whatever a given culture says they are. In its strong form, relativism rejects the possibility of cross-cultural moral critique. If a society condones slavery or subjugates women, that&#8217;s just their way of doing things.</p><p>The danger is that cultural alignment often confuses these two. It starts from a healthy respect for pluralism but slides into a kind of operational relativism, where any local norm becomes automatically valid simply because it&#8217;s local. </p><p>I describe myself as a pluralist rather than a relativist: I accept that people can reasonably pursue different goods, but I hold some values to be incompatible with basic moral responsibility. 
Without certain universal moral values, a maximalist programme of cultural alignment may endorse practices that many would see as troublesome. Not all the time, of course, but often enough to matter. Especially in places where dissent is already fragile and moral change depends on the courage of a few to challenge the many.</p><h2>What type of AI do we want?</h2><p>Cultural alignment is neither fine-grained enough to honour individual diversity, nor principled enough to serve as a moral foundation. Despite its best intentions, it treats cultural averages as moral ideals, sidelining anyone who deviates from the script. A better bet is to accept that people know themselves best <em>and</em> that some things are wrong no matter where you are. </p><p>That&#8217;s why my preferred approach looks like (a) a universal floor that guards against clear manifestations of bad behaviour, paired with (b) deep personalisation that gives everyone a model that acts in accordance with their values. You could layer culture on top, but only if it&#8217;s possible for individuals to override it in service of their own preferences. </p><p>Within these boundaries, we embrace the belief that different people can value different things in a way that is valid but uncomfortable. These tensions can&#8217;t always be neatly resolved, so while we ought to respect the clash we should also protect people&#8217;s ability to navigate it on their own terms.</p><p>I&#8217;m not saying personalisation is a silver bullet. A system that&#8217;s too eager to please may give us the moral world we already want, rather than the one we might strive for. Personalisation without restriction also risks people infringing on the affairs of others. And if my model is aligned to my values, and yours to yours, then what happens when we must coordinate? 
</p><p>This is partly why I&#8217;d prefer an approach that starts with a combination of universal values and stated preferences about the kind of person one wants to be. We give the model a principled foundation for its behaviour, rooted in our own moral identity to stop us from indulging our first order preferences (I want a cigarette) over our second order preferences (man, I wish I could stop smoking). Over time, small changes based on revealed preferences could refine this picture &#8212; but they should generally be subordinate to the user&#8217;s declared commitments.</p><p>A settlement on these terms grants that each of us is trying to live a life, and recognises that this effort is personal and plural. It&#8217;s not without pitfalls, but it builds from the right premise: that human beings are moral agents who deserve the right to choose. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI art is better off bad]]></title><description><![CDATA[Reclaiming the beautiful error]]></description><link>https://www.learningfromexamples.com/p/ai-art-is-better-off-bad</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/ai-art-is-better-off-bad</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 24 Jun 2025 10:25:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PmWl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PmWl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 424w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 848w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 1272w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PmWl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png" width="735" height="599" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:599,&quot;width&quot;:735,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:678280,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/163646599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb11581dd-20fd-4529-8433-e90f3021227f_736x608.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PmWl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 424w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 848w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 1272w, https://substackcdn.com/image/fetch/$s_!PmWl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47f2dddd-6e0e-472e-b925-dd6bede11625_735x599.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Vanit&#233;</em> from 1946 by Pablo Picasso.</figcaption></figure></div><p>It&#8217;s 1894 in the city of La Coru&#241;a on Spain&#8217;s northern coast and Jos&#233; Ruiz y Blasco is painting a pigeon. He steps away from his easel for a moment, but when he comes back his thirteen-year-old son has finished the job. </p><p>The boy is good. Very good. So good that his father stops dead in his tracks. Ruiz had been painting birds for years. He was a trained artist, a respected professor at the city&#8217;s School of Fine Arts. 
He knew technique when he saw it, and resolved to give up painting once he believed his son had surpassed him.</p><p>The boy is Pablo Picasso, the great Spanish artist who liked to remind us that &#8216;art is a lie that makes us realise truth.&#8217; </p><p>The anecdote isn&#8217;t completely true &#8212; there are later paintings attributed to Ruiz &#8212; but the episode still gets remembered as a particularly resonant origin story in the art history canon. </p><p>Picasso built his reputation on rejecting naturalistic representation. He pushed Cubism into the cultural imagination because he knew that literal depictions of reality were a fool&#8217;s game. Instead, the Spaniard spent the best bits of his career celebrating how futile it is to show things as they appear to be.</p><p>And yet the story we tell about his youth is one of technical mastery. It&#8217;s about a bird rendered so perfectly that it convinced a professional painter to put down his brushes for good. Maybe the myth is necessary. Perhaps we need to believe Picasso could portray the perfect pigeon before we accept that perfect pigeons aren&#8217;t worth the paint.</p><p>Then again, the young Picasso&#8217;s work wasn&#8217;t <em>actually</em> flawless. It couldn&#8217;t be, because reality is infinitely complex and infinitely temporal. Any attempt to represent it is by definition an act of reduction. All paintings, no matter how detailed, exist in the margin between the thing and someone&#8217;s attempt to show it to us. </p><p>In this sense, art is error.</p><p>For those of us interested in the AI project, it&#8217;s an idea that explains why some generative art looks kitsch or banal. When Midjourney produces images indistinguishable from a National Geographic photography exhibition, we get technical proficiency that rings hollow. </p><p>Yes, as models become more sophisticated, they get better at representing reality. 
But that doesn&#8217;t get anyone&#8217;s blood pumping because the gap between what is and what we see can never be fully closed. It&#8217;s for this reason that the wise artist embraces the space between, rather than pretending it doesn&#8217;t exist.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h3>Beautiful errors </h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8cem!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8cem!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 424w, https://substackcdn.com/image/fetch/$s_!8cem!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 848w, https://substackcdn.com/image/fetch/$s_!8cem!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 1272w, https://substackcdn.com/image/fetch/$s_!8cem!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8cem!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png" width="1564" height="999" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:999,&quot;width&quot;:1564,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3581368,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/163646599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F93cb526a-d2aa-4f72-bcb2-0d6f668b172a_2198x1314.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8cem!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 424w, https://substackcdn.com/image/fetch/$s_!8cem!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 848w, https://substackcdn.com/image/fetch/$s_!8cem!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8cem!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8495c6c-6c74-4a08-ae29-e16a0d451f01_1564x999.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The School of Athens by Raphael from between 1509 and 1511. </figcaption></figure></div><p>A few years ago I went to the Vatican. It&#8217;s a good day out, so long as you have nothing against metal detectors or being herded from one room to the next like cattle. 
Michelangelo&#8217;s ceiling of the Sistine Chapel is especially wonderful, mainly because it&#8217;s big enough (and far away enough) to be seen from deep within the crowds. </p><p>If you wriggle past the tour groups you might steal a glimpse of Raphael&#8217;s School of Athens, that monument to Renaissance idealism where each figure exists in exquisite harmony. Every line of perspective is satisfyingly calibrated and every face an expression of classical beauty.</p><p>The School of Athens depicts neatly proportioned bodies whose classical beauty is itself a kind of fiction. We still look at it because Raphael&#8217;s &#8216;perfection&#8217; is a meaningful departure from reality, one that shows us a vision of human potential and intellectual harmony. </p><p>Alas, even error can become stale. What feels revolutionary in one generation becomes formulaic in the next. By the 19th century, Raphael&#8217;s particular way of getting things wrong had been copied and systematised into predictable beauty. </p><p>What began as revolutionary techniques for representing reality became formulas for perfecting it. High art grew technically proficient but emotionally flat. Students copied masters who copied other masters, eventually creating a <a href="https://www.learningfromexamples.com/p/the-fly-and-the-filter">hall of mirrors</a> that reflected the same forms. </p><p>Art needed new kinds of meaningful mistakes. </p><p>Every creative movement that mattered was a rebellion against the world that came before it. The Impressionists abandoned linear perspective for fleeting light. The Expressionists distorted faces to show emotion. The Dadaists threw coherence out the window by embracing absurdity as their organising principle.</p><p>Each of these groups recognised that art mediates between intention and execution, between what we can see and what we can depict. Close that gap too fully and you have something closer to documentation than art. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c2SO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c2SO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 424w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 848w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c2SO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg" width="1024" height="647" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:647,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!c2SO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 424w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 848w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!c2SO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F936c7d5d-eed8-4ccd-b874-abb6cf7b0493_1024x647.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Sun by Edvard Munch from 1910-11</figcaption></figure></div><p>Every representation departs from reality in some way, but there are two very different approaches to dealing with that withdrawal (discounting attempts at accurate depiction). You can try to fix reality's messiness by smoothing away its imperfections in pursuit of beauty. Or you can amplify the world&#8217;s strangeness, pushing departures further until they reveal something that literal representation cannot.</p><p>Classical art tries to hide artifice by making its idealisations feel natural and its corrections inevitable. Revolutionary art celebrates deficiency, encouraging the viewer to reckon with the distance between reality and representation.</p><p>Picasso did the latter. In 1907, he <a href="https://www.moma.org/collection/works/79766">produced</a> Les Demoiselles d'Avignon, a painting that critics initially dismissed as the work of a madman. 
Faces twisted into geometric fragments. Bodies viewed from multiple angles simultaneously. Colours that clash and lines that unsettle. </p><p>What Picasso saw &#8212; and what would define modern art for the next century &#8212; was that error could be the point. The mistakes that classical artists spent lifetimes learning to avoid could be reimagined as new ways of seeing. </p><p>The Surrealists had figured this out too, though they approached it from the opposite direction. Salvador Dal&#237;&#8217;s paranoiac-critical method <a href="https://mma.pages.tufts.edu/fah188/clifford/Subsections/Paranoid%20Critical/paranoidcriticalmethod.html">involved</a> deliberately inducing a state of delusional perception, staring at random objects until he tapped into the &#8216;symbolic language&#8217; of the subconscious mind. </p><p>Ren&#233; Magritte&#8217;s The Treachery of Images, which famously shows a pipe with the caption &#8216;this is not a pipe&#8217;, forced viewers to <a href="https://www.fusionmagazine.org/why-this-is-not-a-pipe/">confront</a> the fiction of representation. The idea is that these words make us conscious of the membrane between rendition and reality.</p><p>But it was the Cubists who gave us the most systematic approach. The likes of Georges Braque and Juan Gris formalised a new visual grammar based on fragmentation. Unlike previous movements that rebelled against specific techniques, Cubism rejected the fundamental premise of Western art since the Renaissance: that painting should create the illusion of looking through a window at reality.</p><p>Cubism was about information density. By deliberately breaking perspective, its painters could pack more knowledge into a single image than a camera could capture. They thought that art should convey our accumulated knowledge of something, not just a single encounter with it. 
This is partly what Picasso was getting at when he <a href="https://www.theguardian.com/culture/2000/sep/04/artsfeatures2">said</a> &#8216;I paint objects as I think them, not as I see them&#8217;. </p><h3>The golden age of broken AI </h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BVEA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BVEA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 424w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 848w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 1272w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BVEA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png" width="703" height="623.5114107883818" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1710,&quot;width&quot;:1928,&quot;resizeWidth&quot;:703,&quot;bytes&quot;:4349566,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/163646599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4063437f-e413-4b39-b3a7-63799863cba7_1928x1924.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BVEA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 424w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 848w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 1272w, https://substackcdn.com/image/fetch/$s_!BVEA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0db82e8-c484-4169-81fe-c95dac7161eb_1928x1710.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A canvas-printed GAN piece I bought in 2019</figcaption></figure></div><p>I bought this painting in 2019. It looks like a classical allegory, but with a flatness that reminds us we&#8217;re looking at the work of a machine. The faces are clearly human, but impossibly so. </p><p>It was produced by a generative adversarial network, a system in which two neural networks are locked in competition. One network (the generator) tries to create images convincing enough to fool its opponent, while the other (the discriminator) evaluates and scores the generator's output, pushing both to improve through competition. You can <a href="https://dl.acm.org/doi/abs/10.1145/3351095.3373156">think of them</a> as artist and critic. 
</p><p>GANs can be used for generating synthetic training data and detecting deepfakes, but they're known by most people as the class of models that brought AI art into the public imagination (there was of course <a href="https://creativitywith.ai/googledeepdream/">Deep Dream</a>, but it was a little more confined to the extremely online amongst us). </p><p>Early generative adversarial networks couldn't paint a convincing human face, but they could create portraits that existed in the uncanny valley between recognition and abstraction. </p><p>A GAN that learned to associate &#8216;face&#8217; with certain patterns would dutifully reproduce them, even as the result looked like an acid trip. The machines were trying their best to paint like humans and failing, but the results were often better than they had any right to be. </p><p>We crossed a threshold somewhere between the flowing figures of early GANs and the hyperreal perfection of modern diffusion models (the things that make ChatGPT&#8217;s image generation function tick). The machines learned to stop making interesting mistakes by default. They became too competent and too reliable. </p><p>Early GAN outputs were obviously artificial, but they were more than synthetic slop. Compare that to today&#8217;s average image models. In producing work that is too polished, they trigger a kind of emotional flatlining where we recognise a solid simulation and respond with indifference. </p><p>The labs have built systems that can render skin texture with exactness and generate lighting that obeys the laws of physics. Composing images according to the classical principles of beauty is for them light work. </p><p>But good art is rarely so neat. Early image models were interesting because the machines had a thing for category errors. 
They saw features in noise, mixed up spatial relationships, and whipped up impossible architectures that felt emotionally satisfying.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="70" height="22.82608695652174" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:70,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Picasso spent his career learning classical techniques so he could break them. 
The best AI artists do the same thing by understanding how these systems function well enough to make them work poorly in the right ways.</p><p>Even in widely available image generators the tools are already there. Sliders that control how closely the model sticks to the prompt. Seed values that determine which accidents occur. Negative prompts that can force models to avoid trained behaviours. </p><p>Every AI system contains the germ of its own rebellion, if we're clever enough to cultivate it.</p><p>The most honest AI art puts <a href="https://x.com/emollick/status/1935504703023899096">artificiality</a> to work. Images that look machine-generated but use aesthetic distance to reveal truths about their subjects, or texts that sound alien but illuminate aspects of language we take for granted. </p><p>Glitch artists have been doing <a href="https://www.destroyallcircuits.com/blogs/news/the-comprehensive-history-of-glitch-art-from-precursors-to-future-horizons">something like this</a> for a long time. What they did with early digital systems, we can do with transformers, diffusion models or generative adversarial networks. The key is carefully orchestrated failure that reveals patterns invisible to conventional seeing. </p><p>I&#8217;m glad the systems got better, but I&#8217;m disappointed that the default way of using them emphasises their ability to accurately model reality. It should go without saying that there are plenty of artists who already use AI thoughtfully in their work, but the point is that the average user is drawn to faultless representation rather than beautiful error. </p><p>Instead, we ought to remember that there is no such thing as a perfect picture. Not for the ancients, not for the modernists, and not for us. 
Better to recognise that art lives between reality and perspective, and that the gap is where the good stuff happens.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Academics are kidding themselves about AI]]></title><description><![CDATA[Ten suggestions for better criticism]]></description><link>https://www.learningfromexamples.com/p/what-academics-get-wrong</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/what-academics-get-wrong</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 17 Jun 2025 10:15:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oQBK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oQBK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oQBK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!oQBK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 848w, https://substackcdn.com/image/fetch/$s_!oQBK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!oQBK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oQBK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg" width="1456" height="1051" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1051,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Fight Between Carnival and Lent - Wikipedia&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Fight Between Carnival and Lent - Wikipedia" title="The Fight Between Carnival and Lent - Wikipedia" srcset="https://substackcdn.com/image/fetch/$s_!oQBK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!oQBK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 848w, https://substackcdn.com/image/fetch/$s_!oQBK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!oQBK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef9d15b9-4226-411a-9991-da7d91f68043_4800x3466.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Battle Between Carnival and Lent by Pieter Bruegel the Elder from 1559</figcaption></figure></div><p>Last week I <a href="https://www.learningfromexamples.com/p/academics-need-to-take-ai-seriously">wrote about</a> reasoning models. I argued that &#8212; despite some recent flawed work on the subject &#8212; they have some curious limitations, and outlined a rough sense of where I expect developers to go in the future based on those shortcomings. </p><p>While I was researching the piece, I read lots of recent critical writing about AI. Some of it was good, but much of it read like extremely wishful thinking about what exactly systems can and cannot do. </p><p>On the plus side, the experience did help me formulate a simple heuristic to sift through writing about our subject. As soon as I see someone call a large language model a &#8216;bullshit generator&#8217;, I know to take whatever follows with a grain of salt. </p><p>Usually that person is an academic. It should go without saying that not all critics are academics and not all academics are critics. But it seems to be critical academics whose voices carry disproportionate weight in shaping public discourse. On a personal level, they're a group I see more regularly as an academic researcher since leaving industry.</p><p>The type of person I&#8217;m describing is occasionally a technical researcher, but more often they are a humanities scholar. Normally it&#8217;s a person whose work I respect, an otherwise clever thinker who seems to have caught the bug. It&#8217;s an unfortunate state of affairs for someone who counts themselves amongst their number. </p><p>&#8216;Bullshit generator&#8217; is a kind of shorthand, one that many academics use to signal to others that they have the right opinions about the AI project. One person says it and then another. 
And just like that it becomes orthodoxy. Everyone you know rolls it out whenever the opportunity arises, so why shouldn&#8217;t you?  </p><p>Our meme is recycled so consistently because it feels just naughty enough. You can put the phrase in a paper or a newspaper headline and no one will tell you off. It has a forbidden fruit quality to it. Can you believe what we just said! </p><p>The sociology of the thing is curious, but it doesn&#8217;t tell us why the idea itself &#8212; that large models are useless paper tigers that don&#8217;t &#8216;<a href="https://www.learningfromexamples.com/p/does-ai-know-things">know</a>&#8217; anything &#8212; is so attractive in the first place.  </p><p>I suspect it&#8217;s because many of them dislike AI, so they don&#8217;t follow it closely. They don&#8217;t follow it closely so they still think that the criticisms of 2023 hold water. They don&#8217;t. And that&#8217;s regrettable because academics have important contributions to make. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h2>Play the classics </h2><p>I recently <a href="https://www.learningfromexamples.com/p/romantic-machines">suggested</a> that one reason for the animosity towards AI is that people feel like they&#8217;ve been duped. They had a vision of what AI ought to be in their head that doesn&#8217;t correspond to the technology in reality. </p><p>But strange as I think LLMs are, they are still <em>useful</em>. We&#8217;re talking about things that millions of people <a href="https://explodingtopics.com/blog/chatgpt-users">use</a> every single day. 
Companies are openly <a href="http://t.co/RSbMkhz3Xm">saying</a> job displacement is coming. Former US presidents <a href="https://x.com/BarackObama/status/1928568802901381161">agree</a>.  </p><p>But you wouldn&#8217;t think that was the case if you asked the average academic. They tend to scoff at the idea that anyone might use the models for anything. You often hear them say things like: </p><ul><li><p>&#8216;It&#8217;s just linear algebra&#8217;</p></li><li><p>&#8216;LLMs don&#8217;t <em>know</em> anything&#8217; </p></li><li><p>&#8216;It&#8217;s all a PR exercise&#8217; </p></li><li><p>&#8216;Stochastic parrot, stochastic parrot!&#8217;</p></li><li><p>&#8216;Don&#8217;t they hallucinate everything?&#8217; </p></li></ul><p>The most forceful of these is the one trotted out reflexively: hallucinations. It&#8217;s all just made up, isn&#8217;t it? Don&#8217;t the models get the most basic facts wrong? </p><p>Well no, not really. Certainly no more made up than some academic papers. You might have had a case back in 2023, but these days hallucinations are much rarer than you think.</p><p>On the Hugging Face <a href="https://huggingface.co/spaces/vectara/Hallucination-evaluation-leaderboard">hallucination leaderboard</a>, the top four models score a factual accuracy rate of more than 99% on a document summarisation benchmark. </p><p>You might say that the test isn&#8217;t fair game because LLMs do more than summarise. And you would also be right to point out that some of OpenAI&#8217;s newer reasoning models seem to have <a href="https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf">bucked the trend</a> based on the SimpleQA and PersonQA benchmarks.</p><p>But the rest of the stats tell a different story. 
On the SimpleQA <a href="https://blog.elijahlopez.ca/posts/ai-simpleqa-leaderboard/">leaderboard</a>, the best performing models &#8212; those that tend to supplement answers with internet search functionality &#8212; clock in at between 90 and 95 per cent accuracy. </p><p>Fine. Even if they can regularly produce factually accurate information, they still don&#8217;t really <em>know</em> anything. </p><p>The problem with this line of thinking is that it requires a bit of philosophical wrangling, which (for reasons unclear) the vast majority of academics seem unwilling to engage in. This is particularly frustrating because if you&#8217;re going to make forceful claims about epistemology, it seems rather unsporting to dodge the resulting debate. </p><p>When you think about these questions for more than five minutes, it&#8217;s pretty obvious that terms like &#8216;knowing&#8217; or &#8216;understanding&#8217; are slippery concepts. Never mind &#8216;truth&#8217; or &#8216;information&#8217;. I don&#8217;t feel confident saying much other than AI definitely knows <em><a href="https://www.learningfromexamples.com/p/does-ai-know-things">something</a></em>. </p><p>Occasionally the claim gets tighter and it becomes something like &#8216;LLMs can't generalise from a small amount of data&#8217;, but performance on the <a href="https://arcprize.org/blog/oai-o3-pub-breakthrough">ARC-AGI</a> benchmark with just a handful of examples seems to prove that isn&#8217;t actually the case. </p><p>We also have thinking or reasoning. This one basically says the machines don&#8217;t think because that&#8217;s only something that humans can do. At best, all they can do is <em>simulate</em> thinking. This one I don&#8217;t mind so much, as at least it tries to engage in a substantial argument that gets at the core of the thing.</p><p>It might be that language models can only simulate thinking or reasoning. 
Call me a utilitarian, but what matters to me most is how effective they are in the real world. Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse (though if you want to read about what I think is actually happening inside LLMs you can do that <a href="https://www.learningfromexamples.com/p/academics-need-to-take-ai-seriously">here</a>). </p><p>And of course there&#8217;s the Foucauldian take: it&#8217;s all a PR exercise. Obviously, companies like to promote their product. AI is no different in that respect. But to argue that the richest firms in history are deploying trillions of dollars of capital in service of PR is a total non-starter. </p><p>You could say that they should be more sceptical of their inventions, but to propose that the entire apparatus of AI development &#8212; fighting off competition for chips, building enormous datasets for pretraining, and fine-tuning the model with the help of thousands of human reviewers &#8212; is for reputational purposes strikes me as a bit far-fetched. </p><h2>How to criticise AI </h2><p>To be clear, I am not down on academics. I am one! I only wish my colleagues would think more critically about their own beliefs, and accept that we simply don&#8217;t have enough information to understand where the ceiling is for the AI project as it exists today. </p><p>Below are some suggestions (inspired by <a href="https://dstrohmaier.com/better-ai-criticism/">this</a> excellent post) for what better AI criticism looks like, reflecting this uncertainty. It&#8217;s not exhaustive, but it gives a rough survey of useful elements for formulating critical commentary. </p><h3>Things to do </h3><ul><li><p><strong>Stay current:</strong> Base your claims on recent capabilities by staying up to date with AI research, model deployments, and real-world usage. 
When critiquing, use the best available models &#8212; not convenient strawmen (thankfully we are past the era of slide decks filled with GPT-3.5 gotchas). </p></li><li><p><strong>Embrace humility: </strong>Accept uncertainty as a starting point and modify your approach accordingly. No one fully understands these systems yet (including the people building them). All things being equal, curiosity should precede criticism. In the words of Erling Haaland, stay humble!</p></li><li><p><strong>Study adoption:</strong> Some struggle to believe anyone is actually using AI. But they are. Millions of them. If you want to analyse failure modes, you&#8217;ll have plenty to go at by talking to the doctors, lawyers, and students who use the models. But you&#8217;ll also see that not every use-case is malicious (and that people are actually using LLMs). </p></li><li><p><strong>Sample widely:</strong> When models work, seek to understand why and under what conditions. When they fail, collect multiple instances across different contexts. Ask the same question more than once. A single amusing error tells us little; patterns of failure (and success) across varied conditions reveal the actual boundaries of capabilities. </p></li><li><p><strong>Be creative:</strong> If LLMs don&#8217;t fit neatly into existing epistemologies, maybe it&#8217;s time to make new ones. Rather than forcing these systems into old categories or dismissing them for not fitting, have some fun by developing new conceptual tools. Create the language and frameworks we need to understand AI. </p></li></ul><h3>Things to avoid</h3><ul><li><p><strong>Reductive claims:</strong> Related to the above, saying &#8216;it&#8217;s just pattern matching&#8217; explains nothing on its own. If you must make reductive claims, embed them in substantive arguments about what follows from that reduction. Ask whether your reduction captures what matters. Then explain why. 
</p></li><li><p><strong>Forecasting with confidence:</strong> The history of AI is littered with assured proclamations about what machines will &#8216;never&#8217; do. Current limitations are empirical facts worth documenting, but extrapolating them into fundamental barriers rarely ends well. </p></li><li><p><strong>Treating AI as a monolith:</strong> Remind yourself that different architectures, training methods, and deployments yield vastly different capabilities. And note that systems are often composites. Understanding which component does what is crucial for meaningful critique. </p></li><li><p><strong>Cherry-picking:</strong> Only citing failures while ignoring successes or dismissing benchmarks that contradict your thesis sounds more like advocacy than scholarship. Intellectual honesty means engaging with the full empirical record, especially the parts that surprise you. </p></li><li><p><strong>Credentialism:</strong> Yes, peer review still matters. But dismissing research because it comes from industry labs or preprint servers rather than traditional journals is self-defeating. In a fast-moving field, the most important findings often emerge outside conventional channels. </p></li></ul><h2>Uncharted waters </h2><p>Many moments in the history of thinking machines can be described by the maxim <em>fake it until you make it.</em> Too often what looked to be impressive performance was contingent on the man behind the curtain. That&#8217;s a thread that runs from the invention of the difference engine right through to the emergence of parallel distributed processing in the 1980s. </p><p>But that isn&#8217;t happening today. Yes, today&#8217;s large models are complexes of data, human input, hardware, and clever algorithms. But they do actually work well for the most part, which is why millions of people use them every single day. In that sense our moment is unprecedented in the history of AI. 
</p><p>But right now, many academics who speak to policymakers or the press badly underestimate the capabilities of the best models. They dismiss LLMs out of hand and don&#8217;t engage with the substance of the technology. </p><p>Media narratives skew sensational or simplistic, and policymakers end up getting the wrong end of the stick. This is clearly bad if you want to make sure AI is integrated into society in the most socially beneficial way possible.   </p><p>Accepting the reality of the situation is the best way to produce timely and relevant work. But that requires getting familiar with the technology so that the public debate is grounded in clarity.</p><p>AI is a social, cultural, and philosophical event. These are qualities that should make the technology the business of academics. Some are already doing great work, but more are needed to ask the questions the engineers don&#8217;t. What do humans do in a world with advanced AI? What kinds of collective failure modes exist when we all begin to use LLMs? And how should these systems be trained, evaluated, governed? </p><p>These are human problems, but too many scholars have absented themselves from the conversation. They think refusing to engage is a form of critique, when in fact it&#8217;s a form of abdication.</p><p>If they wanted to, academics could help define the terms of safe development. They could map the new epistemologies these systems generate, trace their impacts, and build the intellectual scaffolding we need to live alongside them. </p><p>But for that to happen they need to accept the AI project for what it is, not what they wish it to be. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Critique of Pure Reasoning Models]]></title><description><![CDATA[Reasoners aren't perfect, but they don't need to be]]></description><link>https://www.learningfromexamples.com/p/academics-need-to-take-ai-seriously</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/academics-need-to-take-ai-seriously</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 10 Jun 2025 10:17:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0z3-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0z3-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0z3-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 424w, 
https://substackcdn.com/image/fetch/$s_!0z3-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 848w, https://substackcdn.com/image/fetch/$s_!0z3-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 1272w, https://substackcdn.com/image/fetch/$s_!0z3-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0z3-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png" width="1456" height="1001" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1001,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3459479,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/164468494?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!0z3-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 424w, https://substackcdn.com/image/fetch/$s_!0z3-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 848w, https://substackcdn.com/image/fetch/$s_!0z3-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 1272w, https://substackcdn.com/image/fetch/$s_!0z3-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cb12a45-4582-46ab-834c-7050a1cdab8f_1958x1346.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">The Temptation of Saint Anthony by Grunewald from 1515 (detail)</figcaption></figure></div><p>Reasoning models are funny things. You make one by taking a vanilla large language model, asking it to produce a chain of outputs on its way to some final goal, and encouraging it to follow these steps in sequence before generating an answer. </p><p>They work like a charm, mostly, and are behind some of the more impressive examples of AI applications. Reasoners are sitting pretty at the top of lots of well-heeled benchmarks, and have even caused some critics to <a href="https://aiguide.substack.com/p/on-the-arc-agi-1-million-reasoning">think again</a> about the limits of the current paradigm. </p><p>Still, not everyone buys it. The more sceptical amongst AI watchers like to argue that what looks like thinking is just a trick of the eye. It&#8217;s a mirage, or as Apple put it in a <a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">paper</a> last week, an <em>illusion</em>. </p><p>To make their case, researchers used puzzles to test how large reasoning models handle increasing problem complexity. They took puzzles like the Tower of Hanoi, converted them into textual descriptions, and fed them into some of the best models. </p><p>The results didn&#8217;t reflect well on reasoning models, showing that &#8212; when faced with a sufficiently high level of complexity &#8212; they pass a tipping point beyond which performance collapses. The research shows that, despite having more compute to play with, models tend to throw in the towel when they deem the problem to be sufficiently thorny. 
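That &#8216;chain of outputs&#8217; recipe is, at the prompting level, little more than a template. A minimal sketch in Python, where a stubbed generate() stands in for a real model call and the tag names and template wording are my own illustration, not any lab&#8217;s actual scaffold:

```python
def generate(prompt: str) -> str:
    # Stub standing in for a real completion API; a canned reply keeps the sketch runnable.
    return "<think>2 + 2 makes 4, no alternatives worth exploring.</think> Answer: 4"

COT_TEMPLATE = (
    "Think step by step. Write your intermediate reasoning between "
    "<think> and </think>, then give a final answer after 'Answer:'.\n\n"
    "Problem: {problem}"
)

def reason(problem: str) -> str:
    completion = generate(COT_TEMPLATE.format(problem=problem))
    # The chain is scaffolding; only the text after 'Answer:' is surfaced to the user.
    return completion.rsplit("Answer:", 1)[-1].strip()

print(reason("What is 2 + 2?"))  # -> 4
```

Reasoning models proper bake this behaviour in through training rather than relying on the prompt, but the shape of the interaction is the same: intermediate steps first, answer last.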
</p><p>Lots of people can&#8217;t help but think that this time large language models really are in trouble. Doesn&#8217;t this prove they aren&#8217;t actually thinking? Are reasoning models useless? Shall we cancel the short timelines? </p><p>In short, no. This is a persuasive but flawed bit of research, one that let a desire to say something provocative get in the way of what might have been solid work. One obvious problem is that the authors don&#8217;t even define &#8216;reasoning&#8217; or &#8216;thinking&#8217; in the paper. It seems odd to call what LLMs are doing an illusion if you don&#8217;t bother to explain what they are pretending to do. </p><p>Likewise, some of the &#8216;high complexity&#8217; problems require more reasoning than fits in the context length (writing out a reasoning trace for Tower of Hanoi with 20 disks would take months for a human). This is probably why the models call it a day when the researchers&#8217; prompts collide with RLHF&#8217;d objectives like &#8216;be concise&#8217;. </p><p>Finally, and this is the big one, they don&#8217;t let the models use tools. The test is about what we might call &#8216;pure reasoners&#8217; without access to the simple system elements that would make these puzzles trivial for any consumer-grade LLM. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><h2>Scale models</h2><p>We&#8217;ve been here before. First large language models were useless toys. Then they got bigger, and boy did they get better. Eventually they could handle textual <em>and</em> visual inputs. More recently, the advent of reasoning models dislodged some of the tougher benchmarks like ARC-AGI 1. 
</p><p>The same pattern repeats itself. Some experiments seem to erect an insurmountable barrier for the large model paradigm. Then a new factor like scale, multimodality or reasoning helps models blast through the wall. Sceptics keep calling the end of the line, but the train doesn&#8217;t seem to notice. </p><p>They forget AI is a moving target. Today&#8217;s models are vastly more complex than those of just a few years ago. Sure, they are bigger and they use chain-of-thought techniques, but they also contain specialised modules that allow for tool use, memory, sand-boxed computation, and internet search. </p><p>The models are kind of sticky, which is probably their most underrated characteristic. You can build on top of them, giving them new capabilities that help overcome what used to seem like irreparable flaws. </p><p>Apple&#8217;s test stumped them because it only dealt with the raw model without its supporting infrastructure. With no tools, search or visual processing, they were playing blindfolded with one hand tied behind their back. 
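The &#8216;specialised modules&#8217; idea can be made concrete with a deliberately toy dispatcher. Everything here (the tool names, the routing, the stand-in implementations) is an illustrative assumption rather than any lab&#8217;s real architecture; the point is only that the model&#8217;s job shrinks to choosing a tool and an argument:

```python
def search(query: str) -> str:
    # Stand-in for a web search module.
    return f"top result for {query!r}"

def run_code(src: str) -> str:
    # Stand-in for a sandboxed interpreter; eval() is fine for a toy, never for production.
    return str(eval(src))

TOOLS = {"search": search, "python": run_code}

def execute(tool_call: tuple[str, str]) -> str:
    """The model emits (tool_name, argument); the harness runs it and returns the result."""
    name, arg = tool_call
    return TOOLS[name](arg)

# Exact arithmetic gets delegated instead of being 'reasoned' out token by token.
print(execute(("python", "2**10")))  # -> 1024
```

The raw model never has to carry the computation itself; it only has to notice that a computation is called for.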
</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="84" height="27.391304347826086" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:84,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The mythos of large language models is all about scale. 
The jump from models like GPT-2 (1.5 billion parameters in 2019) to GPT-3 (175 billion parameters in 2020) famously showed us that making models bigger and training on more data led to remarkable gains in generalisation without task-specific training. </p><p>Scaling brought better fluency, coherence, and coverage of knowledge. Large models began to work well on tasks they hadn&#8217;t explicitly seen before, probably because they had absorbed a head-spinning number of patterns through their pre-training process. </p><p>Yes, there were some scale maximalists, but by the mid-2020s lots of researchers generally accepted that simply making models bigger wasn&#8217;t going to cut it. The performance curves for some challenges were flattening despite the growing size of models. </p><p>Even as recently as last year, it wasn&#8217;t clear that LLMs would be capable of clearing certain evals designed to test for general reasoning capabilities. The most famous of these tests was the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) created by deep learning researcher Fran&#231;ois Chollet. </p><p>The test was originally launched in 2019, but the top-rated scores hadn&#8217;t improved much up to the launch of last year&#8217;s competition. That was because Chollet designed the test to include novel problems, which means that &#8212; even after ingesting massive chunks of the internet &#8212; large models were unlikely to have seen a critical mass of similar examples in their training process. </p><p><a href="https://www.youtube.com/watch?v=UakqL6Pj9xo">According</a> to Chollet, because language models <em>only</em> apply existing templates to solve problems, they get stuck on tests that human children would be able to manage comfortably. But the test also stood for so long because it wasn&#8217;t particularly well known, which narrowed the pool of researchers trying to solve it. 
Then came the prize launch in the summer of 2024, which offered cash and kudos for passing the test. </p><p>When the test was launched, I <a href="https://www.learningfromexamples.com/p/prizes-complexity-security-twie">suggested</a> that a model would pass it in fairly short order: </p><blockquote><p>&#8220;The (quite literally) million dollar question is whether ARC-AGI will stand the test of time<strong>. If I had to guess, I would expect major progress on the challenge within the next year or so. </strong>This is because a) bigger models with some clever algorithmic improvements seem to be doing something other than simple pattern matching; and b) there&#8217;s already been some improvement on the benchmark since it was released.&#8221;</p></blockquote><p>I didn&#8217;t have access to any kind of special information when I made that prediction. I just thought that models were already much better than they had any right to be, and that it was unwise to bet against systems that can leverage more compute (which turned out to happen at inference time via reasoning models). </p><p>A few early entrants improved baseline scores to more respectable levels, but it wasn&#8217;t until OpenAI&#8217;s o3 model scored 87.5 per cent on the ARC-AGI benchmark (albeit via a very costly process) at the end of last year that we could say it was passed. </p><p>Chollet himself, a well-known critic of large language models&#8217; ability to reason, said &#8216;all intuition about AI capabilities will need to get updated,&#8217; while Melanie Mitchell (also a long-time sceptic) <a href="https://aiguide.substack.com/p/did-openai-just-solve-abstract-reasoning">called</a> it &#8216;quite amazing&#8217;. </p><p>With models performing well on the original test, Chollet set about building a follow-up benchmark called ARC-AGI 2. 
Released in May earlier this year, <a href="https://arcprize.org/blog/announcing-arc-agi-2-and-arc-prize-2025">ARC-AGI 2</a> &#8216;raises the bar for difficulty for AI while maintaining the same relative ease for humans.&#8217; Currently, the best performing model is a reasoning version of Anthropic&#8217;s Claude 4 Opus at 8.6%. </p><h2>Apples and oranges </h2><p>In Apple&#8217;s <a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">paper</a> about illusory thinking, the analysis of the reasoning traces (the step-by-step thoughts the model generates) shows that the model&#8217;s &#8216;thinking&#8217; is prone to lots of different failure modes. </p><p>They bundled behaviour into three regimes of problem complexity, each of which saw the models perform with varying degrees of success: </p><ul><li><p><strong>Easy problems:</strong> As expected, the models performed best on these &#8212; in several cases achieving solid scores. When they did fail, it often involved finding a correct solution in the chain-of-thought before exploring wrong alternatives. This &#8216;overthinking&#8217; has been observed <a href="https://www.arxiv.org/abs/2502.08235">elsewhere</a>, and leads reasoners to sometimes talk themselves out of the correct answer by the end of the chain. </p></li><li><p><strong>Moderate problems:</strong> Again, the models could succeed here. Failures, when they did happen, tended to involve generating a slew of incorrect intermediate steps from the get-go. When they worked, only later in a convoluted reasoning chain did they find a correct solution (not exactly efficient but in general this seems fine to me). </p></li><li><p><strong>Hard problems:</strong> This is the &#8216;collapse&#8217; scenario described by Apple. The model&#8217;s chain-of-thought isn&#8217;t pretty to look at, consisting of numerous steps that seem to be incorrect or irrelevant. 
There is no point in the chain where it finds a workable approach (though as above this is likely because some of the correct answers exceed token limits). </p></li></ul><p>One curious example the researchers describe is the Tower of Hanoi puzzle (a classic problem that requires moving disks between pegs under strict rules). Apple&#8217;s team tested their LLMs on Tower of Hanoi puzzles of increasing disk numbers, finding &#8212; perhaps unsurprisingly &#8212; that performance fell as the number of disks grew. </p><p>Then they tried an &#8216;algorithm injection&#8217; experiment where they gave the model the correct algorithm in the prompt (essentially walking it through the steps it should take), to see if that helped on harder cases. </p><p>The result? It didn&#8217;t help at all. Even when told exactly how to solve the puzzle, the reasoning model could not execute the steps reliably once the problem became sufficiently complex. The group doesn&#8217;t really explain what they think is happening here, but they do suggest it represents &#8216;limitations in performing exact computation&#8217;. </p><p>This is no doubt true, but large language models have always been pretty bad at performing exact computation. That&#8217;s why people ask ChatGPT to &#8216;use code&#8217; when making calculations if they want to get a reliable answer. Had the model been allowed to use the algorithm via some plug-in, it would have been a different story. </p><h2>Differential reasoning </h2><p>OK, so the test wasn&#8217;t a total wash: reasoning models do have some limitations. Why is that? </p><p>I&#8217;ll begin by saying no-one really knows for sure. Not the researchers in the AI labs, not the markets, not academics, and certainly not me. All we have are intuitions. Mine is that it involves the <a href="https://www.learningfromexamples.com/p/does-ai-know-things">symbol grounding problem</a>, which concerns how symbols like words can acquire intrinsic meaning. 
</p><p>Reasoning systems, like all LLMs, find correlations and produce patterns. But as Apple&#8217;s work reminds us, they can also produce explanations that don&#8217;t necessarily correspond to the basic facts of reality. They contain oodles of knowledge, but it&#8217;s more like raw ore than refined metal.  </p><p>When reasoners go doolally, it&#8217;s because they struggle to reliably connect concepts to an underlying model of the world. <em>Reliably</em> is the key word here. I do think the models can &#8216;reason&#8217;, providing your definition is something like &#8216;the systematic chaining of relationships between internal representations to reach a conclusion that satisfies a given set of constraints.&#8217; </p><p>This rather broad definition accounts for reasoning by navigating the web of learned similarities among representations, where an agent steps through them until a pattern that satisfies the goal appears. That is how I think an LLM reasons, which I do think <em>is</em> <em>reasoning</em> &#8212; but it&#8217;s not exactly the same as what people do. </p><p>Think about two basic ways of knowing about a tree. The concept &#8216;tree&#8217; makes sense because it isn&#8217;t &#8216;bush&#8217;, &#8216;pole&#8217; or &#8216;cloud&#8217;. Large models gobble up billions of sentences, notice the connections between tree and its neighbours (leaf, bark, shade, roots), and build a high-dimensional map where the concept&#8217;s position is fixed by everything it is not. </p><p>Ask a model &#8216;what climbs a tree?&#8217; and the word squirrel lights up because it sits only a few degrees away in that semantic constellation. This is grounding by <strong>difference</strong>, where each representation is determined by the relative positions of other representations.  </p><p>Purists would say this isn&#8217;t really &#8216;grounding&#8217; at all because the model is only grappling with the meaning of symbols by using other symbols. 
Compare that to my own experience of walking up to a tree, touching the trunk, and feeling the cambium under my fingernail. </p><p>That multisensory encounter anchors the word tree to a slice of the physical world. A logger, a robin, and a child building a tree-house all ground the concept in lived affordances (in that you can chop it down, nest in it, or climb it). This is grounding through <strong>reference</strong>. </p><p>LLMs don&#8217;t mistake a tree for a toaster because their vector space keeps those poles far apart under default conditions. Instead, they <em>may</em> hallucinate a &#8216;glass roof&#8217; that provides &#8216;ample shade,&#8217; because no tactile or optical reality is acting as a check on these associations. Humans catch it instantly because reference knowledge (e.g. glass is transparent, shade needs opacity) is wired into our practical intuition about the world.</p><p>Based on these ideas, we can try to make sense of the failure modes described by Apple: </p><ul><li><p><strong>In easy tasks,</strong> the tree of possibilities is more likely to contain useful examples the model has seen before. The model grabs the answer early, then keeps sampling neighbours until it drifts off course. Once the local similarity signal weakens, nothing tells the sampler it has overshot the target. </p></li><li><p><strong>In moderate tasks, </strong>the answer lies a few hops away. The model finds its way through a cloud of wrong patterns until it lands on a cluster that lines up with the goal state. More tokens buy it more opportunities to search the manifold of token differences.</p></li><li><p><strong>In hard tasks, </strong>the model doesn&#8217;t cover itself in glory. Beyond a certain depth there is no nearby cluster that satisfies all constraints, which means the sampling process has nothing to feed on. 
On harder puzzles, the model stops thinking either because the next steps all look equally bad or because it can&#8217;t fit the answer in its output due to token limits and RLHF constraints.</p></li></ul><p>Grounding provides the foundation for reasoning, but it isn&#8217;t the same thing as reasoning itself. Rather, you need grounding to tether reasoning to the world it claims to explain. So when models ground using difference, I like to think about this process as a kind of <strong>differential reasoning. </strong></p><p>This is why models work so well in some instances and fail badly in others. The frontier is jagged because they reason in a different way to people. </p><h2>Beyond pure reasoners</h2><p>I don&#8217;t see this observation as a major bottleneck for AI development. In fact, I think two possible responses put developers in a remarkably strong position: <strong>systematisation</strong> and <strong>agency</strong>. </p><p>Systematisation is about making the core model a node within a bigger apparatus. We keep the language model in place, but surround it with specialist gadgets. Web search look-up, a code sandbox, a vision encoder, and a knowledge base. The model doesn&#8217;t need to have all the answers; it just needs to decide when and how to invoke the right tool. </p><p>In practice this is already how people tame hallucinations. On the Simple QA <a href="https://blog.elijahlopez.ca/posts/ai-simpleqa-leaderboard/">leaderboard</a>, the best performing models &#8212; those that tend to supplement answers with an internet search functionality &#8212; clock in with between 90 and 95 per cent accuracy. </p><p>Each add-on is there for a reason. External search keeps the model up to date, an execution engine allows it to deal with hard arithmetic or code, multimodal functionality lets it ground words in pixels, and long-term memory means it can recall prior interactions. </p><p>Apple&#8217;s experiment stripped all that away. 
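What letting the model use the algorithm &#8216;via some plug-in&#8217; would amount to is modest: the Tower of Hanoi procedure Apple injected into the prompt is a few lines when executed rather than narrated. A minimal sketch of the classic recursive solver in Python (my own code, not the paper&#8217;s harness):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Classic recursive Tower of Hanoi; returns the full list of (from, to) moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller disks on the spare peg
    moves.append((source, target))              # move the largest disk to the target
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller disks back on top
    return moves

print(len(hanoi(3)))   # -> 7, i.e. 2**3 - 1
print(len(hanoi(20)))  # -> 1048575, i.e. 2**20 - 1
```

A 20-disk instance needs over a million moves, which is part of why a written-out reasoning trace was never going to fit in a context window; a solver sitting behind a tool call returns the move list instantly.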
Re-running their Tower-of-Hanoi test with a tool-using agent would let the language core sketch a plan, hand the plan to a symbolic solver, and verify the result before answering. </p><p>Systematisation aside, a second approach might even allow models to ground via reference. Instead of static prompts, drop the model into an environment where it can act, observe consequences, update its policy, and store new skills. This is the play in David Silver and Rich Sutton&#8217;s &#8216;Era of Experience&#8217; <a href="https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf">paper</a>, where the reward signals come from the environment rather than a human ratifier guessing from the sidelines:</p><blockquote><p>&#8216;Such agents will be able to actively explore the world, adapt to changing environments, and discover strategies that might never occur to a human. These richer interactions will provide a means to autonomously understand and control the digital world.&#8217;</p></blockquote><p>In the short run, bolting on tools will keep pushing the envelope of what today&#8217;s text-trained models can accomplish. But in the long run, the models&#8217; core still lacks first-hand reality checks. Grounded experience offers a more durable solution, but only if we close the loop so that actions have consequences the agent can&#8217;t ignore.</p><p>The great AI researcher Marvin Minsky argued that the human brain was what computer scientists call a &#8216;kludge&#8217;. He thought that grey matter was an inelegant solution to the challenges faced by early humans, cobbled together from specialised parts over the course of millennia. </p><p>I like the kludge concept because it suggests intelligence is a product of both specific mechanisms and their patterns of interaction.
For AI, the implication is that lots of small, dedicated modules can be linked together to form a system that benefits from their associations.</p><p>It&#8217;s a useful idea for seeing what the future looks like. Yes, today&#8217;s pure reasoners have limits. That&#8217;s why we ensconce them in systems that keep those frailties from manifesting as often as they otherwise would. But really that&#8217;s just a stopgap, a way to get extremely capable models that can perform a kind of referential reasoning via the back door. </p><p>Sooner than you might think, the labs will produce a tool-using LLM that works in the wild and gets better based on what it sees. When that happens &#8212; and it will happen &#8212; today&#8217;s pure reasoners will look like toy models. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Taste is all you need]]></title><description><![CDATA[Discernment in the age of the machine]]></description><link>https://www.learningfromexamples.com/p/taste-is-all-you-need</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/taste-is-all-you-need</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 03 Jun 2025 10:02:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WvrR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!WvrR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WvrR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WvrR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg" width="1100" height="834" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:834,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WvrR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WvrR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83559397-75be-46d2-a683-f07596f323b8_1100x834.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The Gallery of Cornelis van der Geest by Willem van Haecht (1628)</figcaption></figure></div><p>Back in February I wrote <a href="https://www.learningfromexamples.com/p/the-slop-must-flow">about</a> slop. I argued that what we call &#8216;slop&#8217; is the unfortunate byproduct of widening the means of creative production. It&#8217;s a kind of runoff that pools wherever friction is low, one that accumulates when the marginal cost of producing a creative artefact converges on zero. </p><p>Since then I&#8217;ve been wondering about what slop means for visual culture. An obvious reading flows from Herb Simon&#8217;s adage that an abundance of information leads to a scarcity of attention. Slop in this sense is a pollutant that stops us from wading through the digital commons.      </p><p>This is surely true, but this kind of analysis risks getting the wrong end of the stick. 
</p><p>At its core, the age of AI content is one where more people are able to produce creative things. Some will be good and some will be bad, but in absolute terms there will be more great art than ever. </p><p>How could there not be? We&#8217;re talking about tools that give kids in their bedrooms the same kind of firepower as a small production studio. People thought that moment was a couple of years away, but recent <a href="https://deepmind.google/models/veo/">developments</a> suggest shorter timelines might be more accurate. </p><p>The problem is that you have to claw your way through the sludge to get to the good stuff. All things being equal, I personally do not want to see more inane, self-referential, and stupid stuff on the internet.</p><p>Of course, all things aren&#8217;t equal. Slop is less a target than collateral damage. It&#8217;s the thing that we put up with because doing so means that anyone can make stuff that was once prohibitively expensive. </p><p>The future I see is a sloppier one, but it&#8217;s also one with more good art than there has ever been before. More thoughtful films. More provocative performances. More high-quality writing.</p><p>But the slop-for-creative-abundance bargain only works if you can sort the great from the good. For the deluge to be worth the price, you need to be able to reliably choose the stuff that best aligns with the type of person you want to be &#8212; and the type of person you&#8217;d like others to see you as. </p><p>You need to have good taste.  </p><h2>We&#8217;re going to have to paddle a little</h2><p>Each new creative tool &#8212; moveable type, cheap lithography, desktop publishing or blogging platforms &#8212; has followed the same pattern. A glut of new work. Howls about falling standards. And a recalibration of what counts as good.  </p><p>The lesson, if such a thing can be drawn from a study of slop across the ages, is that art gets cheaper to make over time. 
That lowers barriers to entry, which in turn allows more people to produce more art. </p><p>Even before there was &#8216;AI slop&#8217; there was &#8216;Netflix slop&#8217;. Slop the elder can be tricky to describe, but it&#8217;s basically something like technically sound narrative emulsified for mass consumption. </p><p>Netflix is a useful thing in that it reminds us that a surplus of content creates new social worlds, but it doesn&#8217;t completely do away with the shared cultural artefacts that demarcate the &#8216;mainstream&#8217;. </p><p>The point is that emerging cultural touchstones <a href="https://www.learningfromexamples.com/p/useful-fictions">maintain</a> the capacity for making and breaking tastes. We live under the illusion that personal preferences are <em>already</em> unmoored from the taste of others. </p><p>Except that&#8217;s not quite true. </p><p>Yes, your X feed is not the same as mine. Your Amazon Prime recommendations look very different. But we are using the same types of viewing devices in the same familiar settings. We watch in the same kind of way. </p><p>The screen is the monoculture. It forces a shared cadence (swipe, tap, binge, skip!) even when the titles differ. That alone represents a flattening of behaviour into something we might reasonably call the mainstream. </p><p>As for what we watch, it&#8217;s true that there are a thousand shows that will never cross your path. But there are many that do, those that become a talking point in the right circles (or the <a href="https://www.learningfromexamples.com/p/useful-fictions">basis</a> for public policy if you live in England). These moments are thinner and faster than <em>Dallas</em> cliff-hangers, but they still constitute a form of mass culture. </p><p>In the broadcast age, the bundle of shared reference points that let strangers triangulate a conversation was large and stable. 
Today, it still exists &#8212; in that at any given moment there is a <em>thing </em>that lots of people are likely to know about &#8212; but its constituent parts are small and fast-moving. </p><p>Content (not to be confused with art) may already be more readily available than ever, but it&#8217;s still funnelled towards the viewer by platforms that stand in for tastemakers. You may never voluntarily watch a MrBeast video, but you still know who he is. </p><p>Subgroups flourish inside this architecture, but they are still constrained by it. Each micro-culture gets its own fifteen minutes of fame because &#8216;mainstream&#8217; and &#8216;niche&#8217; exist as something like phases of attention. The greater the volume of content, the more frantic the jostling for the moving middle. </p><p>Video and audio generation models will drag this process to its inevitable conclusion. There is simply no way today&#8217;s cultural settlement remains stable in a world where it&#8217;s possible to create studio-quality stuff from your bedroom. Yes, we already have YouTube, but fan-made films aren&#8217;t exactly at the same level as your average Hollywood flick. </p><p>The excess of choice will mean we can no longer ride the current and assume the best bits will wash ashore. We&#8217;re going to have to paddle a little. 
</p><h2>In my slop era<strong> </strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4yhH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4yhH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4yhH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg" width="1200" height="630" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;La belleza del d&#237;a: &#8220;La ola&#8221;, de Frantisek Kupka - Infobae&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="La belleza del d&#237;a: &#8220;La ola&#8221;, de Frantisek Kupka - Infobae" title="La belleza del d&#237;a: &#8220;La ola&#8221;, de Frantisek Kupka - Infobae" srcset="https://substackcdn.com/image/fetch/$s_!4yhH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4yhH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F479f189c-0cb9-476d-8392-10b714b63266_1200x630.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The Wave</em> by Frank Kupka (1902). </figcaption></figure></div><p>As the world gets sloppier it produces a desire for people to find the things that matter most to them. This is the impulse that is driving the emergence of the &#8216;tech-literature guy&#8217;, the person who is as familiar with Kate Chopin as they are with mechanistic interpretability. </p><p>People want to find steady points of reference amidst all the content. One way to do that is to revisit (or I suppose visit may be more accurate) the classics. You might begrudge the tech literature guy, but I generally think more reading is an unambiguously good thing. </p><p>Where it gets more interesting is as signal. 
What&#8217;s the point of reading a thousand pages of <em>Infinite Jest</em> if you don&#8217;t tell anyone about it? That was important before the age of generative media, but slop has sharpened the point.  </p><p>As the cost of making falls, the cost of finding begins to rise. That premium breeds anxiety. Editors, commissioning boards, and gallery curators once filtered the sludge before it reached the public. Abundance overwhelms those dams and forces us to train an inner critic. </p><p>Think about it. Preference is basically a matching problem, and the profusion of AI tools creates the potential for art which is tailor-made for you specifically. That&#8217;s all well and good, but it comes with a catch: hyper-personalisation removes the last outside check on quality. Institutions, critics, and even your artier friends can&#8217;t pick the best possible option for <em>you</em>. </p><p>More creative artefacts will be produced. Slop will keep bubbling to the surface. The old referees will keep losing ground. But the faculty that tells you &#8216;this matters&#8217; or &#8216;that doesn&#8217;t&#8217; is immune to scale and indifferent to recommendation engines.</p><p>Taste can&#8217;t be automated, which is why its value appreciates in direct proportion to the noise that surrounds it. So if the age of slop feels destabilising, that&#8217;s just the weight of responsibility shifting to the individual. </p><p>I think the fixation with AI slop &#8212; and the fact that many commentators seem to completely misread what it is and why it exists &#8212; is born of anxiety. It&#8217;s anxiety about what happens in a world where the creative classes are no longer the cultural gatekeepers. </p><p>But it&#8217;s also anxiety about the need to be discerning. When there&#8217;s more to choose from than ever before, taste appreciates as cultural currency. 
If you&#8217;re worried about how to pick the bitter from the better, then you&#8217;re going to have a thing or two to say about letting any old person make stuff and share it with the world. </p>]]></content:encoded></item><item><title><![CDATA[Yes, linear algebra can 'know' things ]]></title><description><![CDATA[Five ways of thinking about thinking machines]]></description><link>https://www.learningfromexamples.com/p/does-ai-know-things</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/does-ai-know-things</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 27 May 2025 10:08:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Entx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Right now I&#8217;m running a pledge drive to (eventually) make <em>Learning From Examples</em> a full-time thing. We&#8217;re making progress, but there&#8217;s still a way to go. If you&#8217;ve been enjoying the writing, this is the best time to show your support. 
<strong>A $5 pledge doesn&#8217;t cost anything today</strong>, but does tell me you&#8217;re in when I eventually flip the switch on paid subscriptions. To everyone who has pledged so far: I can&#8217;t believe how generous you&#8217;ve been. Thank you.  </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Entx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Entx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Entx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Entx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Entx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Entx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg" width="742" height="489" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:489,&quot;width&quot;:742,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:190874,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Entx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Entx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Entx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Entx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c514174-4a37-4fd1-87a7-343e6ba337c1_742x489.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Cultivating the Cosmic Tree</em> by Hildegard von Bingen.</figcaption></figure></div><p>A few years ago, I was in a seminar about medieval alchemists. </p><p>The researcher, a historian of science, was interested in what those who practised the arcane knew about the natural world. Probing to get something out of a quiet group, they asked a question I still think about: what is knowledge? </p><p>Things took a circular turn. Knowledge turned out to be one of those things that resists precise definitions, a lumpy mixture of truth, belief, rationality, and instinct. 
One popular idea was that knowledge is processed information, and that information becomes knowledge when we understand it.</p><p>This sounds neat enough, but there are some problems here. You can of course internalise false information. You can believe true things for the wrong reasons. And you know how to recognise a face but not how you do it. </p><p>Our medieval alchemists <em>knew</em> things, didn&#8217;t they? They knew about mercury, transmutation, and the four elements. These were coherent ideas, but they were ultimately wrong. </p><p>&#8216;Understanding&#8217; is just as slippery. We might feel that we understand something because it fits our worldview, not necessarily because it corresponds to some fact of reality that we know to be true. The broader point is that knowledge and knowing are not so clear cut. Even information, which seems stable enough, is messy, contingent, and shaped by who&#8217;s looking and what they expect to find. </p><p>In this essay, I think through some of these problems and apply them to AI. I want to make sense of claims that even the best models don&#8217;t really &#8216;know&#8217; anything, and the counterclaims that argue for the opposite position. </p><p>What follows are five different ways of knowing and what they mean for getting to grips with thinking machines: </p><ol><li><p>Knowledge as <strong>representation</strong></p></li><li><p>Knowledge as <strong>practice</strong> </p></li><li><p>Knowledge as <strong>situation </strong></p></li><li><p>Knowledge as <strong>power</strong> </p></li><li><p>Knowledge as <strong>emergence</strong> </p></li></ol><p>I argue that AI, by which I mean frontier models, partly satisfies the criteria associated with each of these ways of knowing. It doesn&#8217;t tick all the boxes, but it does enough that I suspect AI knows more than many people give it credit for. 
</p><h2>Knowledge as representation</h2><p>In the Western imagination, the oldest description of knowledge is as a representation of reality. Way back in the 4th century BCE, Plato and friends mulled over ways of knowing in <em>Theaetetus</em>. </p><p>One of the more compelling ideas put forward is that of knowledge as a combination of true beliefs about the world and a compelling reason for that belief. </p><p>But they don&#8217;t buy it, at least not in every circumstance. The dialogue rejects the idea on the grounds that you must have the right kind of justification. It needs to be one that belongs to the person making the claim, it needs to show why the belief is true, and it needs to explain the belief by getting at a root cause. </p><p>I am speedrunning, but today we call this type of knowledge the justified true belief (JTB) model. In JTB, we might say that knowing that the Earth orbits the Sun would mean (a) it&#8217;s true (the Earth really does orbit the Sun), (b) one believes it, and (c) one has justification (e.g. scientific evidence or sound reasoning) for that belief. </p><p>Knowledge in this view is a mental representation that corresponds to reality, one that is underwritten by a justification. It&#8217;s essentially a mental mirror of the world that is true and warranted. </p><p>The model&#8217;s heyday ended with a famous <a href="https://fitelson.org/proseminar/gettier.pdf">paper</a> by the philosopher Edmund Gettier in 1963. Gettier devised clever scenarios (now known as Gettier cases) showing a person could have a belief that is true and well-justified, yet we would hesitate to call it knowledge because the truth resulted from luck or coincidence. </p><p>For instance, imagine you glance at a normally reliable clock which by chance stopped 12 hours ago. Your belief about the current time is true and justified by said clock, but only accidentally so. 
This is sort of what Plato was getting at: it is possible for a justification to become disconnected from the truth, which severs the causal link between the elements that make up real knowledge. </p><p>There are many critiques of the idea of knowledge as representation. Far too many to get into here. But one of them is worth mentioning: the idea that knowledge doesn&#8217;t really mirror the world. </p><p>John Dewey <a href="https://philpapers.org/rec/LINCEQ#:~:text=value%20of%20experimentation%20into%20account,will%20make%20a%20case%20for">defined</a> knowing as an active &#8216;organism&#8211;environment interaction&#8217; rather than a static representation. Other pragmatist thinkers like Richard Rorty <a href="https://www.philiphautmann.at/richard-rorty-and-the-quest-for-truth/">agreed</a>. They might say that knowledge is better seen as a tool for coping with the world rather than a reflection of it. </p><h3>What about AI? </h3><p>Let&#8217;s first assume we&#8217;re dealing with an agent capable of computer use, one that I&#8217;ve asked to do my shopping. We have to jump through a couple of hoops here, but my view is that a sufficiently capable model knows some things in a JTB sense.  </p><p>It carries a representation (&#8216;Harry prefers Shop A&#8217;) that guides its belief in what I want it to buy. The representation is true in that it correctly knows what I want, and it has a good reason for thinking that because it has a log of my previous choices.  </p><p>JTB only asks for a belief-state that produces the right behaviour under the right evidence. If our agent carries a memory module that always makes it choose Shop A for me, that module plays the functional role of a belief system. (But only if we adopt what Dennett calls the intentional stance, where we treat a system as if it literally has beliefs, desires, and rationality whenever doing so helps us forecast what it will do.) 
</p><p>So, accepting some mental gymnastics, a system can just about tick the JTB box. What it can&#8217;t do is satisfy Plato&#8217;s account, which has much harsher conditions for real knowledge. There are a few reasons for this, but the main one is that the agent needs an explanation (that is, <em>logos</em>) that reveals the cause or essence of the thing and shows why it cannot be otherwise. </p><p>Of course, we don&#8217;t tend to apply these conditions to all types of human knowledge. I haven&#8217;t actually conducted experiments to prove the Earth orbits the Sun, and my maths skills are a little too rusty to figure that out using first principles.  </p><p>That&#8217;s why the JTB version of knowledge as representation became popular: because it stops us from throwing the baby out with the bathwater by saying humans know very little.</p><p>But the JTB recipe proved brittle. Gettier showed that beliefs can be true and well-justified, yet still only accidentally correct. In other words, an AI may pass the minimalist JTB test the same way we often do (provisionally and with a dollop of luck) but that&#8217;s not the same as self-explanatory knowledge. </p><h2>Knowledge as practice </h2><p>But is knowledge something we only hold in our head? We ride a bike, catch a ball, spot a friend&#8217;s face in a crowd. We do those things instinctively. The know-how resides in coordinated muscles and perceptual cues rather than in explicit propositions. </p><p>Because so much competent action relies on background know-how that resists full articulation, treating knowledge as solely internal representations misdescribes everyday expertise. This idea is best captured by Michael Polanyi&#8217;s famous adage: &#8216;We can know more than we can tell.&#8217;</p><p>This view moves from knowing-<em>that</em> (viewing knowledge as factual or declarative) to knowing-<em>how</em> (the practical mastery that comes from doing and experiencing). 
Instead of a mirror held up to reality, knowledge is seen more like an ability we use in the world.</p><p>The philosopher Gilbert Ryle <a href="https://www.timothydavidson.com/Library/Books/Ryle-1949-The%20Concept%20of%20Mind/Ryle-1949-The%20Concept%20of%20Mind.pdf">made</a> a similar point in 1949. Ryle skewered the assumption that all knowing-how (like swimming) is really just a complex kind of knowing-that (knowing facts or rules about how to swim). After all, one could know how to swim even if one cannot articulate the physics of swimming. </p><p>In cognitive science, what we call the &#8216;phenomenological tradition&#8217; emphasises the body as the locus of knowing the world. Maurice Merleau-Ponty <a href="https://voidnetwork.gr/wp-content/uploads/2016/09/Phenomenology-of-Perception-by-Maurice-Merleau-Ponty.pdf">thought</a> that our body knows how to reach for a cup or navigate space in ways we don&#8217;t usually conceptualise abstractly. Perception, in other words, is itself an active engagement with the world. </p><p>Others took this idea further. They <a href="https://direct.mit.edu/books/monograph/3956/The-Embodied-MindCognitive-Science-and-Human">argued</a> for an &#8216;enactive&#8217; view where knowledge emerges through dynamic interaction of an organism with its surroundings, rather than through constructing internal representations detached from action.</p><p>The philosopher Jerry Fodor famously doubted these ideas. Focusing on Polanyi, he <a href="https://www.lse.ac.uk/Economic-History/Assets/Documents/Research/FACTS/reports/tacit.pdf?from_serp=1#:~:text=enabled%20this%2C%20and%20tacit%20knowledge,intend%20to%20mean%20by%20it">reckoned</a> we should be careful calling something &#8216;knowledge&#8217; if it can&#8217;t be verbalised or symbolised. Polanyi might respond that tacit knowledge underpins explicit knowledge. It supplements, rather than supplants, the process by which we make sense of explicit information.</p><h3>What about AI? 
</h3><p>In the 1960s, Hubert Dreyfus <a href="https://www.learningfromexamples.com/p/the-economy-of-magic">argued</a> that early artificial intelligence efforts faltered because they misunderstood human knowledge as rule-bound rather than learned. He argued that a chess master &#8216;just spots&#8217; the right move rather than crunching numbers, acts from a sort of embodied familiarity, and cannot fully articulate the know-how guiding the act.</p><p>Dreyfus was talking about the symbolic school of AI that uses hard-coded rules, but he was also sceptical of machine learning because it struggled to get at the underlying meaning. </p><p>Modern systems, he might say, process patterns and find correlations, but they produce explanations that don&#8217;t necessarily correspond with the basic facts of reality. </p><p>This is a version of the symbol grounding problem, which concerns how symbols like words can acquire intrinsic meaning (rather than being defined only in terms of other symbols). A model might wax lyrical about molecular biology, but we don&#8217;t know whether &#8216;molecular&#8217; or &#8216;biology&#8217; are stable concepts that refer to in-the-world properties. </p><p>The problem, of course, is that there&#8217;s no real way to determine whether or not today&#8217;s systems have some sort of sophisticated world model that emerged as a fortunate byproduct of next token prediction. Maybe they do and maybe they don&#8217;t. My personal view is that the proof is in the pudding. If AI starts to consistently discover new facts about the world without any hand-holding, then it&#8217;s probably we who misunderstand the nature of knowing.</p><p>For now, though, let&#8217;s be clear about what we&#8217;re saying as it relates to practised knowledge. There are basically three major claims here: </p><ol><li><p>Skilled action emerges through repeated copying. 
</p></li><li><p>Know-how is stored in dispositions rather than explicit propositions.</p></li><li><p>Embodiment anchors meaning in the world.</p></li></ol><p>As I see it, the most sophisticated models satisfy both the first and the second criteria. My shopping bot <em>does</em> improve by trial and error, and its knowledge<em> does </em>emerge from experience. </p><p>The last claim is where it falls short. The shopping agent still lacks a full sensorimotor experience of being that Dreyfus and the phenomenologists argue is necessary for the richest kind of human know-how. </p><p>Until an agent can live in an environment, develop a durable feel for what matters, and draw on that feeling in the real world, its tacit knowledge stays closer to habit than to embodied skill. Of course, it&#8217;s not all that clear to me that practised knowledge must also be embodied &#8212; but that&#8217;s a debate for another time. </p><h2>Knowledge as situation</h2><p>Donna Haraway coined the term &#8216;situated knowledges&#8217; in a 1988 <a href="https://philpapers.org/archive/harskt.pdf">essay</a> arguing against the illusion of pure objectivity in science. The idea is that all knowledge is partial, that the act of knowing is not only dynamic but also shaped by the relative position of the knower. </p><p>Here the knower is influenced by the historical, cultural, and physical conditions that shape their ability to form knowledge. It&#8217;s an idea closely related to Thomas Nagel&#8217;s &#8216;view from nowhere&#8217; schtick, the philosophical ideal of stripping away local standpoints until you see the world independent of any particular observer. </p><p>Science often tries to reach this altitude. We describe colour as wavelength, love as neurochemistry, or the self as a biological organism. Each move feels like progress toward objectivity. </p><p>But Nagel argues the project is impossible to complete and perilous to over-extend. 
The very act of thinking is anchored in a &#8216;view from now-here&#8217; that emanates from a physical entity with a history of being.</p><p>More concretely for our purposes, the historian of science Thomas Kuhn foregrounded the role of perspective with his concept of paradigms. In <em>The Structure of Scientific Revolutions</em>, Kuhn showed that scientists&#8217; interpretation of data is conditioned by the ensemble of theories, methods, and exemplars they have at their disposal.</p><p>In Kuhn&#8217;s view, what scientists see as facts (and even what they observe through instruments) is filtered through the pores of conventional understanding. When paradigms are washed away by the storms of scientific revolution, scientists resurface to find a different world of knowledge.</p><p>Critics of these ideas charge philosophers with relativism. </p><p>They often ask: if all knowledge is perspective-bound, can we say any fragment of knowledge is better or more true than another? This was at the heart of the 1990s Science Wars&#8482; in which critics like Alan Sokal (he of <a href="https://www.theguardian.com/science/2003/jun/05/badscience.research">Sokal affair</a> fame) and Paul Gross <a href="https://sites.cardiff.ac.uk/harrycollins/the-science-wars/">accused</a> postmodern theorists of undermining rationality. </p><p>Defenders of the situated view shot back that recognising perspective is not the same as nihilistic relativism. Haraway distinguishes her view from &#8216;anything goes&#8217; relativism by pointing out that claims still have to be accountable to evidence &#8212; but one must still recognise that all observers are somewhere.</p><p>A related debate turns on perspectivism. Kuhn documented how scientific facts are constructed in specific settings, but scientists often respond that nature ultimately constrains our perspectives. 
Wishing that the Earth revolves around the Sun does not make it so.</p><p>The idea is that there&#8217;s tension between realism and constructivism. Contemporary philosophers often occupy a middle ground that acknowledges that, while truth isn&#8217;t subjective, all access to truth is mediated by perspective. </p><h3>What about AI? </h3><p>Large language models draw on billions of tokens of data. Every output is called forth from somewhere, usually conjured up with one eye on a small slice of the internet. Each line has been conditioned by the feedback of human annotators who up-weighted some outputs and shooed away others.</p><p>What we get is a statistical amalgamation of situated viewpoints anchored in very specific conditions. When we request a historical summary of the siege of Khartoum, we&#8217;re probably getting the anglophone perspective. In standpoint-theory terms, the model amplifies perspectives that dominate public text and omits those that circulate orally, behind paywalls, or in low-resource languages. </p><p>In this sense, the model&#8217;s knowledge is deeply situated &#8212; though not precisely in the same sense it might be for humans:</p><ul><li><p>For humans the standpoint is lived. You occupy a body, a culture, a history. Those coordinates shape what you can legitimately claim to know.</p></li><li><p>For an LLM the standpoint is inherited. Every sentence reflects the conditions under which it was constructed: data sources, content policies, platform design, technical affordances, and the risk posture of developers.  </p></li></ul><p>That makes the model&#8217;s outputs situated in origin (they reflect the decisions of its makers) but non-situated in experience (the model itself has no lived point of view). It is anchored in the contingencies of its design rather than in a knower&#8217;s embodied life. 
</p><h2>Knowledge as power </h2><p>Taking a leaf out of the pragmatist&#8217;s book, one way to make sense of knowledge is to skirt questions about what knowledge is and instead focus on what it does. </p><p>This tradition, associated with social theorists like Michel Foucault, describes a great entanglement between what we know and our ability to shape the world around us. It&#8217;s a school of thought that turns Bacon&#8217;s famous adage, &#8216;knowledge is power&#8217;, upside down. </p><p>The move is to take the phrase both literally and critically. Those who define what is known often hold power, and conversely, power structures determine what counts as knowledge.</p><p>Foucault famously <a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9781003320609-37/discipline-punish-michel-foucault">said</a> that &#8216;there is no power relation without the correlative constitution of a field of knowledge, nor any knowledge that does not presuppose and constitute at the same time power relations.&#8217; </p><p>Whenever power is exercised (e.g. the state&#8217;s power over citizens, a teacher&#8217;s power over students or a doctor&#8217;s power over patients), it relies on knowledge (e.g. census data, educational curricula or medical diagnoses). That knowledge in turn reinforces power relations by making them appear natural or necessary.</p><p>The problem here is a doozy. If all knowledge is an effect of power, then Foucault&#8217;s own analyses are equally compromised. On what grounds, critics ask, can they claim any critical privilege? </p><p>Instead, we might say that language contains built-in validity claims that actors must implicitly raise whenever they seek mutual understanding. Power, through, say, propaganda, can warp those claims &#8212; but the possibility of identifying distortion presupposes an ideal of undistorted communication.</p><p>It should be clear that we&#8217;re not really talking about another type of knowledge here. 
Knowledge as power isn&#8217;t really a fourth species to sit alongside those that we&#8217;ve already discussed, but it is useful to think about for getting to grips with how conceptions of knowledge have changed &#8212; and how knowledge actually circulates in the world. </p><h3>What about AI? </h3><p>I think this type of knowledge, if we can even call it that, is probably the one that gets the most air time in discussions of AI today. At least implicitly, much critical work on AI is essentially Foucauldian. It&#8217;s interested in how the AI project uses personal information in service of a developer&#8217;s institutional goals. </p><p>Of course, this is basically true of any organisation trying to make money &#8212; especially those of the information age. More specific criticisms tend to involve studying how AI can actually exert political influence in the real world (though my view is that concerns like &#8216;misinformation&#8217; are <a href="https://www.nature.com/articles/s41586-024-07417-w">overpriced</a>). </p><p>This type of analysis also considers the behaviour of the models and how design decisions influence outputs. As in discussions about situated knowledge, content filters, post-training techniques, and other classifiers all encode value judgements about what&#8217;s fair game and what&#8217;s not. </p><p>Reinforcement learning and policy tuning rely on human raters following a set of guidelines. The result is a model that treats some claims as polite, others as disallowed, and still others as &#8216;unproven&#8217;. All reflect the institutional risk posture of the builder. </p><p>That is not to say LLMs are corrupt. Rather, it simply reminds us that the knowledge produced by a model is the outcome of many small (and often invisible) governance choices.</p><h2>Knowledge as emergence </h2><p>The final conception of knowledge is the most recent. It holds that knowledge is an emergent phenomenon produced by interactions and networks. 
This view says that knowledge, whether in science or society, is the product of distributed systems. </p><p>A core idea here is actor-network theory (ANT). Developed by Bruno Latour, ANT suggests that all knowledge is collectively produced. In science, for example, knowledge might emerge from the interplay between researchers, instruments, institutions, and even non-human entities like microbes.</p><p>An actor-network theorist would say a claim only becomes &#8216;knowledge&#8217; when a stable network has formed that supports it. You need others to reproduce your experiment. You need journals to publish it. And you need others to cite it. Until that happens, it&#8217;s not proper knowledge.</p><p>Closely related is the idea of distributed cognition, a concept from Edwin Hutchins&#8217; study of maritime navigation. Hutchins <a href="https://uberty.org/wp-content/uploads/2015/07/Edwin_Hutchins_Cognition_in_the_Wild.pdf">believed</a> that the &#8216;cognitive unit&#8217; was the whole system of people, charts, instruments, and communication. Only together, he proposed, did they possess the knowledge needed to move the ship from A to B. </p><p>We see the idea of networked knowledge in crowdsourcing. On Wikipedia, for example, no single contributor knows everything &#8212; but a large network of contributors can collectively produce an encyclopedia. </p><p>James Surowiecki puts forward in <em>The Wisdom of Crowds</em> that under the right conditions, aggregating information from many individuals can yield remarkably accurate knowledge. We need look no further for evidence than prediction sites like Manifold, where averaging many educated guesses often beats individual forecasts.</p><p>There are many more examples like this, but for this post I&#8217;ll leave you with Andy Clark and David Chalmers&#8217; <a href="https://www.jstor.org/stable/3328150">extended mind hypothesis</a>. 
This theory reckons that cognitive processes (and by extension knowledge) can extend into the environment. They think that a notebook, for example, is effectively an external memory that we can use to remember important information. </p><p>Knowledge as emergence is a cool idea, but &#8212; like all of those we have discussed &#8212; it is not without some problems. </p><p>Remember, in the representational view, knowledge is generally thought to correspond with some underlying truth and rationale to back it up. In a distributed system these two come apart. The network may deliver accurate results, but the warrant is&#8230;nowhere. </p><p>No single node can supply the justificatory story, and the pattern that does support the answer is often opaque. Because justification is diffuse, there is no clear locus for responsibility or error-correction. </p><h3>What about AI? </h3><p>Emergent knowledge most neatly corresponds to AI. </p><p>As we know, a language model is a dense network that encodes knowledge distilled from massive reservoirs of information. Nothing in the network is built to store explicit facts. Training simply nudges billions of weights so the system gets better at continuing text. </p><p>Out of those adjustments, higher-level regularities surface that allow the model to answer questions, draft code, and translate idioms. But no individual parameter &#8216;contains&#8217; these abilities. If you tweak a few, the behaviour can re-form along alternate paths. </p><p>What the model knows arises as a pattern, one that can only exist based on the whole web of connections. Knowledge, in this sense, is an emergent property of the network&#8217;s collective dynamics &#8212; exactly the phenomenon systems theorists have in mind. </p><p>Consider what a model knows about the City of Light. It will tell you that Paris is in France, but there&#8217;s no single internal register that stores that fact. 
</p><p>Instead, what we see is that there are circuits responsive to the string &#8216;Paris&#8217; and others that correspond to a concept of &#8216;Frenchness&#8217;. But that isn&#8217;t the same as the symbolic entry Paris &#8594; France. The connection between those two circuits is computed from a mesh of activations and embeddings.</p><p>More confusing still is that the same fact is encoded in many routes. Ablate one circuit and another pathway compensates. That redundancy is useful for robustness, but it means the fact isn&#8217;t localised in a way that carries its own explicit warrant.</p><h2>Does AI know things? </h2><p>Knowledge is porridge. It&#8217;s warm, thick, nutritious, and bodily. But it&#8217;s also unglamorous. It&#8217;s hard to separate into its constituent parts and takes the shape of its container. </p><p>Thinking about knowledge shows us the limits of language. Much reading later, the only thing I feel truly comfortable saying about the topic is that precise definitions don&#8217;t have much currency here.</p><p>That all said, the point of this exercise was to answer a question: does AI &#8216;know&#8217; things? The answer is yes, so long as you squint hard enough and don&#8217;t worry too much about the fine print. </p><p>So, here are the scores on the doors:</p><ul><li><p><strong>Representation</strong>: AI represents information as knowledge. It mirrors reality well enough to act, yet because those patterns lack a transparent justification, they stop short of the most severe definitions of representational knowledge. </p></li><li><p><strong>Practice</strong>: A frontier model replays usage patterns rather than consulting explicit rules. Its know-how is powerful but disembodied, missing the sensorimotor feedback that lets human skill refine itself in the world.</p></li><li><p><strong>Situation</strong>: Its knowledge is deeply situated, though not like ours. 
Every answer is filtered through the languages, cultures, and editorial choices made by its makers and embedded in the training data. </p></li><li><p><strong>Power</strong>: Design choices like alignment tuning and corporate policy dictate which claims the model is allowed to produce or suppress. What it knows is bounded by what it is permitted to say.</p></li><li><p><strong>Emergence:</strong> LLMs are emergence made manifest. Answers arise from the collective dynamics of billions of small units rather than from any single stored fact. The model&#8217;s knowledge, like all networked knowledge, is robust but opaque.</p></li></ul><p>So, AI can know if we take &#8216;knowing&#8217; to mean reliably producing and acting on patterns that line up with the world. But if our definition must include embodied feeling, self-owned explanation, and an unambiguous locus of responsibility, then we have to think again.  </p><p>Across all five frames the machine scores a partial hit. It mirrors facts, rehearses practical routines, speaks from a data-bound position, carries its builders&#8217; priorities, and generates answers through emergence.  </p><p>But when all&#8217;s said and done, I find it hard to believe that AI doesn&#8217;t know anything. It may not know like we do, it may not know everything, but it does know <em>something</em>. 
</p>]]></content:encoded></item><item><title><![CDATA[Romantic Machines]]></title><description><![CDATA[Is AI heir to the Enlightenment?]]></description><link>https://www.learningfromexamples.com/p/romantic-machines</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/romantic-machines</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 20 May 2025 09:22:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EZhk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One day, hopefully in the not too distant future, I&#8217;d like to work on this newsletter full time. I want to keep writing about cultural and intellectual history, and I&#8217;d love to complete fifty entries in the <a href="https://www.learningfromexamples.com/s/ai-histories">AI Histories</a> series (47 more to go). To help me do that, all you have to do is pledge $5 to support the project before I turn on paid subscriptions sometime later this year. 
It&#8217;s free now, but your pledge will be hugely important to the future of <em>Learning From Examples.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EZhk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EZhk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 424w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 848w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 1272w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!EZhk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png" width="1456" height="870" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:870,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8972483,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/163763354?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EZhk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 424w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 848w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 1272w, https://substackcdn.com/image/fetch/$s_!EZhk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcc8b6f4-6dc3-4644-88c4-8bb677d6f425_3160x1888.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">John Martin&#8217;s <em>Sadak in Search of the Waters of Oblivion</em>, 1812 (detail)</figcaption></figure></div><p>In 2018, Henry Kissinger <a href="https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/">wrote</a> about AI in <em>The Atlantic.</em> </p><p>He was concerned about our place in a world filled with machines smarter than we are, and wondered whether their coming heralded the end of days for the long age of rationalism. 
</p><p>&#8216;What,&#8217; he asked, &#8216;will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?&#8217; </p><p>Kissinger is describing a kind of extreme deskilling, one that eventually atrophies the human capacity for reason. In a maximalist version of this scenario, most science is conducted by AI and human researchers become a sort of priestly class charged with interpreting the machine.   </p><p>The idea is that AI is the final product of the Enlightenment, one that eventually succeeds so thoroughly in harnessing its ideals that it eats the project from the inside out. I&#8217;ll write about the &#8216;new Enlightenment&#8217; another time, but in this post I want to focus on AI (by which I mean frontier LLMs) as the heirs to the age of reason.   </p><p>When I think about AI, I don&#8217;t think about science. I certainly don&#8217;t think about rationality. I think about the weirdness of systems that work for reasons we don&#8217;t quite understand. I think about odd exchanges with Claude 3.5 Sonnet and the struggle to figure out what is happening inside the models.   </p><p>I see AI as a clever but erratic organism, one much better at making connections between philosophical ideals or winging film criticism than at assisting in experimental research. You can see this in places where AI is used most: writing, art, coding (that&#8217;s creative, isn&#8217;t it?) and companionship. </p><p>Everyone knows the models are good fun and useful for the right things, but even the most confident user is aware that they aren&#8217;t the most rational constructs. They give you different answers every time, are liable to go loco, and occasionally make stuff up.  </p><p>That doesn&#8217;t sound like a successor to the Enlightenment to me. No, AI is the descendant of another great tradition. 
One that prized intuition over logic, mystery over method, and the sublime over the systematic. </p><p>AI is a Romantic project.</p><h2>Deep yearning </h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dwXk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dwXk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 424w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 848w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 1272w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dwXk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png" width="1168" height="878" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:878,&quot;width&quot;:1168,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1673537,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/163763354?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dwXk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 424w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 848w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 1272w, https://substackcdn.com/image/fetch/$s_!dwXk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eda6e6d-d6bb-4292-8808-f7a4b0fe2ba2_1168x878.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">William Wordsworth via Wikimedia Commons</figcaption></figure></div><p>The Romantic movement emerged in Europe in the late 18th century. It began in literature and philosophy before eventually finding a home in music, painting, and politics. It was a reaction to the hard-headed rationalism of the Enlightenment; at its core was a renewed attention to imagination and individual spirit.</p><p>Romantic thinkers saw the world as alive. Nature, with its own moods and meanings, became a source of wisdom. Poets like Wordsworth thought there was an underlying truth in the natural world that rationalism could not account for. Philosophers like Schelling believed the cosmos itself was an expression of living thought.   
</p><p>These images were the product of what the historian Eric Hobsbawm called the dual revolution: the political upheaval of the French Revolution and the economic transformation of the Industrial Revolution. Romanticism emerged in their wake. It looked for coherence in a world that felt unstable amidst the flowering of new forms of cultural, political, and economic life.  </p><p>The Romantics turned to ruins and folklore, to distant lands and ancient texts. It was a project about longing, about what it feels like to look for something that you can&#8217;t quite find. The Germans might call it <em>Sehnsucht</em>: a yearning for the indefinite. </p><p>In painting, the search looked like stormy landscapes, crumbling abbeys, and wandering figures made small by nature. In music, it took the form of swelling emotion heard in the symphonies of Beethoven or the operas of Wagner. </p><p>Romanticism lives in the residue of the past. But where other approaches to cultural remembrance, like Classicism, sought to rehabilitate what came before, inherent in Romantic thought is a weary acceptance that revival is impossible. 
</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="106" height="34.56521739130435" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:106,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>These ideas are those that animate the AI project. Neural networks are, of course, both very large and very difficult to understand. 
They are systems whose inner worlds resist comprehension, digital simulacra bludgeoned into submission by a conditioning regime that would make Pavlov blush. </p><p>They are the product of planetary-scale logistics and magnets for trillions of dollars in capital. The systems are digital reservoirs filled to the brim by half a century of information deluge.   </p><p>We&#8217;re drawn to AI for what it says about our place in the world: we are small.</p><p>This is what is happening when you hear AI people talk about building a &#8216;digital god&#8217; or terrifying &#8216;shoggoths&#8217;. It&#8217;s not about religion or transcendence in any formal sense. But it is a reaction against intelligence as optimisation, one that gives voice to the urge to find meaning in that which we can&#8217;t understand. It&#8217;s a Romantic impulse in that it insists on the unknowable. </p><p>A version of the same instinct sits behind how people respond to models like Claude 3.5 Sonnet. The model was pretty good, sure, but the vibes were just&#8230;better than the competition. That&#8217;s probably the result of clever post-training, but it&#8217;s curious that Sonnet 3.7 doesn&#8217;t quite strike the same chord. </p><p>Or take GPT-2.  When it was released all the way back in 2019, it wasn&#8217;t especially coherent. And it certainly wasn&#8217;t particularly useful by today&#8217;s standards. What it did have was what I can only describe as a kind of <a href="https://gwern.net/gpt-2">lyricism</a>. It didn&#8217;t sound like a person, but it didn&#8217;t always sound like a machine either. </p><p>It was Romantic. I&#8217;m not saying that without being a little sentimental, but I do think GPT-2 provoked an emotional resonance that newer models struggle to match despite (or because of?) their polish. That sense of intrigue &#8212; not knowing what exactly you like about a half-cooked language model &#8212; is the same impulse that made the Romantics tick. 
</p><h2>Chasing the sublime </h2><p>This essay opens with <em>Sadak in Search of the Waters of Oblivion. </em>Painted by John Martin in 1812, the piece shows a man scrambling through a mountainous inferno. He&#8217;s alone and desperate, set against a backdrop that reminds us what small things humans are.  </p><p>It captures what Edmund Burke would call the sublime: something massive, obscure, and overwhelming. A thing that is terrifying and awe-inspiring in equal measure. Size is part of the equation, but the sublime is really about the mind confronting something it can&#8217;t fully process. </p><p>Towering mountains, violent storms, and dark nights were all things that overwhelmed the senses to produce a kind of thrilling terror. Later thinkers like Kant <a href="https://psyche.co/guides/how-to-think-about-the-sublime-in-the-natural-world">suggested</a> that the sublime came from the mind&#8217;s capacity to grasp its own failure to comprehend the infinite.</p><p>It&#8217;s a useful concept for reckoning with AI at the limit. Trained on more words than any single person could hope to read in a thousand lifetimes, today&#8217;s giant neural networks are systems that wrinkle the brain. To create one, you use ultraviolet light to fabricate chips with billions of features. Then you drop thousands of them in a data centre somewhere and ask them to multiply matrices until the sand starts to think.  </p><p>The networks themselves exist as mathematical constructs on top of the material world. They contain trillions of parameters adjusted via gradients that reflect patterns in high-dimensional space. If you printed them out, they would take you years to read. Researchers generally don&#8217;t know what function or piece of information certain parameters correspond to. </p><p>Interpretability research tries to close this gap by mapping individual neurons or circuits to linguistic, visual, or conceptual patterns. 
Some progress has been made, with Anthropic recently <a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html">showing</a> that certain neurons activate in response to concepts like cities or characters. But this is the exception, not the rule. </p><p>These are systems that write poetry, draft emails, and explain quantum mechanics whose internal structure resists human inspection. They reflect us in their language, but their information processing mechanisms are not our own. We know everything about AI, except how it works. </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="110" height="35.869565217391305" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:110,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Romanticism was a critique of modernity. As factories rose and cities swelled, its thinkers questioned the idea that technological progress always led to flourishing. Despite the marvels of technology, they saw a world in which each day made it harder to be human. </p><p>It&#8217;s a concern you are probably familiar with, one that has been reimagined and repackaged by critics of modernity for some time. Robert Putnam&#8217;s <em>Bowling Alone </em>famously charted the collapse of community in late 20th-century America, showing how people had grown increasingly isolated from neighbours, churches, unions, and civic groups. </p><p>A little later, Daniel Rodgers&#8217; <em>Age of Fracture</em> argued that the collective frameworks that once structured cultural thought (stuff like shared social norms and stable points of reference) were losing out to the irresistible power of the cult of the individual. </p><p>Technology didn&#8217;t cause these problems, but it did give them form. As AI models stand in for conversation, companionship, and other things we lack, it is hardly surprising that they generate such animosity. They are part of the long story of what it means to be human in a world increasingly built by machines. </p><p>This anxiety is most keenly felt in the disconnect between what AI is and what it does. </p><p>We&#8217;re talking about systems trained on massive datasets that are capable of superhuman feats of pattern matching. But anyone who&#8217;s spent time in a circular conversation with ChatGPT is unlikely to describe it as rational. Not to mention they can be easily jailbroken, much to the delight of prompt engineers and the chagrin of developers. 
</p><h2>Romantic technology </h2><p>This is why AI is a Romantic technology. Because it is vast and resists understanding. Because it is emotionally and aesthetically resonant. Because it replays our fantasies about creation and asks us to reckon with our place in the world. </p><p>These qualities are responsible for the allergic reaction that some people have to AI. When you see someone write off the entire LLM project as &#8216;bullshit generators&#8217;, it&#8217;s because they were looking for a product of the age of reason but got one from the age of romance.</p><p>To get the most out of AI, we need to manage our expectations. Treat large models as dilettantes rather than librarians. Better to accept that their value lies in surprise and estrangement and pair them with the drier stuff that keeps us honest. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Who gets to censor AI?]]></title><description><![CDATA[All things in moderation. Especially content moderation]]></description><link>https://www.learningfromexamples.com/p/who-gets-to-censor-ai</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/who-gets-to-censor-ai</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 13 May 2025 09:25:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!98Ox!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I hope to eventually work on this newsletter full time. 
To help me do that&#8212;to give you more essays, more history, and more good times&#8212;I need your help. All you have to do is pledge $5, so that I know I have your support for the moment I turn on paid subscriptions. We&#8217;re currently on track to hit the goal of <strong>100 pledges by May 31</strong>, but there&#8217;s still much further to go. If you like what you read here, this is the best moment to help Learning From Examples. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!98Ox!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!98Ox!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 424w, https://substackcdn.com/image/fetch/$s_!98Ox!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 848w, https://substackcdn.com/image/fetch/$s_!98Ox!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 1272w, 
https://substackcdn.com/image/fetch/$s_!98Ox!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!98Ox!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png" width="1456" height="875" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:875,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1829136,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!98Ox!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 424w, https://substackcdn.com/image/fetch/$s_!98Ox!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 848w, 
https://substackcdn.com/image/fetch/$s_!98Ox!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!98Ox!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8709aef-2ec1-4921-b05e-dcde6e002668_1906x1146.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Rick and Ilsa in <em>Casablanca </em>(1942)</figcaption></figure></div><p><em>Casablanca</em> is one of those 
films whose reputation precedes it. For anyone unfamiliar, it&#8217;s about a guy named Rick who runs a bar in Morocco in the Second World War. Rick drinks whiskey, plays nice with soldiers, and complains about his lost love Ilsa. </p><p>She turns up in short order, of course, which gives us the classic line &#8216;Of all the gin joints in all the towns in all the world, she walks into mine.&#8217; But all is not well. Ilsa is married to someone else. </p><p>Her new man is Victor Laszlo, a resistance fighter looking to undermine the Nazi war effort. A lot of moping ensues before Rick&#8212;who is now hoping for another shot with Ilsa&#8212;eventually decides to help her flee the country with Laszlo. </p><p><em>Casablanca </em>is a good bit of American propaganda. It came out in 1942 having successfully answered a bunch of <a href="https://theconversation.com/you-must-remember-this-casablanca-at-75-still-a-classic-of-wwii-propaganda-87113">questions</a> from US officials like: &#8216;will this picture help win the war?&#8217; Few will be surprised to hear that the Germans are the bad guys who get their just deserts and the Allies are morally upright fellas who come out on top. </p><p>The film is a nice example of what happens when a director is asked to toe the line. Rick and Ilsa, we are told, are madly in love. But they don&#8217;t really act like it. There&#8217;s almost no physical affection between them, even in the flashbacks when we&#8217;re meant to believe they&#8217;re head over heels. </p><p>That&#8217;s because the makers saw no alternative. Since 1934 the American film industry had operated under the Motion Picture Production Code, a set of self-imposed moral guidelines designed to pre-empt federal censorship. Better known as the Hays Code, the system banned profanity, nudity, and the sympathetic portrayal of crime or adultery. Its function was to present a virtuous version of American life. </p><p>When the U.S. 
entered the Second World War, Washington didn&#8217;t need to build a censorship apparatus from scratch. Hollywood had already built one for them. <em>Casablanca</em> was produced within that system, which is why Rick and Ilsa&#8217;s affair can be implied but never shown. The Hays Code forbade adultery from being depicted positively, which forced the film to turn romance into ambiguity. </p><p>To show an affair would be to endorse it, so the love story becomes elliptical. Rick and Ilsa stare at each other like people remembering dreams. Their lines seem half-finished. &#8216;We&#8217;ll always have Paris&#8217; works because the audience is only shown the squeaky-clean bits of their time in the City of Light. </p><p>Rick&#8217;s penchant for whining meant that <em>Casablanca</em> never did it for me, but it&#8217;s hard to deny that&#8212;at least with respect to the love affair&#8212;the result is powerful. Passion becomes tension and desire becomes sacrifice. Rick, standing on the airfield, lets Ilsa go because the script won&#8217;t let him have her. 
</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="132" height="43.04347826086956" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:132,&quot;bytes&quot;:12198,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The Hays Code was a programme of self-censorship, one that Hollywood imposed on itself because it saw no alternative. 
By the early 1930s, the film industry was facing a storm of moral outrage from church groups, women's leagues, and state censorship boards, all accusing it of corrupting the public. There was too much sex, too much crime, too many independent women. </p><p>Federal regulation began to loom after a string of celebrity scandals agitated lawmakers, much like the &#8216;video nasties&#8217; craze of the 1980s in the UK that I <a href="https://www.learningfromexamples.com/p/useful-fictions">wrote about</a> a few weeks ago. Fearing the worst, the studios decided to take pre-emptive action to keep the government out and the box office open. </p><p>Enter Will Hays, a Presbyterian elder and former U.S. Postmaster General. He was respectable, religious, and well-connected. But perhaps more importantly, he was happy to be the face of Hollywood's clean-up operation. </p><p>Under his leadership, the industry adopted rules designed to keep films &#8216;morally wholesome.&#8217; The Code banned profanity, nudity, interracial romance, &#8216;excessive and lustful kissing,&#8217; and any portrayal of crime or adultery that might seem enjoyable. By 1934, the studios agreed to bind themselves to a new enforcement arm, the Production Code Administration, which had the power to deny a film its seal of approval. No seal meant no distribution. The government backed off and religious groups declared victory. The studios kept control as long as they told the right story.</p><p>The Hays Code smuggled in a particular view of human nature, one that imagined audiences as easily swayed, morally porous, and incapable of dealing with ambiguity. It saw the movie theatre become a moral classroom where crime was always punished and sex always implied. It should go without saying that the authorities always came out on top. </p><p>Civil society and the studios were locked in a contest over who gets to define decency and on what grounds. That&#8217;s why content moderation is rarely just about content. 
It&#8217;s about the worldview on the other side of the filter and the assumptions about what people can handle, what might corrupt them, what kinds of lives are worth representing. Good intentions or otherwise, behind every censorship regime is a vision of what people are and what they are capable of. </p><h2>From movie to model</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1vpg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1vpg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1vpg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg" width="1024" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Next Chapter: Should \&quot;Gone With the Wind\&quot; Be Gone? - Hudson Valley  Writers Guild&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Next Chapter: Should &quot;Gone With the Wind&quot; Be Gone? - Hudson Valley  Writers Guild" title="The Next Chapter: Should &quot;Gone With the Wind&quot; Be Gone? - Hudson Valley  Writers Guild" srcset="https://substackcdn.com/image/fetch/$s_!1vpg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1vpg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8b72f75-2964-4542-8e3e-28a678a6b960_1024x768.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Poster for <em>Gone With the Wind.</em></figcaption></figure></div><p>The belief that people are deeply impressionable is the same assumption that animates part of today&#8217;s panic over AI. Take misinformation. Powered by generative models that create fake news and recommender systems that serve it en masse, AI is ushering in a new age of epistemic insecurity.</p><p>Except it isn&#8217;t. Despite the fact that it feels plausible (how can it not when AI certainly has the capability to do these things?), pretty much every <a href="https://www.nature.com/articles/s41586-024-07417-w">credible study</a> out there shows that not to be the case. 
Next time you hear someone call themselves a &#8216;misinformation expert&#8217;, I encourage you to take a look at their data for yourself. </p><p>Despite some recent high-profile <a href="https://edition.cnn.com/2025/01/07/tech/meta-censorship-moderation">moves away</a> from moderation, the last ten years generally followed the path laid out by the Hays Code. Control the flow of information. Shape the message. Protect the public from themselves. Epistemic security for all is certainly a laudable goal, but one that history suggests is tricky to legislate for.</p><p>The dynamics work differently with respect to recommender systems (more like engines for distribution) and generative models (more akin to movie-making). The latter, which I&#8217;ll focus on for the rest of this post, use a whole bunch of mechanisms to shape outputs. Probably the most visible examples are filters that screen responses before they reach the user, but the entire development process&#8212;from pre-training to post-training modifications to alignment mechanisms&#8212;determines how a model can act. </p><p>There are lots of good use-cases for these sorts of tools as they relate to shaping model behaviour. Preventing models from helping people construct biological weapons or conduct sophisticated cyber attacks is straightforwardly good. But aside from uncontroversial red lines, questions about what sort of values an AI model ought to abide by are horribly fraught. </p><p>Too often, what counts as harm is based on a narrow vision of humanity, one that assumes we&#8217;re all psychologically fragile. Should a model refuse to generate content critical of religion? Should it avoid satire? Should it prioritise safety over openness? The problem with answering these questions is that you inevitably disappoint someone. Even attempts at pluralistic alignment tend to work at the group level rather than the personal. 
</p><p>Whether it&#8217;s the Hays Code or content filters, every moderation system encodes assumptions about what knowledge is valuable and who can be trusted with it. AI systems generally prize civility over confrontation, consensus over dissent, and safety over ambiguity. The result is a machine that flattens reality into something that is acceptable in the broadest possible terms. </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nsas!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png" width="132" height="43.04347826086956" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:150,&quot;width&quot;:460,&quot;resizeWidth&quot;:132,&quot;bytes&quot;:12198,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162870944?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a011107-4790-4b64-9f4c-4b8fcace22de_460x330.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Nsas!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 424w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 848w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1272w, https://substackcdn.com/image/fetch/$s_!Nsas!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e86f8a1-ceb0-4b5e-97c4-6b4f740ff3dc_460x150.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>In <em>Discipline and Punish</em>, Foucault argued that power works by shaping what&#8217;s normal. 
By teaching us what questions to ask and how they ought to be answered, moderation contains a certain productive logic. The Hays Code produced a moral universe in which sin could be hinted at but never rewarded, where some desires were real but unspeakable. </p><p>Normality manifests in the AI project as models that sound like the HR department. That makes a lot of sense when you remind yourself that the goal of developers is to protect themselves, that they are incentivised to make models that don&#8217;t ruffle too many feathers. Many decisions are made by small teams under pressure to minimise risk, drawing on institutional reference points that feel legally non-explosive. </p><p>The result is a voice that sounds sterile, but it&#8217;s one that is already starting to change. As people begin to actually use AI, the labs are starting to realise that character counts. That was the bet behind models like Claude Sonnet 3.5 (RIP) and to a lesser extent GPT-4.5, which suggests that nobody wants to pay for a model that sounds like it&#8217;s doing corporate mediation. </p><p>As science and technology studies thinkers like to point out, power is about how systems are built. Moderation lives in the pipes. It&#8217;s in the training data, the fine-tuning, the user interface, and the invisible refusals. Haraway would call it situated knowledge, a particular view of the world that in this instance we might describe as elite, Western, liberal, and professional. </p><p>If knowledge is situated, then so is censorship. The Hays Code reflected the anxieties of a particular slice of American society, usually some combination of white, affluent, pious, and powerful. These groups were asserting a claim about who should shape the moral imagination of the nation, one that was eventually formalised by Hays. What looked like universal decency was in fact a tight set of values elevated through institutional leverage.</p><p>Better still, the rules were enforced by the studios themselves. 
Compliance was built into the production process as scripts were pre-approved, endings rewritten, and scenes cut before they were even filmed. It was a funny kind of censorship machine, one that could forgo overt repression because the system had internalised constraint. </p><p>The boundaries felt natural because the people creating the content were also managing its limits. Far from forcing movie-makers to put out bad films, the Code coincided with what critics generally agree was the Golden Age of Hollywood. Some of my all-time favourites like <em>All About Eve</em>, <em>The Philadelphia Story,</em> and <em>Gone With the Wind</em> were all made under the Hays Code. </p><p>In part, that&#8217;s because writers used the rules as a source of inspiration. A murder couldn&#8217;t go unpunished, so noir turned fatalism into an art form. Romance couldn&#8217;t be consummated, so filmmakers mastered the language of implication. That doesn&#8217;t mean censorship is good, but it does mean that the old adage&#8212;constraint breeds creativity&#8212;has something going for it. </p><h2>We'll always have Paris</h2><p>There&#8217;s a scene near the end of <em>Casablanca</em> where Ilsa says to Rick &#8216;But what about us?&#8217; He looks at her and replies &#8216;We'll always have Paris.&#8217; It&#8217;s a line that shouldn&#8217;t really work, one that ought to feel at best evasive and at worst hollow. </p><p>But it lands because it gestures to something we&#8217;re not allowed to see. The affair is gone and the love sublimated into memory. It&#8217;s a perfect product of the Code: romantic sacrifice wrapped in moral clarity. </p><p>Content moderation has never just been about keeping people safe. It&#8217;s about who gets to decide what&#8217;s acceptable based on a particular view of what people are like and what they need protecting from. 
That view shapes what gets censored and what gets created.</p><p>Like the film studios of the 1930s, today&#8217;s model-makers are engaged in a programme of content moderation or self-censorship (delete as appropriate) to protect the public and to protect themselves. Better to impose rules that shape what models can say than risk government intervention. </p><p>The Hays Code shows that self-imposed moral constraints, driven by pressure from the right places, can end up defining an entire cultural era. What began as a defensive gesture became a generative force that reshaped the moral atmosphere of American life. </p><p>Something similar is happening with AI, except the stakes are higher. These systems are both interactive and intimate. They&#8217;re tutors, confidants, creative partners, and companions. That means that every moderation decision, every filtered answer, and every refusal carries emotional weight. </p><p>Culture shapes cultural artefacts, and cultural artefacts shape culture. That is as true for Hollywood as it is for AI. Both self-censor to avoid the wrath of the state, both mould tastes, and both set the limits of imagination. </p><p>The question is who gets to set AI&#8217;s rules. Is it some vague sense of &#8216;the public&#8217; <a href="https://www.anthropic.com/research/collective-constitutional-ai-aligning-a-language-model-with-public-input">through</a> focus groups and polling? Is it the trust and safety teams inside the labs? Or in the end will it be government? I see three major ways forward, though there are no doubt others:</p><ul><li><p><strong>Top-down licensing</strong>: Governments eventually mandate a safety threshold. Firms compete on performance inside a fixed compliance box, like age ratings for film. </p></li><li><p><strong>User-selectable guardrails:</strong> A marketplace of &#8216;safety profiles&#8217; in which you pick your own filter, sort of like Google&#8217;s sliders on its Vertex platform. 
</p></li><li><p><strong>Open-weight maximalism</strong>: Open models proliferate, guardrails become optional, and governments tighten application-layer controls.</p></li></ul><p>If only a handful of rule-makers define the guardrails, we inherit their biases at scale. If we embrace openness, we allow more genuinely harmful content to slip through the cracks. Even user-selectable guardrails only deal with your own AI content, rather than content generated by others. </p><p>Whatever the case, we&#8217;re left with a question with no clear answer: who do you trust to hold the pen?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Model collapse as social pathology ]]></title><description><![CDATA[Recursion, recursion, recursion...anybody?]]></description><link>https://www.learningfromexamples.com/p/contra-model-collapse</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/contra-model-collapse</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 06 May 2025 08:39:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OxRB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Quick reminder: I&#8217;m testing the waters to see whether there&#8217;s a path towards working on this newsletter full time, with a goal of<strong> 100 pledges by May 31</strong>. So far we&#8217;re well on the way, but there&#8217;s still much farther to go. 
Pledges are a small but hugely important step towards allowing me to spend more time on this project. This is the best moment to help Learning From Examples if you like what you read here. </p><p>I&#8217;ll also be sending the second edition of the AI Histories series on Friday. These are short pieces (about 1,000 words) that deal with an important moment in AI history. This week&#8217;s will be about the man behind genetic algorithms.    </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OxRB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OxRB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 424w, https://substackcdn.com/image/fetch/$s_!OxRB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 848w, https://substackcdn.com/image/fetch/$s_!OxRB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 1272w, 
https://substackcdn.com/image/fetch/$s_!OxRB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OxRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png" width="890" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1838853b-d6d6-419d-955f-39e37d97be07_890x652.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:890,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1030487,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162676340?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OxRB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 424w, https://substackcdn.com/image/fetch/$s_!OxRB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 848w, 
https://substackcdn.com/image/fetch/$s_!OxRB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 1272w, https://substackcdn.com/image/fetch/$s_!OxRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1838853b-d6d6-419d-955f-39e37d97be07_890x652.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption"><em>The Madonna with the Long Neck</em> by Parmigianino dating from c. 
1535-1540.</figcaption></figure></div><p>On the night of January 7, 1610, Galileo Galilei walked onto a balcony and looked through the lens of a telescope. Tilting the device towards the biggest planet in the solar system, he spotted three stars near Jupiter and recorded their positions in a notebook.</p><p>He looked again for the same stars six days later, but this time their positions had shifted. For Galileo, there was one likely reason for the change: they weren&#8217;t stars at all but moons orbiting Jupiter. The Italian astronomer had long bought Copernicus&#8217; theory that our planet was not the centre of the universe. Now he had proof.</p><p>The discovery delivered a blow to the Ptolemaic model of the universe, which held that every planet followed a groove in a transparent, concentric sphere around a stationary Earth. In the geocentric system, any new irregularity&#8212;be it retrograde motion or varying brightness&#8212;was patched with epicycles, deferents, and equants until the system groaned under its own complexity.</p><p>As the little fixes multiplied, the system eventually became too complex to function properly. The episode is a good reminder that faulty systems don&#8217;t break right away, and that models of the universe are just that: representations. When a system for representing reality begins to learn more from itself than from the world it was built to describe, that&#8217;s when you know its days are numbered. 
</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Baeo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" width="148" height="60.87547169811321" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:218,&quot;width&quot;:530,&quot;resizeWidth&quot;:148,&quot;bytes&quot;:11942,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162676340?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The Ptolemaic model was created by Claudius Ptolemy, a Greco-Egyptian astronomer who lived in Alexandria in the 2nd century CE. 
Ptolemy synthesised centuries of Greek astronomy into a grand system described in the <em>Almagest</em>, which became the dominant model of the cosmos in Europe and the Islamic world for over 1,400 years.</p><p>The Earth held top billing at the centre of the universe, with the Sun, Moon, planets, and stars orbiting it in perfect circles. To account for the looping paths of the planets, the model introduced smaller circles, or epicycles, that rode on top of larger ones. </p><p>But the model was unstable. Each discrepancy in planetary motion prompted astronomers to add another epicycle or tweak the equant. Ptolemy&#8217;s framework was so flexible that it essentially overfitted the heavens. </p><p>The philosopher Thomas Kuhn, whom we <a href="https://www.learningfromexamples.com/p/the-fly-and-the-filter">briefly discussed</a> in last week&#8217;s essay, thought that what appealed to Copernicus was the power to jettison these contrivances from his intellectual framework. Kuhn famously believed that most science operates within a shared system of sense-making that tells us what questions are worth asking and sets basic guidelines for how to answer them. </p><p>But over time, if too many results don&#8217;t fit the model, confidence in the paradigm begins to wane. This eventually leads to crisis, which may be resolved by a revolutionary project that brings with it a new way of doing things and new problems to solve. </p><p>The Ptolemaic system collapsed when its own patchwork of fixes had grown so baroque that it no longer looked credible. 
Only when those &#8216;fixes&#8217; piled up did the <a href="https://www2.hao.ucar.edu/education/scientists/aristarchus-of-samos-310-230-bc">older heliocentric schemes</a> finally seem the more elegant alternative.</p><h2>Illusionary collapse</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HUij!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HUij!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HUij!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HUij!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HUij!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HUij!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg" width="702" height="652.2608695652174" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1154,&quot;width&quot;:1242,&quot;resizeWidth&quot;:702,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;scholasticism: in medieval Europe, the school of thought that used ...&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="scholasticism: in medieval Europe, the school of thought that used ..." title="scholasticism: in medieval Europe, the school of thought that used ..." srcset="https://substackcdn.com/image/fetch/$s_!HUij!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HUij!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HUij!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HUij!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b3ab1d-bdec-4f02-9a3b-c50c5104607c_1242x1154.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <em>Grandes cr&#243;nicas de Francia</em>.</figcaption></figure></div><p>Model collapse is one of those terms that people love to say. It&#8217;s often used erroneously, but for now I&#8217;ll refer to the definition formulated by Shumailov and colleagues in their 2024 <a href="https://www.nature.com/articles/s41586-024-07566-y#:~:text=generations%20GPT,term%20learning">paper</a>:</p><blockquote><p>&#8216;Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. 
Being trained on polluted data, they then mis-perceive reality.&#8217; </p></blockquote><p>The research caused something of a stir last year when it found that as models retrain on their own outputs, they lose contact with the true data distribution. For language models, the group found that a model&#8217;s &#8216;perplexity score&#8217; (a metric evaluating how well it predicts a sequence of words) worsened significantly when the model was trained on the outputs of a previous version. </p><p>According to the group, while the original model fine-tuned with real data achieves a perplexity score of 34, the perplexity score for models trained for five epochs on generated data increases to between 54 and 62. (A higher figure indicates worse performance because perplexity essentially measures the inaccuracy of a model&#8217;s predictions.)</p><p>In other words, collapse happens when models forget the variability of reality and converge on a single point. Nature News <a href="https://www.nature.com/articles/d41586-024-02420-7#:~:text=Training%20artificial%20intelligence%20%20,generated%20text%20pervade%20the%20Internet">called</a> it a <em>&#8216;</em>cannibalistic<em>&#8217;</em> phenomenon that spelled trouble for large models trained on synthetic data. Models pollute the internet with low-quality data, which is fed back into the next generation as part of the training process. </p><p>But the devil is in the details. </p><p>The researchers trained on 90% synthetic data and 10% original data for each new generation of training. Compare this approach with a <a href="https://arxiv.org/pdf/2404.01413">paper</a> from Stanford in which synthetic data is added incrementally over time: the first round uses no synthetic data, half of the data is synthetic in the second round, two thirds in the third, and three quarters in the fourth.</p><p>Using this method, the Stanford group found minimal evidence of model collapse. 
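</p><p>The difference between the two regimes is easy to sketch. What follows is a toy simulation, not either paper&#8217;s actual setup: each &#8216;generation&#8217; is a model whose outputs slightly understate the spread of its training data (a stand-in for the tails that finite sampling loses), and the assumed 5% per-generation shrinkage is purely illustrative.</p>

```python
def spread_after(mode, generations=30, bias=0.95):
    """Track the spread (variance) of training data across model
    generations. Each generation's synthetic output shrinks its
    training variance by bias**2, mimicking lost tails."""
    dataset_vars = [1.0]                      # start with one real dataset
    for _ in range(generations):
        train_var = sum(dataset_vars) / len(dataset_vars)
        synthetic_var = (bias ** 2) * train_var
        if mode == "replace":                 # next generation sees only synthetic data
            dataset_vars = [synthetic_var]
        else:                                 # "accumulate": pool real + all synthetic
            dataset_vars.append(synthetic_var)
    return (sum(dataset_vars) / len(dataset_vars)) ** 0.5

print(spread_after("replace"))     # decays geometrically towards zero
print(spread_after("accumulate"))  # stays near the true spread of 1.0
```

<p>Under replacement the spread collapses geometrically; under accumulation the real dataset keeps anchoring the average, which is the Stanford result in miniature.</p><p>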
As a result, the authors <a href="https://arxiv.org/pdf/2404.01413">said</a> that the results &#8216;demonstrate that accumulating successive generations of synthetic data alongside the original real data avoids model collapse.&#8217; Given that real usage is based on <strong>accumulation</strong> rather than <strong>replacement</strong>, the upshot is that model collapse probably doesn&#8217;t spell doom for AI&#8217;s prospects.  </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Baeo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" width="148" height="60.87547169811321" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:218,&quot;width&quot;:530,&quot;resizeWidth&quot;:148,&quot;bytes&quot;:11942,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162676340?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>There&#8217;s a bit in Robert Zemeckis&#8217; <em>Contact</em> where Jodie Foster tries to explain to Matthew McConaughey why his faith in God is misplaced. She invokes the philosophical precept &#8216;Ockham&#8217;s Razor&#8217;, to which he replies &#8216;Hockam&#8217;s Razor? It sounds like a slasher movie.&#8217; </p><p>The idea is as well known as it is straightforward: all things being equal, the simplest explanation tends to be the most likely. It takes its name from William of Ockham, a 14th-century Franciscan friar who argued that the Scholastic method had become entangled in abstract distinctions. At the University of Oxford, Ockham found that theological debates often revolved around arguments that seemed more concerned with internal consistency than with observable reality. </p><p>In the Scholastic tradition of his time, philosophers engaged in elaborate debates about the nature of universals, which he believed led to increasingly complex and self-referential arguments. Ockham didn&#8217;t care whether &#8216;redness&#8217; was a quality that existed independent of objects, or whether &#8216;humanity&#8217; was a real essence shared by all people. </p><p>What he cared about was the specific red apple or the individual human being. The rest of it was decadent self-reference. </p><p>To convince his contemporaries, the English friar argued that many of the complex distinctions made by his peers were unnecessary. 
His insistence on paring down explanations to their bones was a response to the recursive character of Scholastic debates, which often built upon layers of prior commentary without re-evaluating foundational assumptions.</p><p>The Scholastic method had become so navel-gazing that it risked losing touch with the realities it aimed to explain. Ockham&#8217;s push for simplicity was an attempt to recalibrate the intellectual model by grounding it in observable phenomena and logical clarity. </p><p>Ockham&#8217;s frustration was about systems losing contact with reality. He saw a tradition that had become so fluent in its own language that it forgot what the words were for. </p><p>You can see a similar thing happening in art, where critics level the most dreaded of artistic put-downs: <em>derivative</em>. </p><p>Take Parmigianino&#8217;s <em>Madonna with the Long Neck</em>, which opens this essay. Painted between 1535 and 1540, the painting shows us the Virgin Mary with an unnaturally elongated neck, slender fingers, and an elegant posture that defies anatomical correctness. </p><p>Parmigianino&#8217;s work parodies the compositions of his predecessors like Raphael and Leonardo da Vinci. A reaction to the naturalism of the High Renaissance, it distorts its subject to create a sense of elegance that transcends the real world.</p><p><em>Madonna with the Long Neck </em>is an example of Mannerism. Where Renaissance masters like Leonardo, Raphael, and Michelangelo aimed for harmony, proportion, and balance grounded in nature, Mannerist painters embraced distortion, exaggeration, and artificiality.</p><p>I think of Mannerism as a kind of recursive practice, an early example of art reflecting on art. Mannerist painters quoted gestures, borrowed poses, and echoed compositions from their predecessors. Often for the better but sometimes for the worse. 
</p><p>For Parmigianino and his contemporaries, the visual grammar of the Renaissance had become so well understood that it was no longer a guide to seeing the world. Instead, it was a set of conventions to be played with, warped, and pushed to the edge of recognisability.</p><p>Critics at the time were divided. Giorgio Vasari, a contemporary of Parmigianino&#8217;s, admired the Italian&#8217;s work, while others found the painting &#8216;mannered&#8217; in the pejorative sense: stylised for its own sake.</p><p>The line that Mannerism walks is the one between homage and parody, refinement and artifice. It&#8217;s a reminder that the boundary separating art and slop, original and derivative, is obvious to some but invisible to others. </p><h2>The feature map and the territory </h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lPB8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lPB8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 424w, https://substackcdn.com/image/fetch/$s_!lPB8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 848w, https://substackcdn.com/image/fetch/$s_!lPB8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 1272w, 
https://substackcdn.com/image/fetch/$s_!lPB8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lPB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png" width="1456" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1342199,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162676340?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lPB8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 424w, https://substackcdn.com/image/fetch/$s_!lPB8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 848w, 
https://substackcdn.com/image/fetch/$s_!lPB8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 1272w, https://substackcdn.com/image/fetch/$s_!lPB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f5e8e47-5122-41d6-9cef-563f19d20259_1696x760.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Outputs one and seventy-five from ChatGPT when prompted &#8216;Create the exact replica of this image, don't change a 
thing&#8217;. See post <a href="https://www.linkedin.com/feed/update/urn:li:activity:7322915637205381120/">here</a>. </figcaption></figure></div><p>The French philosopher Jean Baudrillard famously described the modern world as one of simulacra, in which representations take on a life of their own. In Baudrillard&#8217;s <strong>hyperreality</strong>, models circulate without stable points of reference and the map precedes the territory. </p><p>What he meant was that signs point only to other signs. The meaning of a photo, an advert or a film is borrowed from other media. We consume <strong>representations</strong> of the world and then representations of those representations. For Baudrillard, the &#8216;hyperreal&#8217; is the condition in which the simulation becomes more dominant than the real thing. </p><p>Like most people familiar with the Frenchman, I find Baudrillard at best head-spinning and at worst frustrating. But his work is useful for understanding the relationship between model and meaning, copy and original. It helps us see that, as with the Ptolemaic model, imitation looks like stability right up until the collapse. </p><p>Part of the problem concerns the relationships within a system. Ian Hacking&#8217;s &#8216;looping effect&#8217;, for example, shows how feedback moulds a classification: an original category (A) becomes a perceived category (B) solely through the actions of those who perceive it as (B). </p><p>Hacking noted that when people are labelled (say, with respect to a psychiatric condition), they may alter their behaviour in light of that classification. As a <a href="https://link.springer.com/referenceworkentry/10.1007/978-1-4614-5583-7_608#:~:text=The%20looping%20effect%20dynamic%20is,People%20do%20not">result</a>, &#8216;categorizing people opens up new ways to think of themselves, new ways to behave, and new ways to think about their pasts&#8217;. </p><p>The very act of modelling changes the modelled. 
In economics, George Soros has made the same point under the name of reflexivity, wherein market participants&#8217; beliefs not only reflect but shape economic fundamentals. The man who broke the Bank of England argues that financial bubbles arise because forecasts influence prices, which alter the reality those forecasts tried to predict. </p><p>The point is that knowledge is recursive. A model generates signals, and those signals feed back into the modelling. Without a firm anchor to direct experience, the loop begins to tighten. First a little. Then a lot. </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Baeo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png" width="148" height="60.87547169811321" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:218,&quot;width&quot;:530,&quot;resizeWidth&quot;:148,&quot;bytes&quot;:11942,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/162676340?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Baeo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 424w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 848w, https://substackcdn.com/image/fetch/$s_!Baeo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Baeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d261a7-1efa-4c62-9cbd-474155c7c8dd_530x218.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>There are two ways that people generally describe model collapse in AI. The first, explained above, deals with training-time collapse: what happens when you repeatedly train a model on its own outputs (or the outputs of other models). </p><p>The other, which is not actually an instance of model collapse, deals with inference-time feedback. A popular recent example of this idea can be seen in the picture at the beginning of this section. It shows an experiment in which a model is asked to recreate an image exactly as it appears. Seventy-five iterations later, the result bears no likeness to the original. </p><p>Some commentators <a href="https://www.linkedin.com/feed/update/urn:li:activity:7322915637205381120/">reckon</a> inference-time feedback spells doom for the AI project. They reason that because these examples show AI generating sloppy outputs when prompted with originals, the entire internet will be flooded with low-quality training material, and the next generation of models is therefore doomed. </p><p>But as we know, if each new model is trained on a combination of human-generated and synthetic data, collapse can be arrested because grounding puts a lid on the feedback loop. If AI-generated data wholly replace human data, the model becomes something like a funhouse hall of mirrors. But if we keep accumulating real text or images, outputs stay solid. </p><p>Both things going on here&#8212;the conflation of inference-time feedback with training-time collapse, and the strange ideas about how developers actually make models&#8212;suggest that talk of collapse is itself self-referential. 
</p><p>The claim becomes a kind of collapse in its own right. It&#8217;s recycled, repeated, and stripped of nuance with each iteration as it bubbles up to the surface of your LinkedIn feed. People cite the same handful of examples, amplify each other&#8217;s anxieties, and forget to check whether those fears still correspond to reality. </p><h2>The Ptolemaic problem</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t5QB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t5QB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 424w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 848w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t5QB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg" width="658" height="485.604" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:369,&quot;width&quot;:500,&quot;resizeWidth&quot;:658,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t5QB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 424w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 848w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!t5QB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970cc3dd-1f01-43b6-bc55-3ea3ba1dc8ef_500x369.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">An illustration of a Ptolemaic geocentric system by Portuguese cosmographer and cartographer Bartolomeu Velho, 1568 (Biblioth&#232;que Nationale, Paris). </figcaption></figure></div><p>By 1613, Galileo had seen the new world and embraced its implications. Writing with dramatic flair, he <a href="https://press.uchicago.edu/ucp/books/book/chicago/O/bo8540269.html">scolded</a> his opponents for clinging &#8216;with such obstinacy in maintaining Peripatetic conclusions that have been found to be manifestly false&#8217;.  </p><p>Galileo&#8217;s practical experiments connected Copernicus&#8217;s theory to an underlying physical reality. Ptolemy&#8217;s cosmos was not a final scientific truth, but it was a necessary mythology for making sense of the heavens. </p><p>This example and others tell us that model collapse is a cultural pathology. It emerges when explanatory frameworks (be they cosmological, theological, artistic, or computational) become recursive without recognising their own closure. 
In each case, the signs are the same:</p><ul><li><p><strong>In ancient astronomy</strong>, the Ptolemaic system became so complex, with epicycles stacked on epicycles, that it eventually collapsed under the weight of its own internal logic. </p></li><li><p><strong>In medieval theology</strong>, Scholasticism spun increasingly subtle distinctions in Aristotelian metaphysics, which reformers diagnosed as a recursive loop detached from experiential grounding. </p></li><li><p><strong>In Renaissance art</strong>, Mannerism mimicked and magnified the styles of the High Renaissance. It turned the visual language of naturalism into stylised abstraction that some said had drifted too far from the world.</p></li><li><p><strong>In modern computing</strong>, model collapse is a fear born from a similar structure. It holds that AI models trained on AI outputs will spiral into self-reference, losing the diversity and grounding once present in human data.</p></li></ul><p>Collapse is about forgetting what the model was originally for. A model breaks when its internal structure becomes so dominant that contradiction, novelty, or reality itself can no longer puncture it.</p><p>This is why model collapse is unlikely to pose trouble for the AI project. Developers know what the feedback loop for building resilient systems looks like. They are aware of the pain points, the need for grounding mechanisms, and the most appropriate way to mix synthetic and real examples to avoid self-reinforcing drift. </p><p>Collapse is about a failure to distinguish between imitation and insight, reflection and source, and map and territory. It happens when we forget. When we forget to check our models against the world, when a system forgets its purpose, and when we forget to question our assumptions. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.learningfromexamples.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.learningfromexamples.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Fly and the Filter]]></title><description><![CDATA[On mysticism, machine learning, and missing out]]></description><link>https://www.learningfromexamples.com/p/the-fly-and-the-filter</link><guid isPermaLink="false">https://www.learningfromexamples.com/p/the-fly-and-the-filter</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Tue, 29 Apr 2025 09:26:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ns-e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ns-e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ns-e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ns-e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!ns-e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ns-e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ns-e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg" width="1456" height="957" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:957,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;undefined&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="undefined" title="undefined" srcset="https://substackcdn.com/image/fetch/$s_!ns-e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ns-e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!ns-e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ns-e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b23c6b6-8f9a-49a3-9543-616ee67bd771_2691x1769.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Icon of Saint Vendimianus of Bythinia from the Menologion of Basil II.</figcaption></figure></div><p>Emily 
Dickinson paid attention. </p><p>The American poet, who lived most of her life in near-seclusion in New England, wrote nearly 1,800 verses. The collection is about the small stuff. The slant of afternoon light, the tremor of breath, a blade of grass minding its own business.  </p><p>Dickinson&#8217;s work shows us that the world is ripe for remaking. In many of her untitled poems, she rearranges a profound moment around a tiny slice of the ordinary. In one well-known example, Dickinson writes that the last thing one unfortunate subject hears is the buzz of a fly&#8217;s wings before their soul leaves their body:</p><blockquote><p>I heard a Fly buzz - when I died -<br>The Stillness in the Room<br>Was like the Stillness in the Air -<br>Between the Heaves of Storm -</p></blockquote><p>The fly reminds us that the mind has a mind of its own, that even the important bits sometimes play second fiddle to the things that shouldn&#8217;t matter. That&#8217;s the power of <em>attention</em>, the thing that lets us make our own worlds and live inside them. </p><p>Attention has been a spotlight, a filter, a resource, and a currency. It&#8217;s a shape-shifting concept that connects monks and mindfulness to search engines and self-driving cars, one whose latest afterlife is a mechanism for information processing. </p><p>Take an image recognition system like a convolutional neural network (CNN). A CNN works by sliding small filters, which you can think of as screens, across an image. At each step, the network activates certain neurons when a particular feature is detected (like edges or textures) while ignoring others. </p><p>Each cluster of neurons responds to only a portion of the whole picture. With enough layers, these local features are combined into more abstract representations: a curve becomes a nose, a circle becomes an eye, and eventually the system recognises a face.</p><p>In machine learning as in the brain, understanding is compositional. 
We can&#8217;t attend to everything at once, so we choose to work locally. We obsess over a paragraph, a turn of phrase, or a figure in a painting. The more intense our focus, the more meaningful the pattern.</p><p>The literary critic I.A. Richards called this style of analysis &#8216;practical criticism&#8217;. In a 1929 experiment, Richards gave students poems stripped of any contextual clues, a method designed to <a href="https://www.english.cam.ac.uk/classroom/pracrit.htm">encourage</a> focus on &#8216;the words on the page&#8217; rather than preexisting beliefs: </p><blockquote><p>For Richards this form of close analysis of anonymous poems was ultimately intended to have psychological benefits for the students: by responding to all the currents of emotion and meaning in the poems and passages of prose which they read the students were to achieve what Richards called an 'organised response'. This meant that they would clarify the various currents of thought in the poem and achieve a corresponding clarification of their own emotions.</p></blockquote><p>Richards&#8217; approach relates to the idea that knowledge is situated, an important strand of the science and technology studies canon: one can never see the whole system &#8212; only a perspective within it. All vision is <a href="https://www.jstor.org/stable/3178066">partial perspective</a>. </p><p>That is true for people, organisations, and systems. It&#8217;s also true for scientific discovery, a point made clear in Thomas Kuhn&#8217;s famous work on the formation and stabilisation of scientific paradigms. We might say that pre-paradigmatic science is chaotic, but once we get a filter (in this instance, a scientific framework) we start to see things that we couldn&#8217;t before. </p><p>Paradigms are ways of selectively attending to data. They obscure the full picture so we might see more clearly, all the better to help us sort signal from noise. 
Like image recognition systems or Dickinson&#8217;s poetry, paradigms work because they pay attention to the right things.     </p><h3><strong>Divining attention </strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yQHB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yQHB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yQHB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg" width="1456" height="943" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:943,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Hunter (Catalan Landscape) - 1923-24 by Joan Mir&#243; - LadyKflo&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Hunter (Catalan Landscape) - 1923-24 by Joan Mir&#243; - LadyKflo" title="The Hunter (Catalan Landscape) - 1923-24 by Joan Mir&#243; - LadyKflo" srcset="https://substackcdn.com/image/fetch/$s_!yQHB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yQHB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4de18a6b-2b50-4d59-b9fa-39ba5269491a_2000x1295.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The Hunter</em> by Joan Mir&#243; (1923). </figcaption></figure></div><p>Attentive awareness goes back some. One could credibly make the case that it allowed early humans to make meaning and abstract the world around them in useful ways. After all, the hunter-gatherer&#8217;s life depended on noticing the flicker of prey in the undergrowth, subtle shifts in the wind, and tracks barely pressed into mud.  </p><p>In the long run, the ability to focus supported the emergence of complex rituals, artistic practice, and forms of mysticism. 
The French anthropologist Marcel Mauss said that &#8216;underlying all our mystic states are corporeal techniques, biological methods of entering into communication with God&#8217;.</p><p>William Golding&#8217;s novel <em>The Inheritors</em> deals with the collision between <em>homo sapiens </em>capable of selective attention and Neanderthals who live moment to moment. While the book anachronistically gives <em>homo neanderthalensis</em> religion, it does a wonderful job at describing how beings with limited attentive capacity might have made sense of the world.  </p><p>Attentive practice and mysticism became more important as humans traded hunting for agriculture. In ancient Egypt, priests observed the movements of stars and sacred rites with exacting focus. They believed that attention maintained the balance between order and chaos. </p><p>In the Vedic traditions of early India, attentiveness underpinned the meditative practices aimed at perceiving the hidden unity of <em>Brahman</em>, the ultimate reality. Across the Mediterranean, the mystery cults of Orphism taught that salvation depended on vigilant self-awareness during life and death. </p><p>Ancient Greek philosophy eventually developed its own form of contemplative vigilance. The Stoics called it <em>prosoche</em>, a steady stream of active attention for living in accordance with the rational <em>logos</em> that pervades the cosmos. To attend was to consciously participate in the divine structure of reality. </p><p>Early Christian monastics saw <em>attentio</em> (or <em>nepsis</em>, meaning &#8216;watchfulness&#8217;) become a virtue. The first generations of Desert Fathers, a group of ascetics who lived in Roman Egypt, <a href="https://www.jstor.org/stable/1389826#:~:text=But%20we%20must%20turn%20to,ATTENTION%20AND">singled out</a> attentiveness as a fundamental Christian moral quality. For the Desert Fathers, prayer itself was a pure form of attention to the divine. 
</p><p>On the other side of the world, Buddhist contemplative traditions fashioned the training of attention into a sophisticated discipline. The Buddha&#8217;s teachings about mindfulness (<em>sati</em>) and concentration (<em>sam&#257;dhi</em>) can be read as prescriptions for holding one&#8217;s attention in the present. </p><p>Like their Buddhist counterparts, the desert ascetics emphasised lived practice over theoretical knowledge. One trained attention through recitation of the psalms; the other used the sutras. Both traditions recognised that verbal practice served as  scaffolding for attentional states. </p><p>These traditions understood that the untrained mind wanders endlessly, and prescribed attentional discipline as the antidote. Desert Father Abba Moses&#8217; <a href="https://www.ocanwa.org/single-post/2019/02/23/sit-in-your-cell-and-your-cell-will-teach-you-everything">advice</a> to &#8216;sit in thy cell and thy cell will teach thee all&#8217; sounds eerily similar to the Buddha's emphasis on solitary meditation as a vehicle for insight.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ff2w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, 
https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" width="78" height="38.36065573770492" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:60,&quot;width&quot;:122,&quot;resizeWidth&quot;:78,&quot;bytes&quot;:5815,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/160655605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, 
https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>By the early Middle Ages, attention began to look something like a formalised discipline. What had once been the preserve of solitary ascetics or wandering monks was pulled into the orbit of the institutions that governed religious and intellectual life. </p><p>In the monasteries of medieval Christendom, attentiveness became a rule-bound discipline, essential for the copying of sacred texts. In the Islamic world, scholars preserved and systematised ancient knowledge through attentive reading and commentary. Across Europe, the rise of Scholasticism gave new life to an ordered training of the mind and the belief that attentiveness lit the path towards divine truth. </p><p>The Italian priest Thomas Aquinas treated <em>attentio</em> as a crucial operation of the soul, the means by which intellect and will could be properly directed toward God, truth, or moral goods. To pay attention was to marshal scattered faculties into a disciplined focus. It was in some ways a rational act, one that represented victory over the unruly appetites that tempted it.</p><p>This rational-ish style of attention gradually evolved into an intellectual virtue at the heart of Renaissance humanism. 
Scholars like Erasmus urged readers to cultivate active attentiveness in an age when the printing press flooded Europe with classical and scriptural texts. To read well was now to sift wisdom from error, to attend to meaning rather than memory. </p><p>He urged readers to dwell, discriminate, and refine the mind through the careful discipline of reading and thinking. A century later, Descartes pushed the logic of attentiveness further by arguing that attentive scrutiny was the foundation of certainty. </p><p>His method of doubt, famously outlined in <em>Meditations</em>, demanded a focusing of the mind&#8217;s gaze away from the noisy flux of the senses and towards only those ideas that could be grasped with absolute clarity. Once spiritual and later humanistic, attention was becoming the instrument by which knowledge could be assembled from the ground up.</p><p>In the eighteenth century, Immanuel Kant built on Gottfried Leibniz&#8217;s concept of <em>apperception</em>, which deals with how the mind becomes aware of its own perceptions, to argue that coherent experience depends on the active work of the mind. For Kant, the mind bound perceptions together into a unified world through an original act of synthesis. </p><p>Kant is an important figure in our narrative because he shifted the philosophical terrain from the contents of thought to the structures that made thought possible. It might sound a bit niche, but this distinction encouraged others to view the mind as an active organiser of experience rather than a passive receiver of impressions. 
</p><h3><strong>Rethinking attention</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C3pa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C3pa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 424w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 848w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 1272w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C3pa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png" width="1230" height="872" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:872,&quot;width&quot;:1230,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1748643,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/160655605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C3pa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 424w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 848w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 1272w, https://substackcdn.com/image/fetch/$s_!C3pa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c4941ff-ab69-4465-b0e9-bb0f8286ae9b_1230x872.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Painting of Hermann von Helmholtz from Alte Nationalgalerie, Berlin.</figcaption></figure></div><p>Wilhelm Wundt founded the first psychology lab in 1879. As a young man, the German struggled with nervousness, reportedly prone to losing focus and drifting into reverie. </p><p>Wondering why he found it so hard to stay on track, Wundt set about studying the psychological dimensions of attention. He began with Leibniz&#8217;s <em>apperception</em>, which he treated as the voluntary direction of attention by which sensations were selected, organised, and made conscious. </p><p>Wundt was a Kantian in that he believed that experience depends on the active organisation of perception. 
But where Kant had treated this as a necessary and universal structure of mind, Wundt focused on the elective aspects of attention. He wanted to know how we selectively direct focus, how we amplify or suppress sensations, and how these choices could be measured experimentally.</p><p>Around the same time, physicist-turned-physiologist Hermann von Helmholtz performed a simple but important experiment. Fixating his gaze on the centre of a briefly illuminated array of letters, he found that he could covertly shift his attention to a different part of the array. Helmholtz could make out letters in this region while the rest remained a blur. </p><p>The central insight was that attention could act independently of eye movement. Attention, as it turned out, was a distinct cognitive faculty, one capable of selectively enhancing experience from within. </p><p>A decade or so later, William James cemented attention&#8217;s place in the new psychology. In <em>The Principles of Psychology</em> (1890), James described attention as the mind&#8217;s act of taking &#8216;possession of one out of what seem several simultaneous objects or trains of thought&#8217;, emphasising that &#8216;focalisation, concentration, of consciousness are of its essence.&#8217; </p><p>He argued that focus is a process of active selection, one that &#8216;implies withdrawal from some things in order to deal effectively with others.&#8217; For James, attention was the essential mental act. It made order from chaos, clarity from confusion, and identity from experience.</p><p>James had placed attention at the heart of mental life, but within a generation, psychology&#8217;s centre of gravity had shifted. In the early twentieth century, the rise of behaviourism relegated attention to the margins of scientific respectability. Figures like John B. Watson insisted that psychology must concern itself only with what could be directly observed, namely external stimuli and behavioural responses. 
</p><p>In line with the new paradigm, attention was redefined as the observable orientation toward certain stimuli rather than others. The idea of attention as an active, inner force shaping consciousness&#8212;so central to Wundt and James&#8212;all but disappeared for several decades. </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ff2w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" width="78" height="38.36065573770492" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:60,&quot;width&quot;:122,&quot;resizeWidth&quot;:78,&quot;bytes&quot;:5815,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/160655605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>But it wasn&#8217;t to last. 
A new generation of researchers argued that psychology could not make sense of complex phenomena like language, reasoning, or memory without grappling with internal mental processes. </p><p>Known as the cognitive revolution, this movement reimagined the mind as an active processor of information capable of selecting, storing, retrieving, and manipulating inputs much like a computer. In this new framework, attention returned as a central object of study. No longer dismissed as unmeasurable, it was now seen as a mechanism for managing the mind&#8217;s limited processing capacity.</p><p>Psychologist Donald Broadbent gave the new cognitive psychology one of its first working models of attention. In 1958, drawing on wartime research with pilots and radar operators, he proposed that the mind uses an early-stage filter to manage the flood of incoming information. Broadbent hypothesised that an attentional filter, like a gatekeeper at a switchboard, allowed one stream of sensory input through for conscious processing while blocking the rest. </p><p>This neatly explained why, when listening to two people speak at once, we can follow one conversation and tune the other out. Broadbent&#8217;s model was crisp and mechanical, treating attention as a bottleneck tuned to a selected channel. It was elegant, and for a time, dominant&#8212;but brittle. </p><p>In certain <a href="https://speechneurolab.ca/en/the-cocktail-party-explained/#:~:text=Cherry%20(1953)%20first%20described%20the,unless%20their%20content%20is%20distinct.">listening studies</a>, participants could sometimes pick up personally relevant information like their own name in an unattended conversation. This suggested that unattended material was not completely blocked. Another experiment went further, <a href="https://link.springer.com/referenceworkentry/10.1007/978-0-387-79061-9_223">demonstrating</a> that people could unconsciously piece together meaningful phrases from words split across different audio channels. 
</p><p>Building on these findings, English psychologist Anne Treisman proposed that attention was not a screen so much as a volume control: unattended inputs were attenuated, not eliminated. Meaningful information could still break through if it crossed a certain threshold of relevance.</p><p>Treisman later turned her hand to visual perception, developing what became known as Feature Integration Theory (FIT). She showed that basic features like colour, shape, and orientation are first registered automatically (and separately) by the visual system. </p><p>The theory held that without attention the brain could conjoin features incorrectly&#8212;seeing, for instance, the colour of one object attached to the shape of another&#8212;in a phenomenon known as illusory conjunctions. According to FIT, attention was the process that binds separate features into coherent perceptual objects. Without it, the visual world would fragment into disjointed colours, shapes, and patterns. </p><p>During these decades, psychologists began to develop new metaphors to capture how attention behaves. Michael Posner famously <a href="https://pages.ucsd.edu/~scoulson/101b/VisualAttention.pdf">likened</a> attention to a movable &#8216;spotlight&#8217; that selectively enhances processing wherever it points. Others <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.614077/full">proposed</a> a &#8216;zoom lens&#8217; model, suggesting that we can widen or narrow the scope of our focus by trading breadth for resolution. </p><p>The metaphors masked a deeper shift. Attention was less a static filter than a dynamic resource that the mind could steer, widen, or narrow. In the cognitive view, attention was an active force that regulated the flow of information moment by moment rather than holding focus behind a single gate as in Broadbent&#8217;s model. 
</p><p>Daniel Kahneman carried forward the interpretation of attention as an active process, but reframed it in terms of limited mental resources. In his 1973 <a href="https://s3.amazonaws.com/knowen-production/big_attachments/fdf0161367c4801ac8b5a6cc42e8413d/Attention+and+Effort+-+Kahneman.pdf">book</a> <em>Attention and Effort</em>, the great psychologist likened attention to a finite pool of energy that could be allocated flexibly across tasks. Performing two tasks at once was difficult if they drew heavily on the same attentional reserves, but easier if one was automatic or if they tapped different cognitive systems. </p><p>By the late twentieth century, attention could be observed at work in the brain. New techniques like the electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) allowed researchers to connect attention to brain activity. Neuroimaging studies identified regions like the intraparietal sulcus and the prefrontal cortex acting as &#8216;searchlight operators&#8217; that directed focus and regulated the flow of information. </p><p>Neural data suggested that attention was not a single mechanism but a collection of related processes: orienting to stimuli, filtering out distractions, sustaining focus over time, and switching flexibly between tasks. Attention had become a material process, one that would influence technology and our relationship with it.  
</p><h3><strong>Mechanising attention</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qOKv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qOKv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 424w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 848w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 1272w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qOKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png" width="1304" height="932" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ba949404-dd47-41ff-8985-494cea659e2d_1304x932.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:932,&quot;width&quot;:1304,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2130596,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/160655605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qOKv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 424w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 848w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 1272w, https://substackcdn.com/image/fetch/$s_!qOKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba949404-dd47-41ff-8985-494cea659e2d_1304x932.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Noted AI benchmark &#8216;Pokemon&#8217; </figcaption></figure></div><p>Herbert A. Simon, a giant in the history of AI, said that &#8216;In an information-rich world, the wealth of information means a dearth of something else&#8230;what information consumes is the attention of its recipients. Hence a wealth of information creates a poverty of attention.&#8217; </p><p>For most of human history, information had been scarce and attention plentiful. Now, information was everywhere, and attention had become the limiting factor. 
In their 2001 book, <em>The Attention Economy</em>, Thomas Davenport and John Beck argued that this shift was creating a new kind of economy in which value was measured by the ability to capture and hold human attention.</p><p>The attention economy concept, though related to the psychology of attention, is not identical to it. In cognitive science, attention describes the selective processing of information under conditions of limitation. In the marketplace, attention is measured externally through clicks, views, and time spent. </p><p>The brain&#8217;s limits turn attention into a zero-sum resource. Over time, psychological findings about attention&#8212;its spans, its bottlenecks, its failures&#8212;have been absorbed into everyday language. What was once a description of mental effort has become a common framework for describing a way of living under the weight of information. </p><p>There are of course two parts to Simon&#8217;s famous quote. We all remember the bit about attention, but what may prove more significant is the sheer volume of information being produced. He didn&#8217;t know it then, but information would become the raw material for a new generation of machine learning systems inspired by attention itself.</p><p>In the mid-2010s, machine learning researchers began using the term &#8216;attention mechanism&#8217; to describe a method for selectively processing information. Rather than treating all inputs equally, these systems could focus on the most relevant parts of the data. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio <a href="https://arxiv.org/abs/1409.0473">introduced</a> the first influential version of this idea in neural machine translation, allowing models to dynamically align with specific words in a source sentence as they generated translations.</p><p>Instead of compressing an entire sentence into a fixed representation, Bahdanau&#8217;s model let the system search through the source sentence during translation. 
The model could attend to different parts of the input at each step, much as a bilingual speaker might glance back at the original text to refine a translation.</p><p>But the major moment for attention, as all AI watchers know, came in 2017 when Google <a href="https://arxiv.org/abs/1706.03762">described</a> the transformer architecture built around the famous self-attention mechanism. Where Bahdanau&#8217;s model used attention to improve a sequential system, transformers discarded recurrence altogether. Instead of processing information step by step, the architecture used multiple attention mechanisms in parallel to compare every part of the input to every other part at once.</p><p>In a transformer, every word in a sequence can attend to every other word through learned weightings. For each word being processed, the model calculates attention scores between that word and all words in the input. In effect, it asks: &#8216;How much should I pay attention to word X when processing word Y?&#8217; </p><p>This approach yields an attention map that tells us which parts of the input are most relevant to each word. By using many heads that each focus on different aspects of the input, the model can pick up multiple kinds of relationships at once, such as syntax or semantic context.</p><p>The result is a rich representation of language (and as it turns out, probably a lot more than that). To put it simply, language models amplify important signals and suppress less important ones. Their use of attention has even prompted loose comparisons to the way neural circuits in the brain manage focus and selection.</p><p>That isn&#8217;t to say that language models are brains, but rather that the convergence between the two reflects shared constraints if not a shared nature. 
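</p><p><em>To see the mechanism in miniature, here is an illustrative sketch of single-head, scaled dot-product self-attention in Python with NumPy. It is a sketch under stated assumptions, not the transformer paper&#8217;s code: the four tokens, eight-dimensional embeddings, and random projection matrices are arbitrary stand-ins for the example.</em></p>

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # score[Y, X]: relevance of word X to word Y
    weights = softmax(scores)                 # each row sums to 1: the attention map
    return weights @ V, weights               # weighted mix of values, plus the map

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings (arbitrary)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)                  # (4, 8) (4, 4)
```

<p><em>Row Y of the map is a probability distribution over the input&#8212;the learned answer to how much to weigh each word X when building the new representation of word Y.</em></p><p>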
Faced with too much information and finite processing power, both minds and machines must solve the same underlying problem: how to select what matters.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ff2w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png" width="78" height="38.36065573770492" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:60,&quot;width&quot;:122,&quot;resizeWidth&quot;:78,&quot;bytes&quot;:5815,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.learningfromexamples.com/i/160655605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!ff2w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 424w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 848w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1272w, https://substackcdn.com/image/fetch/$s_!ff2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae887fa7-4757-4580-abc5-dac290101bbc_122x60.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Attention is a product of limitation. 
Whether in the prayers of ascetics, the experiments of early psychologists, or the designs of machine learning engineers, the problem is the same: how to find coherence in a world that contains more than can be grasped. </p><p>If the story of attention tells us anything, it&#8217;s that all seeing is partial. The mind knows by focusing. Science progresses by filtering. Machines learn by selecting. No perspective, human or otherwise, can capture the whole. To attend is to miss almost everything, but that&#8217;s the whole point. </p><p>Emily Dickinson once wrote that &#8216;a letter always feels to me like immortality because it is the mind alone without corporeal friend.&#8217; What she meant was that letters offered a way for the mind&#8217;s expression to endure on its own, existing as fragments of the self that outlast the living. </p><p>In her poems, attention works in much the same way. A single moment, chosen and held, becomes a whole world. The mind&#8217;s spotlight falls where it falls, whether on the wings of a fly, a band of light, or a blade of grass. </p><p>Attention is not what limits our experience of the world. It&#8217;s what makes experience possible at all. Without the narrowing there would be no coherence, no meaning, and no world to perceive. </p>]]></content:encoded></item></channel></rss>