William Cooper had a problem. It was 1688 and the London bookseller was looking for a way to guide patrons through rows of obscure titles. As he put the finishing touches to his masterwork, a shop catalogue describing books about alchemy (or ‘chymistry’), medicine, and metallurgy, he struggled to decide which works dealt with real knowledge and which were pretenders. Many texts contained experimental results about chemicals and compounds. Others made grand claims about transmutation and eternal life. Some did both.
Seeing no satisfying way to unscramble the egg, Cooper did what any good salesman would do: he wrote a disclaimer. To get ahead of the sceptics, Cooper explained to potential buyers that some books “cannot absolutely be called chymical, but have a very near affinity therunto”. What he really wanted to know was the heart’s desire of every scientist since the Enlightenment: how to distinguish genuine scientific progress from what merely appears to work.
Today, deep learning is alchemy. It’s alchemy to the hobbyist probing for the right words to evoke the weirdness of giant neural networks. It’s alchemy to the disapproving computer scientist laying the epistemic foundations for a house that has already been built. It’s alchemy to the critics who scorn the entire AI project as a menagerie of smoke and mirrors.
It was alchemy in 2018, when I was first introduced to deep learning by presenters who could feel themselves at risk of losing the audience. Seven years later, it's still a perspective that you are likely to hear from the average researcher if you’re in the right room for long enough. Sometimes AI is alchemical soup, sometimes it’s an alien mind, sometimes it’s an eldritch horror. Always it’s mysterious, inscrutable, and uncanny.
Former OpenAI chief scientist Ilya Sutskever compared AI to “alchemy [that] takes the data and extracts its secrets into the neural network.” Others, like Google’s François Chollet, used the connection to criticise what he saw as sloppy practices in which researchers rely on “folklore and magic spells” to tweak essential elements in the AI development process.
Meta’s Yann LeCun argued that an over-reliance on theory led some to “readily dismiss bona fide engineering and empirical science as alchemy.” Researchers should be guided by what works, rather than beholden to neat ideas that make sense in principle but disintegrate on contact with reality. This is the same driver behind what others refer to as the “unreasonable effectiveness” of today’s large models that shouldn’t work as well as they do.
But work well they do. This is the obvious difference between AI and alchemy, and one that becomes more significant when measured against the weight of history. The first serious comparison between the two domains emerged in the 1960s. It argued that, much like the quest to turn common metals into gold, the quest to turn sand into stuff that thinks was doomed: even at the limit, a machine could never translate language or solve open-ended problems. It simply wasn’t possible.
My Alchemical Romance
People like to feel special. Humans fashion in-groups and out-groups, sometimes with reference to some specific set of values or interests but often by virtue of a real or imagined difference between us and them. In AI, people form bonds with those who share a ‘situational awareness’ of the technology’s potential to reorder economic, cultural, and political life. They regard ‘normies’ who are oblivious to the coming storm with a mixture of envy and pity.
Like the situationally aware, to be an alchemist was to be part of a club. As the historian of science Lauren Kassell explains: “Alchemy was an arcane art. Its traditions were learned through divine inspiration, instruction by a master under an oath of secrecy, and the study of esoteric texts.”
Alchemists operated within a tradition of secret knowledge, one that found its most influential expression in the Secretum secretorum or ‘Secret of Secrets’. The book was allegedly written by Aristotle for Alexander the Great (though the work was published centuries after his death). Lynn Thorndike, an American historian, endorsed the view that the work was “the most popular book in the middle ages”.
The text established a defining tension in alchemical practice: the obligation to protect sacred knowledge while sharing useful discoveries. Gradually expanding into an encyclopaedic work covering medicine, astrology, alchemy, and magic, the Secret of Secrets came with the model card-esque warning that “whoever betrays these secrets and reveals them to the unworthy shall not be safe from misfortune.”
This idea was still popular in the 17th century. Isaac Newton, who wrote 650,000 words about alchemy, complained that his contemporary Robert Boyle was too loose with alchemical secrets. Given Boyle was essential to the formation of chemistry as a discipline—and that Newton is widely accepted as one of the most significant scientists of them all—most historians of science think that alchemy might be better viewed as a transitional practice rather than an epistemic dead end.
Consider the astrologer Simon Forman, who meticulously documented his alchemical work in London between 1580 and 1611. Forman filled thousands of pages with observations that wedded careful empirical work to celestial calculations and magical elements. His practice centred on the creation of sigils: metal seals that he believed could harness the power of the stars when crafted at the right astrological moment.
Timing was everything for Forman. He argued that sigils must be made according to ‘magical hours’, and carefully recorded the motions of what he called the ‘eighth heaven’ to determine the optimal conditions for his work. He said that “all influences natural do come from and proceed from the eighth heaven and from the fixed stars therein” while basing his craft on a combination of observation, analysis, modification, and repetition.
These alchemical objects existed as part of complex networks of value. A sigil or magical seal might be worth thirty-two shillings in gold, as when the astrologer William Lilly sold one he inherited from his mistress. But the source of this value was harder to pin down. Practical efficacy in healing or protection, status as curiosities for gentlemen collectors, historical significance for antiquarians, and natural philosophical interest for those studying occult forces all played a role.
This was the economy of magic. A practitioner might simultaneously view their art as a divine gift that shouldn't be sold, practical knowledge with clear market value, and wisdom that needed to be preserved for the ages. The resulting tension—between secrecy and disclosure, between magical efficacy and scholarly documentation—created patterns of knowledge that shaped how magical expertise was passed between generations.
But the party wouldn't last forever. The emergence of new scientific institutions in the seventeenth century fundamentally altered how occult knowledge was valued and shared. Where medieval alchemists had worked under a paradigm of competitive secrecy, groups like the early Royal Society emerged to promote an ethic of open disclosure and collaborative verification. As the historian William Eamon put it:
“With the establishment of the Royal Society of London and the publication of its Philosophical Transactions, the institutional mechanisms that would govern science as a form of ‘public knowledge’ were in place. The ideal of public knowledge was not taken to imply then—any more than it does today—that everyone had perfectly free access to scientific knowledge. Nevertheless, the institutionalisation of science under the auspices of the Baconian programme helped to confirm the scientist’s special role in society, not as the guardian of secret knowledge, but as the purveyor of new truths bearing the authority of experimental evidence.”
Yet traces of the magic economy persist. Tensions between openness and secrecy, between theoretical understanding and practical effectiveness, are not so easily banished. John Maynard Keynes, who bought many of Newton’s manuscripts, wrote in 1942 that “Newton was not the first of the age of reason. He was the last of the magicians”.
By the middle of the 20th century, critics of AI began to wonder if Keynes spoke too soon.
What alchemists can’t do
Comparisons between AI and alchemy are at least 60 years old. The philosopher Hubert Dreyfus, known for his stinging critiques of efforts to bottle and replicate intelligence, argued that alchemy and AI were both fuelled by early, tantalising successes that created the illusion of progress.
Alchemy showed some promise in extracting quicksilver from what looked like dirt, which convinced practitioners they were on the verge of transmuting base metals into gold. AI chalked up a string of early victories in narrow domains (e.g. chess, basic logic proofs) that famously led researchers to believe that human-level intelligence would be achieved “within a generation”.
For Dreyfus, both were a mirage. Initial progress tricked researchers into thinking they were closing in on a grand prize, only for the bounty to recede as they uncovered layers of complexity that their assumptions could not address. Both fields needed a paradigm shift to overcome their limitations. Only one got it.
Dreyfus was, after all, talking about the symbolic school of AI, in which systems are built from hard-coded rules for manipulating symbols—human-readable logical expressions such as equations and formulas. The approach assumes that aspects of intelligence can be replicated through the explicit manipulation of such symbols.
It stands in contrast to the ‘connectionist’ branch of artificial intelligence, which proposes that systems ought to learn from data by mirroring the interactions between neurons in the brain. The vast majority of things that we today call ‘AI’ are the heirs to connectionism — not symbolic AI.
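The distinction can be made concrete with a toy sketch (a hypothetical illustration, not drawn from any system mentioned here): a symbolic program encodes the logical AND function as an explicit, human-readable rule, while a connectionist one never sees the rule at all — it nudges numeric weights until its behaviour matches the training data.

```python
# Symbolic: intelligence as hand-written, human-readable rules.
def symbolic_and(x, y):
    if x == 1 and y == 1:  # the rule is stated explicitly by the programmer
        return 1
    return 0

# Connectionist: the rule is never written down; weights are learned from data
# using the classic perceptron update rule.
def train_perceptron(data, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x, y, target in data:
            out = 1 if w1 * x + w2 * y + b > 0 else 0
            err = target - out          # 0 if correct, ±1 if wrong
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

# Truth table for AND, presented only as examples, never as a rule.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(data)

def learned_and(x, y):
    return 1 if w1 * x + w2 * y + b > 0 else 0

# Both now compute the same function.
assert all(learned_and(x, y) == symbolic_and(x, y) for x, y, _ in data)
```

The learned weights are opaque in precisely the sense the critics mean: the trained perceptron ends up computing AND, but nothing in `w1`, `w2`, or `b` states the rule.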
This is an important distinction for getting to grips with Dreyfus’ argument that AI was much the same as alchemy. Alchemists lacked a coherent theory of chemical change, relying instead on analogical reasoning or appeals to mysticism. Dreyfus thought that ‘AI’ (by which he meant symbolic AI) similarly misunderstood cognition by assuming human intelligence could be reduced to discrete, rule-based operations.
He connected AI’s “associationist assumption” (the belief that all thought is composed of step-by-step logical processes) with alchemy’s reliance on models of the world that privileged the arcane. To make his case, Dreyfus identified three aspects of human intelligence that he believed AI systems inherently lacked:
The ability to implicitly process contextual cues (e.g. noticing an undefended chess piece without exhaustive search).
An intuitive grasp of a problem’s core structure (e.g. restructuring a logical proof by isolating critical steps).
Navigating open-ended contexts (e.g. resolving linguistic ambiguities through shared cultural knowledge).
Just as alchemy was limited by pre-modern scientific practice, AI was constrained by its reliance on formal, discrete symbol manipulation. Tasks requiring flexibility (say natural language understanding or creative problem-solving) exposed these deficiencies, which in turn led to stagnation in AI research. So the story goes.
There is of course some irony here. Dreyfus argued that no meaningful progress towards true artificial intelligence was made — just things that looked like progress if you squint hard enough. In one sense he was right. The rules-based approach ran out of steam because it tried to impose rigid order on messy reality. But in another sense he fell into the same trap as the alchemists.
Dreyfus assumed that no progress would be made because he couldn’t see a meaningful alternative to symbolic approaches. That’s not a failure of intelligence but of imagination. After all, connectionist methods weren’t much good at anything until researchers found ways of allowing them to harness massive amounts of data and compute.
Like the alchemist who could only see the chemical universe through the lens of the occult, Dreyfus didn't expect connectionist methods to overcome what he saw as the fundamental bottlenecks in the AI project. He was scoffing at the man in the tree reaching for the moon while the rocket was already under construction just out of view.
Hindsight is of course a wonderful thing, but the episode is a good reminder that the impossible has an annoying habit of becoming anything but. In the long run, technology is the economy of magic: a vast machine of moving parts that transmutes the unthinkable into the mundane.
George Eliot wrote that “effective magic is transcendent nature”. That is as true for AI as it is for alchemy. The most important steps forward don't just work within nature's existing framework, but create possibilities that overcome laws and limits. The internet trumped distance. Antibiotics subdued infection. Electricity outshone darkness.
When machine surpasses human, we won’t call it alchemy. We’ll call it magic.