
The Royal Institution knows how to put on a show. In 1839, Michael Faraday used the venue to introduce dazzled researchers to early photographic techniques. Over fifty years later, J. J. Thomson told London’s great and good about the electron. Around the same time, Sir James Dewar showed off liquid hydrogen in a particularly eye-catching demonstration.
Historically, the events of the Royal Institution expanded horizons. They showed what science could do right now and what it might be able to do in the future. Humphry Davy’s public demonstrations of nitrous oxide thrilled audiences by proving that gases affect the mind and body. Faraday’s famous rotating wire experiment amazed onlookers by showing that electricity could drive mechanical motion.
In June 1973, the Royal Institution held a different kind of event. Certainly it was a spectacle, but one in which the presenter insisted on the limits of science rather than its possibilities. That person was Sir James Lighthill of the University of Cambridge, a celebrated British scientist known for his pioneering work in applied mathematics. His subject was AI, a field he believed had failed to deliver on its promises.
Against Lighthill sat three challengers: the roboticist Donald Michie, the psychologist Richard Gregory, and the AI grandee John McCarthy (who last appeared in AI Histories #15). The debate was hosted by the BBC, and hundreds of others sat in the audience.
Lighthill opened the proceedings by distinguishing between automation — defined as the use of any machine to conduct human work — and ‘automatic devices that could substitute for a human being over a wide range of human activities’. The latter group, he said, went by the name ‘general purpose robots’, but such machines were regrettably a ‘mirage’.
He compared the ‘AI scientist in the lab’ to the ‘sorcerer in his cell’. In his view, both traded in theatre that captured the public imagination without much to show for it. The comparison makes a certain kind of sense once you understand Lighthill’s beliefs about what science was for. As the historian Jon Agar puts it: ‘behind James Lighthill's criticisms of a central part of artificial intelligence was a principle he held throughout his career – that the best research was tightly coupled to practical problem solving’.
Lighthill said the great breakthroughs of the computing age belonged to automation, which he cast as the preserve of ‘feedback control systems that act to reduce some change in quantity from its desired value’. All computers, he continued, and by extension all AI systems, merely ‘manipulate symbols according to rules prescribed in a program’.
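To make that definition of automation concrete, here is a minimal sketch of a feedback control loop in the sense Lighthill describes: a thermostat nudging a room temperature towards a desired value. The numbers and variable names are invented for illustration, not drawn from any historical system.

```python
# A proportional feedback controller: it acts only to reduce the
# deviation of a measured quantity from its desired value.
setpoint = 20.0   # desired room temperature (degrees C)
temp = 14.0       # current temperature
gain = 0.3        # fraction of the error corrected per step

for step in range(10):
    error = setpoint - temp   # deviation from the desired value
    temp += gain * error      # heater output proportional to the error
    print(f"step {step}: temp = {temp:.2f}")
```

The loop converges on the setpoint through nothing but measurement, comparison, and correction, which is why Lighthill filed such systems under automation rather than intelligence.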
These are curious distinctions that don’t make much sense to the modern reader. We know that the foundational technology of today’s AI project is the neural network, a system whose power flows from reducing the loss between its predictions and the expected values. Strictly speaking, these systems do not manipulate symbols according to prescribed rules (though they can operate symbolic tools like a calculator).
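The contrast is easiest to see in code. Below is a minimal sketch of what ‘reducing the loss’ means in practice, assuming a toy linear model fit by gradient descent; the data, learning rate, and parameter names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)        # inputs
y = 3.0 * X + 0.5               # 'expected' values from a hidden rule

w, b = 0.0, 0.0                 # parameters start out ignorant
lr = 0.1                        # learning rate

for _ in range(200):
    pred = w * X + b                  # predicted values
    err = pred - y                    # predicted minus expected
    w -= lr * np.mean(2 * err * X)    # gradient step on the
    b -= lr * np.mean(2 * err)        # mean squared error loss

print(f"w = {w:.2f}, b = {b:.2f}")    # approaches 3.0 and 0.5
```

No rule about the mapping from inputs to outputs is ever written down; the system simply adjusts its parameters to shrink the loss.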
Lighthill had in mind a very specific type of thinking machine: one that was usually embodied, based on hard-coded rules, and alluring yet ultimately brittle. He argued that the strands of research that would eventually culminate in today’s large models were, just like industrial automation, not true examples of artificial intelligence. As the old saying goes, it isn't AI if it works.
Talking heads
Like many of the clashes over thinking machines today, the debate wasn’t really about AI at all. The most popular topics of discussion were human intelligence, the nature of the brain, and the extent to which bottlenecks prevent researchers from replicating human capacities in silicon. Regrettably, the discussion went round in circles from the outset.
Lighthill said the brain couldn’t be replicated because it was too complex. McCarthy countered that its function certainly could be. Gregory said artificial neural networks weren’t representative of contemporary neuroscience research, but that we should study them anyway. Michie further muddied the definition of ‘robot’ and then talked at length about the Freddy II system from his University of Edinburgh research group.
At one point, the host said he didn’t understand the issue and the researchers nodded along as if in agreement. The whole thing was a mess, frankly, and in the end the audience left more confused than they were to begin with. Say what you will about modern television, but science communication has made great progress over the last half century.
The BBC’s programme was meant to be a moment for the scientific community to respond to the publication of the Lighthill report, a piece of work commissioned by the UK’s Science Research Council in 1972 to take stock of UK AI research. The council was having difficulty assessing AI grant proposals and there were concerns that some projects were overly narrow or overly ambitious.
Lighthill was engaged to help. After spending a couple of months reading the AI literature and consulting researchers, he delivered his report, Artificial Intelligence: A General Survey, in March 1973. In it, he outlined three categories of AI:
‘A - Advanced Automation’: This covered AI work with clear practical objectives like character or speech recognition, machine translation, and automated theorem proving. Lighthill acknowledged real, if modest, progress — but cautioned that successes were confined to toy problems.
‘B - Building Robots’: The attempt to create general-purpose machines that integrate perception, cognition, and action (often embodied in robots). Lighthill saw this category as a ‘bridge’ between the others, a reading McCarthy disputed in the debate, though he agreed that projects of this type were pursuing the vision of an AI that could perform many different tasks.
‘C - Computer-Based Central Nervous System Studies’: The use of computers to simulate and study neurobiology and psychology, like using neural networks to model parts of the brain. Here too he noted some progress and endorsed continued work in the area, but only insofar as machines could tell us about the nature of cognition.
The middle category, which is closest to modern conceptions of AI, bore the brunt of Lighthill’s criticism. In his report, he wrote that “Progress in category B [Building Robots] has been even slower and more discouraging”. A few pages later, he quipped that “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.”
Michie, who also gave a written response to the report, questioned Lighthill’s methodology and impartiality. Did he intentionally consult sceptical experts? Could someone outside the field fairly judge its worth? And how could Lighthill possibly be so confident about AI’s future prospects?
These were fair questions, but in the end the Science Research Council sided with Lighthill’s assessment. Funding for AI research in Britain was severely cut and many of the organised AI programmes that had existed were scrapped. The Edinburgh AI laboratory, which under Michie had been one of the world’s leading AI centres, saw its support plummet. As one retrospective put it, the once bustling lab was reduced to “just Michie, a technician, and an administrative assistant”.
The report was widely circulated and discussed internationally. In the United States, around the same time, DARPA (the main US defence research funder for AI) was undergoing its own shift. In 1974, partly due to new federal directives and disappointment with certain AI projects, DARPA started applying tighter scrutiny to AI research. It eventually published a Lighthill-style report of its own that drew similar conclusions.
But despite the rhetoric, the global AI research community actually continued to grow in the 1970s. The historian Thomas Haigh has pointed out that, measured by the number of active researchers, conference participation, and publications, interest in AI kept increasing in the wake of the Lighthill report, despite later talk of a first ‘AI winter’.
Lighthill’s focus was largely on the symbolic approach to AI, which relies on explicit symbols, logic, and rules to represent knowledge and solve problems (discussed in more detail in AI Histories #9). In the 1960s and early 1970s, symbolic AI encompassed areas like rule-based reasoning, search algorithms, logic and theorem proving, structured knowledge representation, and even early robotics and natural language processing.
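For readers who have never met the paradigm, here is a minimal sketch of what ‘explicit symbols, logic, and rules’ means in practice: a forward-chaining rule engine that derives new facts until nothing more follows. The facts and rules are invented for illustration and come from no 1970s system.

```python
# Knowledge is written down by hand as symbols and if-then rules.
facts = {"has_wheels", "has_motor"}
rules = [
    ({"has_wheels", "has_motor"}, "is_vehicle"),
    ({"is_vehicle", "carries_people"}, "is_car"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # A rule fires when all of its conditions are known facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # 'is_vehicle' is derived; 'is_car' is not
```

Everything such a system ‘knows’ was typed in by a person, which is both the charm Lighthill observed in the demos and the source of their brittleness: the second rule can never fire unless someone hand-adds the missing fact.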
Against symbolic AI stood connectionism, the ancestor of modern deep learning, in which networks of simple units learn from data. Connectionism was already having a difficult time of its own after Marvin Minsky, with Seymour Papert, published Perceptrons, his famous 1969 takedown of the paradigm (the subject of AI Histories #7).
In the report, there was little mention of connectionist approaches because the paradigm wasn’t prominent in the UK AI scene at the time. Artificial neural networks (probably the most famous incarnation of connectionism) appear in Lighthill’s discussion as tools for brain modelling in the central nervous system category — but not in the categories concerned with AI as we might understand it today.
It’s a useful distinction for understanding the legacy of the Lighthill report. Connectionism was already under attack, and symbolic AI methods had now joined it in the firing line. What the report really captured were the limits of a single paradigm, one that would eventually be sidelined when neural networks re-emerged to solve many of the problems Lighthill thought insurmountable.
In that sense, his comparison of the AI scientist to the sorcerer in his cell wasn’t entirely misplaced. Symbolic AI did produce persuasive but brittle demonstrations that resembled a magic trick. And connectionism, when it eventually displaced symbolic methods, had its own reputation for alchemy fed by critics and boosters alike.
What Lighthill missed was that science sometimes advances through sorcery, that alchemy was less a dead end and more a transitional practice. The systems of the 1960s and 1970s may not have been general-purpose, but their success in toy environments did inspire a generation of researchers to enter the field. Today’s deep learning systems are not an offshoot of early rule-based research, but the magic of symbolic demos — a ‘mirage’ as Lighthill put it — suggested that the problems of the AI project could eventually be solved.