“Now tell me, just what have you and Marv been up to — Gloria has received just as much information as I have”
— Louise’s letter to Ray Solomonoff, July 1956
It was a good question, one asked by Ray Solomonoff’s girlfriend Louise in the summer of 1956. Gloria was the wife of the famous mathematician Marvin Minsky, then a Harvard Junior Fellow, whose work we last revisited in AI Histories #7.
Ray Solomonoff, meanwhile, has yet to feature in the series but is generally regarded as the inventor of algorithmic probability. In 1956 he was a graduate of the University of Chicago and was working at Technical Research Group in New York.
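(For readers meeting the idea for the first time: roughly speaking, and in the modern notation that came later, algorithmic probability assigns a string x the prior

$$M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)},$$

the summed weight of every program p, of length ℓ(p), that makes a universal machine U output x, so that shorter programs, i.e. simpler explanations, count for more. Solomonoff would not publish the formalisation until a few years after Dartmouth.)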
Minsky and Solomonoff were spending the summer at Dartmouth College with a group of scientists organised by John McCarthy. The guests, who also included Herbert Simon, Allen Newell, and Claude Shannon (all figures we’ll get to in our series), were working on what Solomonoff’s wife Grace later called ‘The Mysterious Science’. It was a fitting way of describing ‘thinking machine’ work, which for a time resisted easy classification.
Part of the draw of the workshop was to hash out what exactly thinking machines were and what the emerging discipline should be called. ‘Artificial intelligence’ was already on the proposal, but the attendees were more likely to describe their work as cybernetics, automata theory, or complex information processing.
You might think that what we call the thing isn’t particularly important, and you’d be right to suggest that definitional questions about the nature of the AI project can be tedious. Even today, you hear people insisting that LLMs aren’t AI, a claim just one step removed from ‘real AI has never been tried’.
But from a historical perspective it does matter. The field of artificial intelligence clearly did not begin in 1956; many of the technologies and techniques still essential to today’s AI project are considerably older than the middle of the 20th century (see AI Histories #6, AI Histories #10, or AI Histories #13).
The Dartmouth project, to borrow historian Thomas Haigh’s phrase, was about giving AI a brand of its own. That isn’t to cast aspersions on the quality of the AI project, but to recognise that brands are useful for creating and stabilising many forms of creative or intellectual life, and for making it clear who owns what and what a given term actually refers to.
In commercial terms, a brand tells outsiders what they’re buying; in research politics, it tells funders what they’re backing and graduate students what tribe they’re joining. Even McCarthy himself later wrote that “one of the reasons for inventing the term ‘artificial intelligence’ was to escape association with ‘cybernetics.’ … I wished to avoid having either to accept Norbert Wiener [a major figure in cybernetics] as a guru or having to argue with him.”
An immodest proposal
The goals of the project were famously lofty. In the original proposal, submitted the year before, McCarthy, Minsky, Shannon, and Nat Rochester wrote:
“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
Where Wiener’s cybernetics smacked of analogue servos and feedback loops, artificial intelligence was harder to place. It was wide enough to house symbolic logic, neural nets, and whatever else it needed to, yet focused enough to attract cash (the initial workshop was paid for by the Rockefeller Foundation) and energise its researchers.
The workshop opened on 18 June 1956. Most sessions took place in the top-floor classroom of Dartmouth’s mathematics building. McCarthy, Minsky, and Solomonoff were there every day, though records show that many of the days weren’t particularly well attended.
The work itself was exploratory. W. Ross Ashby demonstrated his electromechanical homeostat, a machine that could keep its needles centred by rewiring itself. On another afternoon the group stopped to check the word ‘heuristic’ in a hallway dictionary, the whole meeting standing around the lectern until a definition could be agreed.
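The homeostat’s trick is easy to caricature in a few lines of code. The sketch below is mine, not Ashby’s (the real machine was an analogue, electromechanical device): a handful of coupled ‘needles’ drift under feedback, and whenever one leaves its safe range, the couplings are randomly rewired until the system happens upon a configuration that settles.

```python
import random

N, BOUND = 4, 1.0   # four coupled needles, each with a safe range of +/-1

def step(state, weights, damping=0.9):
    # Each needle moves toward a damped, weighted sum of the others' positions.
    return [damping * sum(w * s for w, s in zip(row, state)) for row in weights]

def random_wiring():
    # Stand-in for the homeostat's uniselector: draw a fresh coupling matrix.
    return [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

state = [random.uniform(-0.5, 0.5) for _ in range(N)]
weights = random_wiring()
rewirings = 0

for _ in range(5000):
    state = step(state, weights)
    if any(abs(s) > BOUND for s in state):
        # An essential variable left its range: rewire at random and restart.
        weights = random_wiring()
        state = [random.uniform(-0.5, 0.5) for _ in range(N)]
        rewirings += 1

print(f"settled after {rewirings} rewirings")
```

Ashby called this ultrastability: stability achieved not by clever design but by blind re-randomisation until the essential variables stay in bounds.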
The word ‘heuristic’ recurred throughout the summer. The idea was that, instead of trying to analyse the brain in order to build machine intelligence, participants could focus on the operational steps needed to solve a problem, using heuristics to decide which steps to take. Herb Simon and Allen Newell’s logic theory machine, for example, used heuristic rules to guide the algorithmic steps (the instructions that actually carry out the problem solving).
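The division of labour is easy to see in miniature. The toy below is mine, not Newell and Simon’s program: the algorithmic part mechanically generates legal moves (here, ‘add 3’ and ‘double’ on a number), while a heuristic, a simple distance-to-target score, decides which state to explore next.

```python
import heapq

def heuristic(n, target):
    # The rule of thumb: how far a state looks from the goal.
    # It orders the search but carries no guarantee of optimality.
    return abs(target - n)

def best_first(start, target, limit=10_000):
    # The algorithmic part: mechanically pop a state, test it, expand it.
    frontier = [(heuristic(start, target), start, [start])]
    seen = {start}
    while frontier and limit > 0:
        limit -= 1
        _, n, path = heapq.heappop(frontier)
        if n == target:
            return path
        for succ in (n + 3, n * 2):            # the legal moves
            if succ not in seen and succ <= 4 * target:
                seen.add(succ)
                heapq.heappush(frontier, (heuristic(succ, target), succ, path + [succ]))
    return None

print(best_first(2, 25))    # prints one route from 2 to 25
```

Swap in a different heuristic and the same machinery explores the space in a different order; that separation between the blind generator of steps and the informed judge of which step to try next is what the Dartmouth crowd was naming.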
The duo held a session on their device, and workshop organiser John McCarthy gave them a glowing write-up:
Newell and Simon, who only came for a few days, were the stars of the show. They presented the logic theory machine and compared its output with protocols from student subjects. The students were not supposed to understand propositional logic but just to manipulate symbol strings according to the rules they were given.
When attendees wrote their first post-workshop papers, the logic theory machine and the idea of list processing led the introductions. The term ‘artificial intelligence’ came to point to symbol manipulation first and everything else second, a legacy we still wrestle with today when people insist that only symbolic systems count as ‘AI’.
McCarthy’s phrase had floated through two months of loose talk and hard disagreement without breaking, and by the time the early papers began to cite the Dartmouth meeting the words were already doing administrative work. They marked grant lines, course titles, and the edges of a new research community. AI is still the Mysterious Science in that it promises the moon but leaves the specifics open to interpretation.
Of course that is entirely by design. Search, neural nets, and probabilistic induction all live under its umbrella. Our own moment tries labels like ‘AGI’ and ‘superintelligence’ on for size, testing whether they can marshal funding and talent while staying loose enough to survive the revisions that real progress always demands.
Dartmouth’s lesson is that a field can begin with unanswered questions and unfinished business, so long as it finds an organising principle that allows disagreement, divergence, and dogma to coexist peacefully.