Nicolas Rashevsky was born in Ukraine in 1899. He was the eldest son of a sugar-factory owner, and studied theoretical physics at St. Vladimir Imperial University (now Taras Shevchenko National University of Kyiv) before receiving his doctorate in 1919. By the time he finished his studies, the country he had grown up in was already gone.
Russia’s disastrous experience in the First World War had shaken the empire. The October Revolution of 1917 brought Lenin’s Bolsheviks to power, and by 1918 the country was embroiled in a brutal civil war. On one side stood the Reds, the new Soviet regime and its Red Army. Arrayed against them was a loose coalition known as the Whites, made up of monarchists, liberals, Cossack hosts, and other anti-Bolshevik groups. Rashevsky served with the Whites, who were eventually defeated when their last southern stronghold in Crimea collapsed in November 1920.
Faced with few good options, the remnants of the Whites evacuated across the Black Sea to Constantinople. Within a year, Rashevsky was teaching physics and mathematics at the city’s Robert College; in 1921 he moved west to take up a position at the Russian University in Prague. Three years later, in 1924, he emigrated to the United States to work at Westinghouse Research Laboratories in Pittsburgh, where he published on colloidal particles and the physical chemistry of cell division.
In 1934, he was invited to join the physiology department at the University of Chicago. It was a good fit for Rashevsky, who had developed the first mathematical description of nerve excitation in 1933 and went on to formalise a mathematical model of the neuron in Mathematical Biophysics (1938). By this time the idea that neurons were discrete cells communicating via specialised junctions was widely accepted in neurophysiology. As we saw all the way back in AI Histories #1, that development was largely thanks to the work of the Spanish histologist Santiago Ramón y Cajal.
While Rashevsky did not cite Cajal in his work, he took the Spaniard’s findings as given: that individual neurons exist as discrete cells, each a cell body with an axon and dendrites, and that they interact at synapses which transmit impulses in one direction. That anatomical foundation became the basis for his representation of neural circuits.
Mathematical biology
In 1939 Rashevsky founded the Bulletin of Mathematical Biophysics, the first international journal devoted to mathematical biology. In the early issues, many papers were written by Rashevsky himself and his close collaborators, on topics ranging from neuron models to cell metabolism to population dynamics. The most famous of these was McCulloch and Pitts’s ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ (1943), an essential paper in the AI canon and the subject of AI Histories #17.
Historian Roberto Cordeschi explains the relationship between Rashevsky’s earlier work and the McCulloch-Pitts paper: “Rashevsky had tried, in his 1938 Mathematical Biophysics, to analyze neural phenomena mathematically. In 1943, McCulloch and Pitts introduced Boolean algebra to describe nets of formal neurons.”
Rashevsky’s neuron was written in the language of physics, through coupled differential equations for abstract ‘excitatory’ and ‘inhibitory’ variables that rose and fell over time. To know whether a model neuron would fire, you had to work through those equations step by step and track changes in continuous variables. The appeal of the McCulloch–Pitts version was its simplicity. Instead of wrestling with changing quantities, they reduced the problem to a rule: if the inputs cross a threshold, the neuron fires; if not, it stays silent.
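To make the contrast concrete, here is a minimal sketch in Python. The first unit is a schematic rendering of the two-factor dynamics described above, with illustrative constants and a crude Euler integration rather than Rashevsky’s actual equations; the second is the McCulloch-Pitts rule in full.

```python
import numpy as np

def rashevsky_style(stimulus, dt=0.01, A=2.0, a=1.0, B=1.0, b=0.5, theta=0.3):
    """Schematic two-factor unit (illustrative constants, not Rashevsky's own):
    excitation e and inhibition i build up with the stimulus s(t) and decay
    at their own rates; the unit counts as firing while e - i > theta.
        de/dt = A*s - a*e
        di/dt = B*s - b*i
    """
    e, i, firing = 0.0, 0.0, []
    for s in stimulus:
        e += dt * (A * s - a * e)      # excitatory factor (fast time constant)
        i += dt * (B * s - b * i)      # inhibitory factor (slow time constant)
        firing.append(e - i > theta)   # must integrate through time to know
    return firing

def mcculloch_pitts(inputs, weights, theta=0.5):
    """McCulloch-Pitts unit: fires iff the weighted input sum crosses theta."""
    return sum(w * x for w, x in zip(weights, inputs)) >= theta

# A step stimulus: on for two time units, then off.
stimulus = np.concatenate([np.ones(200), np.zeros(200)])
print(sum(rashevsky_style(stimulus)), "of 400 steps above threshold")
print(mcculloch_pitts([1, 1, 0], [0.4, 0.3, 0.9]))  # True: 0.4 + 0.3 >= 0.5
```

The difference in effort is the point: the continuous unit has to be simulated through time before you know when it fires, while the logical unit is settled by a single weighted sum.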
Rashevsky’s style left him stranded between two camps. To most biologists, his equations looked too abstract and too far removed from experimental life. To most mathematicians and logicians, his differential equations, which track how quantities change continuously over time, looked too messy. His neurons lived in the analogue world of continuous change, not the logical universe of on/off switches described by the McCulloch-Pitts model.
For this reason, many of the influential later neural network formalisms can be traced more directly to McCulloch and Pitts than to Rashevsky. Frank Rosenblatt’s perceptron (AI Histories #7) was essentially a network of McCulloch-Pitts neurons with adjustable weights and a learning rule, as the sketch below illustrates. So when Marvin Minsky and Seymour Papert put the boot into neural networks in 1969, they talked about perceptrons as binary threshold units rather than the continuous, analogue models favoured by Rashevsky.
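The sketch shows that combination in its simplest textbook form, not Rosenblatt’s original formulation: a single threshold unit whose weights are nudged towards the right answer after each mistake. The AND-gate task, learning rate, and epoch count are arbitrary illustrations.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Minimal perceptron: one threshold unit with adjustable weights.
    samples is a list of (inputs, target) pairs with targets 0 or 1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            # McCulloch-Pitts step: fire iff the weighted sum crosses threshold
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - out        # error-correction: nudge only on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learns logical AND, a linearly separable function of two inputs.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([1 if w[0] * x0 + w[1] * x1 + b >= 0 else 0
       for (x0, x1), _ in and_gate])   # [0, 0, 0, 1]
```

Each unit here is still a McCulloch-Pitts on/off switch; all the learning does is move the decision boundary.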
That said, today’s neural networks have crept back towards Rashevsky’s way of thinking. Instead of only treating neurons as simple on/off switches, some modern models describe how activity flows continuously over time. These developments were not directly inspired by Rashevsky (they came from control theory and physics, as we saw in AI Histories #6), but in a way they vindicate his intuition that continuous dynamics are fundamental to understanding neural computation.
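As one simple illustration of the continuous style, here is a ‘leaky integrator’ of the kind used in continuous-time recurrent networks; the time constant, weight, and input below are arbitrary choices.

```python
import math

def leaky_unit(inputs, dt=0.1, tau=1.0, w=1.5):
    """Continuous-time unit obeying tau * dh/dt = -h + tanh(w * x(t)).
    Activity h leaks towards a smoothed function of the input, echoing
    Rashevsky's continuous dynamics rather than a one-shot threshold test."""
    h, trace = 0.0, []
    for x in inputs:
        h += (dt / tau) * (-h + math.tanh(w * x))  # Euler step of the ODE
        trace.append(h)
    return trace

trace = leaky_unit([1.0] * 30 + [0.0] * 30)
print(round(max(trace), 3), round(trace[-1], 3))  # rises smoothly, then decays
```

Like Rashevsky’s neuron, this unit’s state has to be tracked through time; unlike the McCulloch-Pitts switch, no single weighted sum settles the matter.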
Rashevsky’s work didn’t feature in the emerging computer-science-oriented stream of AI, but its conceptual legacy persisted. The notion of treating the brain as a network that can be quantitatively analysed is something AI inherited from Rashevsky and those who followed his lead.
The Russian’s career shows us that AI’s origins don’t just run through logic and computing. By writing down the first equations of neural activity, he opened the possibility of treating the brain as a system that could be formalised, analysed, and perhaps replicated. McCulloch and Pitts made the idea simple and ultimately portable, but Rashevsky made it conceivable in the first place. If the history of AI is usually told as a story of mathematicians and engineers, Rashevsky encourages us to consider whether it was also a story of physicists and biologists trying to translate the dynamics of living systems into mathematics.
Sometimes it’s simply a matter of who gets the credit. Pitts stayed close to Rashevsky’s private circle, but the AI field at large credited the younger man’s paper with McCulloch as foundational for both its symbolic and connectionist schools. In the long run, Rashevsky’s contributions were folded into the background while recognition for launching the AI project went to his collaborators.
Despite this, some historians have argued for Rashevsky’s inclusion in the prehistory of AI and cognitive science. Jonnie Penn said his work “informed the origins of cognitive science in the 1950s”. Tara Abraham re-evaluated Rashevsky’s contributions and the reasons for his marginalisation from the biology community, which she said followed from the fact that he had “little contact with empirical biological research”. And Gualtiero Piccinini and Sonya Bahar argue that “The mathematical modeling of neural processes can be traced back to the mathematical biophysics pioneered by Nicolas Rashevsky”.
Acknowledging Rashevsky enriches our appreciation of AI’s prehistory. It underscores that the quest to make mind mathematical did not start with Turing or von Neumann or McCulloch and Pitts. Rashevsky, like Ramón y Cajal before him, stands as a representative of one of AI’s many past lives. Including him in this lineage reminds us that AI’s conceptual foundations were being laid well before the dawn of the computer age.