10 Comments
Ghosted:

I don't see enough people noticing that reasoning by reference is nothing more or less than reasoning by difference with extra sensory modalities as inputs. Instead of extrapolating patterns in a sample of pure language, it extrapolates patterns in a sample of multisensory data. The feeling of bark under your fingers is, in principle, no less a quantifiable data point than the frequency with which "bark" occurs near "tree" in the world's English text.
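To make that concrete, here is a toy sketch of the statistic I'm describing: a window-based co-occurrence count over a made-up corpus (the corpus and window size are illustrative only):

```python
# Toy example: turning "how often does 'bark' occur near 'tree'?" into a number.
from collections import Counter

def cooccurrence_counts(tokens, target, window=3):
    """Count words appearing within `window` positions of each `target` occurrence."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

corpus = "the bark of the tree is rough the dog began to bark at the tree".split()
print(cooccurrence_counts(corpus, "bark")["tree"])  # 2: "tree" near "bark" is now a data point
```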

Even the most irreplaceable advantage of animals, the "physical intuition" written into the wet firmware by 3.5 billion years of evolution, isn't qualitatively different; there's merely a lot of it. Evolution is basically reinforcement learning, and prediction error is the basis of animal cognition too.
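One standard toy model, the delta rule, makes the point: all the learning below is driven by nothing but prediction error (a sketch, not a claim about any particular organism):

```python
# Delta-rule sketch: the only learning signal here is prediction error.
def delta_update(value, outcome, lr=0.1):
    error = outcome - value      # prediction error: what happened minus what was expected
    return value + lr * error    # nudge the expectation toward the outcome

v = 0.0                          # initial expectation
for _ in range(50):
    v = delta_update(v, outcome=1.0)
print(round(v, 3))               # ~0.995: the estimate converges as the error shrinks
```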

Nick Potkalitsky:

You are incredible!!! Such a joy to read!!! AI as a Saussurean structuralist. Love the term "differential reasoning." Have you encountered Floridi's work? What do you think of his agency-without-intelligence thesis?

Harry Law:

I know Floridi but somehow managed to miss reading "Agency Without Intelligence." Will check it out!

Sam B:

Love the reasoning 🙌🏽

Harry Law:

Cheers Sam!

Bill Taylor:

I always find it strange when people criticize LLMs for failing at math-ish problems: logic, deductive reasoning, the Tower of Hanoi problem, etc. After all, LLMs are trained on language, not math. It'd be like my teenager studying for the English portion of the SAT, but then at test time being given the math portion. She would complain that it's not a fair comparison, and she'd be right.

I have often countered that we should just connect LLMs up to LMMs (large math models, not yet invented), or maybe just to a simple calculator function. But coming from me this was more of a snarky one-line retort; your article shows the realities of those ideas as they take shape.
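Something like the sketch below, say: the model hands off an arithmetic expression to a plain calculator function and gets the result back. The names and the hand-off format here are hypothetical, not any real vendor's tool-calling API:

```python
# Hypothetical LLM + calculator loop: names and formats are illustrative only.
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr):
    """Safely evaluate a basic arithmetic expression (no eval())."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question, tool_call):
    # Stand-in for the model: a real LLM would emit `tool_call` itself,
    # and the runtime would route it to the calculator.
    return f"{question} = {calculator(tool_call)}"

print(answer("137 * 24 + 19", "137 * 24 + 19"))  # 137 * 24 + 19 = 3307
```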

Harry Law:

Yes indeed! I think the fact that we can plug additional elements into an LLM system (vision modules, search, a calculator, etc.) is hugely under-appreciated, because it takes the pressure off individual models having to be all-singing, all-dancing. Re: Apple, it's strange. I think they weren't a million miles away from producing something great, but they made some rather odd rhetorical choices that undermined the work.

David Bachman:

The problem with the Apple paper wasn't that they didn't let the models use tools. It was that their measure of successful "reasoning" was completely wrong. The Towers of Hanoi is an assignment in many college "Intro to Proof" or "Discrete Math" classes. There, a solution isn't a sequence of moves; it's an algorithm to generate a sequence of moves. In the Apple paper, when the sequence of moves required to solve n disks got too large, the model didn't even try to produce the sequence: it just gave the algorithm instead, which is what any truly intelligent person would do. That's an example of extremely sophisticated reasoning, not the failure the researchers claimed it was. More on this in my post, "No ChatGPT isn't rotting your brain," here:

https://profbachman.substack.com/p/no-chatgpt-isnt-rotting-your-brain-cd7
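To see the distinction in code: the algorithm below is a few lines, while the move sequence it generates has length 2^n - 1 (a generic sketch, not the Apple paper's exact setup):

```python
# The *algorithm* is constant-size; the move sequence it generates is 2**n - 1 long.
def hanoi(n, source="A", target="C", spare="B"):
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)  # clear the way
    yield (source, target)                          # move the largest disk
    yield from hanoi(n - 1, spare, target, source)  # restack on top

print(list(hanoi(3)))        # 7 explicit moves
print(len(list(hanoi(15))))  # 32767: writing these out is busywork, not reasoning
```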

Katsch:

Good write-up. A couple of thoughts: the point about claiming failure while lacking a definition of thinking/reasoning is fair, and I've seen a few people raise it. It does seem odd, though, that it arrives as a response to the criticism rather than something we pushed for from the people who developed and promoted the models they labelled as 'reasoning' in the first place. And I think there's still some force to the paper as a check against the seemingly popular idea that just adding more scale will produce conditions from which more sophisticated capabilities will emerge. As you say, the path towards increased sophistication, at least for now, seems to run through connection to modules, which I don't think we can just scale into existence.

Harry Law:

Totally fair. Agree that it ought to go both ways with conceptions of reasoning - I'd like to see the labs explain what they think is happening in much more detail!
