You are incredible!!! Such a joy to read!!! AI as a Saussurean structuralist. Love the term “differential reasoning.” Have you encountered Floridi’s work? What do you think of his agency without intelligence thesis?
I know Floridi but somehow managed to miss reading “Agency Without Intelligence.” Will check it out!
Love the reasoning 🙌🏽
Cheers Sam!
Good write-up. A couple of thoughts: the point about claiming failure while lacking a definition of thinking/reasoning is fair, and I’ve seen a few people raise it. It does seem odd, though, that this arrives as a response to the criticism rather than as something we’ve pushed for from the people who developed and promoted these models as ‘reasoning’ in the first place. And I think the paper still has some force as a check on the seemingly popular idea that simply adding more scale will produce the conditions from which more sophisticated capabilities emerge. As you say, the path towards increased sophistication, at least for now, seems to run through connection to modules, which I don’t think we can just scale into existence.
Totally fair. Agree that it ought to go both ways with conceptions of reasoning - I'd like to see the labs explain what they think is happening in much more detail!
I always find it strange when people criticize LLMs for failing at math-ish problems: logic, deductive reasoning, the Tower of Hanoi, etc. After all, LLMs are trained on language, not math. It’d be like my teenager studying for the English portion of the SAT and then being handed the math portion at test time. She would complain that it’s not a fair comparison… and she’d be right.
I have often countered that we should just connect LLMs to LMMs (large math models, not yet invented), or maybe just to a simple calculator function. From me that was always more of a snarky one-line retort; your article shows what those ideas look like as they actually take shape.
Yes indeed! I think the fact that we can plug additional elements into an LLM system (vision modules, search, a calculator, etc.) is hugely under-appreciated, because it takes the pressure off individual models to be all-singing, all-dancing. Re: Apple, it's strange. I think they weren't a million miles away from producing something great, but they made some rather odd rhetorical choices that undermined the work.
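For concreteness, here is a minimal sketch of the kind of modular hookup discussed in this exchange: an LLM system that delegates arithmetic and the Tower of Hanoi to plug-in tools rather than generating the answers token by token. The tool-call format, the `dispatch` function, and the tool names are illustrative assumptions, not any particular lab's actual API; only the plumbing pattern is the point.

```python
# Sketch: plugging deterministic modules into an LLM system.
# The "TOOL:<name>:<argument>" convention is a made-up stand-in for whatever
# tool-calling interface a real model provider exposes.

import ast
import operator


def calculator(expression: str) -> str:
    """Safely evaluate a basic arithmetic expression (a stand-in calculator module)."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expression, mode="eval")))


def tower_of_hanoi(n: int, source="A", target="C", spare="B") -> list[str]:
    """Deterministic Tower of Hanoi solver -- the kind of symbolic routine a model
    needn't reproduce move by move if it can delegate to a module instead."""
    if n == 0:
        return []
    return (tower_of_hanoi(n - 1, source, spare, target)
            + [f"move disk {n}: {source} -> {target}"]
            + tower_of_hanoi(n - 1, spare, target, source))


# Registry of plug-in modules; vision, search, etc. would slot in the same way.
TOOLS = {
    "calculator": calculator,
    "hanoi": lambda arg: "\n".join(tower_of_hanoi(int(arg))),
}


def dispatch(model_output: str) -> str:
    """If the model emits 'TOOL:<name>:<argument>', route it to the named module;
    otherwise treat the output as a plain answer."""
    if model_output.startswith("TOOL:"):
        _, name, arg = model_output.split(":", 2)
        return TOOLS[name](arg.strip())
    return model_output


if __name__ == "__main__":
    # Pretend the model decided these tasks were better delegated than generated.
    print(dispatch("TOOL:calculator: 12 * (3 + 4)"))  # 84
    print(dispatch("TOOL:hanoi: 3"))                   # the 7 moves, in order
```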