Discussion about this post

Denis:

I always read your weekly posts, but this one was exceptional, and I am trying to share it widely. I'm coming here to express my appreciation!

It is scary that we might think we are close to AGI and "human-like" intelligence with models that do not display empathy, which is surely one of the most definitive human traits.

It's a reminder that when we test and talk about models, we usually talk about how they perform on purely cognitive tasks (e.g. "how should I position this argument?").

Models sometimes appear to display emotional intelligence in their answers. Those answers may be written to work well on a specific individual, which makes the models appear to have some empathy, because tailoring a message to its recipient is something empathetic humans tend to do better. But the experiment you share above shows that this is not actually empathy, just cognitive skill masquerading as empathy.

It is scary that we already have AI models making decisions that hugely impact people's lives (e.g. predicting recidivism, causing people to spend many more years in jail) when these models have no empathy.
