2 Comments
Denis

I always read your weekly posts, but this one was exceptional, and I am trying to share it widely. I'm also coming here to express my appreciation!

It is scary that we might think we are close to AGI and "human-like" intelligence with models that do not display empathy, which is surely one of the most defining human traits.

It's a reminder that when we test and talk about models, we usually talk about how they perform on purely cognitive tasks (e.g. "how should I position this argument?").

Models sometimes appear to display emotional intelligence in their answers. Those answers may be tailored to work well on a specific individual, which makes the models seem empathetic, because tailoring a message to its recipient is something empathetic humans tend to do better. But the experiment you share above shows that this is not actually empathy, just cognitive skill masquerading as empathy.

It is scary that we already have AI models making decisions that hugely impact people's lives (e.g. predicting recidivism, causing people to spend many more years in jail) when these models have no empathy.

Harry Law

Thanks for the kind words – and for sharing the post! As for empathy, it's interesting to think about the variation here (i.e. Yi vs GPT-4). I suspect this is down to the RLHF process for the latter, but it could be a number of things. Whatever the case, getting a look at the variance in the base models would be a nice exercise.
