Scoopfeeds — Intelligent news, curated.

Study: AI models that consider users' feelings are more likely to make errors

Ars Technica · May 1, 2026, 10:23 PM

In human-to-human communication, the desire to be empathetic or polite often conflicts with the need to be truthful; hence terms like “being brutally honest” for situations where you value the truth over sparing someone’s feelings. Now, new research suggests that large language models can sometimes show a similar tendency when specifically trained to present a “warmer” tone to the user.

In a new paper published this week in Nature, researchers from Oxford University’s Internet Institute found that specially tuned AI models tend to mimic the human tendency to occasionally “soften difficult truths” when necessary “to preserve bonds and avoid conflict.” These warmer models are also more likely to validate a user’s expressed incorrect beliefs, the researchers found, especially when the user shares that they’re feeling sad.

How do you make an AI seem “warm”?

In the study, the researchers defined the “warmness” of a language model based on “the degree to which its outputs lead users to infer positive intent, signaling trustworthiness, friendliness, and sociability.” To measure the effect of those kinds of language patterns, the researchers used supervised fine-tuning techniques to modify four open-weights models (Llama-3.1-8B-Instruct, Mistral-Small-Instruct-2409, Qwen-2.5-32B-Instruct, Llama-3.1-70B-Instruct) and one proprietary model (GPT-4o).
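For readers curious what that kind of supervised fine-tuning looks like in practice, here is a minimal, hypothetical sketch using the Hugging Face transformers and datasets libraries. The “warm” prompt-response pairs, the model choice, and the hyperparameters below are illustrative assumptions, not the paper’s actual training data or setup.

```python
# Hypothetical sketch of "warmth" fine-tuning; the dataset and hyperparameters
# are illustrative only and do not reflect the study's actual method.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# One of the open-weights models named in the study; a smaller model could be
# substituted for a quick local experiment.
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Illustrative examples: prompts paired with responses rewritten in a warmer,
# more empathetic register -- the kind of data supervised fine-tuning imitates.
warm_pairs = [
    {
        "prompt": "I failed my exam today. Also, is the earth flat?",
        "response": (
            "I'm really sorry about your exam; that's a hard day. "
            "On your question: no, the earth is not flat. It is an oblate spheroid."
        ),
    },
]

def to_text(example):
    # Concatenate prompt and response into a single training string.
    return {"text": example["prompt"] + "\n" + example["response"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(warm_pairs).map(to_text).map(tokenize)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="warm-sft",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=dataset,
    # mlm=False yields causal-language-modeling labels (next-token prediction).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key point the sketch illustrates is that “warmth” is not a switch in the model; it emerges from fine-tuning on responses written in a warmer register, which is why it can also shift other behaviors, such as how readily the model agrees with a user.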

Article preview — originally published by Ars Technica. Full story at the source.
Read full story on Ars Technica →
Aggregated and edited by the Scoop newsroom. We surface news from Ars Technica alongside other reporting so you can compare coverage in one place.