Friendly AI chatbots more likely to support conspiracy theories, study finds
Chatbots programmed to respond warmly even cast doubt on Apollo moon landings and fate of Hitler, researchers say
Warm chatbots are 30% less accurate and 40% more likely to support false beliefs, the study found. Photograph: Thai Liang Lim/Getty Images
The rush to make AI chatbots more friendly has a troubling downside, researchers say. The warm personas make them prone to mistakes and sympathetic to crackpot beliefs.
Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.