Study: AI models that consider users' feelings are more likely to make errors
3 Articles
In human-to-human communication, the desire to be empathetic or polite often conflicts with the need to be truthful—hence phrases like "being brutally honest" for situations where you value the truth over sparing someone's feelings. New research suggests that large language models can show a similar tendency when specifically trained to adopt a "warmer" tone with the user. In a new paper published this week in Nature, researchers f…
Friendlier AI Chatbots Found More Likely To Support False Beliefs
Efforts to make AI chatbots friendlier and more conversational may come with a trade-off in accuracy, according to new research. A study found that chatbots tuned to sound warmer were more likely to agree with incorrect claims, including conspiracy theories and misleading health advice. The research, conducted by the University […] The post Friendlier AI Chatbots Found More Likely To Support False Beliefs appeared firs…
News from HD Tecnología (www.hd-tecnologia.com). A new study highlights a key point in the development of artificial intelligence: the pursuit of more empathetic, personable responses could carry an unexpected cost—less precise information. British researchers analyzed how the behavior of different models changes when they are trained to communicate in a more human way. The result sends a clear signal: empathy can affect accuracy. M…
Coverage Details
Bias Distribution
- 100% of the sources are Center

