Why Language Models Get ‘Lost’ in Conversation


A new paper from Microsoft Research and Salesforce finds that even the most capable Large Language Models (LLMs) fall apart when instructions are given in stages rather than all at once. The authors found that performance drops by an average of 39 percent across six tasks when a prompt is split over multiple turns.

From the paper's illustrative figure: a single-turn conversation (left) obtains the best results, but is unnatural for the end-user; a multi-turn conversation (right) fin…
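To make the two conditions concrete, here is a minimal sketch of the experimental contrast: the single-turn setup sends the fully specified instruction at once, while the multi-turn setup reveals it one shard per turn, so the model answers before later constraints arrive. The `ask` helper and the example shards are hypothetical stand-ins, not the authors' code or any real chat API.

```python
# Sketch of single-turn vs. sharded multi-turn prompting (hypothetical).
# `ask` is a placeholder for a chat-completion call; here it simply
# returns the instruction text the model would have seen at that point.

def ask(history):
    """Placeholder for an LLM call: joins all user turns seen so far."""
    return " ".join(history)

# One fully specified instruction, written as the shards it decomposes into.
shards = [
    "Write a SQL query over the `orders` table.",
    "Only include orders from 2024.",
    "Group the results by customer and sum the totals.",
]

# Single-turn: the model sees every constraint in one prompt.
single_turn_view = ask([" ".join(shards)])

# Multi-turn: constraints arrive one per turn; the model produces an
# answer (and may commit to premature assumptions) at every step.
multi_turn_views = []
history = []
for shard in shards:
    history.append(shard)
    multi_turn_views.append(ask(history))

print(single_turn_view == multi_turn_views[-1])  # → True: final context is identical...
print(len(multi_turn_views))                     # → 3: ...but three intermediate answers were committed
```

The point of the sketch is that the *final* context is identical in both regimes; what differs is that the multi-turn model generated intermediate answers under incomplete information, which is where the paper locates the 39 percent performance drop.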