
MIT's new fine-tuning method lets LLMs learn new skills without losing old ones

Summary by VentureBeat
When enterprises fine-tune LLMs for new tasks, they risk breaking everything the models already know. This forces companies to maintain separate models for every skill. Researchers at MIT, the Improbable AI Lab and ETH Zurich have developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities. Their technique, called self-distillation fine-tuning (SDFT), allows models to le…
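The summary does not spell out SDFT's mechanism, but the general idea behind self-distillation during fine-tuning, keeping a frozen copy of the original model as a "teacher" so the fine-tuned "student" learns the new task without drifting from its old behavior, can be sketched in a few lines. The sketch below is illustrative only and assumes a Hugging Face-style causal LM interface (outputs with .loss and .logits); the function name sdft_loss and the alpha and temperature hyperparameters are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of self-distillation fine-tuning: a standard
# cross-entropy term teaches the new skill, while a KL-divergence term
# to the frozen base model penalizes forgetting. Not the MIT team's code.
import copy
import torch
import torch.nn.functional as F

def sdft_loss(student, teacher, input_ids, labels, alpha=0.5, temperature=2.0):
    """Blend the new-task loss with a distillation loss to the frozen base model."""
    student_out = student(input_ids=input_ids, labels=labels)
    task_loss = student_out.loss  # cross-entropy on the new skill's labels

    with torch.no_grad():
        teacher_logits = teacher(input_ids=input_ids).logits

    # KL(student || teacher) over the vocabulary, softened by temperature,
    # keeps the student's token distribution close to the base model's.
    s = F.log_softmax(student_out.logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    distill_loss = F.kl_div(s, t, reduction="batchmean") * temperature**2

    return (1 - alpha) * task_loss + alpha * distill_loss

# Usage: freeze a copy of the base model as the teacher before fine-tuning.
# teacher = copy.deepcopy(student).eval()
# for p in teacher.parameters():
#     p.requires_grad_(False)
```

Raising alpha trades task accuracy for retention of prior capabilities; alpha=0 reduces to ordinary fine-tuning.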

Bias Distribution

  • 100% of the sources are Center

VentureBeat broke the news in San Francisco, United States on Wednesday, February 11, 2026.
