These dinner-plate-sized computer chips are set to supercharge the next leap forward in AI

WaferLLM software cuts inference latency by 90% and doubles energy efficiency on wafer-scale chips, enabling faster, more efficient AI processing for large language models.

Summary by TechXplore
It is becoming increasingly difficult to make today's artificial intelligence (AI) systems work at the scale required to keep advancing. They require enormous amounts of memory so that all of their processing chips can quickly share the data they generate and operate as a single unit.

ed.ac.uk broke the news on Thursday, November 20, 2025.