
Microsoft rolls out next generation of its AI chips, takes aim at Nvidia's software

  • On Jan 26, Microsoft unveiled Maia 200 in San Francisco and is deploying it this week in its Iowa data center as "the most efficient inference system Microsoft has ever deployed".
  • Tech giants are designing their own chips to cut reliance on NVIDIA, and Microsoft built Maia 200 to compete with Amazon Web Services and Google while addressing surging demand from generative AI developers.
  • Built on Taiwan Semiconductor Manufacturing Co.'s 3-nanometer process, Maia 200 contains over 100 billion transistors, delivers over 10 PFLOPS at 4-bit precision (FP4) and around 5 PFLOPS at 8-bit precision (FP8), and links four chips per server with up to 6,144 chips wired together.
  • Microsoft is already using Maia 200 to power its Superintelligence team, Microsoft 365 Copilot and Microsoft Foundry, while developers, academics, frontier AI labs and open-source contributors can apply for a Maia 200 SDK preview.
  • The launch raises the stakes in competition with Nvidia, Amazon Web Services and Google Cloud as Microsoft says Maia 200 will "dramatically shift the economics of large-scale AI" and uses Ethernet networking instead of InfiniBand.
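The reported figures support a quick back-of-envelope calculation of what a fully built-out configuration would offer. A minimal sketch, using only the per-chip throughput and scaling numbers cited above (peak-rate arithmetic only; real sustained throughput would be lower):

```python
# Back-of-envelope aggregate compute for a maximally scaled Maia 200
# deployment, using the figures reported above. Per-chip throughput
# is the peak FP4 number; sustained performance would be lower.
FP4_PFLOPS_PER_CHIP = 10.1   # ~10 PFLOPS at 4-bit precision (FP4)
FP8_PFLOPS_PER_CHIP = 5.0    # ~5 PFLOPS at 8-bit precision (FP8)
CHIPS_PER_SERVER = 4         # four chips per server
MAX_CHIPS = 6144             # largest interconnected configuration cited

servers = MAX_CHIPS // CHIPS_PER_SERVER
# Convert aggregate PFLOPS to exaFLOPS (1 EFLOPS = 1000 PFLOPS).
fp4_total_eflops = MAX_CHIPS * FP4_PFLOPS_PER_CHIP / 1000
fp8_total_eflops = MAX_CHIPS * FP8_PFLOPS_PER_CHIP / 1000

print(f"{servers} servers")
print(f"~{fp4_total_eflops:.1f} EFLOPS peak at FP4")
print(f"~{fp8_total_eflops:.1f} EFLOPS peak at FP8")
```

At these numbers, a 6,144-chip pod spans 1,536 servers and roughly 62 exaFLOPS of peak FP4 compute — illustrating why the lower-precision FP4 figure, at about double the FP8 rate, is the headline inference number.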

45 Articles

Lean Right

The processor is already deployed in Azure data centres in the United States for services such as Microsoft 365 Copilot and OpenAI GPT models. It is the second generation of the company's own chip.

·Portugal
EFE
Reposted by focoinformativo.site
Center

Microsoft introduces Maia 200, its most efficient AI chip. It promises 30% more performance and competes with Amazon and Google in the cloud market.

Lean Right

Microsoft has announced the launch of Maia 200, an artificial intelligence (AI) accelerator chip designed specifically for modern reasoning and large language models. Maia 200 is optimized for AI inference and offers the best performance per dollar, the result of a new system and silicon architecture designed to maximize inference efficiency. It delivers 10.1 PFLOPS at 4-bit precision (FP4) and about 5 PFLOPS at 8-bit precision (FP8). …

·Buenos Aires, Argentina
Lean Right

Maia 200 is manufactured by TSMC on a 3-nanometer process and uses high-bandwidth memory, albeit of an older and slower generation than that in Nvidia's upcoming chips.

·Brazil

Bias Distribution

  • 69% of the sources are Center


HotHardware broke the news on Monday, January 26, 2026.
