Microsoft rolls out next generation of its AI chips, takes aim at Nvidia's software
- On Jan. 26, Microsoft unveiled Maia 200 in San Francisco and began deploying it this week in its Iowa data center, calling it "the most efficient inference system Microsoft has ever deployed."
- Tech giants are designing their own chips to cut reliance on Nvidia, and Microsoft built Maia 200 to compete with Amazon Web Services and Google while addressing surging demand from generative AI developers.
- Built on Taiwan Semiconductor Manufacturing Co.'s 3-nanometer process, Maia 200 contains over 100 billion transistors, delivers over 10 PFLOPS at FP4 precision and around 5 PFLOPS at FP8, and links four chips per server with up to 6,144 chips wired together.
- Microsoft is already using Maia 200 to power its Superintelligence team, Microsoft 365 Copilot and Microsoft Foundry, while developers, academics, frontier AI labs and open-source contributors can apply for a Maia 200 SDK preview.
- The launch raises the stakes in competition with Nvidia, Amazon Web Services and Google Cloud as Microsoft says Maia 200 will "dramatically shift the economics of large-scale AI" and uses Ethernet networking instead of InfiniBand.
45 Articles
Does This New Chip Threaten Nvidia?
Key Points
- Microsoft says Maia 200 will serve multiple models, including the latest GPT models from OpenAI.
- Nvidia's data center business is still expanding quickly, even as hyperscalers invest in custom silicon.
- Over time, in-house inference chips could pressure pricing, raising the bar for Nvidia to stay ahead of the curve.

Microsoft (NASDAQ: MSFT) recently introduced a new in-house AI (artificial intelligenc…
The processor is already deployed in Azure data centers in the United States for services such as Microsoft 365 Copilot and OpenAI's GPT models. It is the second generation of the company's in-house AI chip.
Microsoft introduces Maia 200, its most efficient AI chip. It promises 30% more performance and competes with Amazon and Google in the cloud market.
Microsoft has announced the launch of Maia 200, an artificial intelligence (AI) accelerator chip designed specifically for modern reasoning and large language models. Maia 200 is optimized for AI inference and offers the most efficient performance per dollar, the result of a new system and silicon architecture built to maximize inference efficiency. It delivers 10.1 PFLOPS at 4-bit precision (FP4) and about 5 PFLOPS at 8-bit (FP8). A…
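The FP4/FP8 split in those figures reflects a general property of low-precision inference: halving the bit width roughly doubles arithmetic throughput on the same silicon, at the cost of coarser rounding. The minimal NumPy sketch below illustrates that trade-off with simple integer grids; it is not Maia 200 code (Maia's FP4/FP8 are floating-point formats, and the function names here are this example's own), just an illustration of why inference chips advertise their peak numbers at the narrowest precision.

```python
import numpy as np

# Illustrative only: why an inference accelerator quotes ~2x the PFLOPS at
# 4-bit versus 8-bit precision. Narrower values let the same datapath move
# and multiply roughly twice as many numbers per cycle, while quantization
# error grows as the grid of representable values gets coarser.

def quantize_symmetric(w: np.ndarray, bits: int):
    """Symmetric linear quantization of weights to a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(w).max() / qmax              # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)

for bits in (8, 4):
    q, scale = quantize_symmetric(w, bits)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"{bits}-bit grid: mean abs reconstruction error {err:.4f}")
```

Running this shows the 4-bit grid introduces noticeably more rounding error than the 8-bit one; engineering chips and models so that reasoning and LLM-serving workloads tolerate that error is what unlocks the headline throughput.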
Maia 200 is manufactured by TSMC on its 3-nanometer process technology and uses high-bandwidth memory, albeit of a generation older and slower than the memory in Nvidia's upcoming chips.
Coverage Details
Bias Distribution
- 69% of the sources are Center