Google Cloud Pushes Hard on AI Agents and High-Performance Computing
Google said the new chips are 2.8 times faster for training and 80% better for inference per dollar, as it targets lower AI costs.
- On Wednesday, Google unveiled its eighth-generation Tensor Processing Units at Cloud Next in Las Vegas, splitting the product line into the TPU 8t for training and the TPU 8i for inference.
- Google Senior Vice President and Chief Technologist for AI and Infrastructure Amin Vahdat said the specialized chips address the rise of AI agents, which require distinct hardware for efficient training and serving.
- The training-focused TPU 8t delivers 124% more performance per watt than the prior generation, while the inference-focused TPU 8i improves that metric by 117%, using high-bandwidth memory to overcome the "memory wall."
- Both accelerators will become generally available later this year as instances on Google Cloud Platform, offering 2.8 times the training performance of the previous Ironwood TPU at the same price (the arithmetic behind these figures is sketched after this list).
- Moving beyond x86 processors, Google is using its homegrown Arm-based Axion CPUs as TPU host processors, joining rivals Amazon, Microsoft, and Nvidia in developing custom silicon to reduce dependence on external suppliers.
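To make those headline figures concrete, here is a minimal back-of-the-envelope sketch in Python. The normalized baselines and variable names are illustrative assumptions for this sketch, not published Ironwood specifications.

```python
# Back-of-the-envelope arithmetic behind the headline TPU claims.
# Baselines are normalized to 1.0 as illustrative assumptions,
# NOT published Ironwood specs.

IRONWOOD_TRAIN_PERF = 1.0     # prior-gen training throughput (normalized)
IRONWOOD_PERF_PER_WATT = 1.0  # prior-gen performance per watt (normalized)

# TPU 8t: 2.8x the training performance at the same instance price,
# so performance per dollar also improves 2.8x.
tpu8t_train_perf = 2.8 * IRONWOOD_TRAIN_PERF
tpu8t_perf_per_dollar = 2.8

# "124% more performance per watt" means 2.24x the baseline, not 1.24x.
tpu8t_perf_per_watt = (1 + 1.24) * IRONWOOD_PERF_PER_WATT

# TPU 8i: "117% better" performance per watt, i.e. 2.17x the baseline.
tpu8i_perf_per_watt = (1 + 1.17) * IRONWOOD_PERF_PER_WATT

# "80% better for inference per dollar" means 1.8x queries served per
# dollar, equivalently about a 44% lower cost per query (1 - 1/1.8).
tpu8i_cost_per_query = 1 / 1.8

print(f"TPU 8t training perf vs. Ironwood: {tpu8t_train_perf:.2f}x")
print(f"TPU 8t perf/dollar vs. Ironwood:   {tpu8t_perf_per_dollar:.1f}x")
print(f"TPU 8t perf/watt vs. Ironwood:     {tpu8t_perf_per_watt:.2f}x")
print(f"TPU 8i perf/watt vs. Ironwood:     {tpu8i_perf_per_watt:.2f}x")
print(f"TPU 8i cost per query vs. prior:   {tpu8i_cost_per_query:.0%}")
```

Note the interpretation: "124% more" is 2.24x the baseline, not 1.24x, and an 80% gain in queries per dollar translates to roughly a 44% cut in cost per query.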
42 Articles
Google unveiled two new artificial intelligence (AI) chips on Wednesday: one to train powerful new generative AI models, the other for fast, low-cost everyday inference, demand for which could explode with the rapid global deployment of autonomous AI agents.
San Francisco.- Alphabet Inc.'s Google Cloud division presented the latest generation of its Tensor Processing Unit (TPU), the in-house chip designed to optimize and accelerate AI computing services. The new range will be available in two versions, the company announced at its Google Cloud Next event, where it also presented a $750 million fund to boost enterprise adoption of AI and showed tools for building AI agents. …
Google unveils two new TPUs designed for the "agentic era"
Most of the companies that have fully committed to building AI models are gobbling up every Nvidia AI accelerator they can get, but Google has taken a different approach. Most of its cloud AI infrastructure is based on its line of custom Tensor Processing Units (TPUs). After announcing the seventh-gen Ironwood TPU in 2025, the company has moved on to the eighth-gen version, but it's not just a faster iteration of the same chip. The new TPUs come…
Google doesn't pay the Nvidia tax. Its new TPUs explain why.
Every frontier AI lab right now is rationing two things: electricity and compute. Most of them buy their compute for model training from the same supplier, at the steep gross margins that have turned Nvidia into one of the most valuable companies in the world. Google does not. On Tuesday night, inside a private gathering at F1 Plaza in Las Vegas, Google previewed its eighth-generation Tensor Processing Units. The pitch: two custom silicon designs…
Google launches $750M partner fund at Cloud Next 2026 to finance agentic AI deployments across Accenture, Deloitte, KPMG
Summary: Google announced a $750 million fund at Cloud Next 2026 to finance partners’ agentic AI development, the largest single partner investment from any hyperscaler. Accenture has built 450+ agents, Deloitte committed its “largest investment yet,” KPMG pledged $100M, PwC $400M, and NTT DATA dedicated 5,000 engineers. With partners capturing up to $7.05 for every […]
Coverage Details
Bias Distribution
- 55% of the sources are Center