Google releases Gemma 3n models for on-device AI
4 Articles
Google AI Releases Vertex AI Memory Bank: Enabling Persistent Agent Conversations
Developers are actively working to bring AI agents to market, but a significant hurdle has been the lack of memory. Without the ability to recall past interactions, agents treat each conversation as if it’s the first, leading to repetitive questions, an inability to remember user preferences, and a general lack of personalization. This results in frustration for both users and developers. Historically, developers have attempted to mitigate this …
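The Vertex AI Memory Bank API itself is not shown in the excerpt above; purely as an illustration of the persistent-memory pattern the article describes, the sketch below stores per-user facts and prepends them to each prompt so an agent does not re-ask known preferences. All class and method names here are hypothetical, not Google's API.

```python
# Illustrative sketch of persistent agent memory; NOT the Vertex AI Memory Bank API.
# All names (MemoryBank, remember, recall, build_prompt) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class MemoryBank:
    """Toy per-user memory store that persists facts across conversations."""
    memories: dict = field(default_factory=dict)

    def remember(self, user_id: str, fact: str) -> None:
        self.memories.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list:
        return self.memories.get(user_id, [])


def build_prompt(bank: MemoryBank, user_id: str, message: str) -> str:
    """Prepend recalled facts so the agent can personalize instead of re-asking."""
    facts = "\n".join(f"- {m}" for m in bank.recall(user_id)) or "- (none yet)"
    return f"Known facts about the user:\n{facts}\n\nUser: {message}\nAgent:"


bank = MemoryBank()
bank.remember("user-42", "Prefers responses in French")
print(build_prompt(bank, "user-42", "Book me a table for two tonight."))
```

In a production system the store would live in a database or a managed service rather than in process memory, which is the gap Memory Bank is positioned to fill.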
New Microsoft AI Model Brings 10x Speed to Reasoning on Edge Devices, Apps
Microsoft has released Phi-4-mini-flash-reasoning, a compact AI model engineered for fast, on-device logical reasoning. This new addition to the Phi family is designed for low-latency environments, such as mobile apps and edge deployments, offering up to 10 times higher throughput and two to three times lower latency than its predecessor. The 3.8 billion parameter open model maintains support for a 64k token context l…
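As a minimal sketch of running a small open reasoning model locally with Hugging Face transformers, the snippet below assumes the checkpoint is published under a Hub ID like microsoft/Phi-4-mini-flash-reasoning (verify the exact ID on the Hub before use).

```python
# Minimal sketch: local inference with a small reasoning model via transformers.
# The Hub ID below is an assumption based on Microsoft's Phi naming; verify it first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision helps on memory-constrained devices
    device_map="auto",
)

messages = [{"role": "user", "content": "If 3 pencils cost 45 cents, what do 8 cost?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```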


Google releases Gemma 3n models for on-device AI
Google has released its Gemma 3n AI model, positioned as an advancement for on-device AI that brings multimodal capabilities and higher performance to edge devices. Previewed in May, Gemma 3n is multimodal by design, with native support for image, audio, video, and text inputs and text outputs, Google said. Optimized for edge devices such as phones, tablets, laptops, desktops, or single cloud accelerators, Gemma 3n models are available in two sizes …
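For readers who want to try the model, the sketch below shows a text-only chat with a Gemma 3n checkpoint through the transformers image-text-to-text pipeline. The Hub ID is an assumption (Gemma 3n ships in two instruction-tuned sizes), and the checkpoints are gated, so Google's license must be accepted on the Hugging Face Hub first.

```python
# Minimal sketch: chatting with a Gemma 3n checkpoint via the transformers
# image-text-to-text pipeline, which also accepts text-only chat messages.
# The model ID is an assumption; the checkpoint is gated on the Hub.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-E2B-it",  # assumed ID of the smaller instruction-tuned size
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "In two sentences, why does on-device inference matter?"}
        ],
    }
]

out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```

The same pipeline accepts image or audio entries in the content list for multimodal prompts, which is the use case Gemma 3n is designed around.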