
Google's Gemma 4 AI models get 3x speed boost by predicting future tokens

Ars Technica · May 6, 2026, 3:44 PM · Also reported by 4 other sources

Google launched its Gemma 4 open models this spring, promising a new level of power and performance for local AI. Now, with the release of Multi-Token Prediction (MTP) drafters for Gemma, Google's take on edge AI could be getting even faster. Google says these experimental models use a form of speculative decoding to guess at several future tokens at once, which can speed up generation compared with the standard approach of producing one token at a time.

The latest Gemma models are built on the same underlying technology that powers Google's frontier Gemini AI, but they are tuned to run locally. Gemini is optimized for Google's custom TPU chips, which operate in enormous clusters with super-fast interconnects and memory. A single high-power AI accelerator can run the largest Gemma 4 model at full precision, and quantization lets it run on a consumer GPU. Gemma allows users to tinker with AI on their own hardware rather than sharing all their data with a cloud AI system from Google or anyone else. Google also changed the license for Gemma 4 to Apache 2.0, which is much more permissive than the custom Gemma license used for previous releases. However, the hardware most people have imposes inherent limits on local AI models. That's where MTP comes in.
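
The article doesn't detail how the MTP drafters work internally, but the basic speculative-decoding loop they build on can be sketched in a few lines. The snippet below is a toy illustration, not Google's implementation: draft_next and target_next are made-up stand-ins for a cheap drafter and the full model, operating on integer "tokens". The drafter proposes a few tokens ahead, the full model checks them, and every proposal that matches is accepted without a separate generation step.

    # Minimal sketch of greedy speculative decoding (not Google's MTP drafter).
    # draft_next / target_next are hypothetical toy models over integer tokens.

    def draft_next(tokens):
        # Toy drafter: a cheap guess at the next token.
        return (tokens[-1] + 1) % 100

    def target_next(tokens):
        # Toy target model: the token the full model would actually pick.
        # It mostly agrees with the drafter, but diverges now and then.
        return (tokens[-1] + 1) % 100 if tokens[-1] % 7 else (tokens[-1] + 2) % 100

    def speculative_decode(prompt, steps=20, k=4):
        tokens = list(prompt)
        while len(tokens) < len(prompt) + steps:
            # 1. The drafter proposes k future tokens in a cheap pass.
            draft, ctx = [], list(tokens)
            for _ in range(k):
                t = draft_next(ctx)
                draft.append(t)
                ctx.append(t)
            # 2. The target model verifies the proposals. On real hardware this
            #    is a single batched forward pass over all k positions.
            accepted, ctx = [], list(tokens)
            for t in draft:
                want = target_next(ctx)
                if t == want:
                    accepted.append(t)      # proposal matches: accepted for free
                    ctx.append(t)
                else:
                    accepted.append(want)   # first mismatch: take the target's token
                    break
            tokens.extend(accepted)
        return tokens

    print(speculative_decode([1], steps=12, k=4))

Because the verification is one pass of the full model rather than one pass per token, every accepted draft token is a token the expensive model did not have to generate serially, which is where the claimed speedup comes from.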

Article preview — originally published by Ars Technica. Full story at the source.
Read full story on Ars Technica →


Aggregated and edited by the Scoop newsroom. We surface news from Ars Technica alongside other reporting so you can compare coverage in one place.