Google TurboQuant: 3-Bit Quantized KV Cache with No Accuracy Loss, Attention Up to 8x Faster

BlockBeatNews

According to 1M AI News monitoring, Google Research has released TurboQuant, a quantization algorithm that compresses the KV cache of large language models down to 3 bits, cutting memory usage by at least 6x without any training or fine-tuning and without loss of model accuracy. In 4-bit mode, attention computation on NVIDIA H100 GPUs runs up to 8 times faster than a 32-bit unquantized baseline.
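The article doesn't detail the quantization math, but what "compressing a KV cache to a few bits per value" means can be illustrated with plain uniform quantization. The sketch below is a generic illustration, not the TurboQuant algorithm; the function names and the per-row scale/offset scheme are assumptions for demonstration only.

```python
# Minimal sketch of low-bit quantization of a KV cache tensor.
# Generic uniform quantization for illustration; NOT TurboQuant itself.
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int = 3):
    """Map float values to integer codes in [0, 2**bits - 1], per row."""
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = np.maximum((hi - lo) / (2**bits - 1), 1e-8)  # avoid div by zero
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes * scale + lo

# A toy "KV cache": 8 heads x 128 tokens x 64 dims of float32 keys.
kv = np.random.randn(8, 128, 64).astype(np.float32)
codes, lo, scale = quantize_uniform(kv, bits=3)

# Packed 3-bit codes need 3/32 of the float32 storage, i.e. roughly a
# 10x reduction before accounting for the per-row scale/offset overhead.
err = np.abs(dequantize(codes, lo, scale) - kv).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```

The per-row scale and offset in this naive scheme are exactly the kind of side-channel memory overhead that, per the article, PolarQuant is designed to eliminate.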

The research team validated TurboQuant with Gemma and Mistral models on long-context benchmarks including LongBench, Needle In A Haystack, and ZeroSCROLLS, reporting the best results across all tests. The algorithm combines two sub-algorithms: PolarQuant, which uses a polar-coordinate transformation to eliminate the memory overhead of traditional quantization, and QJL, which corrects the residual error using only 1 bit.
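The description of PolarQuant as a polar-coordinate transformation suggests representing pairs of values by a radius and a quantized angle. The sketch below is a speculative illustration of that idea only; the actual PolarQuant construction, its treatment of radii, and the 1-bit QJL residual step are not specified in the article, so every function name and design choice here is an assumption.

```python
# Hedged sketch of the polar-coordinate idea the article attributes to
# PolarQuant: consecutive (x, y) pairs are rewritten as radius r and
# angle theta, and the angle is quantized to a small integer code.
# Illustrative guess only; not the published PolarQuant construction.
import numpy as np

def polar_quantize_pairs(v: np.ndarray, angle_bits: int = 3):
    """Quantize consecutive (x, y) pairs of a vector via their angles."""
    x, y = v[0::2], v[1::2]
    r = np.hypot(x, y)            # radii, kept in full precision here
    theta = np.arctan2(y, x)      # angles in (-pi, pi]
    levels = 2**angle_bits
    codes = np.round((theta + np.pi) / (2 * np.pi) * (levels - 1)).astype(np.uint8)
    return r, codes, levels

def polar_dequantize(r, codes, levels):
    theta = codes / (levels - 1) * 2 * np.pi - np.pi
    v = np.empty(2 * r.size)
    v[0::2] = r * np.cos(theta)
    v[1::2] = r * np.sin(theta)
    return v

key = np.random.randn(64)         # one toy key vector
r, codes, levels = polar_quantize_pairs(key, angle_bits=3)
approx = polar_dequantize(r, codes, levels)
print(f"relative error: {np.linalg.norm(approx - key) / np.linalg.norm(key):.3f}")
```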

The study, led by Google Research's Amir Zandieh and Vice President and Google Fellow Vahab Mirrokni in collaboration with KAIST in South Korea and New York University, will be published at ICLR 2026. Google states that one of the technology's main applications is addressing the KV cache bottleneck in models such as Gemini.
