
KV Cache: A Caching Mechanism To Accelerate Transformer Generation

Last Updated on 2024-11-01 by Clay

During decoding in large language models, particularly auto-regressive models, tokens must be generated step-by-step until the entire sequence is complete. Each step would otherwise recompute attention over every previous token, so caching techniques are used to reduce this redundant computation and speed up decoding; one such technique is known as the KV Cache.
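To make the idea concrete, here is a minimal, illustrative sketch of a KV cache for a single attention head. All names (the weight matrices, the `decode_step` helper) are hypothetical and not from any specific library: at each step, keys and values are computed only for the newest token and appended to the cache, instead of being recomputed for the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # head dimension (hypothetical)

# Hypothetical projection weights for one attention head.
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode_step(x_new, cache):
    """Attend the newest token over all cached positions.

    x_new: (d,) embedding of the newest token
    cache: dict holding 'K' and 'V' arrays of shape (t, d)
    """
    q = x_new @ W_q
    k = (x_new @ W_k)[None, :]  # key for the new token only
    v = (x_new @ W_v)[None, :]
    # Append to the cache rather than recomputing K/V for all past tokens.
    cache["K"] = np.vstack([cache["K"], k])
    cache["V"] = np.vstack([cache["V"], v])
    scores = softmax(q @ cache["K"].T / np.sqrt(d))
    return scores @ cache["V"]

cache = {"K": np.empty((0, d)), "V": np.empty((0, d))}
for _ in range(5):  # five decoding steps
    out = decode_step(rng.standard_normal(d), cache)

print(cache["K"].shape)  # one cached key per generated token
```

The trade-off, as the post's title suggests, is memory for speed: the cache grows linearly with sequence length, but each step's key/value projections are computed only once.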