KV Cache: A Caching Mechanism To Accelerate Transformer Generation
During decoding, large language models generate text autoregressively: each token is produced step by step until the entire sequence is complete, and every step re-attends over all previous tokens. Caching the intermediate results of this attention computation avoids redundant work and speeds up decoding; one such technique is known as the KV Cache.
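As a rough illustration of the idea (a minimal sketch for a single attention head, with made-up weights `W_q`, `W_k`, `W_v`, not any library's actual API): instead of recomputing keys and values for the whole prefix at every step, we compute K/V only for the newest token and append them to a growing cache.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative head dimension

# Hypothetical projection weights for one attention head (random for the sketch).
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

# The KV cache: one key and one value vector per token seen so far.
K_cache, V_cache = [], []

def decode_step(x):
    """Process one new token embedding x, reusing cached K/V for past tokens."""
    K_cache.append(W_k @ x)  # K/V computed only for the NEW token
    V_cache.append(W_v @ x)
    q = W_q @ x              # the query is needed only for the current step
    return attend(q, np.stack(K_cache), np.stack(V_cache))

# Simulate autoregressive decoding of 4 tokens.
for _ in range(4):
    out = decode_step(rng.standard_normal(d))

print(len(K_cache))  # the cache holds one K/V pair per decoded token
```

Without the cache, each step would recompute K and V for all previous tokens, making per-step cost grow with sequence length; with it, only the attention scores still scale with the prefix.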