
Clay

[Paper Reading] Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding

Currently, most of the time spent during LLM inference goes to generating tokens one at a time. This exposes a limitation imposed by GPU memory bandwidth: for every single token decoded, the model's entire set of weights must be loaded from memory, even though the actual floating-point computation per token is minimal. As a result, the GPU's computational capability is underutilized.

Read More »[Paper Reading] Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding
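The bandwidth bound above can be made concrete with a back-of-envelope calculation. The numbers below are illustrative assumptions (a 7B-parameter model in FP16 on a GPU with roughly 1 TB/s of memory bandwidth), not measurements from the paper:

```python
# Back-of-envelope ceiling on autoregressive decoding speed:
# every decode step must stream all model weights from GPU memory.
params = 7e9          # model parameters (assumed, 7B-class model)
bytes_per_param = 2   # FP16
bandwidth = 1e12      # bytes/s (assumed, ~A100-class HBM)

weight_bytes = params * bytes_per_param       # ~14 GB read per token
max_tokens_per_s = bandwidth / weight_bytes   # upper bound, ignoring compute

print(f"~{max_tokens_per_s:.0f} tokens/s upper bound")
```

Under these assumptions the ceiling is only around 70 tokens/s, regardless of how fast the GPU's arithmetic units are, which is exactly why draft-head methods like Medusa and Hydra try to get multiple tokens per weight load.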

Personal Interpretation of Cogito Trained with Iterated Distillation and Amplification (IDA)

Cogito V1 is a model I recently came across on Reddit that demonstrated impressive performance; it had also been recommended by my colleagues just a day earlier. I decided to try it out on a RAG task I was working on, and the results were quite astonishing: most notably, it refrained from hallucinating when relevant reference material was retrieved, and it was able to effectively synthesize information from multiple sources. Among the models I've tested, only Gemma-3 gave me a similar experience without requiring fine-tuning.

Read More »Personal Interpretation of Cogito Trained with Iterated Distillation and Amplification (IDA)

Implementation Notes on Integrating Speculative Decoding with KV Cache

Introduction

Speculative Decoding and KV Cache are both acceleration techniques applicable to Transformer models. The former uses a faster draft model to speculatively generate several subsequent tokens, which are then validated in a batch by the target model to reduce the cost of autoregressive decoding. The latter leverages the causal attention mechanism of Transformers—where past tokens do not attend to future tokens—to cache previously computed results and avoid redundant calculations during inference.

Read More »Implementation Notes on Integrating Speculative Decoding with KV Cache
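The interaction of the two techniques can be sketched in a few lines. This is a toy, self-contained illustration (deterministic next-token rules stand in for real draft and target models, and the function name `speculative_step` is my own), not the post's actual implementation; the key point is step 3, where cache entries for rejected draft tokens must be discarded:

```python
def draft_model(token):
    """Fast toy draft model: next token is simply token + 1."""
    return (token + 1) % 100

def target_model(token):
    """Toy target model: agrees with the draft except on multiples of 7."""
    return (token + 1) % 100 if token % 7 else (token + 2) % 100

def speculative_step(prefix, k=4):
    # 1) Draft model proposes k tokens autoregressively (cheap).
    draft, cur = [], prefix[-1]
    for _ in range(k):
        cur = draft_model(cur)
        draft.append(cur)
    # 2) Target model verifies all k proposals in one "batched" pass:
    #    accept matches; at the first mismatch, emit the target's token.
    accepted, cur = [], prefix[-1]
    for tok in draft:
        expect = target_model(cur)
        if tok != expect:
            accepted.append(expect)
            break
        accepted.append(tok)
        cur = tok
    # 3) KV cache bookkeeping: keys/values computed for rejected draft
    #    tokens are invalid and must be truncated back to this length.
    cache_len = len(prefix) + len(accepted)
    return prefix + accepted, cache_len

seq, cache_len = speculative_step([5, 6])
```

Starting from `[5, 6]`, the draft proposes `7, 8, 9, 10`; the target accepts `7`, rejects `8` in favor of `9`, so the sequence becomes `[5, 6, 7, 9]` and the cache is trimmed to length 4. In a real Transformer, that trimming step is precisely where speculative decoding and the KV cache have to be reconciled.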

Why Do We Forget What We Learn? Understanding the Forgetting Curve

Preface

I’ve always tried to keep myself in a state of continuous learning. Yet, there are days when work gets hectic or friends invite me out, and by the time I get home, I’m too exhausted to study. I just play PS5 for a while, take a quick shower, and go to bed. While these days are relaxing and carefree, deep down I worry that if I don’t study regularly, I’ll begin to forget what I’ve learned — just like the saying goes: “Learning is like rowing upstream; not to advance is to drop back.”

Read More »Why Do We Forget What We Learn? Understanding the Forgetting Curve