

Self-Speculative Decoding Complete Implementation: LayerSkip Model, Bayesian Optimization, and Adaptive Draft-Exiting Mechanism (with gemma-2-9b-it Experiment Results)

Last Updated on 2024-11-17 by Clay

Over the past week, I found time to reproduce Self-Speculative Decoding following the approach of the paper Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding, covering the following modules:

  • A layer-skipping decoder-only Transformer model (mainly targeting the Llama and Gemma-2 architectures)
  • An adaptive draft-exiting mechanism
  • Bayesian Optimization to search for the best layer-skipping strategy (i.e., which combination of skipped layers yields the best draft model)
  • Self-Speculative Decoding itself, achieving acceleration using nothing but the model's own layers (a minimal sketch of the overall loop follows this list)
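
To make the relationship between these modules concrete, here is a minimal, greedy-only sketch of one draft-and-verify round: the same model drafts tokens with some layers skipped, an adaptive confidence threshold decides when to stop drafting, and the full model then verifies the draft. `ToyLM`, `skip_layers`, `max_draft`, and `confidence` are illustrative placeholders, not the post's actual classes or hyperparameters; the real implementation operates on Llama/Gemma-2 weights.

```python
import torch

torch.manual_seed(0)
VOCAB_SIZE = 100


class ToyLM(torch.nn.Module):
    """A stand-in for a decoder-only Transformer whose layers can be skipped."""

    def __init__(self, num_layers: int = 8):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(VOCAB_SIZE, VOCAB_SIZE) for _ in range(num_layers)]
        )

    def forward(self, tokens: torch.Tensor, skip_layers=frozenset()) -> torch.Tensor:
        # One-hot "embedding" of the last token only: enough for a toy next-token head.
        hidden = torch.nn.functional.one_hot(tokens[-1], VOCAB_SIZE).float()
        for idx, layer in enumerate(self.layers):
            if idx in skip_layers:
                continue  # draft mode: skip the layers a Bayesian search would select
            hidden = torch.tanh(layer(hidden))
        return hidden  # logits over the vocabulary for the next position


@torch.no_grad()
def self_speculative_step(model, tokens, skip_layers, max_draft=5, confidence=0.4):
    """One draft-and-verify round: draft with skipped layers, verify with the full model."""
    draft = []
    for _ in range(max_draft):
        context = torch.cat([tokens, torch.tensor(draft, dtype=torch.long)])
        probs = torch.softmax(model(context, skip_layers), dim=-1)
        draft.append(int(probs.argmax()))
        # Adaptive draft-exiting: stop drafting once the draft model is no longer confident.
        if float(probs.max()) < confidence:
            break

    # Verify each draft token with the full (unskipped) model; a greedy match means accept.
    accepted = []
    for token in draft:
        context = torch.cat([tokens, torch.tensor(accepted, dtype=torch.long)])
        full_prediction = int(model(context).argmax())
        if full_prediction == token:
            accepted.append(token)
        else:
            accepted.append(full_prediction)  # replace the first mismatch, then stop
            break
    return torch.cat([tokens, torch.tensor(accepted, dtype=torch.long)])


model = ToyLM()
tokens = torch.tensor([1, 2, 3], dtype=torch.long)
print(self_speculative_step(model, tokens, skip_layers={1, 3, 5}))
```

Because verification always falls back to the full model's own prediction on the first mismatch, the accepted sequence is identical to what greedy decoding with the full model would produce, which is what makes the acceleration lossless.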