
June 2024

[Machine Learning] Note Of SiLU Activation Function

Last Updated on 2024-06-06 by Clay

Introduction

The SiLU (Sigmoid Linear Unit) activation function is similar to the Swish function; Swish simply adds a trainable beta parameter. Many large language models (LLMs) also adopt this activation, particularly models that explore activation functions other than ReLU, such as the classic Llama architecture.
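For reference, SiLU is defined as x · sigmoid(x), while Swish is x · sigmoid(βx) with β trainable. Below is a minimal PyTorch sketch of both for illustration (in practice, PyTorch already ships a built-in torch.nn.SiLU):

```python
import torch
import torch.nn as nn


class SiLU(nn.Module):
    """SiLU(x) = x * sigmoid(x); equivalent to Swish with beta fixed at 1."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(x)


class Swish(nn.Module):
    """Swish(x) = x * sigmoid(beta * x), where beta is a trainable parameter."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)


if __name__ == "__main__":
    x = torch.linspace(-3, 3, 5)
    print(SiLU()(x))
    print(Swish()(x))  # identical to SiLU until beta is trained away from 1
```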

Read More »[Machine Learning] Note Of SiLU Activation Function

Note Of Unsloth Accelerate Fine-tuning Open Source Project

Last Updated on 2024-06-05 by Clay

Introduction

For several months, I have benefited greatly from the Unsloth project, primarily because a significant part of my job involves fine-tuning large language models (LLMs). Fine-tuning LLMs is extremely time-consuming; aside from data collection, the biggest time sink is the endless GPU-powered fine-tuning process.

Read More »Note Of Unsloth Accelerate Fine-tuning Open Source Project

[Paper Reading] Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting

Last Updated on 2024-07-25 by Clay

Introduction

This acceleration framework, proposed by Huawei Noah's Ark Lab, replaces the small draft model used in original speculative decoding with a shallow sub-network of the large model itself. In addition, it uses an extra trained adapter together with the model's own decoding head to generate speculative tokens, which are then verified by the large model. The subsequent operations are quite similar to the original speculative decoding process.
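For intuition only, here is a rough, hypothetical sketch of the general draft-then-verify loop described above. The callable names (shallow_forward, adapter, full_forward, lm_head) are placeholders, not the paper's actual API, and the greedy acceptance rule below is a simplification of Kangaroo's verification and double early-exit criteria (batch size 1 assumed):

```python
import torch


@torch.no_grad()
def self_speculative_generate(
    shallow_forward,   # hypothetical: runs only the first few transformer layers
    adapter,           # hypothetical: trained adapter bridging shallow hidden states to the LM head
    full_forward,      # hypothetical: runs the full large model, returns logits for every position
    lm_head,           # the model's own decoding head, shared by drafter and verifier
    input_ids: torch.Tensor,
    draft_len: int = 4,
    max_length: int = 64,
) -> torch.Tensor:
    """Greedy draft-then-verify loop: cheap shallow drafting, one full-model pass to verify."""
    while input_ids.shape[-1] < max_length:
        # 1) Draft: the shallow sub-network + adapter proposes a few tokens cheaply.
        draft = input_ids
        for _ in range(draft_len):
            hidden = shallow_forward(draft)                # shallow hidden states
            logits = lm_head(adapter(hidden[:, -1]))       # reuse the model's own head
            draft = torch.cat([draft, logits.argmax(-1, keepdim=True)], dim=-1)

        # 2) Verify: a single full-model pass scores all drafted positions at once.
        full_logits = full_forward(draft)
        verified = full_logits[:, input_ids.shape[-1] - 1 : -1].argmax(-1)
        proposed = draft[:, input_ids.shape[-1]:]

        # 3) Accept the longest prefix where the drafter and the full model agree,
        #    then append the full model's own token at the first mismatch.
        agree = (verified == proposed).long().cumprod(-1)
        n_accept = int(agree.sum())
        accepted = proposed[:, :n_accept]
        correction = verified[:, n_accept : n_accept + 1]
        input_ids = torch.cat([input_ids, accepted, correction], dim=-1)

    return input_ids
```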

Read More »[Paper Reading] Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting