Machine Learning

Notes on Unsloth, an Open-Source Project That Accelerates Fine-Tuning

Introduction

For several months, I have benefited greatly from the Unsloth project, primarily because a significant part of my job involves fine-tuning large language models (LLMs). Fine-tuning LLMs is extremely time-consuming; aside from data collection, the biggest time sink is the long, GPU-bound training runs themselves.
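As a rough illustration of where the time savings come from, a typical Unsloth run loads a quantized model and attaches LoRA adapters so only a small fraction of the weights is trained. This is a minimal sketch assuming the standard `FastLanguageModel` API; the checkpoint name and hyper-parameters are placeholders, not values from the post.

```python
# Minimal sketch of an Unsloth fine-tuning setup; the checkpoint and
# hyper-parameters below are illustrative placeholders.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to reduce GPU memory use
)

# Attach LoRA adapters so only a small set of weights is trained,
# which is where most of the wall-clock savings come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```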

Read More » Notes on Unsloth, an Open-Source Project That Accelerates Fine-Tuning

[Paper Reading] Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting

Introduction

This acceleration framework, proposed by Huawei Noah's Ark Lab, replaces the separate small model used in original speculative decoding with a shallow sub-network of the large model itself. An additionally trained adapter, combined with the model's own decoding head, generates the speculative tokens, which are then verified by the large model. The subsequent steps closely follow the original speculative decoding process.
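As a rough sketch of the idea (not the authors' code), the drafting loop might look like the following; `shallow_forward` and `verify_with_full_model` are hypothetical helpers standing in for the paper's components.

```python
import torch

def kangaroo_step(model, adapter, tokens, max_draft=4, conf_threshold=0.6):
    """Sketch of Kangaroo-style self-speculative decoding.

    All helper names (shallow_forward, verify_with_full_model) are
    hypothetical stand-ins for the paper's components.
    """
    draft = []
    for _ in range(max_draft):
        # First early exit: run only the shallow layers of the large model.
        hidden = model.shallow_forward(tokens + draft)   # hypothetical helper
        logits = model.lm_head(adapter(hidden))          # adapter + shared decoding head
        probs = torch.softmax(logits[-1], dim=-1)
        next_tok = int(torch.argmax(probs))
        # Second early exit: stop drafting once confidence drops.
        if probs[next_tok] < conf_threshold:
            break
        draft.append(next_tok)

    # Verification: one full forward pass of the large model; keep the
    # longest prefix of the draft that it agrees with.
    accepted = model.verify_with_full_model(tokens, draft)  # hypothetical helper
    return tokens + accepted
```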

Read More » [Paper Reading] Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting

[Solved] Where Does Loss Calculation Begin When Multiple `response_template` Exist in Training Data Using SFTTrainer?

Problem

SFTTrainer is an LLM fine-tuning tool provided by the Hugging Face team (in the TRL library) that makes it easy to adjust hyper-parameters and configuration for a fine-tuning task. When using it, `response_template` is a special string template we pass in: only the response tokens that come right after it have their loss computed.
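For context, this is roughly how `response_template` is wired up with TRL's completion-only data collator. The template string, model, and tiny dataset below are placeholders, and depending on the TRL version the text field may need to be configured via `SFTConfig` instead.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM, SFTTrainer

model_name = "facebook/opt-350m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny placeholder dataset with a "text" field.
train_dataset = Dataset.from_dict(
    {"text": ["### Question: What is 2 + 2?\n### Answer: 4"]}
)

# Loss is computed only on the tokens that follow the response template;
# everything before it (the prompt) is masked out of the loss.
collator = DataCollatorForCompletionOnlyLM(
    response_template="### Answer:", tokenizer=tokenizer
)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()
```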

Read More » [Solved] Where Does Loss Calculation Begin When Multiple `response_template` Exist in Training Data Using SFTTrainer?

[Paper Reading] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection

Introduction

Retrieval-Augmented Generation (RAG) is a well-known architecture in current applications of Large Language Models (LLMs). It uses retrieval to provide the model with knowledge it lacked during training, enabling it to answer questions grounded in specific information.
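As a schematic of the plain RAG flow that Self-RAG builds on, the loop below retrieves passages and prepends them to the prompt; `retriever` and `generator` are hypothetical stand-ins for a search index and an LLM text-generation call.

```python
def rag_answer(question: str, retriever, generator, k: int = 3) -> str:
    """Schematic retrieve-then-generate loop (plain RAG, not Self-RAG).

    `retriever` and `generator` are hypothetical stand-ins for an
    embedding-based search index and an LLM text-generation call.
    """
    # Retrieval: fetch passages holding knowledge the model lacks.
    passages = retriever(question, top_k=k)

    # Grounded generation: answer in the context of the retrieved text.
    context = "\n\n".join(passages)
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )
    return generator(prompt)
```

Self-RAG extends this baseline by training the model to emit reflection tokens that decide when to retrieve and to critique its own generations.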

Read More » [Paper Reading] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection