
September 2024

OpenAI Triton Note (1): Vector Addition

Last Updated on 2024-09-08 by Clay

Introduction

Triton is an open-source GPU programming language and compiler released by OpenAI in 2021. In recent years, it has become increasingly popular among developers for writing and optimizing parallel programs on GPUs. Compared with traditional approaches such as CUDA or OpenCL, Triton offers a Python-like syntax that is more readable and easier to learn.
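A minimal vector-addition kernel shows what that syntax looks like in practice. The sketch below follows the pattern of Triton's introductory tutorial; names such as add_kernel and the BLOCK_SIZE of 1024 are just illustrative choices, not anything specific to this post.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # Guard against reading past the end of the vectors
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch enough program instances to cover the whole vector.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

Calling add on two CUDA tensors of the same shape should produce the same result as PyTorch's element-wise addition, with the kernel itself written entirely in Python.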


[PyTorch] BERT Architecture Implementation Note

Last Updated on 2024-09-07 by Clay

Introduction

My advisor used to tell me, “Don’t just use other people’s libraries; you have to write your own to truly understand.” Back then, I was fully occupied with my dissertation and didn’t have much time to implement the various technologies I was interested in. But I still often recall his earnest advice, and it finally prompted me to attempt my own implementation of BERT, a classic encoder-only transformer model.


Using the Integrated Outlines Tool for Decoding Constraints in the vLLM Inference Acceleration Framework

Last Updated on 2024-09-07 by Clay

Recently, I integrated several applications of Outlines into my current workflow, and the one I use most frequently is its integration with vLLM. However, for some reason that documentation has never been merged into the vLLM GitHub repository, so while designing the pipeline I had to keep referring to the source code of a rejected PR for guidance XD


Implementation of Using Finite-State Machine to Constrain Large Language Model Decoding

Last Updated on 2024-09-05 by Clay

This is a simple Python implementation for testing how a Finite-State Machine (FSM) can constrain a Large Language Model (LLM) to decode responses in a specific format. It also serves as an introduction to the concept behind the Outlines tool. Of course, my implementation is far simpler than the actual Outlines tool.
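To make the idea concrete, here is a toy sketch (not the post's actual code): the FSM is just a transition table, and at each decoding step the model may only pick tokens that have a valid transition from the current state. The tiny vocabulary and the yes/no machine below are purely hypothetical.

```python
# Toy FSM that only accepts the single-token answers "yes" or "no" (hypothetical example).
# Transitions map (state, token) -> next state; anything missing is forbidden.
transitions = {
    (0, "yes"): 1,
    (0, "no"): 1,
}
final_states = {1}

vocab = ["yes", "no", "maybe", "the", "!"]


def allowed_token_ids(state):
    """Return vocabulary indices that have a valid transition from `state`."""
    return [i for i, token in enumerate(vocab) if (state, token) in transitions]


def constrained_greedy_decode(logits_per_step):
    """Greedy decoding, but masked so only FSM-legal tokens can be chosen."""
    state = 0
    output = []
    for logits in logits_per_step:  # one list of scores per step, len == len(vocab)
        allowed = allowed_token_ids(state)
        if not allowed:
            break  # no legal continuation from this state
        best = max(allowed, key=lambda i: logits[i])
        output.append(vocab[best])
        state = transitions[(state, vocab[best])]
        if state in final_states:
            break
    return output


# The raw scores prefer "maybe", but the FSM only allows "yes" or "no".
print(constrained_greedy_decode([[0.1, 0.3, 0.9, 0.2, 0.0]]))  # -> ['no']
```

Outlines applies the same masking idea, but compiles the FSM from a regex or JSON schema and maps it onto the model's real tokenizer vocabulary.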


Structuring Model Outputs Using the Outlines Tool

Last Updated on 2024-09-03 by Clay

When applying Large Language Models (LLMs) in real-world scenarios, we often don’t want the model to generate text freely. We may want it to return output in a specific structure, such as answering a multiple-choice question or providing a rating. In such cases, transformers-based models can use the outlines tool directly.
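As a rough sketch of what that looks like (the model name is only a placeholder, and the exact API may differ between outlines versions), constraining a model to a fixed set of answers can be as short as:

```python
import outlines

# Load a HuggingFace Transformers model through outlines (model name is a placeholder).
model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")

# Restrict generation to a fixed set of answers, e.g. for a multiple-choice question.
generator = outlines.generate.choice(model, ["A", "B", "C", "D"])

answer = generator("Question: ... Which option is correct? Answer:")
print(answer)  # guaranteed to be one of "A", "B", "C", "D"
```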


Implementing Streamed Output Token Generation Using TextStreamer and TextIteratorStreamer in HuggingFace Transformers

Last Updated on 2024-09-01 by Clay

Introduction

Generative models are becoming increasingly powerful, and independent researchers are releasing one open-source large language model (LLM) after another. However, when using an LLM for inference, waiting for a long response to finish generating before seeing anything can be quite time-consuming.
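Streaming the output token by token addresses this. A minimal sketch with HuggingFace Transformers' TextStreamer looks roughly like the following (gpt2 is used here only as a small placeholder model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Placeholder model; any causal LM from the Hub works the same way.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# TextStreamer prints each piece of text to stdout as soon as it is generated,
# instead of waiting for the whole sequence to finish.
streamer = TextStreamer(tokenizer, skip_prompt=True)

inputs = tokenizer("Streaming generation lets us", return_tensors="pt")
model.generate(**inputs, max_new_tokens=50, streamer=streamer)
```

TextIteratorStreamer works the same way but exposes the generated text through an iterator, which is convenient when generation runs in a background thread, for example behind a web UI.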
