Skip to content

Machine Learning

OpenAI Triton Note (1): Vector Addition

Introduction

Triton is an open-source GPU programming language compiler released by OpenAI in 2021. Over recent years, it has become increasingly popular among developers for writing and optimizing parallel programs on GPUs. Compared to traditional libraries such as CUDA or OpenCL, Triton offers a Python-like syntax, making it more readable and easier to learn.

Read More »OpenAI Triton Note (1): Vector Addition

[PyTorch] BERT Architecture Implementation Note

Introduction

My advisor used to tell me, “Don't just use other people's libraries; you have to write your own to truly understand.” Back then, I didn’t have much time to implement various technologies I was interested in since I was fully occupied with my dissertation. However, I often recall his earnest advice even now, and it prompted me to finally attempt the implementation of BERT, a classic encoder-only transformer model.

Read More »[PyTorch] BERT Architecture Implementation Note

Using the Integrated Outlines Tool for Decoding Constraints in the vLLM Inference Acceleration Framework

Recently, I integrated several applications of Outlines into my current workflow. Among them, the one I use most frequently is with vLLM. However, for some reason, its documentation has not been merged into the vLLM GitHub repository, so while designing the process, I had to constantly refer to the source code of a rejected PR for guidance XD

Read More »Using the Integrated Outlines Tool for Decoding Constraints in the vLLM Inference Acceleration Framework

Implementation of Using Finite-State Machine to Constrain Large Language Model Decoding

This is a simple Python implementation, used to test Finite-State Machine (FSM) constraints for a Large Language Model (LLM) to decode responses in a specific format. It also serves as an introduction to the concept behind the Outlines tool. Of course, my implementation is far simpler compared to the actual Outlines tool.

Read More »Implementation of Using Finite-State Machine to Constrain Large Language Model Decoding

Implementing Streamed Output Token Generation Using TextStreamer and TextIteratorStreamer in HuggingFace Transformers

Introduction

Generative models are becoming increasingly powerful, and independent researchers are deploying one open-source large language model (LLMs) after another. However, when using LLMs for inference or generating responses, waiting for a longer output can be quite time-consuming.

Read More »Implementing Streamed Output Token Generation Using TextStreamer and TextIteratorStreamer in HuggingFace Transformers

Evaluating LLM Defense Capabilities Using the Microsoft BIPIA Framework

Currently, LLM services cover a wide range of fields, and Prompt Injection and Jailbreak threats to LLMs are growing by the day. A few months ago, a customer service LLM even provided incorrect information, leading to a loss of customer rights (although that wasn't caused by a prompt attack).

Microsoft's open-source BIPIA (Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models) evaluation method, although tested six months ago without significant updates since, remains a simple and convenient testing method for the tasks I have at hand.

Read More »Evaluating LLM Defense Capabilities Using the Microsoft BIPIA Framework

Using AutoModel.from_pretrained() In Transformers To Load Customized Model Architecture

To this day, many AI applications and open-source projects are developed based on the HuggingFace transformers package. A large number of models and packages are written to be compatible with the transformers format, and even share the same functions and methods, which makes them more widely accepted.

Under this premise, I came across an open-source training framework that conveniently wraps the automatic reading of Transformer architectures. However, one unavoidable problem is I want to use my custom model for experiments. I tried several solutions, hoping that when using AutoModel.from_pretrained(), by simply providing the local path to my model, I could successfully use my custom model architecture. This article records the method that worked.

Read More »Using AutoModel.from_pretrained() In Transformers To Load Customized Model Architecture
Exit mobile version