OpenAI Triton Note (2): Fused Softmax
Introduction
Softmax is a commonly used activation function, and it is often employed as the last layer in multi-class classification.
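For reference, here is a minimal sketch of the softmax computation itself, written in plain NumPy with the usual max-subtraction trick for numerical stability (the function name and example values are only illustrative):

    import numpy as np

    def softmax(x):
        # Subtract the max before exponentiating to avoid overflow.
        shifted = x - np.max(x)
        exps = np.exp(shifted)
        return exps / np.sum(exps)

    logits = np.array([2.0, 1.0, 0.1])
    print(softmax(logits))  # probabilities summing to 1, roughly [0.659, 0.242, 0.099]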
OpenAI Triton Note (1): Vector Addition
Triton is an open-source GPU programming language compiler released by OpenAI in 2021. In recent years, it has become increasingly popular among developers for writing and optimizing parallel programs on GPUs. Compared to traditional approaches such as CUDA or OpenCL, Triton offers a Python-like syntax, making it more readable and easier to learn.
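To give a feel for that syntax, here is a small vector-addition kernel sketched after the official Triton tutorial; the kernel name, block size, and tensor sizes are only illustrative:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                       # each program instance handles one block
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                       # guard against out-of-bounds accesses
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.rand(98432, device="cuda")
    y = torch.rand(98432, device="cuda")
    out = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)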
[PyTorch] BERT Architecture Implementation Note
My advisor used to tell me, "Don't just use other people's libraries; you have to write your own to truly understand." Back then, I didn't have much time to implement the various technologies I was interested in, since I was fully occupied with my dissertation. However, I often recall his earnest advice even now, and it finally prompted me to attempt an implementation of BERT, a classic encoder-only transformer model.
Using the Integrated Outlines Tool for Decoding Constraints in the vLLM Inference Acceleration Framework
Recently, I integrated several applications of Outlines into my current workflow. The one I use most frequently is the integration with vLLM. However, for some reason its documentation has not been merged into the vLLM GitHub repository, so while designing the process I had to keep referring to the source code of a rejected PR for guidance XD
Implementation of Using Finite-State Machine to Constrain Large Language Model Decoding
This is a simple Python implementation used to test Finite-State Machine (FSM) constraints that force a Large Language Model (LLM) to decode responses in a specific format. It also serves as an introduction to the concept behind the Outlines tool; of course, my implementation is far simpler than the actual Outlines tool.
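As a rough idea of what such a constraint looks like, here is a toy sketch (not the actual Outlines implementation; the states, vocabulary, and scoring function are made up) in which decoding may only follow FSM-legal transitions:

    # Toy FSM: only "yes" or "no", followed by end-of-sequence, is accepted.
    TRANSITIONS = {
        0: {"yes": 1, "no": 1},   # from the start state, only "yes" or "no" is legal
        1: {"<eos>": 2},          # after the answer, only end-of-sequence is legal
    }
    ACCEPT_STATE = 2

    def constrained_decode(score_fn, max_steps=8):
        # Greedy decoding restricted to tokens the FSM allows in the current state;
        # score_fn(token) stands in for the LLM's per-token score.
        state, output = 0, []
        for _ in range(max_steps):
            if state == ACCEPT_STATE:
                break
            legal = TRANSITIONS[state]
            token = max(legal, key=score_fn)   # pick the highest-scoring legal token
            output.append(token)
            state = legal[token]               # advance the FSM
        return output

    print(constrained_decode(lambda t: {"yes": 0.7, "no": 0.2, "<eos>": 0.9}.get(t, 0.0)))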
Structuring Model Outputs Using the Outlines Tool
When applying Large Language Models (LLMs) in real-world scenarios, it is often not enough to let the model generate text freely. We may want the model to return a specific structure, such as answering a multiple-choice question or providing a rating. In such cases, transformers-based models can use the Outlines tool directly.
Implementing Streamed Output Token Generation Using TextStreamer and TextIteratorStreamer in HuggingFace Transformers
Generative models are becoming increasingly powerful, and independent researchers are releasing one open-source large language model (LLM) after another. However, when using an LLM for inference or generating responses, waiting for a long output to finish can be quite time-consuming.
[Machine Learning] Note Of Variational AutoEncoder (VAE)
The Variational AutoEncoder (VAE) is an advanced variant of the AutoEncoder (AE). Its architecture is similar to the original AutoEncoder, consisting of an encoder and a decoder.
Evaluating LLM Defense Capabilities Using the Microsoft BIPIA Framework
Currently, LLM services cover a wide range of fields, and Prompt Injection and Jailbreak threats to LLMs are growing by the day. A few months ago, a customer service LLM even provided incorrect information, leading to a loss of customer rights (although that case wasn't caused by a prompt attack).
Microsoft's open-source BIPIA (Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models) evaluation method, although I tested it six months ago and it has not seen significant updates since, remains a simple and convenient way to evaluate the tasks I have at hand.
difflib is a module in the Python standard library used to compare differences between sequences (often text). Back when I was writing my thesis, I implemented this by hand; it's funny and a bit frustrating to realize now, at work, that there is such a neat module for this.
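For reference, two of the most common difflib entry points, SequenceMatcher and unified_diff (the example strings are arbitrary):

    import difflib

    a = "The quick brown fox jumps over the lazy dog"
    b = "The quick brown cat jumps over the lazy dog"

    # Similarity ratio between the two sequences, from 0.0 to 1.0.
    print(difflib.SequenceMatcher(None, a, b).ratio())

    # A human-readable, line-by-line unified diff.
    for line in difflib.unified_diff([a], [b], lineterm=""):
        print(line)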