[Python] How To Use @contextmanager Decorator
Last Updated on 2024-09-12 by Clay
In Python, the @contextmanager decorator from the contextlib module allows developers to conveniently create their own context managers.
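As a minimal sketch of the idea (the resource name and `events` list are invented for illustration): code before `yield` runs on entering the `with` block, and code after it runs on exit.

```python
from contextlib import contextmanager

events = []

@contextmanager
def managed_resource(name):
    events.append(f"acquire {name}")  # setup: runs on entering the with-block
    try:
        yield name                    # the yielded value is bound by `as`
    finally:
        events.append(f"release {name}")  # teardown: runs even if the block raises

with managed_resource("db") as r:
    events.append(f"use {r}")
```

Because the teardown sits in a `finally` clause, the resource is released even when the body of the `with` block raises an exception.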
Last Updated on 2024-09-10 by Clay
Recently, due to some serendipitous events, I had a chance to modify the architecture of a model slightly. I took this opportunity to explore how to iterate and print the layers of neural networks in PyTorch.
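One common way to do this (the toy model below is an assumption for illustration, not the architecture from the post) is PyTorch's `named_modules()` and `named_parameters()`:

```python
import torch.nn as nn

# A toy model, purely for illustration
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# named_modules() walks the module tree, including the root module itself
for name, module in model.named_modules():
    print(name or "(root)", "->", module.__class__.__name__)

# named_parameters() yields every learnable tensor and its shape
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```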
Read More »[PyTorch] Traversing Every Layer of a Neural Network in a Model
Last Updated on 2024-09-09 by Clay
Softmax is a commonly used activation function, and it is often employed as the last layer in multi-class classification.
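A minimal pure-Python sketch of the function itself, using the standard max-subtraction trick for numerical stability:

```python
import math

def softmax(logits):
    # Subtracting the max doesn't change the result but avoids overflow in exp()
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # probabilities over three classes
```

The outputs are non-negative, sum to 1, and preserve the ordering of the input logits, which is why softmax is a natural fit for multi-class probabilities.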
Read More »OpenAI Triton Note (2): Fused Softmax
Last Updated on 2024-09-08 by Clay
Triton is an open-source GPU programming language compiler released by OpenAI in 2021. Over recent years, it has become increasingly popular among developers for writing and optimizing parallel programs on GPUs. Compared to traditional libraries such as CUDA or OpenCL, Triton offers a Python-like syntax, making it more readable and easier to learn.
Read More »OpenAI Triton Note (1): Vector Addition
Last Updated on 2024-09-07 by Clay
My advisor used to tell me, “Don't just use other people's libraries; you have to write your own to truly understand.” Back then, I didn’t have much time to implement various technologies I was interested in since I was fully occupied with my dissertation. However, I often recall his earnest advice even now, and it prompted me to finally attempt the implementation of BERT, a classic encoder-only transformer model.
Read More »[PyTorch] BERT Architecture Implementation Note
Last Updated on 2024-09-07 by Clay
Recently, I integrated several applications of Outlines into my current workflow. Among them, the one I use most frequently is with vLLM. However, for some reason, its documentation has not been merged into the vLLM GitHub repository, so while designing the process, I had to constantly refer to the source code of a rejected PR for guidance XD
Read More »Using the Integrated Outlines Tool for Decoding Constraints in the vLLM Inference Acceleration Framework
Last Updated on 2024-09-05 by Clay
This is a simple Python implementation, used to test Finite-State Machine (FSM) constraints that force a Large Language Model (LLM) to decode responses in a specific format. It also serves as an introduction to the concept behind the Outlines tool. Of course, my implementation is far simpler than the actual Outlines tool.
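The core idea can be sketched in a few lines of plain Python (the tiny vocabulary, the FSM table, and `score_fn` below are invented for illustration): at every decoding step, the FSM's current state determines which tokens are legal, and the decoder picks only among those.

```python
# A toy FSM that accepts exactly the strings "yes" or "no".
# state -> {allowed token: next state}; state -1 is the accept state.
FSM = {
    0: {"y": 1, "n": 4},
    1: {"e": 2},
    2: {"s": 3},
    3: {"<eos>": -1},
    4: {"o": 5},
    5: {"<eos>": -1},
}

def constrained_decode(score_fn):
    """Greedy decoding, restricted to FSM-legal tokens at every step.

    score_fn(output_so_far, token) stands in for the model's logits.
    """
    state, output = 0, []
    while state != -1:
        legal = FSM[state]  # only these tokens may be emitted in this state
        # Pick the highest-scoring legal token instead of the global argmax
        token = max(legal, key=lambda t: score_fn(output, t))
        if token != "<eos>":
            output.append(token)
        state = legal[token]
    return "".join(output)
```

For example, a scorer that favors `"n"` in the first step yields `"no"`: even if the model's raw scores preferred some other token, only FSM-legal tokens are ever considered, so the output is guaranteed to match the target format.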
Read More »Implementation of Using Finite-State Machine to Constrain Large Language Model Decoding
Last Updated on 2024-09-03 by Clay
When applying Large Language Models (LLMs) in real-world scenarios, it's often not enough to let the model generate text freely. We might want the model to return a specific structure, such as answering a multiple-choice question or providing a rating. In such cases, transformers-based models can use the outlines tool directly.
Read More »Structuring Model Outputs Using the Outlines Tool
Last Updated on 2024-09-01 by Clay
Generative models are becoming increasingly powerful, and independent researchers are releasing one open-source large language model (LLM) after another. However, when using LLMs for inference, waiting for a long response to finish generating can be quite time-consuming.
Read More »Implementing Streamed Output Token Generation Using TextStreamer and TextIteratorStreamer in HuggingFace Transformers
Last Updated on 2024-08-31 by Clay
Variational AutoEncoder (VAE) is an advanced variant of the AutoEncoder (AE). The architecture is similar to the original AutoEncoder, consisting of an encoder and a decoder.
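A minimal PyTorch sketch of that encoder/decoder structure (layer sizes are assumptions for illustration, not the architecture from the post), including the reparameterization trick that distinguishes a VAE from a plain AutoEncoder:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """A minimal VAE sketch: dimensions here are illustrative only."""
    def __init__(self, in_dim=16, latent_dim=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 8)
        self.fc_mu = nn.Linear(8, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(8, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and sigma
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar
```

Unlike a plain AutoEncoder, the encoder here outputs a distribution (mu, logvar) rather than a single latent vector, and the decoder reconstructs the input from a sample drawn from that distribution.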
Read More »[Machine Learning] Note Of Variational AutoEncoder (VAE)