Blog

Here’s a thought: Will Transformers be replaced in the future?

Today, while I was eating, I came across a video (linked at the end of the article). Unlike many tech channels that jump straight into discussing AI, the economy, and whether humans will be replaced, this video took a more careful approach: it explained in detail how hardware specifications have influenced algorithms (and AI model architectures) over time.


Note Of KTOTrainer (Kahneman-Tversky Optimization Trainer)

I've been reading on and off about a fine-tuning method called Kahneman-Tversky Optimization (KTO) from various sources, such as Hugging Face's official documentation and other online materials. Like DPO, it is a way to align models with human preferences, but KTO's data preparation format is much more convenient, so I'm applying it to my current tasks right away, before making time to study the related papers in detail.
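
To make the data-format point concrete, here is a minimal sketch using TRL's KTOTrainer, assuming a recent trl release; the model name, toy examples, and hyperparameters are placeholders of mine, not details from the post. Unlike DPO, which needs paired chosen/rejected responses, KTO only needs independent examples labeled as desirable or undesirable.

```python
# A minimal sketch, assuming a recent version of the trl library.
# The model name and examples are placeholders, not from the post.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# KTO takes independent (prompt, completion, label) rows instead of
# the chosen/rejected pairs required by DPO.
train_dataset = Dataset.from_list([
    {"prompt": "What is the capital of France?", "completion": "Paris.", "label": True},
    {"prompt": "What is the capital of France?", "completion": "No idea.", "label": False},
])

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

training_args = KTOConfig(output_dir="kto-output", per_device_train_batch_size=2)

trainer = KTOTrainer(
    model=model,                 # a reference model is created automatically if none is given
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older trl versions take tokenizer= instead
)
trainer.train()
```

The convenience is exactly that each example stands on its own, so existing thumbs-up/thumbs-down style feedback can be fed in directly without constructing preference pairs.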


[Solved] Uvicorn Closed In Container - HTTP connection lost. Shutting down, exit 0

Problem Description

Today, while I was using Podman to run my FastAPI service in a container (built from a FastAPI image), I encountered an issue where the container would stop automatically after the user logged out, or after the API had gone a while without receiving any HTTP POST requests.


[Python] Using Locust Open Source Load Testing Framework for Stress Testing

Locust is an open-source load testing tool that helps simulate heavy user traffic on web applications and APIs. Compared to traditional load testing tools, Locust offers more customization and scalability—it supports Python as the scripting language, allowing us to write tests specific to our API or web service use cases.
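
As a rough sketch of what such a test script can look like (the /predict route and payload below are hypothetical placeholders, not from the post):

```python
# locustfile.py - a minimal sketch; replace the route and payload with
# whatever your API actually exposes.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task
    def post_predict(self):
        # Send a POST request to a hypothetical prediction endpoint.
        self.client.post("/predict", json={"text": "hello"})
```

Running `locust -f locustfile.py --host http://localhost:8000` then opens Locust's web UI, where the number of simulated users and the spawn rate can be configured.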


Notes on Fine-Tuning a Multi-Modal Large Language Model Using SFTTrainer (Taking LLaVa-1.5 as an Example)

A multi-modal large language model (MLLM) isn't limited to text. I know it might sound contradictory to call something a "language model" when it handles more than language, but the term has become widely accepted. What I want to document today is how to fine-tune a multi-modal model using a script.
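
As a rough outline of what such a script can look like, here is a sketch in the spirit of TRL's vision-language SFT examples; the checkpoint, dataset, and configuration values are my assumptions, and the exact argument names vary across trl/transformers versions, so this is not the post's own script.

```python
# A minimal sketch, assuming recent trl and transformers versions.
import torch
from datasets import load_dataset
from transformers import AutoProcessor, LlavaForConditionalGeneration
from trl import SFTConfig, SFTTrainer

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16)

# Example image-text chat dataset with "messages" and "images" columns.
dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train[:1%]")

def collate_fn(examples):
    # Render the chat turns to text and batch them together with their images.
    texts = [processor.apply_chat_template(ex["messages"], tokenize=False) for ex in examples]
    images = [ex["images"][0] for ex in examples]
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
    # Use the input ids as labels and mask out padding tokens.
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100
    batch["labels"] = labels
    return batch

training_args = SFTConfig(
    output_dir="llava-1.5-sft",
    per_device_train_batch_size=1,
    remove_unused_columns=False,                    # keep the image column
    dataset_kwargs={"skip_prepare_dataset": True},  # tokenization happens in collate_fn
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collate_fn,
    processing_class=processor.tokenizer,  # older trl versions take tokenizer= instead
)
trainer.train()
```

The main difference from a text-only SFT run is the custom collator: the processor has to combine the rendered chat text with the images before each batch reaches the model.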
