Blog

[Python] Use `httpx` To Replace `requests` For Asynchronous Requests

In Python programming, we often use the requests module for HTTP requests. However, requests can become a bottleneck when bridging frontend and backend services because it handles requests synchronously. Recently, a blocking requests call stalled my Kubernetes probes, which led to my service container being deleted unintentionally. In scenarios like this, httpx, which supports asynchronous requests, may be the more suitable module.
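As a quick taste before the full post, here is a minimal sketch of the kind of switch it describes: replacing a blocking call with httpx's AsyncClient. The URL and timeout below are placeholders, not values taken from the post.

```python
import asyncio

import httpx


async def fetch(url: str) -> int:
    # AsyncClient awaits the response without blocking the event loop,
    # so a slow upstream service cannot stall other coroutines
    # (for example, a Kubernetes probe handler).
    async with httpx.AsyncClient(timeout=5.0) as client:
        response = await client.get(url)
        return response.status_code


if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(asyncio.run(fetch("https://example.com/health")))
```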

Read More »[Python] Use `httpx` To Replace `requests` For Asynchronous Requests

Stable Diffusion ComfyUI Note 02 - Build The Basic Workflow

Introduction

Previously, we finished configuring ComfyUI; now we can try to build the simplest basic workflow. The workflow is the biggest difference from stable-diffusion-webui: ComfyUI uses a node-based canvas that makes it easier to understand how the Stable Diffusion model actually performs inference, and also makes it easier to customize the process and achieve more advanced effects.

Read More »Stable Diffusion ComfyUI Note 02 - Build The Basic Workflow

Stable Diffusion ComfyUI Note 01 - Download And Installation

What is ComfyUI?

Those who play with Stable Diffusion AI-generated images have likely heard of stable-diffusion-webui. It is a visual interface that supports the Stable Diffusion model framework, allowing users to perform inference with AI models without having to write code or deal with complicated command-line operations. ComfyUI, on the other hand, is a slightly more niche front-end interface, but it has quickly garnered a loyal fan base due to its flexibility and customizability. Essentially, it can be seen as a more advanced version of stable-diffusion-webui, though it is less user-friendly.

Read More »Stable Diffusion ComfyUI Note 01 - Download And Installation

Use `snapshot_download` To Download The Models Of HuggingFace Hub

Introduction

The HuggingFace Model Hub is now a widely recognized and essential open-source platform for everyone. Every day, countless individuals and organizations upload their latest trained models (covering text, images, speech, and other domains) to the platform. It is fair to say that anyone working in AI-related fields browses the HuggingFace website frequently.
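As a preview of what the post walks through, here is a minimal sketch using huggingface_hub.snapshot_download to pull an entire model repository; the repo_id and local_dir below are placeholders, not the models discussed in the post.

```python
from huggingface_hub import snapshot_download

# Download every file in a model repository to a local folder.
# The repo_id and local_dir are placeholders for illustration.
local_path = snapshot_download(
    repo_id="bert-base-uncased",
    local_dir="./models/bert-base-uncased",
)
print(local_path)  # path to the downloaded snapshot
```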

Read More »Use `snapshot_download` To Download The Models Of HuggingFace Hub

[Paper Reading] Mistral 7B

Introduction

Mistral 7B is a large language model (LLM) released on September 27, 2023 by the Mistral AI team, which also open-sourced its weights. Interestingly, it uses the highly permissive Apache 2.0 license, unlike Llama 2, which ships with Meta's own Llama license terms. In that sense, Mistral 7B is truly "open source" (the Llama 2 license requires negotiating a separate license with Meta AI once a service exceeds 700 million monthly active users).

Read More »[Paper Reading] Mistral 7B

PaddleOCR: A Framework and Model Specialized in Chinese Optical Character Recognition (OCR)

Introduction

Recently, I have been exploring models used for Optical Character Recognition (OCR). In the past, OCR was a very popular research field as it was one of the earliest practical applications of computer vision. Today, OCR has become a very mature task, and you can easily find high-performance open-source models online.
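To give a flavor of how lightweight the framework is to call, here is a minimal sketch of running PaddleOCR on a single image; the language setting and image path are placeholders, and the exact nesting of the result list varies between PaddleOCR versions.

```python
from paddleocr import PaddleOCR

# Build a Chinese text detection + recognition pipeline.
# The lang value and image path are placeholders for illustration.
ocr = PaddleOCR(lang="ch")
results = ocr.ocr("sample.png")

# Each detected line comes back with a bounding box, the recognized text,
# and a confidence score; inspect the raw structure for your version.
print(results)
```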

Read More »PaddleOCR: A Framework and Model Specialized in Chinese Optical Character Recognition (OCR)

NuExtract: A Large Language Model For Information Extraction

Introduction

In today's era of flourishing large language models, researchers and companies are racking their brains to apply these models to their work. However, speaking personally, the performance of current language models is still not strong enough, and the scenarios where they can be applied are limited; they still fall far short of humans.

But there is one type of task for which large language models are naturally well suited: information extraction in arbitrary scenarios. That is exactly what I want to introduce today: the NuExtract model.
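To show roughly how such a model is driven, here is a hedged sketch using the transformers library; the repo id numind/NuExtract, the prompt layout, and the toy schema are my own assumptions for illustration, not details taken from the post, so check the model card for the exact template the model expects.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id and prompt format are assumptions for illustration;
# consult the NuExtract model card for the official template.
model_name = "numind/NuExtract"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

schema = '{"name": "", "date": ""}'
text = "The meeting with Alice is scheduled for July 3rd."
prompt = f"### Template:\n{schema}\n### Text:\n{text}\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```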

Read More »NuExtract: A Large Language Model For Information Extraction

Notes On Unsloth, An Open-Source Project For Accelerating Fine-tuning

Introduction

For several months, I have benefited greatly from the Unsloth project, primarily because a significant part of my job involves fine-tuning large language models (LLMs). Fine-tuning LLMs is extremely time-consuming; aside from data collection, the biggest time sink is the endless GPU-powered fine-tuning process.
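As a flavor of what the notes cover, here is a minimal loading sketch built around Unsloth's FastLanguageModel; the model name, sequence length, and LoRA settings are placeholders rather than values from my actual fine-tuning runs.

```python
from unsloth import FastLanguageModel

# Model name and hyperparameters are placeholders for illustration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained,
# which is where most of the speed and memory savings come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```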

Read More »Notes On Unsloth, An Open-Source Project For Accelerating Fine-tuning