Skip to content

Machine Learning

[Paper Reading] Lifting the Curse of Multilinguality by Pre-training Modular Transformers

Cross-lingual Modular (X-Mod) is an interesting language model architecture that modularizes the parameters for different languages as Module Units, allowing the model to use separate parameters when fine-tuning for a new language, thereby (comparatively) avoiding the problem of catastrophic forgetting.

Read More »[Paper Reading] Lifting the Curse of Multilinguality by Pre-training Modular Transformers

[Paper Reading] RAGAS: Automated Evaluation of Retrieval Augmented Generation

Introduction

The year 2023 witnessed an explosion of generative AI technologies, with a myriad of applications emerging across various domains. In the field of Natural Language Processing (NLP), Large Language Models (LLMs) stand out as one of the most significant advancements. By training LLMs effectively and reducing hallucinations, they can significantly reduce human effort across a wide range of tasks.

Read More »[Paper Reading] RAGAS: Automated Evaluation of Retrieval Augmented Generation

Use Text To Retrieve Images: Introduction Of Multi-Modals ColPali

Introduction

Since last year, I have been filled with enthusiasm and curiosity about Multi-Modal AI models. As a staunch advocate of AGI, I believe that AI's current potential has not yet reached its ceiling. One significant bottleneck and research direction in AI today is naturally the integration of various modalities (text, images, audio...) in model applications.

Read More »Use Text To Retrieve Images: Introduction Of Multi-Modals ColPali

Meta-llama--Prompt-Guard-86M: Open-Source Model for Prompt Protection, Detecting Malicious Attacks

Recently, Meta AI has released various versions of Llama 3.1 (405B, 70B, 8B), with the 405B model being particularly noteworthy. It's the first time an open-source LLM has caught up with closed-source models like GPT-4 and Claude-3.5. At the same time, Meta AI has also released a smaller model called Prompt-Guard-86M.

Read More »Meta-llama--Prompt-Guard-86M: Open-Source Model for Prompt Protection, Detecting Malicious Attacks

Use `snapshot_download` To Download The Models Of HuggingFace Hub

Introduction

HuggingFace Model Hub is now a widely recognized and essential open-source platform for every one. Every day, countless individuals and organizations upload their latest trained models (including those for text, images, speech, and other domains) to this platform. It can be said that anyone working in AI-related fields frequently browses the HuggingFace platform website.

Read More »Use `snapshot_download` To Download The Models Of HuggingFace Hub

[Paper Reading] Mistral 7B

Introduction

Mistral 7B is a large language model (LLM) proposed on September 27, 2023, trained by the Mistral AI team, which also released its weights as open source. Interestingly, it uses the highly permissive Apache 2.0 license, unlike Llama 2, which has its own Llama license terms. Therefore, Mistral 7B is truly "open source" (Llama's license requires discussion with Meta AI when the service volume reaches 700 million).

Read More »[Paper Reading] Mistral 7B