
ImageBind: Experience Notes on a Multimodal Vector Transformation Model

Introduction

Meta AI has been remarkably prolific lately, seemingly securing its position as a giant in AI research and development in no time at all, and it keeps setting the bar high with top-tier open-source contributions: Segment Anything, which can segment objects in the image domain; LLaMA, the publicly released large language and foundation model (yes, the one that spawned the whole llama family!); the recent ImageBind, which can transform six modalities; and the Massively Multilingual Speech (MMS) project. I must say, for an ordinary person like me, it takes real effort just to keep up with how to use these technologies, let alone to chase their technical prowess.
