Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization
https://arxiv.org/pdf/2309.04669
Contents
- Abstract
- Introduction
Abstract
Previous approaches in VLMs
- (Vision) Regard the visual input as a prompt
- (Language) Focus exclusively on optimizing the text generation process
- Text generation is conditioned on the visual content via a frozen LLM
$\rightarrow$ Inequitable treatment of vision and language!
Solution: LaVIT
$\rightarrow$ Represent both vision and language in a unified form
- Specifically, a well-designed visual tokenizer translates the non-linguistic image into a sequence of discrete tokens, like a foreign language that the LLM can read
- The resulting visual tokens carry high-level semantics (each worthy of a word) and support a dynamic sequence length that varies with the image
- Coupled with this tokenizer, the presented foundation model, LaVIT, handles image and text indiscriminately under the same generative learning paradigm
- This unification lets LaVIT serve as a generalist interface that understands and generates multi-modal content simultaneously
- Extensive experiments show it outperforms existing models by a large margin on a wide range of vision-language tasks
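To make the unification concrete, below is a minimal sketch of the idea: a variable number of image features is quantized into discrete codes, offset into a shared vocabulary, and concatenated with text token ids so that one autoregressive next-token objective covers both modalities. The names (`quantize_image`, `build_unified_sequence`) and the vocabulary/codebook sizes are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of LaVIT-style unified tokenization.
# All names and sizes below are assumptions for illustration only.
import torch

TEXT_VOCAB = 32000          # assumed LLM text vocabulary size
VISUAL_CODEBOOK = 16384     # assumed size of the visual token codebook
IMG_START = TEXT_VOCAB + VISUAL_CODEBOOK      # special token marking image start
IMG_END = IMG_START + 1                       # special token marking image end


def quantize_image(patch_feats: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map a variable number of patch features to their nearest codebook ids."""
    # (N, D) vs. (K, D): nearest-neighbour lookup. N varies per image,
    # standing in for the paper's dynamic visual sequence length.
    dists = torch.cdist(patch_feats, codebook)   # (N, K)
    return dists.argmin(dim=-1)                  # (N,) discrete visual codes


def build_unified_sequence(text_ids: torch.Tensor, visual_codes: torch.Tensor) -> torch.Tensor:
    """Concatenate offset visual codes and text ids into one token stream."""
    visual_ids = visual_codes + TEXT_VOCAB       # shift codes into the shared vocabulary
    image_span = torch.cat([
        torch.tensor([IMG_START]), visual_ids, torch.tensor([IMG_END]),
    ])
    # A single autoregressive LM can now be trained with next-token prediction
    # over this mixed sequence, treating both modalities alike.
    return torch.cat([image_span, text_ids])


if __name__ == "__main__":
    codebook = torch.randn(VISUAL_CODEBOOK, 256)
    patch_feats = torch.randn(37, 256)                 # 37 retained patches for this image
    text_ids = torch.randint(0, TEXT_VOCAB, (12,))     # a short caption
    seq = build_unified_sequence(text_ids, quantize_image(patch_feats, codebook))
    print(seq.shape)                                   # torch.Size([51]) = 1 + 37 + 1 + 12
```

In this reading, the key design choice is that visual codes live in the same vocabulary as text tokens, so the LLM never distinguishes between "prompt" and "target" by modality.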