Yi: Open Foundation Models by 01.AI

📅 2024-03-07
🏛️ arXiv.org
📈 Citations: 373
Influential: 37
📄 PDF
🤖 AI Summary
This work introduces the open-source Yi family of foundation models, built on 6B and 34B pretrained language models and extended to chat models, 200K long-context models, depth-upscaled models, and vision-language models (a vision transformer encoder paired with the chat LLM). The central claim is that model quality follows from data quality, pursued on three fronts: (1) pretraining on 3.1 trillion tokens of deduplicated, quality-filtered English and Chinese text produced by a cascaded filtering pipeline; (2) finetuning on a small (<10K) instruction set polished over multiple iterations, with every instance verified directly by the team's machine learning engineers; and (3) lightweight continual pretraining to extend the context window to 200K tokens and to deepen the pretrained checkpoint. Reported results show the base models performing strongly on benchmarks such as MMLU, the chat models achieving high human-preference win rates on AlpacaEval and Chatbot Arena, and the 200K-context variant delivering strong needle-in-a-haystack retrieval accuracy.
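The cascaded deduplication and quality-filtering pipeline carries most of the data-quality argument. The sketch below illustrates the general idea of a filter cascade in Python; the stages, thresholds, and function names are illustrative assumptions, not the paper's actual pipeline, which operates at trillion-token scale with additional filtering stages.

```python
import hashlib
import re


def exact_dedup(docs):
    """First cascade stage: drop byte-identical documents via content hashing."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept


def heuristic_quality_filter(docs, min_words=20, max_symbol_ratio=0.3):
    """Second stage: keep documents that pass simple rule-based quality checks.
    The thresholds are made-up defaults for illustration only."""
    kept = []
    for doc in docs:
        words = doc.split()
        symbol_count = len(re.findall(r"[^\w\s]", doc))
        if len(words) >= min_words and symbol_count / max(len(doc), 1) <= max_symbol_ratio:
            kept.append(doc)
    return kept


def cascade(docs):
    """Run cheap filters first so each later, costlier stage sees fewer documents."""
    return heuristic_quality_filter(exact_dedup(docs))


if __name__ == "__main__":
    corpus = ["A short but complete sentence repeated in the crawl. " * 5] * 3 + ["@@@###!!!"]
    print(len(cascade(corpus)))  # 1: duplicates and the symbol-heavy document are removed
```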

📝 Abstract
We introduce the Yi model family, a series of language and multimodal models that demonstrate strong multi-dimensional capabilities. The Yi model family is based on 6B and 34B pretrained language models, which we then extend to chat models, 200K long-context models, depth-upscaled models, and vision-language models. Our base models achieve strong performance on a wide range of benchmarks like MMLU, and our finetuned chat models deliver strong human preference rates on major evaluation platforms like AlpacaEval and Chatbot Arena. Building upon our scalable super-computing infrastructure and the classical transformer architecture, we attribute the performance of Yi models primarily to their data quality, resulting from our data-engineering efforts. For pretraining, we construct 3.1 trillion tokens of English and Chinese corpora using a cascaded data deduplication and quality filtering pipeline. For finetuning, we polish a small-scale (less than 10K) instruction dataset over multiple iterations such that every single instance has been verified directly by our machine learning engineers. For vision-language, we combine the chat language model with a vision transformer encoder and train the model to align visual representations to the semantic space of the language model. We further extend the context length to 200K through lightweight continual pretraining and demonstrate strong needle-in-a-haystack retrieval performance. We show that extending the depth of the pretrained checkpoint through continual pretraining further improves performance. We believe that, given our current results, continuing to scale up model parameters using thoroughly optimized data will lead to even stronger frontier models.
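For readers who want to try the released models, here is a minimal usage sketch with the Hugging Face transformers library. It assumes the chat checkpoints are published under the 01-ai organization (e.g. 01-ai/Yi-6B-Chat) and expose a chat template; the model ID and generation settings are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: querying a Yi chat checkpoint via Hugging Face transformers.
# Assumes the checkpoint name "01-ai/Yi-6B-Chat"; the 34B and 200K-context
# variants would load the same way if hardware allows.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B-Chat"  # assumed published checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build the prompt with the model's own chat template, then generate a reply.
messages = [{"role": "user", "content": "Summarize the Yi model family in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```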
Problem

Research questions and friction points this paper is trying to address.

Natural Language Processing
Image Recognition
Multimodal Information Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Understanding
Transformer Architecture
Deep Learning
👥 Authors
Alex Young
Bei Chen
Chao Li
Chengen Huang
Ge Zhang
Guanwei Zhang
Heng Li
Jiangcheng Zhu
Jianqun Chen
Jing Chang
Kaidong Yu
Peng Liu
Qiang Liu
Shawn Yue
Senbin Yang
Shiming Yang
Tao Yu
Wen Xie
Wenhao Huang
Xiaohui Hu
Xiaoyi Ren
Xinyao Niu
Pengcheng Nie
Yuchi Xu
Yudong Liu
Yue Wang
Yuxuan Cai
Zhenyu Gu
Zhiyuan Liu
Zonghong Dai