Publications
- LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (ICLR 2024)
- Few-Shot Object Detection via Variational Feature Aggregation (AAAI 2023)
Research Experience
Interned at ByteDance Seed, Shanghai AI Lab, and Tencent YouTu Lab.
Education
Received Master's and Bachelor's degrees from Wuhan University and Central South University, respectively.
Background
Currently a PhD student at MMLab, CUHK, advised by Prof. Xiangyu Yue. Recent research focuses on efficient and unified multimodal large language models (LLMs), including LLaMA-Adapter, OneLLM, and Tar.