Towards Improved Text-Aligned Codebook Learning: Multi-Hierarchical Codebook-Text Alignment with Long Text

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image-text paired datasets employ overly concise captions, resulting in insufficient fine-grained semantic alignment between textual descriptions and vector-quantized (VQ) codebooks. To address this, the authors propose TA-VQ, a Text-Augmented VQ framework that first leverages vision-language models to generate rich, descriptive long-text captions, and then introduces a three-level (word/phrase/sentence) multi-granularity text encoder coupled with a hierarchical, sampling-based alignment mechanism for precise cross-modal matching between the VQ codebook and long-text representations. TA-VQ is a plug-and-play, multi-level alignment architecture: it is compatible with off-the-shelf VQ backbones and requires no modification to the original VQ pipeline for end-to-end integration. Experiments show that TA-VQ significantly outperforms state-of-the-art methods on image reconstruction and multiple downstream tasks, confirming that long-text guidance improves both the semantic expressiveness of the codebook and its cross-modal generalization.

📝 Abstract
Image quantization is a crucial technique in image generation, aimed at learning a codebook that encodes an image into a discrete token sequence. Recent advancements have seen researchers exploring learning a multi-modal codebook (i.e., a text-aligned codebook) by utilizing image caption semantics, aiming to enhance codebook performance in cross-modal tasks. However, existing image-text paired datasets exhibit a notable flaw: the text descriptions tend to be overly concise, failing to adequately describe the images and provide sufficient semantic knowledge, resulting in limited alignment of text and codebook at a fine-grained level. In this paper, we propose a novel Text-Augmented Codebook Learning framework, named TA-VQ, which generates longer text for each image using a vision-language model for improved text-aligned codebook learning. The long text, however, presents two key challenges: how to encode the text, and how to align the codebook with it. To tackle these two challenges, we propose to split the long text into multiple granularities for encoding, i.e., word, phrase, and sentence, so that the long text can be fully encoded without losing any key semantic knowledge. Following this, a hierarchical encoder and a novel sampling-based alignment strategy are designed to achieve fine-grained codebook-text alignment. Additionally, our method can be seamlessly integrated into existing VQ models. Extensive experiments on reconstruction and various downstream tasks demonstrate its effectiveness compared to previous state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Improving text-aligned codebook learning for image quantization.
Addressing limited alignment due to concise image-text descriptions.
Proposing hierarchical encoding and alignment for long text integration.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates longer text using visual-language model
Splits long text into multiple granularities for encoding
Uses hierarchical encoder and sampling-based alignment strategy
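The three innovation points above can be illustrated with a minimal, hedged sketch: split a long caption into word/phrase/sentence granularities, then pull randomly sampled codebook vectors toward their best-matching text units. The function names, the toy character-hash embedding, the fixed 3-word "phrases", and the cosine-based loss are all illustrative assumptions, not the paper's actual encoder or alignment objective.

```python
# Illustrative sketch of multi-granularity splitting and sampling-based
# codebook-text alignment. Embeddings and loss are toy stand-ins.
import math
import random

def split_granularities(long_text):
    """Split a long caption into word / phrase / sentence granularities."""
    sentences = [s.strip() for s in long_text.split(".") if s.strip()]
    words = long_text.replace(".", " ").split()
    # Toy "phrases": fixed 3-word chunks (a stand-in for real phrase
    # extraction, which the paper handles with its hierarchical encoder).
    phrases = [" ".join(words[i:i + 3]) for i in range(0, len(words), 3)]
    return {"word": words, "phrase": phrases, "sentence": sentences}

def embed(text, dim=8):
    """Deterministic toy embedding: hash characters into a unit vector."""
    v = [0.0] * dim
    for i, ch in enumerate(text):
        v[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def sampled_alignment_loss(codebook, text_units, n_samples=4, seed=0):
    """Sample code vectors and pull each toward its best-matching text
    unit: loss = mean over samples of (1 - max cosine similarity)."""
    rng = random.Random(seed)
    sampled = rng.sample(codebook, min(n_samples, len(codebook)))
    unit_embs = [embed(u) for u in text_units]
    loss = 0.0
    for code in sampled:
        loss += 1.0 - max(cosine(code, e) for e in unit_embs)
    return loss / len(sampled)

caption = "A brown dog runs on green grass. The sky is clear and blue."
levels = split_granularities(caption)
codebook = [embed(w) for w in ["dog", "grass", "sky", "blue", "car"]]
losses = {lvl: sampled_alignment_loss(codebook, units)
          for lvl, units in levels.items()}
```

In this sketch the per-granularity losses would be summed (possibly with weights) and added to the standard VQ training objective; the real method trains the hierarchical text encoder jointly rather than using fixed embeddings.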
Authors

- Guotao Liang — Peng Cheng Laboratory
- Baoquan Zhang — Harbin Institute of Technology, Shenzhen
- Zhiyuan Wen — The Hong Kong Polytechnic University
- Junteng Zhao — Harbin Institute of Technology, Shenzhen
- Yunming Ye — Harbin Institute of Technology, Shenzhen, China
- Kola Ye — SiFar Company
- Yao He — Stanford University