ElasticTok: Adaptive Tokenization for Image and Video

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 1
🤖 AI Summary
To address information redundancy, low fidelity, and poor computational efficiency arising from fixed-length tokenization in long-video modeling, this paper proposes a frame-level adaptive video tokenization method. The approach introduces two key innovations: (1) an elastic token generation mechanism conditioned on preceding frames, enabling dynamic, variable-length visual sequence encoding; and (2) a stochastic tail-token masking strategy integrated with a conditional generation architecture to jointly enhance inter-frame temporal modeling and representation efficiency. Evaluated on standard image and video benchmarks, the method significantly reduces redundant token usage, by an average of 37%, while improving both processing speed and accuracy on long videos. This work establishes a lightweight, high-fidelity, and scalable visual representation foundation for multimodal large language models and world models.

πŸ“ Abstract
Efficient video tokenization remains a key bottleneck in learning general-purpose vision models that are capable of processing long video sequences. Prevailing approaches are restricted to encoding videos to a fixed number of tokens, where too few tokens will result in overly lossy encodings, and too many tokens will result in prohibitively long sequence lengths. In this work, we introduce ElasticTok, a method that conditions on prior frames to adaptively encode a frame into a variable number of tokens. To enable this in a computationally scalable way, we propose a masking technique that drops a random number of tokens at the end of each frame's token encoding. During inference, ElasticTok can dynamically allocate tokens when needed: more complex data can leverage more tokens, while simpler data only needs a few tokens. Our empirical evaluations on images and video demonstrate the effectiveness of our approach in efficient token usage, paving the way for future development of more powerful multimodal models, world models, and agents.
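The tail-masking idea from the abstract, dropping a random number of tokens at the end of each frame's encoding during training, can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name, token representation, and `min_keep` parameter are assumptions for clarity.

```python
import numpy as np

def elastic_tail_mask(frame_tokens, min_keep=1, rng=None):
    """Randomly truncate a frame's token encoding to a variable length.

    Illustrates ElasticTok-style training: a random number of tokens at
    the *end* of the frame's encoding are masked out, so the model learns
    to pack the most useful information into the earliest tokens.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(frame_tokens)
    # Sample how many leading tokens survive for this frame.
    keep = int(rng.integers(min_keep, n + 1))
    mask = np.zeros(n, dtype=bool)
    mask[:keep] = True  # True = token kept; tail tokens are dropped
    return frame_tokens[:keep], mask
```

At inference time, instead of sampling the cutoff, one would search for the smallest `keep` whose reconstruction meets a target quality, which is how simple frames end up with few tokens and complex frames with many.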
Problem

Research questions and friction points this paper is trying to address.

Video Processing
Visual Models
Efficient Representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

ElasticTok
Adaptive Tokenization
Efficient Multimodal Processing