TokenCom: Vision-Language Model for Multimodal and Multitask Token Communications

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations in current vision-language models—namely coarse visual token granularity, excessively long sequences, and insufficient cross-modal alignment—by introducing the TaiChi framework. TaiChi employs dual visual tokenizers that jointly process high- and low-resolution images to generate fine-grained, multi-scale visual tokens. These tokens are efficiently fused via a bilateral attention network (BAN) and aligned with textual representations through a modality projector based on Kolmogorov–Arnold Networks (KANs), enabling precise nonlinear cross-modal alignment. The resulting end-to-end multimodal, multi-task token communication system demonstrates significant improvements over existing methods in both visual understanding and token compression. Experimental results further validate TaiChi’s efficiency and feasibility in multi-task communication scenarios.
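To make the summary's fusion step concrete, here is a minimal, illustrative sketch of a bilateral (two-way) attention fusion between a long high-resolution token sequence and a short low-resolution one. This is an assumption-laden simplification of the paper's BAN, not its actual architecture: the function name, the residual combination, and the choice to keep the short low-resolution length as the fused output are all hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilateral_fusion(hi_tokens, lo_tokens):
    """Two-way attention fusion sketch (hypothetical BAN simplification).

    Each token stream attends to the other; the short low-resolution
    stream carries the fused result, giving a compact token sequence.
    """
    d = hi_tokens.shape[-1]
    scale = 1.0 / np.sqrt(d)
    # Low-res (global) tokens query the fine-grained high-res tokens ...
    lo_attends_hi = softmax(lo_tokens @ hi_tokens.T * scale) @ hi_tokens
    # ... and high-res tokens query global context, pooled back down.
    hi_attends_lo = softmax(hi_tokens @ lo_tokens.T * scale) @ lo_tokens
    pooled = hi_attends_lo.mean(axis=0, keepdims=True)
    # Fused output keeps the short low-res length: compact visual tokens.
    return lo_tokens + lo_attends_hi + pooled

rng = np.random.default_rng(0)
hi = rng.normal(size=(256, 32))   # long fine-grained token sequence
lo = rng.normal(size=(16, 32))    # short global token sequence
fused = bilateral_fusion(hi, lo)
print(fused.shape)  # (16, 32)
```

Note how the fused sequence is only 16 tokens long: the compression the paper targets comes from emitting tokens at the low-resolution length while still attending over all 256 fine-grained tokens.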

📝 Abstract
Vision-Language Models (VLMs), with their strong capabilities in image and text understanding, offer a solid foundation for intelligent communications. However, their effectiveness is constrained by limited token granularity, overlong visual token sequences, and inadequate cross-modal alignment. To overcome these challenges, we propose TaiChi, a novel VLM framework designed for token communications. TaiChi adopts a dual-visual tokenizer architecture that processes both high- and low-resolution images to collaboratively capture pixel-level details and global conceptual features. A Bilateral Attention Network (BAN) is introduced to intelligently fuse multi-scale visual tokens, thereby enhancing visual understanding and producing compact visual tokens. In addition, a Kolmogorov–Arnold Network (KAN)-based modality projector with learnable activation functions is employed to achieve precise nonlinear alignment from visual features to the text semantic space, thus minimizing information loss. Finally, TaiChi is integrated into a multimodal and multitask token communication system equipped with a joint VLM-channel coding scheme. Experimental results validate the superior performance of TaiChi, as well as the feasibility and effectiveness of the TaiChi-driven token communication system.
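The abstract's KAN-based modality projector replaces an MLP's fixed activations with learnable univariate functions on each edge. The sketch below is a minimal, hypothetical illustration of that idea only: it parametrizes each edge function as a learnable combination of fixed Gaussian basis functions (the paper's projector would typically use B-splines, and all names, shapes, and the basis choice here are assumptions).

```python
import numpy as np

class KANProjector:
    """Illustrative KAN-style modality projector (not the paper's code).

    Unlike an MLP layer, each edge (i, j) applies its own learnable
    univariate function, here a linear combination of fixed Gaussian
    basis functions with learnable coefficients.
    """
    def __init__(self, in_dim, out_dim, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        # Basis centres spread over an assumed input range [-1, 1].
        self.centres = np.linspace(-1.0, 1.0, n_basis)
        self.width = 2.0 / n_basis
        # One coefficient vector per edge: (in_dim, out_dim, n_basis).
        self.coef = rng.normal(scale=0.1, size=(in_dim, out_dim, n_basis))

    def __call__(self, x):
        # x: (batch, in_dim) visual tokens -> (batch, out_dim) tokens
        # projected into the text-embedding space.
        # Gaussian basis expansion per feature: (batch, in_dim, n_basis).
        phi = np.exp(-((x[..., None] - self.centres) / self.width) ** 2)
        # Sum each edge's univariate function over the input features.
        return np.einsum("bik,iok->bo", phi, self.coef)

proj = KANProjector(in_dim=64, out_dim=32)
visual_tokens = np.random.default_rng(1).normal(size=(4, 64))
text_aligned = proj(visual_tokens)
print(text_aligned.shape)  # (4, 32)
```

Because every edge's activation is learned, the map from visual features to the text semantic space can be nonlinear per dimension, which is the property the abstract credits with reducing alignment loss.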
Problem

Research questions and friction points this paper is trying to address.

token granularity
visual token sequence
cross-modal alignment
vision-language models
multimodal communication
Innovation

Methods, ideas, or system contributions that make the work stand out.

dual-visual tokenizer
Bilateral Attention Network (BAN)
Kolmogorov Arnold Network (KAN)
cross-modal alignment
token communication
Feibo Jiang
Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China
Siwei Tu
School of Information Science and Engineering, Hunan Normal University, Changsha, China
Li Dong
Changsha Social Laboratory of Artificial Intelligence, Hunan University of Technology and Business, Changsha, China
Xiaolong Li
University of Electronic Science and Technology of China
Moving target detection and tracking, motion parameter estimation, radar imaging
Kezhi Wang
Professor, Royal Society Industry Fellow, Brunel University London
Wireless Communication, Edge Computing, Machine Learning
Cunhua Pan
Professor, Southeast University
RIS, UAV, ISAC, URLLC
Zhu Han
University of Houston
Game Theory, Wireless Networking, Security, Data Science, Smart Grid
Jiangzhou Wang
Professor, University of Kent
Mobile Communications