🤖 AI Summary
Existing codecs are designed for unimodal, unidirectional communication and suffer progressive performance degradation across the compression-transmission-reconstruction pipeline. This work proposes the first unified multimodal interactive coding framework tailored for human-AI collaboration, replacing raw pixels and text with tokenized representations to enable efficient bidirectional communication between edge devices and cloud-based AI agents. Methodologically, it introduces scene-adaptive lightweight Transformer-based entropy models, integrated with hybrid compression strategies (generic, masked, and text-conditioned) to substantially reduce inter-token redundancy. Evaluated on diverse downstream tasks, including text-to-image generation, image inpainting, outpainting, and visual question answering, the framework achieves transmission bitrates below 0.05 bits per pixel (bpp) while preserving task performance, demonstrating the paradigm's efficiency and robustness under ultra-low-bitrate constraints.
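To make the bitrate claim concrete, the sketch below estimates the ideal code length of a tokenized image under an entropy model. All numbers (a 512x512 image, a 32x32 token grid, a 1024-entry codebook) are illustrative assumptions, and the flat predictive distribution is a stand-in worst case, not the paper's Transformer entropy model; a learned model that predicts tokens well would push the bitrate below this bound.

```python
import numpy as np

def estimate_bits(tokens, probs):
    """Ideal code length of a token sequence: sum of -log2 p(token_i)."""
    return float(-np.sum(np.log2(probs[np.arange(len(tokens)), tokens])))

# Toy setup (illustrative, not from the paper): a 512x512 image quantized by a
# VQ tokenizer into a 32x32 grid of indices over a 1024-entry codebook.
rng = np.random.default_rng(0)
codebook_size, grid, side = 1024, 32, 512
tokens = rng.integers(0, codebook_size, size=grid * grid)

# Worst case: a uniform entropy model costs log2(1024) = 10 bits per token.
uniform = np.full((grid * grid, codebook_size), 1.0 / codebook_size)
bpp_uniform = estimate_bits(tokens, uniform) / (side * side)
print(round(bpp_uniform, 4))  # -> 0.0391
```

Even this no-learning baseline lands under 0.05 bpp (1024 tokens x 10 bits over 262,144 pixels), which is why a compact token grid is such an attractive communication medium; the entropy model's job is to shave off the remaining inter-token redundancy.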
📝 Abstract
The rapid progress of Large Multimodal Models (LMMs) and cloud-based AI agents is transforming human-AI collaboration into bidirectional, multimodal interaction. However, existing codecs remain optimized for unimodal, one-way communication, resulting in repeated degradation under conventional compress-transmit-reconstruct pipelines. To address this limitation, we propose UniMIC, a Unified token-based Multimodal Interactive Coding framework that bridges edge devices and cloud AI agents. Instead of transmitting raw pixels or plain text, UniMIC employs compact tokenized representations as the communication medium, enabling efficient low-bitrate transmission while maintaining compatibility with LMMs. To further enhance compression, lightweight Transformer-based entropy models with scenario-specific designs (generic, masked, and text-conditioned) effectively minimize inter-token redundancy. Extensive experiments on text-to-image generation, text-guided inpainting, outpainting, and visual question answering show that UniMIC achieves substantial bitrate savings and remains robust even at ultra-low bitrates (< 0.05 bpp) without compromising downstream task performance. These results establish UniMIC as a practical and forward-looking paradigm for next-generation multimodal interactive communication.