HMVLA: Hyperbolic Multimodal Fusion for Vision-Language-Action Models

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing vision-language-action (VLA) models in effectively capturing the hierarchical semantic structures inherent in both visual and linguistic modalities, which hinders cross-modal alignment and generalization. To this end, the paper introduces hyperbolic space into VLA multimodal fusion for the first time, leveraging its geometric properties to naturally model hierarchical semantic relationships. Furthermore, it proposes a sparse gated mixture-of-experts (MoE) mechanism tailored for semantic alignment, which simultaneously enhances modeling capacity and computational efficiency. Experimental results demonstrate that the proposed approach significantly outperforms current baselines in terms of accuracy, generalization, and cross-domain adaptability.
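The summary's central idea — embedding features in hyperbolic rather than Euclidean space so that hierarchical semantics map onto geometry — can be illustrated with the Poincaré-ball model. The sketch below is a minimal, assumed illustration of that general technique, not the paper's actual implementation: the curvature parameter, the exponential map at the origin, and the Möbius-addition-based distance follow the standard Poincaré-ball formulas, and all function names here are invented for the example.

```python
import math

def exp_map_zero(v, c=1.0):
    # Exponential map at the origin of a Poincare ball with curvature -c:
    # lifts a Euclidean (tangent-space) feature vector v into hyperbolic space.
    norm = math.sqrt(sum(x * x for x in v))
    if norm < 1e-9:
        return list(v)
    scale = math.tanh(math.sqrt(c) * norm) / (math.sqrt(c) * norm)
    return [scale * x for x in v]

def mobius_add(x, y, c=1.0):
    # Mobius addition: the Poincare-ball analogue of vector addition.
    xy = sum(a * b for a, b in zip(x, y))
    x2 = sum(a * a for a in x)
    y2 = sum(b * b for b in y)
    denom = 1.0 + 2.0 * c * xy + c * c * x2 * y2
    return [((1.0 + 2.0 * c * xy + c * y2) * a + (1.0 - c * x2) * b) / denom
            for a, b in zip(x, y)]

def poincare_dist(x, y, c=1.0):
    # Geodesic distance between two points in the ball; points near the
    # boundary are exponentially far apart, which suits tree-like hierarchies.
    neg_x = [-a for a in x]
    diff = mobius_add(neg_x, y, c)
    norm = math.sqrt(sum(d * d for d in diff))
    return (2.0 / math.sqrt(c)) * math.atanh(min(math.sqrt(c) * norm, 1.0 - 1e-9))

# Toy visual and textual feature vectors (hypothetical), lifted into the ball
# and compared by hyperbolic distance instead of a Euclidean metric.
img_feat = exp_map_zero([0.3, 0.1], c=1.0)
txt_feat = exp_map_zero([0.2, 0.4], c=1.0)
d = poincare_dist(img_feat, txt_feat)
```

In such a space, distances grow exponentially toward the ball's boundary, which is why hyperbolic geometry can encode parent-child semantic relations with low distortion — the property the summary appeals to.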

📝 Abstract
Vision-Language-Action (VLA) models have recently shown great potential in bridging multimodal perception with robotic control. However, existing methods often rely on direct fine-tuning of pre-trained Vision-Language Models (VLMs), feeding semantic and visual features directly into a policy network without fully addressing the unique semantic alignment challenges in the VLA domain. In this paper, we propose HMVLA, a novel VLA framework that exploits the inherent hierarchical structures in vision and language for comprehensive semantic alignment. Unlike traditional methods that perform alignment in Euclidean space, our HMVLA embeds multimodal features in hyperbolic space, enabling more effective modeling of the hierarchical relationships present in image-text data. Furthermore, we introduce a sparsely gated Mixture of Experts (MoE) mechanism tailored for semantic alignment, which enhances multimodal comprehension between images and text while improving efficiency. Extensive experiments demonstrate that HMVLA surpasses baseline methods in both accuracy and generalization. In addition, we validate its robustness by reconstructing datasets to further test cross-domain adaptability.
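The abstract's second contribution, a sparsely gated Mixture of Experts, can be sketched in a few lines. This is a generic top-k gating illustration under assumed details — the paper does not specify its gate design here, and the toy experts, logits, and function names below are all hypothetical stand-ins for learned sub-networks.

```python
import math

def top_k_gate(scores, k=2):
    # Sparse gating: softmax over the top-k gate logits, exact zeros elsewhere,
    # so only k experts are ever evaluated.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = {i: math.exp(scores[i]) for i in top}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0 for i in range(len(scores))]

def moe_forward(x, experts, gate_weights):
    # Combine only the active experts' outputs, weighted by the gate.
    out = [0.0] * len(x)
    for w, expert in zip(gate_weights, experts):
        if w == 0.0:
            continue  # skipped expert: this is where the efficiency gain comes from
        out = [o + w * yi for o, yi in zip(out, expert(x))]
    return out

# Toy element-wise "experts" standing in for learned alignment sub-networks.
experts = [
    lambda x: [2.0 * v for v in x],
    lambda x: [v + 1.0 for v in x],
    lambda x: [-v for v in x],
    lambda x: [v * v for v in x],
]
x = [0.5, -0.2]                  # a fused multimodal feature (hypothetical)
scores = [0.9, 0.1, -0.5, 0.3]   # gate logits, e.g. from a small linear layer
gates = top_k_gate(scores, k=2)
y = moe_forward(x, experts, gates)
```

The sparsity is what reconciles the abstract's twin claims of capacity and efficiency: total parameters grow with the number of experts, while per-input compute stays fixed at k expert evaluations.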
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
semantic alignment
multimodal fusion
hierarchical structure
robotic control
Innovation

Methods, ideas, or system contributions that make the work stand out.

hyperbolic space
multimodal fusion
Mixture of Experts
semantic alignment
Vision-Language-Action
Kun Wang
CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences
Molecular Imaging, Radiomics
Xiao Feng
Harbin Institute of Technology, Harbin, China; Chongqing Research Institute of HIT, Chongqing, China
M. Qu
Harbin Institute of Technology, Harbin, China; Chongqing Research Institute of HIT, Chongqing, China
Tonghua Su
Professor at Harbin Institute of Technology
pattern recognition, character recognition, machine learning, software engineering