Any3D-VLA: Enhancing VLA Robustness via Diverse Point Clouds

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing vision–language–action (VLA) models: by relying on 2D images, they struggle to understand complex 3D scenes, while incorporating 3D input is hampered by scarce 3D data and domain shifts across environments. To overcome these challenges, we propose Any3D-VLA, a novel framework that systematically integrates point clouds from simulators, sensors, and model-based depth estimation to construct diverse 3D inputs within a unified training paradigm, enabling domain-agnostic 3D representation learning. By jointly leveraging point cloud reconstruction, multi-source 3D fusion, domain adaptation, and 2D–3D feature coordination, our approach effectively mitigates depth-scale bias and inter-domain discrepancies. Experimental results demonstrate that Any3D-VLA significantly enhances the performance and robustness of VLA systems in both simulated and real-world environments.
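
To make the multi-source idea concrete, the sketch below is a minimal illustration (not the paper's code; the function names, the per-example source sampling, and the unit-radius normalization are all illustrative assumptions) of how a training loop could draw each point cloud from a randomly chosen source and rescale it to a common range, the kind of mixing and depth-scale normalization the summary describes.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

def normalize_cloud(points):
    """Center a point cloud and rescale it to unit radius so that simulator,
    sensor, and estimated clouds share a comparable scale (assumed strategy)."""
    centered = points - points.mean(axis=0)
    radius = np.linalg.norm(centered, axis=1).max()
    return centered / max(radius, 1e-6)

def sample_training_cloud(clouds_by_source):
    """Pick one source at random per training example so the 3D encoder sees
    diverse depth statistics instead of a single domain."""
    source = random.choice(list(clouds_by_source))
    return source, normalize_cloud(clouds_by_source[source])

# Toy usage with random stand-in clouds for the three sources.
clouds = {name: rng.normal(size=(1024, 3)) * scale
          for name, scale in [("simulator", 1.0), ("sensor", 0.5), ("estimated", 2.0)]}
source, cloud = sample_training_cloud(clouds)
print(source, cloud.shape)  # e.g. sensor (1024, 3)
```

Sampling the source per example, rather than per dataset, is one simple way to keep the encoder from latching onto the depth distribution of any single domain.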

📝 Abstract
Existing Vision-Language-Action (VLA) models typically take 2D images as visual input, which limits their spatial understanding in complex scenes. How can we incorporate 3D information to enhance VLA capabilities? We conduct a pilot study across different observation spaces and visual representations. The results show that explicitly lifting visual input into point clouds yields representations that better complement their corresponding 2D representations. To address the challenges of (1) scarce 3D data and (2) the domain gap induced by cross-environment differences and depth-scale biases, we propose Any3D-VLA. It unifies the simulator, sensor, and model-estimated point clouds within a training pipeline, constructs diverse inputs, and learns domain-agnostic 3D representations that are fused with the corresponding 2D representations. Simulation and real-world experiments demonstrate Any3D-VLA's advantages in improving performance and mitigating the domain gap. Our project homepage is available at https://xianzhefan.github.io/Any3D-VLA.github.io.
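
The "lifting" mentioned in the abstract is, in its common formulation, pinhole back-projection of a depth map into camera-frame 3D points. The snippet below is a generic, self-contained illustration of that step, not Any3D-VLA's implementation; the intrinsics and image size are placeholder values.

```python
import numpy as np

def lift_depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) into an (N, 3) point cloud
    using the standard pinhole camera model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth

# Toy usage: a flat plane 1 m in front of a 640x480 camera.
depth = np.ones((480, 640), dtype=np.float32)
pts = lift_depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(pts.shape)  # (307200, 3)
```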
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
3D point clouds
domain gap
spatial understanding
data scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
Point Cloud
Domain Generalization
3D Representation
Sensor Fusion
Xianzhe Fan
School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
Shengliang Deng
School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
Xiaoyang Wu
The University of Hong Kong
3D Representation Learning, Spatial Intelligence
Yuxiang Lu
The University of Hong Kong
Computer Vision, Multi-Task Learning, Embodied AI
Zhuoling Li
The University of Hong Kong
Embodied AI, Autonomous Driving, 3D Visual Perception
Mi Yan
Galbot, Beijing, China; Peking University, Beijing, China
Yujia Zhang
School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
Zhizheng Zhang
Galbot, Beijing, China
He Wang
Assistant Professor of Computer Science, Peking University
Embodied AI, Computer Vision, Robotics
Hengshuang Zhao
The University of Hong Kong
Computer Vision, Machine Learning, Artificial Intelligence