CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games

📅 2025-03-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Real-time combat decision-making in 3D action role-playing games (ARPGs) demands second-level response latency, high-resolution visual perception, and dynamic tactical reasoning, posing significant challenges for existing AI agents. Method: We propose CombatVLA, a lightweight 3B vision-language-action (VLA) model optimized for ARPG combat. The approach introduces an Action-of-Thought (AoT) data format and a truncated AoT inference strategy, combined with video-action pairs collected by an action tracker and an end-to-end action execution framework. Contribution/Results: On a newly constructed ARPG combat understanding benchmark, CombatVLA outperforms all existing models, achieves a higher task success rate than human players, and accelerates game combat execution by 50×. The code, datasets, model weights, and benchmark are fully open-sourced.

๐Ÿ“ Abstract
Recent advances in Vision-Language-Action models (VLAs) have expanded the capabilities of embodied intelligence. However, significant challenges remain in real-time decision-making in complex 3D environments, which demand second-level responses, high-resolution perception, and tactical reasoning under dynamic conditions. To advance the field, we introduce CombatVLA, an efficient VLA model optimized for combat tasks in 3D action role-playing games (ARPGs). Specifically, our CombatVLA is a 3B model trained on video-action pairs collected by an action tracker, where the data is formatted as action-of-thought (AoT) sequences. Thereafter, CombatVLA seamlessly integrates into an action execution framework, allowing efficient inference through our truncated AoT strategy. Experimental results demonstrate that CombatVLA not only outperforms all existing models on the combat understanding benchmark but also achieves a 50-fold acceleration in game combat. Moreover, it has a higher task success rate than human players. We will open-source all resources, including the action tracker, dataset, benchmark, model weights, training code, and the implementation of the framework at https://combatvla.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Real-time decision-making in complex 3D environments
High-resolution perception and tactical reasoning under dynamic conditions
Efficient combat task execution in 3D action role-playing games
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient 3B Vision-Language-Action model for ARPGs
Trained on action-of-thought sequences collected by an action tracker
Truncated AoT strategy enables 50-fold acceleration
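The abstract describes the truncated AoT strategy only at a high level. A minimal sketch of the general idea (training targets pair explicit reasoning with an action; at inference, decoding is conditioned to skip straight to the action segment so the reasoning span is never generated) might look like the following. All tag names and helper functions here are hypothetical illustrations, not the paper's actual implementation:

```python
# Hypothetical sketch of "truncated Action-of-Thought" inference.
# Training targets pair explicit reasoning with an action, e.g.:
#   "<think> enemy winds up a heavy attack </think> <action> dodge_left </action>"
# At inference, latency is cut by starting the decoder at the action
# segment instead of generating the reasoning span first.

def format_aot_sample(thought: str, action: str) -> str:
    """AoT training target: reasoning followed by the action."""
    return f"<think> {thought} </think> <action> {action} </action>"

def truncated_inference_prompt(observation: str) -> str:
    """Inference prompt that skips the <think> span entirely:
    the model is conditioned to emit the action directly."""
    return f"{observation}\n<action>"

def parse_action(generated: str) -> str:
    """Extract the action name from the model's completion."""
    return generated.split("</action>")[0].strip()
```

In this reading, the speed-up comes from shortening the generated sequence, not from changing the model itself.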
Peng Chen
Alibaba Group
Pi Bu
Alibaba Group
Yingyao Wang
Alibaba Group, Harbin Institute of Technology
Xinyi Wang
Alibaba Group
Ziming Wang
Alibaba Group
Jie Guo
Alibaba Group
Yingxiu Zhao
Alibaba Group
Qi Zhu
Alibaba Group
Jun Song
Shenzhen University
Siran Yang
Alibaba Group
Jiamang Wang
Alibaba Group
Bo Zheng
Alibaba Group