The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically evaluates the performance and energy efficiency of vision–language–action (VLA) models versus neuro-symbolic approaches on structured, long-horizon robotic manipulation tasks. Focusing on the Towers of Hanoi, the authors compare a fine-tuned end-to-end VLA model with a neuro-symbolic architecture that integrates PDDL-based symbolic planning and learned low-level control. Experimental results show that the neuro-symbolic method achieves a 95% success rate on the 3-block task, substantially outperforming the VLA model's 34%, and generalizes to the unseen 4-block variant with a 78% success rate, while reducing training energy consumption by nearly two orders of magnitude. These findings underscore the critical role of explicit symbolic structure in improving reliability, generalization, and energy efficiency on long-horizon, structured manipulation tasks.

📝 Abstract
Vision-Language-Action (VLA) models have recently been proposed as a pathway toward generalist robotic policies capable of interpreting natural language and visual inputs to generate manipulation actions. However, their effectiveness and efficiency on structured, long-horizon manipulation tasks remain unclear. In this work, we present a head-to-head empirical comparison between a fine-tuned open-weight VLA model π0 and a neuro-symbolic architecture that combines PDDL-based symbolic planning with learned low-level control. We evaluate both approaches on structured variants of the Towers of Hanoi manipulation task in simulation while measuring both task performance and energy consumption during training and execution. On the 3-block task, the neuro-symbolic model achieves 95% success compared to 34% for the best-performing VLA. The neuro-symbolic model also generalizes to an unseen 4-block variant (78% success), whereas both VLAs fail to complete the task. During training, VLA fine-tuning consumes nearly two orders of magnitude more energy than the neuro-symbolic approach. These results highlight important trade-offs between end-to-end foundation-model approaches and structured reasoning architectures for long-horizon robotic manipulation, emphasizing the role of explicit symbolic structure in improving reliability, data efficiency, and energy efficiency. Code and models are available at https://price-is-not-right.github.io
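The abstract describes an architecture in which a symbolic planner emits a move sequence that a learned low-level controller then executes. As an illustrative sketch only (not the authors' code; the function names, peg labels, and the use of a hand-written recursion in place of an actual PDDL solver are all assumptions), the symbolic layer for Towers of Hanoi reduces to the classic recursive plan, which a PDDL planner would recover from an equivalent domain model:

```python
def hanoi_plan(n, src, dst, aux):
    """Recursively generate the optimal move sequence (2^n - 1 moves)
    for n disks from peg `src` to peg `dst` via peg `aux`."""
    if n == 0:
        return []
    return (hanoi_plan(n - 1, src, aux, dst)   # clear the top n-1 disks onto aux
            + [(src, dst)]                      # move the largest disk
            + hanoi_plan(n - 1, aux, dst, src)) # restack the n-1 disks on top

def execute(plan, n):
    """Simulate the plan, asserting the no-larger-on-smaller constraint
    (the legality check a low-level controller would have to respect)."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in plan:
        disk = pegs[src].pop()
        assert not pegs[dst] or pegs[dst][-1] > disk, "illegal move"
        pegs[dst].append(disk)
    return pegs

plan = hanoi_plan(3, "A", "C", "B")  # 7 symbolic moves for the 3-disk task
pegs = execute(plan, 3)              # all disks end up on peg C
```

In the paper's pipeline, each symbolic move `(src, dst)` would be grounded by the learned controller into pick-and-place motions; the sketch above only covers the discrete planning layer.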
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
neuro-symbolic
long-horizon manipulation
energy consumption
structured tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic
structured reasoning
energy efficiency
long-horizon manipulation
symbolic planning