Sample-Efficient Robot Skill Learning for Construction Tasks: Benchmarking Hierarchical Reinforcement Learning and Vision-Language-Action (VLA) Models

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of rapidly acquiring new skills for construction robots in real-world settings. Method: We systematically compare vision-language-action (VLA) models against hierarchical reinforcement learning (RL) approaches—specifically DQN—using a multi-stage panel installation testbed and a custom bimodal teleoperation interface. Evaluation focuses on sample efficiency, cross-task generalization, and deployment practicality. Results: VLA models demonstrate exceptional few-shot adaptability: zero-shot success rate of 60% in the pick-up phase, reaching 100% with only one demonstration; in contrast, RL methods require extensive hyperparameter tuning and large-scale training data to achieve comparable performance. VLA achieves production-ready capability with minimal data, significantly reducing manual programming effort and deployment time. This work establishes the first benchmarking framework for VLA versus RL in construction automation and empirically validates VLA’s feasibility for efficient skill transfer in structured physical environments.

📝 Abstract
This study evaluates two leading approaches for teaching construction robots new skills in order to assess their applicability to construction automation: a Vision-Language-Action (VLA) model and Reinforcement Learning (RL) methods. The goal is to understand both task performance and the practical effort needed to deploy each approach on real job sites. The authors developed two teleoperation interfaces to control the robots and collect the required demonstrations, both of which proved effective for training robots on long-horizon and dexterous tasks. The evaluation then proceeds in three stages. First, the authors compare a Multi-Layer Perceptron (MLP) policy with a Deep Q-Network (DQN) imitation model to identify the stronger RL baseline, focusing on model performance, generalization, and a pick-up experiment. Second, three different VLA models are trained in two different scenarios and compared with one another. Third, the authors benchmark the selected RL baseline against the VLA model on computational and sample-efficiency measures, followed by a robot experiment on a multi-stage panel installation task that includes transport and installation. The VLA model demonstrates strong generalization and few-shot capability, achieving 60% zero-shot and 100% one-shot success in the pick-up phase. In comparison, DQN can be made robust but requires additional noise injection during tuning, which increases the workload. Overall, the findings indicate that VLA offers practical advantages for changing tasks by reducing programming effort and enabling useful performance with minimal data, while DQN provides a viable baseline when sufficient tuning effort is acceptable.
Problem

Research questions and friction points this paper is trying to address.

Evaluates robot skill learning methods for construction automation
Compares Vision-Language-Action and Reinforcement Learning approaches
Assesses task performance and deployment effort in real-world settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLA model enables few-shot learning with strong generalization
Hierarchical RL provides robust baseline via teleoperation demonstrations
Benchmark compares sample efficiency between VLA and RL methods
Zhaofeng Hu
Department of Civil Engineering, Stony Brook University, Stony Brook, NY 11794 USA
Hongrui Yu
Department of Civil and Environmental Engineering, Virginia Tech, Blacksburg, VA 24061 USA
Vaidhyanathan Chandramouli
Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061 USA
Ci-Jyun Liang
Assistant Professor, Department of Civil Engineering, Stony Brook University
Robotics, Computer Vision, Imitation Learning, Digital Twins, Mixed Reality