Autonomous Control of Redundant Hydraulic Manipulator Using Reinforcement Learning with Action Feedback

📅 2022-10-23
🏛️ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
Hydraulically driven redundant manipulators are difficult to control autonomously because their system models are complex and depend strongly on precise dynamic parameters. Method: This paper proposes an end-to-end data-driven approach requiring only minimal simulation priors and teleoperated demonstration data. It employs actuator networks to model the nonlinear hydraulic dynamics and integrates forward-kinematics-guided supervision into a modified DDPG framework, enhanced with Ornstein-Uhlenbeck noise for exploration, to directly output joint-level commands for 3D end-effector position tracking. Contribution/Results: The paper introduces kinematic feedback within the RL action-selection mechanism, eliminating the need for system identification, inverse-dynamics modeling, or post-deployment fine-tuning. Evaluated on a scaled 3R1P hydraulic forwarder crane, the policy trained purely in simulation transfers zero-shot to hardware, achieving high-precision 3D position tracking. This significantly advances the feasibility and robustness of data-driven control for strongly nonlinear hydraulic systems.
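The summary mentions Ornstein-Uhlenbeck (OU) noise for exploration in the modified DDPG framework. The paper does not publish its implementation; a minimal sketch of the standard Euler-Maruyama discretization of the OU process, with commonly used default parameters (`theta`, `sigma`, `dt` are assumptions, not the paper's values), looks like this:

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise,
    often added to a deterministic DDPG policy's actions."""

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu = mu * np.ones(dim)      # long-run mean the noise reverts to
        self.theta = theta               # mean-reversion rate
        self.sigma = sigma               # diffusion (noise) scale
        self.dt = dt                     # integration step
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        """Restart the process at its mean (e.g. at episode start)."""
        self.x = self.mu.copy()

    def sample(self):
        """One Euler-Maruyama step: dx = theta*(mu - x)*dt + sigma*sqrt(dt)*N(0,1)."""
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.mu.shape))
        self.x = self.x + dx
        return self.x
```

Unlike i.i.d. Gaussian noise, consecutive OU samples are correlated, which produces smoother exploratory motions; this is why OU noise is a common choice for physical actuators such as hydraulic valves.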

📝 Abstract
This article presents an entirely data-driven approach for autonomous control of redundant manipulators with hydraulic actuation. The approach only requires minimal system information, which is inherited from a simulation model. The non-linear hydraulic actuation dynamics are modeled using actuator networks from the data gathered during the manual operation of the manipulator to effectively emulate the real system in a simulation environment. A neural network control policy for autonomous control, based on end-effector (EE) position tracking, is then learned using Reinforcement Learning (RL) with Ornstein-Uhlenbeck process noise (OUNoise) for efficient exploration. The RL agent also receives feedback based on supervised learning of the forward kinematics, which facilitates selecting the most suitable action from exploration. The control policy directly provides the joint variables as outputs based on the provided target EE position while taking into account the system dynamics. The joint variables are then mapped to hydraulic valve commands, which are fed to the system without further modifications. The proposed approach is implemented on a scaled hydraulic forwarder crane with three revolute and one prismatic joint to track the desired position of the EE in 3-Dimensional (3D) space. With the emulated dynamics and extensive learning in simulation, the results demonstrate the feasibility of deploying the learned controller directly on the real system.
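The abstract describes forward-kinematics feedback that helps the agent select the most suitable action from exploration. The paper's crane is a 3R1P mechanism; as a hedged illustration of the idea only, the sketch below scores candidate joint configurations with a toy planar 2R forward-kinematics model (the link lengths, the 2R geometry, and `select_action` are hypothetical, not the paper's implementation):

```python
import numpy as np

def fk_planar_2r(q, l1=1.0, l2=0.8):
    """Toy forward kinematics of a planar 2R arm (NOT the paper's 3R1P crane):
    maps joint angles q = [q1, q2] to the end-effector position [x, y]."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def select_action(candidates, target, fk=fk_planar_2r):
    """Kinematic action feedback: among explored candidate joint
    configurations, pick the one whose FK-predicted EE position
    lies closest to the target."""
    errors = [np.linalg.norm(fk(q) - target) for q in candidates]
    return candidates[int(np.argmin(errors))]
```

In this scheme the forward-kinematics model acts as a cheap critic over explored actions, which is how kinematic feedback can steer exploration without an inverse-dynamics model.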
Problem

Research questions and friction points this paper is trying to address.

Autonomous control of hydraulic manipulators using reinforcement learning
Modeling non-linear hydraulic dynamics with minimal system information
Tracking end-effector position in 3D space with learned control policy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven hydraulic manipulator control using RL
Neural network policy with OUNoise for exploration
Simulation-emulated dynamics enable real-world deployment
Rohit Dhakate
Department of Smart Systems Technologies in the Control of Networked Systems Group, University of Klagenfurt
Christian Brommer
University of Klagenfurt
State Estimation · Sensor Fusion · Aerial Robotics · Robotics · Autonomous Systems
Christoph Böhm
Ph.D. Student at the University of Klagenfurt
UAV · State Estimation · Self-Calibration · Observability · Trajectory Planning
Harald Gietler
Department of Smart Systems Technologies in the Sensors and Actuators Group, University of Klagenfurt
S. Weiss
Department of Smart Systems Technologies in the Control of Networked Systems Group, University of Klagenfurt
J. Steinbrener
Department of Smart Systems Technologies in the Control of Networked Systems Group, University of Klagenfurt