Validation of AI-Based 3D Human Pose Estimation in a Cyber-Physical Environment

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the reliability-verification challenge of 3D pose estimation for vulnerable road users (VRUs) in autonomous driving systems. We propose a cyber-physical testing framework integrating Vehicle-in-the-Loop (ViL) and motion-capture laboratory environments. Our method introduces a real-time virtual-scene projection stimulation technique, combining monocular AI-based skeletal detection, high-fidelity Unreal Engine 5 synthetic rendering, full-body optical motion capture, and real-time animated rendering to enable closed-loop consistency evaluation between real and virtual human pose estimates. Experimental results show that under stable motion conditions, pose estimation errors between the two domains remain below 3°, achieving 98.2% consistency. However, significant deviations persist under dynamic occlusion and complex cycling postures, with average error increasing by 42%. This work establishes a reproducible, scalable, physics-informed verification paradigm for assessing the trustworthiness of AI-driven perception in autonomous driving.
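As an illustration of the closed-loop consistency evaluation the summary describes, the sketch below shows one plausible way to compare per-joint angles between real-world (RW) and cyber-physical (CP) skeleton streams against the paper's 3° threshold. This is not the authors' implementation; the frame format (triples of 3D joint positions per frame) and all function names are assumptions.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def consistency(rw_frames, cp_frames, threshold_deg=3.0):
    """Mean RW/CP joint-angle error and the fraction of frames below threshold.

    Each frame is a (proximal, joint, distal) triple of 3D positions,
    e.g. (shoulder, elbow, wrist) for the elbow angle.
    """
    errors = np.array([
        abs(joint_angle(*rw) - joint_angle(*cp))
        for rw, cp in zip(rw_frames, cp_frames)
    ])
    return errors.mean(), (errors < threshold_deg).mean()
```

With this metric, the reported 98.2% consistency would correspond to a `consistency(...)` fraction of 0.982 over a test run; per-joint results could be aggregated across the full skeleton in the same way.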

📝 Abstract
Ensuring safe and realistic interactions between automated driving systems and vulnerable road users (VRUs) in urban environments requires advanced testing methodologies. This paper presents a test environment that combines a Vehicle-in-the-Loop (ViL) test bench with a motion laboratory, demonstrating the feasibility of cyber-physical (CP) testing of vehicle-pedestrian and vehicle-cyclist interactions. Building upon previous work focused on pedestrian localization, we further validate a human pose estimation (HPE) approach through a comparative analysis of real-world (RW) and virtual representations of VRUs. The study examines the perception of full-body motion using a commercial monocular camera-based 3D skeletal detection AI. The virtual scene is generated in Unreal Engine 5, where VRUs are animated in real time and projected onto a screen to stimulate the camera. The proposed stimulation technique ensures the correct perspective, enabling realistic vehicle perception. To assess the accuracy and consistency of HPE across RW and CP domains, we analyze the reliability of detections as well as variations in movement trajectories and joint estimation stability. The validation includes dynamic test scenarios where human avatars, both walking and cycling, are monitored under controlled conditions. Our results show a strong alignment in HPE between RW and CP test conditions for stable motion patterns, while notable inaccuracies persist under dynamic movements and occlusions, particularly for complex cyclist postures. These findings contribute to refining CP testing approaches for evaluating next-generation AI-based vehicle perception and to enhancing interaction models of automated vehicles and VRUs in CP environments.
Problem

Research questions and friction points this paper is trying to address.

Validating AI-based 3D human pose estimation in cyber-physical environments
Assessing accuracy of vehicle-pedestrian and vehicle-cyclist interaction testing
Evaluating reliability of human pose detection across real and virtual domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Vehicle-in-the-Loop with motion lab
Uses Unreal Engine 5 for virtual VRU animation
Validates AI-based 3D pose estimation accuracy
Lisa Marie Otto
Fachgebiet Kraftfahrzeuge, TU Berlin, Berlin, Germany
Michael Kaiser
Fachgebiet Kraftfahrzeuge, TU Berlin, Berlin, Germany
Daniel Seebacher
University of Konstanz
Machine Learning, Visual Analytics, Spatio-Temporal Events
Steffen Müller
Professor of Automotive Engineering (Kraftfahrzeugtechnik), TU Berlin
Automotive Engineering