Learning Human-Object Interaction for 3D Human Pose Estimation from LiDAR Point Clouds

📅 2026-03-17
🤖 AI Summary
This work addresses the challenges posed by spatial ambiguity and uneven point distribution in LiDAR point clouds—particularly around body parts involved in human-object interactions—which significantly degrade the accuracy of 3D human pose estimation. To tackle these issues, we propose the HOIL framework, which leverages human-object interaction-aware contrastive learning (HOICL) to model interaction semantics and introduces a contact-aware part-guided pooling (CPPool) mechanism to mitigate local ambiguity and class imbalance. Furthermore, we incorporate a temporal keypoint refinement strategy guided by contact cues to enhance temporal consistency. Our approach substantially improves keypoint localization accuracy in frequently interacting regions such as the hands and feet, achieving state-of-the-art performance overall.
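The paper does not spell out CPPool's exact operation, but the summary's idea—compressing overrepresented body parts while preserving points from sparse, contacting parts—can be sketched as a simple per-part budgeted subsampling. All names, the fixed per-part budget, and the contact-part set below are illustrative assumptions, not the paper's specification.

```python
import random
from collections import defaultdict

def part_guided_pool(points, part_ids, contact_parts, budget=8, seed=0):
    """Rebalance a point cloud across body parts (a CPPool-like sketch).

    Points from contacting parts (e.g. a hand touching an object) are kept
    intact; overrepresented non-contacting parts are randomly subsampled
    down to `budget` points each.
    """
    rng = random.Random(seed)
    by_part = defaultdict(list)
    for i, pid in enumerate(part_ids):
        by_part[pid].append(i)

    keep = []
    for pid, idx in by_part.items():
        if pid in contact_parts or len(idx) <= budget:
            keep.extend(idx)                  # preserve sparse / interacting parts
        else:
            keep.extend(rng.sample(idx, budget))  # compress overrepresented parts
    keep.sort()
    return [points[i] for i in keep], [part_ids[i] for i in keep]
```

With 80 torso points and 20 hand points in contact, this keeps all 20 hand points but only 8 torso points, shifting representational capacity toward the interaction region.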

📝 Abstract
Understanding humans from LiDAR point clouds is one of the most critical tasks in autonomous driving due to its close relationship to pedestrian safety, yet it remains challenging in the presence of diverse human-object interactions and cluttered backgrounds. However, existing methods largely overlook the potential of leveraging human-object interactions to build robust 3D human pose estimation frameworks. Two major challenges motivate the incorporation of human-object interaction. First, human-object interactions introduce spatial ambiguity between human and object points, which often leads to erroneous 3D human keypoint predictions in interaction regions. Second, there exists severe class imbalance in the number of points between interacting and non-interacting body parts, with interaction-frequent regions such as the hands and feet being sparsely observed in LiDAR data. To address these challenges, we propose a Human-Object Interaction Learning (HOIL) framework for robust 3D human pose estimation from LiDAR point clouds. To mitigate the spatial ambiguity issue, we present human-object interaction-aware contrastive learning (HOICL), which effectively enhances feature discrimination between human and object points, particularly in interaction regions. To alleviate the class imbalance issue, we introduce contact-aware part-guided pooling (CPPool), which adaptively reallocates representational capacity by compressing overrepresented points while preserving informative points from interacting body parts. In addition, we present an optional contact-based temporal refinement that corrects erroneous per-frame keypoint estimates using contact cues over time. As a result, our HOIL effectively leverages human-object interaction to resolve spatial ambiguity and class imbalance in interaction regions. Code will be released.
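The exact form of the HOICL objective is not given in this listing; a generic point-level supervised contrastive loss that it plausibly resembles—pulling embeddings of same-class points (human vs. object) together and pushing the two classes apart—can be sketched as follows. The feature dimensions, temperature, and function names are assumptions for illustration only.

```python
import math

def interaction_contrastive_loss(feats, is_human, temperature=0.1):
    """Supervised contrastive loss over point features (an HOICL-like sketch).

    For each anchor point, every other point with the same human/object
    label is a positive; the loss is the mean InfoNCE term over positives.
    Inputs are assumed to be nonzero feature vectors.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]

    z = [normalize(f) for f in feats]
    n = len(z)
    loss, count = 0.0, 0
    for i in range(n):
        # exponentiated cosine similarities to all other points
        sims = {j: math.exp(dot(z[i], z[j]) / temperature)
                for j in range(n) if j != i}
        denom = sum(sims.values())
        positives = [j for j in sims if is_human[j] == is_human[i]]
        for j in positives:
            loss -= math.log(sims[j] / denom)
            count += 1
    return loss / count
```

Features that already separate human from object points yield a much lower loss than features where the two classes are mixed, which is the gradient signal that sharpens the human/object boundary in interaction regions.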
Problem

Research questions and friction points this paper is trying to address.

human-object interaction
3D human pose estimation
LiDAR point clouds
spatial ambiguity
class imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-Object Interaction
LiDAR Point Clouds
3D Human Pose Estimation
Contrastive Learning
Class Imbalance