milliMamba: Specular-Aware Human Pose Estimation via Dual mmWave Radar with Multi-Frame Mamba Fusion

📅 2025-12-23
🤖 AI Summary
To address the challenge of sparse radar signals and missing joint detections in millimeter-wave (mmWave) radar-based human pose estimation (HPE) caused by mirror-like reflections, this paper proposes an end-to-end spatiotemporal joint modeling framework. Methodologically, it pioneers the integration of the Mamba state-space model into radar HPE; introduces a Cross-View Fusion Mamba encoder for effective dual-radar cross-view feature fusion; and designs a Spatio-Temporal-Cross Attention decoder to jointly model multi-frame temporal dynamics. Additionally, a velocity-constrained loss is incorporated to enhance motion smoothness. Evaluated on the TransHuPR and HuPR benchmarks, the method achieves absolute AP improvements of 11.0 and 14.6 over strong baselines, respectively, demonstrating substantial gains in accuracy and robustness while maintaining computationally efficient inference.
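To make the decoder idea in the summary concrete, here is a minimal, hedged sketch of the kind of cross-attention step a Spatio-Temporal-Cross Attention decoder could use: learned joint queries attend over flattened spatio-temporal radar features from the encoder. The shapes and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # queries: (J, d) joint tokens; keys/values: (T*N, d) flattened
    # spatio-temporal radar features produced by the encoder
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (J, T*N) similarity
    return softmax(scores, axis=-1) @ values  # (J, d) attended features
```

Because each joint query can pool evidence from all frames and all feature locations, a joint missing in one frame (e.g. due to specular reflection) can still be inferred from neighboring frames.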

📝 Abstract
Millimeter-wave radar offers a privacy-preserving and lighting-invariant alternative to RGB sensors for the Human Pose Estimation (HPE) task. However, radar signals are often sparse due to specular reflection, making the extraction of robust features from radar signals highly challenging. To address this, we present milliMamba, a radar-based 2D human pose estimation framework that jointly models spatio-temporal dependencies across both the feature extraction and decoding stages. Specifically, given the high dimensionality of radar inputs, we adopt a Cross-View Fusion Mamba encoder to efficiently extract spatio-temporal features from longer sequences with linear complexity. A Spatio-Temporal-Cross Attention decoder then predicts joint coordinates across multiple frames. Together, this spatio-temporal modeling pipeline enables the model to leverage contextual cues from neighboring frames and joints to infer missing joints caused by specular reflections. To reinforce motion smoothness, we incorporate a velocity loss alongside the standard keypoint loss during training. Experiments on the TransHuPR and HuPR datasets demonstrate that our method achieves significant performance improvements, exceeding the baselines by 11.0 AP and 14.6 AP, respectively, while maintaining reasonable complexity. Code: https://github.com/NYCU-MAPL/milliMamba
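The abstract's training objective, a keypoint loss plus a velocity loss for motion smoothness, can be sketched as below. This is a minimal illustration under assumed conventions (L2 keypoint error, first-order temporal differences as velocities, a hypothetical weight `lam`); the paper's exact loss definitions and weighting may differ.

```python
import numpy as np

def keypoint_loss(pred, gt):
    # pred, gt: (T, J, 2) joint coordinates over T frames and J joints;
    # mean L2 error over all frames and joints
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def velocity_loss(pred, gt):
    # first-order temporal differences approximate joint velocities
    pred_vel = np.diff(pred, axis=0)
    gt_vel = np.diff(gt, axis=0)
    return np.mean(np.linalg.norm(pred_vel - gt_vel, axis=-1))

def total_loss(pred, gt, lam=0.5):
    # velocity term penalizes jittery motion even when per-frame
    # keypoint error is small
    return keypoint_loss(pred, gt) + lam * velocity_loss(pred, gt)
```

The velocity term is what encourages temporally smooth trajectories: a prediction that oscillates around the ground truth incurs extra cost even if its per-frame positional error is constant.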
Problem

Research questions and friction points this paper is trying to address.

Estimates 2D human poses using millimeter-wave radar
Addresses sparse signals from specular reflection in radar data
Models spatio-temporal dependencies to infer missing joints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual mmWave radar for specular-aware pose estimation
Cross-View Fusion Mamba encoder with linear complexity
Spatio-Temporal-Cross Attention decoder for multi-frame prediction
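The "linear complexity" claim for the Mamba encoder comes from the state-space recurrence at its core, which processes a length-T sequence in O(T). A toy per-channel (diagonal) scan is sketched below; the parameter names, shapes, and static (non-input-dependent) parameters are simplifying assumptions, not the selective-scan used by actual Mamba blocks.

```python
import numpy as np

def ssm_scan(x, a, b, c):
    # x: (T, D) input sequence; a, b, c: (D,) per-channel SSM parameters
    # recurrence: h_t = a * h_{t-1} + b * x_t,  y_t = c * h_t
    T, D = x.shape
    h = np.zeros(D)
    ys = np.empty((T, D))
    for t in range(T):
        h = a * h + b * x[t]  # one O(D) step per frame -> O(T*D) total
        ys[t] = c * h
    return ys
```

Unlike self-attention, whose cost grows quadratically with sequence length, this recurrence touches each frame once, which is why state-space encoders scale to the longer radar sequences the abstract mentions.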
Niraj Prakash Kini
National Yang Ming Chiao Tung University, Taiwan
Shiau-Rung Tsai
National Yang Ming Chiao Tung University, Taiwan
Guan-Hsun Lin
National Yang Ming Chiao Tung University, Taiwan
Wen-Hsiao Peng
National Yang Ming Chiao Tung University, Taiwan
Ching-Wen Ma
National Yang Ming Chiao Tung University, Taiwan
Jenq-Neng Hwang
University of Washington, USA