🤖 AI Summary
This work exposes a critical security vulnerability in expressive human pose and shape (EHPS) estimation models widely used in digital human generation: while existing methods prioritize estimation accuracy, they largely neglect robustness and adversarial resilience. To address this gap, we propose Tangible Attack (TBA), a novel framework featuring a dual heterogeneous noise generator (DHNG) and a customized adversarial loss function, integrated with VAE-based latent modeling, ControlNet conditioning, and multi-gradient iterative optimization. TBA enables cross-model, highly controllable, and strongly disruptive targeted adversarial attacks. Experiments demonstrate that TBA increases EHPS estimation error by an average of 17.0% and up to 41.0%, providing the first systematic evidence of severe security risks in mainstream digital human generation systems. Our work establishes a vital benchmark for evaluating model robustness and offers concrete directions for enhancing reliability and trustworthiness in expressive human modeling.
📝 Abstract
Expressive human pose and shape estimation (EHPS) is crucial for digital human generation, especially in applications like live streaming. While existing research primarily focuses on reducing estimation errors, it largely neglects robustness and security aspects, leaving these systems vulnerable to adversarial attacks. To address this significant challenge, we propose the **Tangible Attack (TBA)**, a novel framework designed to generate adversarial examples capable of effectively compromising any digital human generation model. Our approach introduces a **Dual Heterogeneous Noise Generator (DHNG)**, which leverages Variational Autoencoders (VAE) and ControlNet to produce diverse, targeted noise tailored to the original image features. Additionally, we design a custom **adversarial loss function** to optimize the noise, ensuring both high controllability and potent disruption. By iteratively refining the adversarial sample through multi-gradient signals from both the noise and the state-of-the-art EHPS model, TBA substantially improves the effectiveness of adversarial attacks. Extensive experiments demonstrate TBA's superiority, achieving a remarkable 41.0% increase in estimation error, with an average improvement of approximately 17.0%. These findings expose significant security vulnerabilities in current EHPS models and highlight the need for stronger defenses in digital human generation systems.
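The core mechanism the abstract describes, iteratively refining an additive perturbation using gradient signals from the victim model under a targeted adversarial loss, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the linear `toy_pose_model` stands in for a real EHPS network, and the function names, step size, and L∞ noise budget are hypothetical choices, not the paper's DHNG/VAE/ControlNet pipeline.

```python
import numpy as np

def toy_pose_model(x, W):
    """Stand-in differentiable 'EHPS model': a linear map from image
    features to pose parameters. Purely illustrative."""
    return W @ x

def adversarial_attack(x, W, target_pose, steps=300, lr=0.005, eps=0.1):
    """Projected gradient descent on an additive perturbation delta,
    pushing the model's predicted pose toward an attacker-chosen
    target while keeping the noise inside an L-inf ball of radius eps.
    This loosely mirrors the 'multi-gradient iterative optimization'
    the abstract attributes to TBA, in its simplest targeted form."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        pred = toy_pose_model(x + delta, W)
        # Adversarial loss: squared distance to the target pose.
        grad_pred = 2.0 * (pred - target_pose)   # dL/dpred
        grad_delta = W.T @ grad_pred             # chain rule: dL/ddelta
        delta -= lr * grad_delta                 # descend on the loss
        delta = np.clip(delta, -eps, eps)        # keep noise near-imperceptible
    return x + delta
```

In a real attack the analytic chain rule would be replaced by automatic differentiation through the full estimation network, and the eps projection is what keeps the adversarial image visually close to the original.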