🤖 AI Summary
This work addresses the structural anomalies and generation artifacts prevalent in existing text-guided human pose editing methods, as well as the absence of fine-grained metrics for evaluating pose consistency. To bridge these gaps, we propose the first unified evaluation framework based on a layer-selective multimodal large language model (MLLM), integrating contrastive LoRA fine-tuning with Layer Sensitivity Analysis (LSA) to precisely identify the optimal feature layer. This enables simultaneous authenticity detection and multidimensional quality regression. Furthermore, we introduce HPE-Bench, a new benchmark dataset designed to support systematic evaluation of pose editing outputs. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on both tasks, effectively bridging the divide between forensic analysis of generated content and comprehensive quality assessment.
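As a rough illustration of the LSA idea, the sketch below scores each MLLM layer's pooled features by a class-separability criterion and selects the most sensitive layer. The paper's actual LSA criterion is not given here, so the Fisher-style score, tensor shapes, and names are assumptions, not the authors' implementation.

```python
# Minimal LSA-style layer-selection sketch (hypothetical criterion).
# Assumption: per-layer pooled features and binary authenticity labels
# are available; we rank layers by a Fisher-style separability score.
import torch

def layer_sensitivity_scores(hidden_states, labels):
    """hidden_states: list of [N, D] pooled features, one per MLLM layer.
    labels: [N] binary tensor (1 = authentic, 0 = artifact)."""
    scores = []
    for feats in hidden_states:
        real, fake = feats[labels == 1], feats[labels == 0]
        between = (real.mean(0) - fake.mean(0)).pow(2).sum()   # class-mean gap
        within = real.var(0).sum() + fake.var(0).sum() + 1e-8  # in-class spread
        scores.append((between / within).item())
    return scores

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, num_layers = 64, 32, 6
    labels = torch.randint(0, 2, (n,))
    # Synthetic features: layer 4 is made artificially more separable.
    layers = [torch.randn(n, d) for _ in range(num_layers)]
    layers[4] += 3.0 * labels.float().unsqueeze(1)
    scores = layer_sensitivity_scores(layers, labels)
    best = max(range(num_layers), key=scores.__getitem__)
    print("per-layer sensitivity:", [f"{s:.2f}" for s in scores])
    print("selected feature layer:", best)
```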
📝 Abstract
Text-guided human pose editing has gained significant traction in AIGC applications. However, it remains plagued by structural anomalies and generative artifacts. Existing evaluation metrics often isolate authenticity detection from quality assessment, failing to provide fine-grained insights into pose-specific inconsistencies. To address these limitations, we introduce HPE-Bench, a specialized benchmark comprising 1,700 standardized samples from 17 state-of-the-art editing models, offering both authenticity labels and multi-dimensional quality scores. Furthermore, we propose a unified framework based on layer-selective multimodal large language models (MLLMs). By employing contrastive LoRA tuning and a novel layer sensitivity analysis (LSA) mechanism, we identify the optimal feature layer for pose evaluation. Our framework achieves superior performance in both authenticity detection and multi-dimensional quality regression, effectively bridging the gap between forensic detection and quality assessment.
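For intuition on the framework's dual objective, below is a minimal sketch of a head that jointly performs authenticity classification and multi-dimensional quality regression on features from the selected layer. The head architecture, feature dimension, loss weighting, and quality-dimension count are hypothetical assumptions, not the paper's actual design.

```python
# Hypothetical dual-task head: authenticity detection + quality regression.
# Assumption: `feats` stands in for pooled features from the LSA-selected
# MLLM layer; dimensions and the joint loss are illustrative only.
import torch
import torch.nn as nn

class UnifiedPoseEvalHead(nn.Module):
    def __init__(self, feat_dim=4096, num_quality_dims=4):
        super().__init__()
        self.auth_head = nn.Linear(feat_dim, 2)                 # real vs. edited
        self.quality_head = nn.Linear(feat_dim, num_quality_dims)  # per-dimension scores

    def forward(self, feats):
        return self.auth_head(feats), self.quality_head(feats)

head = UnifiedPoseEvalHead()
feats = torch.randn(8, 4096)                # stand-in for selected-layer features
auth_labels = torch.randint(0, 2, (8,))     # authenticity labels
quality_targets = torch.rand(8, 4)          # normalized quality scores in [0, 1]
logits, scores = head(feats)
# Joint objective: cross-entropy for detection + MSE for quality regression.
loss = nn.functional.cross_entropy(logits, auth_labels) \
     + nn.functional.mse_loss(torch.sigmoid(scores), quality_targets)
loss.backward()
print(f"joint loss: {loss.item():.3f}")
```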