Face-Human-Bench: A Comprehensive Benchmark of Face and Human Understanding for Multi-modal Assistants

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal assistants lack a comprehensive, scientific benchmark for evaluating face and human understanding. Method: This paper introduces Face-Human-Bench, a bilingual (English–Chinese) benchmark for this purpose, comprising a 900-problem development set and an 1,800-problem test set. The authors propose a hierarchical ability taxonomy with three levels of abilities, and build a semi-automatic data pipeline that produces benchmark problems from images and annotations in publicly available face and human datasets. Contribution/Results: Evaluating 25 mainstream multimodal large language models (MLLMs), the paper analyzes the correlation between abilities, the impact of the relative position of targets on performance, and the effect of chain-of-thought (CoT) prompting. Inspired by multi-modal agents, it also identifies which abilities of MLLMs need to be supplemented by specialist models.

📝 Abstract
Faces and humans are crucial elements in social interaction and are widely included in everyday photos and videos. Therefore, a deep understanding of faces and humans will enable multi-modal assistants to achieve improved response quality and broadened application scope. Currently, the multi-modal assistant community lacks a comprehensive and scientific evaluation of face and human understanding abilities. In this paper, we first propose a hierarchical ability taxonomy that includes three levels of abilities. Then, based on this taxonomy, we collect images and annotations from publicly available datasets in the face and human community and build a semi-automatic data pipeline to produce problems for the new benchmark. Finally, the obtained Face-Human-Bench comprises a development set with 900 problems and a test set with 1800 problems, supporting both English and Chinese. We conduct evaluations over 25 mainstream multi-modal large language models (MLLMs) with our Face-Human-Bench, focusing on the correlation between abilities, the impact of the relative position of targets on performance, and the impact of Chain of Thought (CoT) prompting on performance. Moreover, inspired by multi-modal agents, we also explore which abilities of MLLMs need to be supplemented by specialist models.
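The abstract describes a three-level ability taxonomy and per-ability evaluation of MLLMs. As a minimal sketch of how scores on such a hierarchy might be aggregated, the snippet below tags each problem with a leaf ability and rolls leaf accuracies up to parent abilities. The taxonomy entries and field names here are illustrative assumptions, not the paper's actual schema.

```python
from collections import defaultdict

# Hypothetical leaf ability -> (level-2 parent, level-1 root) mapping.
# These ability names are illustrative, not Face-Human-Bench's real taxonomy.
TAXONOMY = {
    "age_estimation": ("facial_attribute", "face_understanding"),
    "expression_recognition": ("facial_attribute", "face_understanding"),
    "action_recognition": ("body_attribute", "human_understanding"),
}

def score(problems):
    """problems: list of dicts with 'ability', 'answer', 'prediction'.
    Returns accuracy at every taxonomy level plus an overall score."""
    correct, total = defaultdict(int), defaultdict(int)
    for p in problems:
        leaf = p["ability"]
        # Credit the leaf ability, both ancestor levels, and the overall score.
        for key in (leaf,) + TAXONOMY[leaf] + ("overall",):
            total[key] += 1
            correct[key] += int(p["prediction"] == p["answer"])
    return {key: correct[key] / total[key] for key in total}

demo = [
    {"ability": "age_estimation", "answer": "A", "prediction": "A"},
    {"ability": "expression_recognition", "answer": "B", "prediction": "C"},
    {"ability": "action_recognition", "answer": "D", "prediction": "D"},
]
scores = score(demo)
print(scores["overall"])             # 2 of 3 correct
print(scores["face_understanding"])  # 1 of 2 correct
```

Rolling accuracies up through the hierarchy lets a single pass over the test set report both fine-grained leaf scores and the coarser level-1/level-2 summaries the taxonomy defines.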
Problem

Research questions and friction points this paper is trying to address.

Multimodal Assistants
Face and Body Recognition
Evaluation Framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Face-Human-Bench
Multi-modal Assistants Evaluation
Skill-tree Testing Framework