True to Tone? Quantifying Skin Tone Fidelity and Bias in Photographic-to-Virtual Human Pipelines

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the loss of realism and the skin tone bias that arise in virtual human generation when photographic inputs lack colorimetric calibration, distorting the reproduced skin color. To tackle this issue, the authors propose a fully automatic and scalable framework for evaluating skin tone fidelity. The framework integrates decoupling of skin tone and illumination, texture recoloring, real-time rendering, and quantitative analysis using the ΔE metric and the Individual Typology Angle (ITA) in CIELAB color space. By incorporating TRUST-based illumination compensation and MetaHuman rendering under multiple lighting configurations, the system enables low-overhead, end-to-end assessment. Evaluation across 19,848 rendered instances reveals consistently higher colorimetric errors for darker skin tones, and the phenotype-dependent behavior of the extraction strategies empirically confirms skin tone bias in current pipelines.
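For reference, the two metrics used in the quantitative analysis have standard CIELAB definitions; the forms below assume the common CIE76 variant of $\Delta E$ and the usual ITA convention, which may differ in detail from the paper's exact formulation:

$\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}$, and $\mathrm{ITA} = \arctan\!\left(\dfrac{L^{*} - 50}{b^{*}}\right) \cdot \dfrac{180}{\pi}$ (in degrees).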
📝 Abstract
Accurate reproduction of facial skin tone is essential for realism, identity preservation, and fairness in Virtual Human (VH) rendering. However, most accessible avatar creation pipelines rely on photographic inputs that lack colorimetric calibration, which can introduce inconsistencies and bias. We propose a fully automatic and scalable methodology to systematically evaluate skin tone fidelity across the VH generation pipeline. Our approach defines a full workflow that integrates skin color and illumination extraction, texture recolorization, real-time rendering, and quantitative color analysis. Using facial images from the Chicago Face Database (CFD), we compare skin tone extraction strategies based on cheek-region sampling, following the literature, and multidimensional masking derived from full-face analysis. Additionally, we test both strategies with lighting isolation, using the pre-trained TRUST framework, employed without any training or optimization within our pipeline. Extracted skin tones are applied to MetaHuman textures and rendered under multiple lighting configurations. Skin tone consistency is evaluated objectively in the CIELAB color space using the $\Delta E$ metric and the Individual Typology Angle (ITA). The proposed methodology operates without manual intervention and, with the exception of pre-trained illumination compensation modules, the pipeline does not include learning or training stages, enabling low computational cost and large-scale evaluation. Using this framework, we generate and analyze approximately 19,848 rendered instances. Our results show phenotype-dependent behavior of extraction strategies and consistently higher colorimetric errors for darker skin tones.
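A minimal sketch of the colorimetric evaluation step described above, assuming a standard sRGB-to-CIELAB conversion (D65 white point) and the CIE76 form of ΔE; the function names and sample values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 reference white)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function to obtain linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> CIE XYZ (D65 primaries).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ lin
    # Normalize by the D65 reference white, then apply the CIELAB nonlinearity.
    xyz_n = xyz / np.array([0.95047, 1.00000, 1.08883])
    delta = 6.0 / 29.0
    f = np.where(xyz_n > delta ** 3, np.cbrt(xyz_n), xyz_n / (3 * delta ** 2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB points."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def ita_degrees(lab):
    """Individual Typology Angle; larger angles correspond to lighter skin tones."""
    L, _, b = lab
    return float(np.degrees(np.arctan2(L - 50.0, b)))

# Illustrative values only, not taken from the paper's data.
source_lab = srgb_to_lab([0.55, 0.38, 0.28])    # tone extracted from a photograph
rendered_lab = srgb_to_lab([0.50, 0.34, 0.24])  # the same tone after rendering
print("Delta E (CIE76):", round(delta_e_76(source_lab, rendered_lab), 2))
print("ITA source -> rendered:",
      round(ita_degrees(source_lab), 1), "->", round(ita_degrees(rendered_lab), 1))
```

In a pipeline like the one described, the reference tone would come from the calibrated or illumination-compensated photograph and the second tone from the rendered MetaHuman, with ΔE quantifying reproduction error and ITA grouping results by phenotype.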
Problem

Research questions and friction points this paper is trying to address.

skin tone fidelity
virtual human
color bias
photographic-to-virtual pipeline
fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

skin tone fidelity
virtual human rendering
colorimetric bias
illumination isolation
CIELAB ΔE
Authors

Gabriel Ferri Schneider
PUCRS, Porto Alegre – RS – Brazil

Erick Menezes
UNIT Araujo – SE – Brazil; PUCRS, Porto Alegre – RS – Brazil; INCT-SANI, Brazil

Rafael Mecenas
UNIT Araujo – SE – Brazil; PUCRS, Porto Alegre – RS – Brazil; INCT-SANI, Brazil

Paulo Knob
PUCRS, Porto Alegre – RS – Brazil

Victor Araujo
PUCRS, Porto Alegre – RS – Brazil; UNIT Araujo – SE – Brazil; Kunumi Institute, Brazil; INCT-SANI, Brazil

Soraia Raupp Musse
PUCRS (Pontifical Catholic University of Rio Grande do Sul), Porto Alegre – RS – Brazil