Found in Translation: semantic approaches for enhancing AI interpretability in face verification

📅 2025-01-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited interpretability of face recognition models, this paper proposes a cognitively inspired global-local collaborative semantic explanation framework. Leveraging facial landmark localization, it embeds human-understandable semantic concepts (e.g., “eyebrow shape”, “lip color”) into the XAI pipeline, jointly modeling pixel-level similarity and semantic-level associations to generate multi-scale saliency maps and LLM-generated natural language explanations. Its key contribution is the first realization of a cross-granularity cognitive leap—from low-level visual features to high-level semantic concepts—within face recognition explainability. Quantitative experiments and a user study (N=120) demonstrate significant improvements in explanation understandability (+42%) and trustworthiness, with an 87% user preference rate over conventional pixel-wise heatmaps. The framework exhibits clear practical value in high-stakes domains such as finance and security.

Technology Category

Application Category

📝 Abstract
The increasing complexity of machine learning models in computer vision, particularly in face verification, requires the development of explainable artificial intelligence (XAI) to enhance interpretability and transparency. This study extends previous work by integrating semantic concepts derived from human cognitive processes into XAI frameworks to bridge the comprehension gap between model outputs and human understanding. We propose a novel approach combining global and local explanations, using semantic features defined by user-selected facial landmarks to generate similarity maps and textual explanations via large language models (LLMs). The methodology was validated through quantitative experiments and user feedback, demonstrating improved interpretability. Results indicate that our semantic-based approach, particularly the most detailed set, offers a more nuanced understanding of model decisions than traditional methods. User studies highlight a preference for our semantic explanations over traditional pixelbased heatmaps, emphasizing the benefits of human-centric interpretability in AI. This work contributes to the ongoing efforts to create XAI frameworks that align AI models behaviour with human cognitive processes, fostering trust and acceptance in critical applications.
Problem

Research questions and friction points this paper is trying to address.

AI Interpretability
Facial Recognition
Decision Process Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Technique
Explainable AI
Facial Recognition
🔎 Similar Papers
No similar papers found.
Miriam Doh
Miriam Doh
PhD student, Université Libre de Bruxelles (ULB), Univerisité de Mons (UMONS)
Computer visionFace analysisTrustworthy AI
Caroline Mazini Rodrigues
Caroline Mazini Rodrigues
IRISA - CNRS
Machine LearningExplainable AIInformation RetrievalComputer VisionImage Processing
N
N. Boutry
Laboratoire de Recherche de l’EPITA – LRE, 14-16, Rue Voltaire, Le Kremlin-Bicêtre, 94270, France
L
L. Najman
Univ Gustave Eiffel, CNRS, LIGM, Marne-la-Vallée, 77454, France
M
M. Mancas
ISIA lab - Université de Mons (UMONS), Mons, Belgium
B
B. Gosselin
ISIA lab - Université de Mons (UMONS), Mons, Belgium