Feature-Function Curvature Analysis: A Geometric Framework for Explaining Differentiable Models

πŸ“… 2025-10-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing XAI methods predominantly yield static, one-dimensional feature attributions, failing to capture nonlinearities, feature interactions, and training-time dynamics. To address this, we propose Feature-Function Curvature Analysis (FFCA), a geometric framework that models the dynamic role of features in differentiable models. FFCA constructs a four-dimensional explanation signature quantifying each feature's impact, volatility, nonlinearity, and interaction strength, leveraging higher-order gradient curvature, and uses Dynamic Archetype Analysis to track how these signatures evolve across the entire training trajectory. Experiments reveal a hierarchical learning pattern: models first acquire linear representations before progressively learning interactions. FFCA further enables early diagnosis of insufficient model capacity and of overfitting. Empirically, FFCA outperforms baselines on both static attribution and dynamic diagnostic tasks, establishing a geometric foundation for interpretable AI.

πŸ“ Abstract
Explainable AI (XAI) is critical for building trust in complex machine learning models, yet mainstream attribution methods often provide an incomplete, static picture of a model's final state. By collapsing a feature's role into a single score, they are confounded by non-linearity and interactions. To address this, we introduce Feature-Function Curvature Analysis (FFCA), a novel framework that analyzes the geometry of a model's learned function. FFCA produces a 4-dimensional signature for each feature, quantifying its: (1) Impact, (2) Volatility, (3) Non-linearity, and (4) Interaction. Crucially, we extend this framework into Dynamic Archetype Analysis, which tracks the evolution of these signatures throughout the training process. This temporal view moves beyond explaining what a model learned to revealing how it learns. We provide the first direct, empirical evidence of hierarchical learning, showing that models consistently learn simple linear effects before complex interactions. Furthermore, this dynamic analysis provides novel, practical diagnostics for identifying insufficient model capacity and predicting the onset of overfitting. Our comprehensive experiments demonstrate that FFCA, through its static and dynamic components, provides the essential geometric context that transforms model explanation from simple quantification to a nuanced, trustworthy analysis of the entire learning process.
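The four signature dimensions described in the abstract can be sketched numerically. The construction below is an illustrative reading, not the authors' implementation: it estimates impact as the mean gradient magnitude over a sample of inputs, volatility as the dispersion of those gradients, non-linearity as the mean absolute Hessian diagonal, and interaction as the mean absolute off-diagonal curvature. The finite-difference estimators and the toy model are hypothetical stand-ins for a trained differentiable model.

```python
import numpy as np

def model(x):
    # Hypothetical "learned function": linear in x0, quadratic in x1,
    # with an x0*x2 interaction term.
    return 2.0 * x[0] + x[1] ** 2 + x[0] * x[2]

def gradient(f, x, eps=1e-4):
    # Central finite-difference gradient of f at x.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def hessian(f, x, eps=1e-3):
    # Central finite-difference Hessian of f at x.
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps * eps)
    return H

def ffca_signature(f, X):
    """Per-feature (impact, volatility, non-linearity, interaction)
    averaged over a batch of inputs X -- one plausible reading of the
    paper's 4-dimensional signature, not its actual definition."""
    grads = np.array([gradient(f, x) for x in X])
    hessians = np.array([hessian(f, x) for x in X])
    impact = np.abs(grads).mean(axis=0)            # mean gradient magnitude
    volatility = grads.std(axis=0)                 # gradient dispersion
    diag = np.array([np.diag(H) for H in hessians])
    nonlinearity = np.abs(diag).mean(axis=0)       # own-curvature
    off = np.array([np.abs(H - np.diag(np.diag(H))).sum(axis=1)
                    for H in hessians])
    interaction = off.mean(axis=0)                 # cross-curvature
    return impact, volatility, nonlinearity, interaction
```

On the toy model, this recovers the intended geometry: the quadratic feature x1 shows non-linearity but no interaction, while x0 and x2 show interaction through their product term.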
Problem

Research questions and friction points this paper is trying to address.

Explaining complex models beyond static attribution scores
Quantifying feature impact, volatility, non-linearity, and interactions
Tracking feature signature evolution throughout the training process
Innovation

Methods, ideas, or system contributions that make the work stand out.

FFCA framework analyzes model geometry with 4D feature signatures
Dynamic Archetype Analysis tracks signature evolution during training
Dynamic signatures yield diagnostics for insufficient model capacity and overfitting onset
πŸ”Ž Similar Papers
No similar papers found.