Trade-offs in Financial AI: Explainability in a Trilemma with Accuracy and Compliance

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study challenges the binary trade-off commonly assumed in financial AI research between model interpretability and requirements such as accuracy and regulatory compliance, a framing that overlooks multidimensional practical constraints. Drawing on semi-structured interviews with 20 financial industry practitioners and adopting a sociotechnical systems perspective, the paper proposes a "trilemma" framework that positions accuracy and compliance as non-negotiable prerequisites, while interpretability functions not as an independent objective but as a critical threshold determining AI adoption. The research further clarifies the hierarchical relationships among these factors: cost and speed influence deployment feasibility, whereas interpretability drives stakeholder trust and real-world application. These insights offer both theoretical grounding and practical guidance for the design of financial AI systems.

📝 Abstract
As Artificial Intelligence (AI) becomes increasingly embedded in financial decision-making, the opacity of complex models presents significant challenges for professionals and regulators. While the field of Explainable AI (XAI) attempts to bridge this gap, current research often reduces the implementation challenge to a binary trade-off between model accuracy and explainability. This paper argues that such a view is insufficient for the financial domain, where algorithmic choices must navigate a complex sociotechnical web of strict regulatory bounds, budget constraints, and latency requirements. Through semi-structured interviews with twenty finance professionals, ranging from C-suite executives and developers to regulators across multiple regions, this study empirically investigates how practitioners prioritize explainability relative to four competing factors: accuracy, compliance, cost, and speed. Our findings reveal that these priorities are structured not as a simple trade-off, but as a system of distinct prerequisites and constraints. Accuracy and compliance emerge as non-negotiable "hygiene factors": without them, an AI system is viewed as a liability regardless of its transparency. Operational levers (speed and cost) serve as secondary constraints that determine practical feasibility, while ease of understanding functions as a gateway to adoption, shaping whether AI tools are trusted, used, and defensible in practice.
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Financial AI
Accuracy
Compliance
Trade-offs

Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Financial AI
Regulatory Compliance
Accuracy-Explainability Trade-off
Sociotechnical Constraints
Patricia Marcella Evite
Università degli Studi di Napoli Federico II, Naples, Italy

Ekaterina Svetlova
University of Twente, Netherlands

Doina Bucur
University of Twente
network data science, machine learning, evolutionary algorithms