Triadic Fusion of Cognitive, Functional, and Causal Dimensions for Explainable LLMs: The TAXAL Framework

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trust and accountability challenges that arise when agentic large language models (LLMs) are deployed in high-stakes domains, where opaque reasoning, ambiguous planning logic, and unassessed systemic impacts undermine oversight, this paper proposes TAXAL, an explainability framework built on a triadic paradigm that integrates the cognitive, functional, and causal dimensions of explanation. TAXAL situates existing methods, including post-hoc attribution, dialogue-based explanation interfaces, and explanation-aware prompting, within this triadic model to support role-sensitive, concept-driven explanation design. Case studies in law, education, healthcare, and public services illustrate how explanation strategies adapt to institutional constraints and stakeholder roles, positioning TAXAL as a theoretical and practical foundation for governing high-risk agentic AI systems.

📝 Abstract
Large Language Models (LLMs) are increasingly being deployed in high-risk domains where opacity, bias, and instability undermine trust and accountability. Traditional explainability methods, focused on surface outputs, do not capture the reasoning pathways, planning logic, and systemic impacts of agentic LLMs. We introduce TAXAL (Triadic Alignment for eXplainability in Agentic LLMs), a triadic fusion framework that unites three complementary dimensions: cognitive (user understanding), functional (practical utility), and causal (faithful reasoning). TAXAL provides a unified, role-sensitive foundation for designing, evaluating, and deploying explanations in diverse sociotechnical settings. Our analysis synthesizes existing methods, ranging from post-hoc attribution and dialogic interfaces to explanation-aware prompting, and situates them within the TAXAL triadic fusion model. We further demonstrate its applicability through case studies in law, education, healthcare, and public services, showing how explanation strategies adapt to institutional constraints and stakeholder roles. By combining conceptual clarity with design patterns and deployment pathways, TAXAL advances explainability as a technical and sociotechnical practice, supporting trustworthy and context-sensitive LLM applications in the era of agentic AI.
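Of the explanation methods the abstract surveys, explanation-aware prompting is the most directly implementable. The sketch below shows one way such a prompt could be structured around TAXAL's three dimensions; the template wording, function name, and role/task examples are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of explanation-aware prompting organized around TAXAL's
# cognitive / functional / causal dimensions. The template text and helper
# name are hypothetical, not taken from the paper.

EXPLANATION_TEMPLATE = (
    "You are assisting a {role}.\n"
    "Task: {task}\n"
    "Answer the task, then explain your answer along three dimensions:\n"
    "1. Cognitive: restate the reasoning in terms the {role} understands.\n"
    "2. Functional: state what the {role} can do with this answer.\n"
    "3. Causal: list the input facts that most influenced the answer.\n"
)

def build_explanation_aware_prompt(role: str, task: str) -> str:
    """Wrap a task in a role-sensitive, explanation-eliciting prompt."""
    return EXPLANATION_TEMPLATE.format(role=role, task=task)

# Example: a role-sensitive prompt for a healthcare deployment.
prompt = build_explanation_aware_prompt(
    role="clinician",
    task="Summarize the contraindications in this patient record.",
)
print(prompt)
```

The role parameter is what makes the prompt role-sensitive in TAXAL's sense: the same task wrapped for a "patient" or a "hospital administrator" would elicit explanations pitched at a different level of abstraction.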
Problem

Research questions and friction points this paper is trying to address.

Addressing opacity, bias, and instability in high-risk LLM deployments
Capturing reasoning pathways and systemic impacts of agentic LLMs
Unifying cognitive, functional, and causal dimensions for explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triadic fusion framework for explainable LLMs
Combines cognitive, functional, and causal dimensions
Provides role-sensitive foundation for explanation design
🔎 Similar Papers
David Herrera-Poyatos, Department of Computer Science and Artificial Intelligence, Andalusian Institute of Data Science and Computational Intelligence (DaSCI), University of Granada, Spain.
Carlos Peláez-González, Department of Computer Science and Artificial Intelligence, Andalusian Institute of Data Science and Computational Intelligence (DaSCI), University of Granada, Spain.
Cristina Zuheros, University of Granada. Topics: Deep Learning, Social Networks, Decision Making, Computing with Words.
Virilo Tejedor, Department of Computer Science and Artificial Intelligence, Andalusian Institute of Data Science and Computational Intelligence (DaSCI), University of Granada, Spain.
Rosana Montes, Department of Software Engineering, Andalusian Institute of Data Science and Computational Intelligence (DaSCI), University of Granada, Spain.
Francisco Herrera, Professor of Computer Science and AI, DaSCI Research Institute, University of Granada, Spain. Topics: Artificial Intelligence, Computational Intelligence, Data Science, Trustworthy AI.