🤖 AI Summary
The deployment of AI models in safety-critical domains, such as healthcare, finance, and autonomous driving, is hindered by their lack of explainability. Method: This paper presents a systematic survey of eXplainable Artificial Intelligence (XAI) that uniquely integrates *user-centric* (demand-side) and *model-centric* (supply-side) perspectives to establish a four-dimensional evaluation framework, covering transparency, faithfulness, practicality, and fairness, and to identify critical gaps between methodological development and real-world deployment. It rigorously analyzes core XAI techniques, including surrogate modeling, attention visualization, counterfactual explanation, causal reasoning, and formal verification, characterizing their applicability boundaries and limitations. Contribution/Results: The study produces an XAI practice atlas spanning 12 application domains and defines explanation-priority criteria for high-risk scenarios. These findings provide foundational theoretical and methodological support for XAI standardization, regulatory policy formulation, and cross-domain adoption.