🤖 AI Summary
Traditional financial decision-making models rely on parametric knowledge, suffering from factual inconsistency, incomplete reasoning chains, insufficient semantic coverage, and poor interpretability. To address these limitations, this work proposes a knowledge-enhanced large language model (LLM) agent that integrates external financial knowledge retrieval, structured semantic encoding, and traceable causal reasoning. We design a weighted knowledge fusion mechanism and a multi-head attention–driven causal chain generation module to jointly optimize prediction accuracy and reasoning transparency. Evaluated on financial text understanding and decision-making tasks, our approach significantly outperforms mainstream baseline models across three key metrics: accuracy, generation quality, and factual support. These results empirically validate the importance of knowledge guidance and interpretable causal reasoning in building trustworthy financial AI systems.
📝 Abstract
This study investigates an explainable reasoning method for financial decision-making based on knowledge-enhanced large language model (LLM) agents. To address the limitations of traditional financial decision methods, namely reliance on parameterized knowledge, weak factual consistency, and incomplete reasoning chains, an integrated framework is proposed that combines external knowledge retrieval, semantic representation, and reasoning generation. The method first encodes financial texts and structured data into semantic representations, then retrieves task-relevant information from external knowledge bases via similarity computation. Internal representations and external knowledge are combined through weighted fusion, which preserves fluency while improving the factual accuracy and completeness of the generated content. In the reasoning stage, a multi-head attention mechanism constructs logical chains, so the model exposes transparent and traceable causal relationships during generation. Finally, the model jointly optimizes the task objective and an explanation-consistency objective, which improves both predictive performance and reasoning interpretability. Experiments on financial text processing and decision tasks show that the method outperforms baseline approaches in accuracy, text generation quality, and factual support, confirming the effectiveness of knowledge enhancement and explainable reasoning. Overall, the proposed approach overcomes the limitations of traditional models in semantic coverage and reasoning transparency, and demonstrates strong practical value in complex financial scenarios.
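The retrieval-and-fusion pipeline described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the cosine-similarity retrieval, the fixed fusion gate `alpha = 0.5`, and the weighting coefficient `lam` in the joint loss are all illustrative assumptions (in the proposed framework the fusion weights and loss balance would be learned, and the encoder would be an LLM rather than precomputed vectors).

```python
import numpy as np

def retrieve(query_vec, kb_vecs, top_k=2):
    """Retrieve the top-k knowledge-base vectors by cosine similarity.

    query_vec: (d,) semantic representation of the encoded financial input.
    kb_vecs:   (n, d) embeddings of external knowledge entries (assumed precomputed).
    """
    sims = kb_vecs @ query_vec / (
        np.linalg.norm(kb_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    idx = np.argsort(-sims)[:top_k]
    return kb_vecs[idx], sims[idx]

def fuse(internal, retrieved, sims, alpha=0.5):
    """Weighted fusion of internal representation and retrieved knowledge.

    Similarity scores are normalized into weights over the retrieved entries;
    alpha (a hypothetical fixed gate, learned in practice) balances the
    internal representation against the aggregated external knowledge.
    """
    w = sims / sims.sum()
    external = (w[:, None] * retrieved).sum(axis=0)
    return alpha * internal + (1 - alpha) * external

def joint_loss(task_loss, consistency_loss, lam=0.3):
    """Joint objective: task loss plus an explanation-consistency penalty.

    lam is an illustrative trade-off coefficient, not a value from the paper.
    """
    return task_loss + lam * consistency_loss
```

A usage sketch: encode a financial document into `query_vec`, call `retrieve` against the knowledge-base embeddings, pass the results through `fuse` to obtain the knowledge-enhanced representation fed to the generator, and train against `joint_loss` so that both prediction quality and explanation consistency shape the gradients.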