AxBERT: An Interpretable Chinese Spelling Correction Method Driven by Associative Knowledge Network

📅 2025-03-04
🤖 AI Summary
Deep learning models for Chinese spelling error correction suffer from limited interpretability. To address this, we propose AxBERT, the first model that explicitly aligns a statistically driven Associative Knowledge Network (AKN) with BERT's attention mechanism. AxBERT constructs an AKN from character co-occurrence statistics, introduces a translator matrix to align the AKN's semantic space with BERT's contextual representations, and applies a weight regulator to BERT's attention distributions, jointly optimizing correction accuracy and interpretability. On the SIGHAN benchmarks, AxBERT outperforms state-of-the-art baselines, and qualitative analysis shows that its corrections rest on linguistically plausible, transparent, and traceable reasoning. The core contribution is an end-to-end unification of knowledge-based interpretability and pretrained semantic modeling, bridging statistical linguistic knowledge and deep contextual understanding within a single, explainable architecture.

📝 Abstract
Deep learning has shown promising performance on various machine learning tasks. Nevertheless, the uninterpretability of deep learning models severely restricts their use in domains that require feature explanations, such as text correction. We therefore propose a novel interpretable deep learning model, AxBERT, for Chinese spelling correction, which is aligned with an associative knowledge network (AKN). The AKN is constructed from co-occurrence relations among Chinese characters and encodes interpretable statistical logic, in contrast to BERT's uninterpretable internal logic. A translator matrix between BERT and the AKN is introduced to align and regulate BERT's attention component, and a weight regulator is designed to adjust BERT's attention distributions so that sentence semantics are modeled appropriately. Experimental results on the SIGHAN datasets demonstrate that AxBERT achieves strong performance, particularly in precision, compared with baselines. Our interpretability analysis, together with qualitative reasoning, illustrates the interpretability of AxBERT.
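The abstract's AKN construction from character co-occurrence relations can be sketched as a toy example. The `build_akn` helper and its PMI-style association score are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter
from itertools import combinations
import math

def build_akn(sentences):
    """Build a toy associative knowledge network as a dict mapping
    character pairs to association scores (pointwise mutual information).
    Illustrative reconstruction only, not the paper's exact method."""
    char_counts = Counter()
    pair_counts = Counter()
    n = 0
    for sent in sentences:
        chars = list(sent)
        char_counts.update(chars)
        n += len(chars)
        # count each unordered pair of distinct characters per sentence
        for a, b in combinations(sorted(set(chars)), 2):
            pair_counts[(a, b)] += 1
    total_pairs = sum(pair_counts.values())
    akn = {}
    for (a, b), c in pair_counts.items():
        p_ab = c / total_pairs
        p_a = char_counts[a] / n
        p_b = char_counts[b] / n
        akn[(a, b)] = math.log(p_ab / (p_a * p_b))
    return akn

corpus = ["天气很好", "天气不好", "心情很好"]
akn = build_akn(corpus)
# "天" and "气" always co-occur, so their association outranks "好"/"心"
print(akn[("天", "气")] > akn[("好", "心")])  # → True
```

A higher score marks a pair of characters that co-occur more often than chance, which is the kind of interpretable statistical signal the AKN is meant to expose.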
Problem

Research questions and friction points this paper is trying to address.

Develops interpretable Chinese spelling correction using associative knowledge network.
Aligns BERT with AKN for enhanced model interpretability and precision.
Introduces translator matrix and weight regulator for semantic modeling.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AxBERT integrates associative knowledge network for interpretability.
Translator matrix aligns BERT with associative knowledge network.
Weight regulator adjusts BERT attention for semantic accuracy.
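The translator matrix and weight regulator described above might be sketched roughly as follows. `regulate_attention`, the identity translator, and the fixed mixing weight `alpha` are hypothetical stand-ins for the paper's learned components:

```python
import numpy as np

def regulate_attention(bert_attn, akn_scores, translator, alpha=0.5):
    """Blend BERT attention with AKN association scores.

    bert_attn:  (L, L) softmax attention from one BERT head
    akn_scores: (L, L) raw character-association scores from the AKN
    translator: (L, L) matrix projecting AKN scores into attention space
                (stand-in for the paper's learned translator matrix)
    alpha:      mixing weight (the paper adjusts this via a weight regulator)
    """
    projected = translator @ akn_scores
    # softmax each row so projected scores form valid attention distributions
    exp = np.exp(projected - projected.max(axis=-1, keepdims=True))
    akn_attn = exp / exp.sum(axis=-1, keepdims=True)
    return alpha * bert_attn + (1 - alpha) * akn_attn

L = 4
rng = np.random.default_rng(0)
bert_attn = np.full((L, L), 1.0 / L)        # uniform attention for the demo
akn_scores = rng.random((L, L))
mixed = regulate_attention(bert_attn, akn_scores, np.eye(L), alpha=0.5)
print(np.allclose(mixed.sum(axis=-1), 1.0))  # rows stay valid distributions
```

Because both inputs are row-normalized, the convex combination keeps every row a valid attention distribution, so the statistical AKN signal can steer BERT's attention without breaking its probabilistic form.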
Fanyu Wang
Monash University
Requirements Engineering · Applied NLP
Hangyu Zhu
School of Artificial Intelligence and Computer Science and the Jiangsu Key Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi 214122, Jiangsu, China
Zhenping Xie
School of Artificial Intelligence and Computer Science and the Jiangsu Key Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi 214122, Jiangsu, China