🤖 AI Summary
To address task interference and inefficient knowledge transfer in cross-era information extraction from Classical and Modern Chinese, this paper proposes Tea-MOELoRA, a parameter-efficient framework integrating Low-Rank Adaptation (LoRA) with a Mixture-of-Experts (MoE) architecture. It introduces multiple low-rank LoRA expert subnetworks and a novel task-era dual-aware routing mechanism to enable controllable, interference-mitigated joint modeling. By explicitly encoding era-specific and task-specific signals in the routing decisions, the framework achieves structured knowledge transfer across linguistic eras while preserving model compactness. Experimental results show significant improvements over both single-task baselines and standard joint LoRA models on cross-era named entity recognition, relation extraction, and event extraction, validating the effectiveness of era-aware structured knowledge transfer. With minimal parameter overhead, the approach remains efficient and well suited to low-resource historical language processing.
📝 Abstract
Chinese information extraction (IE) involves multiple tasks across diverse temporal domains, including Classical and Modern documents. Fine-tuning a single model on heterogeneous tasks across different eras may lead to interference and reduced performance. In this paper, we therefore propose Tea-MOELoRA, a parameter-efficient multi-task framework that combines LoRA with a Mixture-of-Experts (MoE) design. Multiple low-rank LoRA experts specialize in different IE tasks and eras, while a task-era-aware routing mechanism dynamically allocates expert contributions. Experiments show that Tea-MOELoRA outperforms both single-task and joint LoRA baselines, demonstrating its ability to leverage task and temporal knowledge effectively.
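To make the mechanism concrete, here is a minimal NumPy sketch of the core idea: several low-rank LoRA experts whose outputs are mixed by a gate conditioned on task and era identifiers. All names, sizes, and the one-hot conditioning scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not from the paper)
d_model, rank, n_experts = 16, 4, 4
n_tasks, n_eras = 3, 2  # e.g. NER/RE/EE x Classical/Modern

# Each LoRA expert is a low-rank update B_k @ A_k; B starts at zero,
# following the standard LoRA initialization.
A = rng.normal(0.0, 0.02, size=(n_experts, rank, d_model))
B = np.zeros((n_experts, d_model, rank))

# Hypothetical task-era-aware router: a linear map over the
# concatenated one-hot task and era indicators.
W_route = rng.normal(0.0, 0.1, size=(n_tasks + n_eras, n_experts))

def route(task_id: int, era_id: int) -> np.ndarray:
    """Softmax gate over experts, conditioned on task and era."""
    cond = np.zeros(n_tasks + n_eras)
    cond[task_id] = 1.0
    cond[n_tasks + era_id] = 1.0
    logits = cond @ W_route
    e = np.exp(logits - logits.max())
    return e / e.sum()

def moelora_delta(x: np.ndarray, task_id: int, era_id: int) -> np.ndarray:
    """Gated sum of expert LoRA updates: sum_k g_k * B_k @ A_k @ x."""
    g = route(task_id, era_id)
    out = np.zeros_like(x)
    for k in range(n_experts):
        out += g[k] * (x @ A[k].T @ B[k].T)
    return out

x = rng.normal(size=(d_model,))
delta = moelora_delta(x, task_id=0, era_id=1)
```

In a full model, `delta` would be added to a frozen base layer's output, and the router would be trained jointly with the experts so that each (task, era) pair learns its own mixture of shared subnetworks.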