Towards Effective In-context Cross-domain Knowledge Transfer via Domain-invariant-neurons-based Retrieval

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the difficulty large language models face in transferring knowledge to specialized domains, where target-domain examples are scarce. To overcome this limitation, the authors propose DIN-Retrieval, a retrieval method based on domain-invariant neurons. By identifying implicit logical structures shared across domains, DIN-Retrieval dynamically retrieves source-domain examples with compatible reasoning structures for in-context learning. The domain-invariant neurons are used to construct a universal representation, enabling cross-domain reasoning without any expert-annotated data from the target domain. Evaluated on multiple mathematical and logical reasoning transfer tasks, the method achieves an average gain of 1.8 percentage points over the current state of the art.
📝 Abstract
Large language models (LLMs) have made notable progress in logical reasoning, yet still fall short of human-level performance. Current boosting strategies rely on expert-crafted in-domain demonstrations, limiting their applicability in expertise-scarce domains such as specialized mathematical reasoning, formal logic, or legal analysis. In this work, we demonstrate the feasibility of leveraging cross-domain demonstration examples to boost LLMs' reasoning performance: despite substantial domain differences, many reusable implicit logical structures are shared across domains. To effectively retrieve cross-domain examples for unseen domains, we further propose an effective retrieval method, called domain-invariant-neurons-based retrieval (DIN-Retrieval). Concisely, DIN-Retrieval first summarizes a hidden representation that is universal across different domains. Then, during the inference stage, we use the DIN vector to retrieve structurally compatible cross-domain demonstrations for in-context learning. Experimental results in multiple settings for the transfer of mathematical and logical reasoning demonstrate that our method achieves an average improvement of 1.8 points over state-of-the-art methods. Our implementation is available at https://github.com/Leon221220/DIN-Retrieval.
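The abstract describes a two-stage procedure: project hidden states onto a set of domain-invariant neurons to form a "DIN vector", then retrieve the source-domain demonstrations whose DIN vectors are most similar to the query's. A minimal sketch of that retrieval step, assuming cosine similarity over the domain-invariant coordinates (all names, the index set, and the similarity choice are illustrative, not the authors' actual implementation):

```python
import math

def din_vector(hidden_state, din_indices):
    """Project a hidden state onto the domain-invariant neuron coordinates."""
    return [hidden_state[i] for i in din_indices]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_hidden, candidates, din_indices, k=2):
    """Return the top-k source-domain demonstrations by DIN-vector similarity.

    candidates: list of (demonstration_text, hidden_state) pairs from
    other domains; hidden states would come from the LLM in practice.
    """
    q = din_vector(query_hidden, din_indices)
    scored = [(cosine(q, din_vector(h, din_indices)), text)
              for text, h in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [text for _, text in scored[:k]]

# Toy usage: neurons 0 and 2 are (hypothetically) domain-invariant.
demos = retrieve(
    query_hidden=[1.0, 9.0, 0.0],
    candidates=[("math demo", [1.0, -3.0, 0.1]),
                ("logic demo", [-1.0, 5.0, 0.9])],
    din_indices=[0, 2],
    k=1,
)  # selects "math demo": its DIN coordinates align with the query's
```

The retrieved demonstrations would then be placed in the prompt as in-context examples; the neuron-identification step itself (how `din_indices` is chosen) is the paper's contribution and is not reproduced here.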
Problem

Research questions and friction points this paper is trying to address.

cross-domain knowledge transfer
in-context learning
logical reasoning
domain-invariant representation
demonstration retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-domain knowledge transfer
domain-invariant neurons
in-context learning
logical reasoning
retrieval-based prompting
Jianzhi Yan
Harbin Institute of Technology, Shenzhen, China; Pengcheng Laboratory, Shenzhen, China
Zhiming Li
Central South University
Materials design, Materials processing, Microstructure, Materials Properties, Physical Metallurgy
Le Liu
Northwestern Polytechnical University
Visualization, Computer Graphics, Computer Vision, AI
Zike Yuan
Harbin Institute of Technology, Shenzhen, China; Pengcheng Laboratory, Shenzhen, China
Shiwei Chen
Harbin Institute of Technology, Shenzhen, China; Pengcheng Laboratory, Shenzhen, China
Youcheng Pan
Pengcheng Laboratory, Shenzhen, China
Buzhou Tang
Pengcheng Laboratory, Shenzhen, China
Yang Xiang
Associate Professor, Pengcheng Laboratory, China
Artificial intelligence, Predictive modeling, Machine learning, Natural language processing
Danny Dongning Sun
Pengcheng Laboratory, Shenzhen, China