ExpeTrans: LLMs Are Experiential Transfer Learners

📅 2025-05-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
How can large language models (LLMs) autonomously transfer cross-task experience to reduce reliance on manual annotation and expert-curated demonstrations? This paper proposes ExpeTrans, the first end-to-end framework that enables LLMs to automatically extract textual reasoning experiences from source tasks, model task similarity, remap those experiences, and adaptively inject them into novel target tasks, all without human-designed prompts. ExpeTrans breaks from conventional prompt-engineering paradigms by emulating human-like generalization through cognitively inspired experience abstraction and transfer. Evaluated on 13 diverse benchmark datasets, it consistently improves LLM performance across tasks. Ablation studies validate the necessity and efficacy of each module, while interpretability analyses confirm the plausibility and mechanism of experience transfer. The core contributions are threefold: (1) the first fully autonomous, end-to-end experience transfer framework for LLMs; (2) seamless integration of task-agnostic generalization, full automation, and transparency; and (3) empirical demonstration of robust, interpretable cross-task knowledge reuse.
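The pipeline described above (extract experiences, model task similarity, remap, inject) could be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the function names, the token-overlap similarity heuristic, and the prompt format are all assumptions made for the example.

```python
# Hypothetical sketch of an ExpeTrans-style transfer pipeline.
# All names and the similarity heuristic are illustrative assumptions,
# not the actual method from the paper.

def extract_experience(source_task: dict) -> str:
    """Turn a solved source task into a textual experience (placeholder)."""
    return f"When solving '{source_task['name']}': {source_task['lesson']}"

def task_similarity(a: str, b: str) -> float:
    """Toy similarity: token-level Jaccard overlap of task descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def transfer_experiences(source_tasks: list, target_desc: str, top_k: int = 2) -> str:
    """Rank source tasks by similarity to the target, then inject the
    top experiences into a prompt for the target task."""
    ranked = sorted(source_tasks,
                    key=lambda t: task_similarity(t["desc"], target_desc),
                    reverse=True)
    experiences = [extract_experience(t) for t in ranked[:top_k]]
    return ("Relevant experience:\n" + "\n".join(experiences)
            + f"\n\nNow solve the target task: {target_desc}")

sources = [
    {"name": "sentiment", "desc": "classify review sentiment polarity",
     "lesson": "check negation words before labeling polarity"},
    {"name": "math", "desc": "solve arithmetic word problems step by step",
     "lesson": "write intermediate steps before the final answer"},
]
prompt = transfer_experiences(sources, "classify tweet sentiment polarity")
print(prompt)
```

Here the sentiment source task ranks above the math task because its description shares more tokens with the target, so its experience is injected first; in the actual framework, similarity modeling and injection would be handled by the LLM itself rather than a fixed heuristic.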

๐Ÿ“ Abstract
Recent studies provide large language models (LLMs) with textual task-solving experiences via prompts to improve their performance. However, previous methods rely on substantial human labor or time to gather such experiences for each task, which is impractical given the growing variety of task types in user queries to LLMs. To address this issue, we design an autonomous experience transfer framework to explore whether LLMs can mimic human cognitive intelligence to autonomously transfer experience from existing source tasks to newly encountered target tasks. This not only allows the acquisition of experience without the extensive costs of previous methods, but also offers a novel path for the generalization of LLMs. Experimental results on 13 datasets demonstrate that our framework effectively improves the performance of LLMs. Furthermore, we provide a detailed analysis of each module in the framework.
Problem

Research questions and friction points this paper is trying to address.

Autonomous transfer of task-solving experiences to new tasks
Reducing the human effort needed to gather task-solving experiences for LLMs
Improving LLM generalization across diverse task types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous transfer of task-solving experiences
Mimics human cognitive intelligence
Improves LLM performance without human labor
Authors

Jinglong Gao (Harbin Institute of Technology): causal reasoning, large language models
Xiao Ding (Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China)
Lingxiao Zou (Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China)
Bibo Cai (Harbin Institute of Technology): NLP
Bing Qin (Professor at Harbin Institute of Technology): Natural Language Processing, Information Extraction, Sentiment Analysis
Ting Liu (Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China)