🤖 AI Summary
This work addresses the performance limitations of in-context learning (ICL) in target domains that lack labeled examples. The authors propose enhancing ICL by transferring shared reasoning structures from cross-domain examples, and identify an "example absorption threshold" that governs successful positive transfer across domains. They demonstrate that performance gains stem primarily from the repair of reasoning structures rather than from the incorporation of semantic cues. Experiments with multiple retrieval strategies for selecting cross-domain examples show that, once the absorption threshold is met, such examples significantly improve reasoning performance in the target domain. These findings provide both theoretical grounding and practical guidance for efficient example retrieval and effective knowledge transfer in ICL settings.
📝 Abstract
Despite its success, in-context learning (ICL) typically relies on in-domain expert demonstrations, limiting its applicability when expert annotations are scarce. We posit that different domains may share underlying reasoning structures, enabling source-domain demonstrations to improve target-domain inference despite semantic mismatch. To test this hypothesis, we conduct a comprehensive empirical study of different retrieval methods for achieving cross-domain knowledge transfer in the ICL setting. Our results demonstrate conditional positive transfer in cross-domain ICL: we identify a clear example absorption threshold, beyond which positive transfer becomes more likely and additional demonstrations yield larger gains. Further analysis suggests that these gains stem from the repair of reasoning structures by retrieved cross-domain examples, rather than from semantic cues. Overall, our study validates the feasibility of leveraging cross-domain knowledge transfer to improve ICL performance, motivating the design of more effective retrieval approaches for this new direction. Our implementation is available at https://github.com/littlelaska/ICL-TF4LR
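To make the setting concrete, here is a minimal sketch of retrieving source-domain demonstrations for a target-domain query and assembling them into an ICL prompt. The bag-of-words cosine retrieval, the example pool, and all field names (`question`, `rationale`, `answer`) are illustrative placeholders, not the paper's retrieval methods or data.

```python
# Illustrative sketch (NOT the paper's implementation): score
# source-domain demonstrations against a target-domain query with
# bag-of-words cosine similarity, keep the top-k, and build a prompt
# whose demonstrations expose reasoning structure (the rationale),
# not just question/answer pairs.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Return the k pool examples most similar to the query text."""
    qv = Counter(query.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: cosine(qv, Counter(ex["question"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, demos: list[dict]) -> str:
    """Concatenate demonstrations (with rationales) before the query."""
    parts = [
        f"Q: {d['question']}\nReasoning: {d['rationale']}\nA: {d['answer']}"
        for d in demos
    ]
    parts.append(f"Q: {query}\nReasoning:")
    return "\n\n".join(parts)

# Toy source-domain pool (arithmetic word problems) serving a
# target-domain (logic) query; all contents are invented examples.
pool = [
    {"question": "If 3 apples cost 6 dollars, what does 1 apple cost?",
     "rationale": "Divide total cost by count: 6 / 3 = 2.",
     "answer": "2 dollars"},
    {"question": "A train travels 60 miles in 2 hours; what is its speed?",
     "rationale": "Speed = distance / time = 60 / 2 = 30.",
     "answer": "30 mph"},
]

query = "All cats are mammals; Tom is a cat; is Tom a mammal?"
prompt = build_prompt(query, retrieve(query, pool))
print(prompt)
```

The demonstrations share no vocabulary with the target query; the hypothesis tested in the paper is that their explicit reasoning steps, not their semantics, are what transfers.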