🤖 AI Summary
This work addresses the negative transfer problem that arises in meta-learning when task shift forces adaptation to out-of-distribution target tasks. The authors propose a novel approach that integrates causal embeddings with Bayesian meta-learning, leveraging expert-provided judgments of causal similarity across tasks to construct mechanism-aware, task-specific priors. This enables mechanism-based generalization across tasks even under severe data scarcity in the target domain. By combining causal representation learning with Bayesian meta-learning and using expert feedback to correct prior mismatch, the method substantially mitigates negative transfer. Empirical evaluations on both synthetic and real-world clinical cross-disease prediction tasks demonstrate improved out-of-distribution adaptation performance, with the learned causal embeddings showing strong alignment with established clinical mechanisms.
📝 Abstract
Meta-learning methods perform well on new within-distribution tasks but often fail when adapting to out-of-distribution target tasks, where knowledge transferred from source tasks can degrade performance (negative transfer). We propose a causally-aware Bayesian meta-learning method that conditions task-specific priors on precomputed latent causal task embeddings, enabling transfer based on mechanistic similarity rather than spurious correlations. Our approach explicitly considers realistic deployment settings where access to target-task data is limited and adaptation relies on noisy (expert-provided) pairwise judgments of causal similarity between source and target tasks. We provide a theoretical analysis showing that conditioning on causal embeddings controls prior mismatch and mitigates negative transfer under task shift. Empirically, we demonstrate reductions in negative transfer and improved out-of-distribution adaptation in both controlled simulations and a large-scale real-world clinical prediction setting for cross-disease transfer, where the learned causal embeddings align with underlying clinical mechanisms.
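To make the central idea concrete, the sketch below illustrates one simple way a task-specific Gaussian prior could be conditioned on a causal task embedding, followed by a conjugate Bayesian update on a handful of target-task examples. This is an illustrative toy, not the paper's implementation: the embedding-to-prior map `W_mu`, the linear-Gaussian model, and all dimensions are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_prior(z, W_mu, tau=1.0):
    """Hypothetical mapping from a precomputed causal task embedding z
    to a Gaussian prior N(mu, Sigma) over linear-model weights."""
    mu = W_mu @ z                      # prior mean conditioned on the embedding
    Sigma = tau * np.eye(W_mu.shape[0])
    return mu, Sigma

def posterior(mu, Sigma, X, y, noise_var=0.1):
    """Conjugate Bayesian linear-regression update with scarce target data."""
    prec = np.linalg.inv(Sigma) + X.T @ X / noise_var
    Sigma_post = np.linalg.inv(prec)
    mu_post = Sigma_post @ (np.linalg.inv(Sigma) @ mu + X.T @ y / noise_var)
    return mu_post, Sigma_post

d, k = 5, 3                            # weight dim, embedding dim (illustrative)
W_mu = rng.normal(size=(d, k))         # assumed embedding-to-prior map
z = rng.normal(size=k)                 # precomputed causal task embedding
mu0, Sigma0 = task_prior(z, W_mu)

X = rng.normal(size=(4, d))            # only 4 target-task examples (data scarcity)
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=4)
mu_post, Sigma_post = posterior(mu0, Sigma0, X, y)
```

In this toy setting, a target task whose embedding is causally close to a source task's receives a prior mean near that source's solution, so the few target examples only need to correct a small mismatch rather than learn from scratch.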