Continuous Domain Generalization

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world data distributions dynamically drift under multidimensional continuous factors (e.g., time, geography, and socioeconomic indicators), yet existing domain generalization (DG) methods typically assume discrete or unidimensional domain shifts, failing to capture their intrinsic continuity. This work formally introduces *Continuous Domain Generalization (CDG)*: the task of generalizing to unseen domains defined by arbitrary combinations of continuous domain descriptors. The authors theoretically show that optimal model parameters lie on a low-dimensional manifold in parameter space. Leveraging this insight, they propose the *Neural Lie Transport Operator (NeuralLTO)*, which integrates differential geometry and Lie algebra to enable geometrically continuous and algebraically consistent parameter transport across domains. They further design a domain-descriptor gating mechanism and a local coordinate chart strategy for robust generalization. Extensive experiments on remote sensing, scientific literature analysis, and traffic forecasting demonstrate significant improvements over state-of-the-art methods, with strong robustness to descriptor noise and missing descriptors.

📝 Abstract
Real-world data distributions often shift continuously across multiple latent factors such as time, geography, and socioeconomic context. However, existing domain generalization approaches typically treat domains as discrete or evolving along a single axis (e.g., time), which fails to capture the complex, multi-dimensional nature of real-world variation. This paper introduces the task of Continuous Domain Generalization (CDG), which aims to generalize predictive models to unseen domains defined by arbitrary combinations of continuous variation descriptors. It presents a principled framework grounded in geometric and algebraic theory, showing that optimal model parameters across domains lie on a low-dimensional manifold. To model this structure, the authors propose a Neural Lie Transport Operator (NeuralLTO), which enables structured parameter transitions by enforcing geometric continuity and algebraic consistency. To handle noisy or incomplete domain descriptors, they introduce a gating mechanism to suppress irrelevant dimensions and a local chart-based strategy for robust generalization. Extensive experiments on synthetic and real-world datasets, including remote sensing, scientific documents, and traffic forecasting, demonstrate that the method significantly outperforms existing baselines in generalization accuracy and robustness under descriptor imperfections.
Problem

Research questions and friction points this paper is trying to address.

Generalizing models to unseen domains with continuous multi-dimensional shifts
Addressing limitations of discrete or single-axis domain generalization approaches
Handling noisy or incomplete domain descriptors for robust generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Lie Transport Operator enables structured parameter transitions
Gating mechanism suppresses irrelevant domain descriptor dimensions
Local chart-based strategy ensures robust generalization
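To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of descriptor-conditioned parameter transport with a gate over descriptor dimensions: model parameters are produced as a smooth function of a continuous domain descriptor z, and a learned per-dimension gate can suppress irrelevant or noisy descriptor components. The linear map used here is a simplified stand-in for the paper's Lie-algebra-based transport operator; all names (`GatedDescriptorTransport`, `theta0`, `T`, `gate_logits`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedDescriptorTransport:
    """Map a continuous domain descriptor z to parameters
    theta(z) = theta0 + T @ (gate * z).

    A linearized toy version of descriptor-conditioned parameter
    transport; the actual NeuralLTO enforces geometric continuity and
    algebraic consistency via Lie-group structure, which is omitted here.
    """

    def __init__(self, n_params, n_desc):
        self.theta0 = rng.normal(size=n_params)       # shared base parameters
        self.T = rng.normal(size=(n_params, n_desc))  # per-descriptor transport directions
        self.gate_logits = np.zeros(n_desc)           # learned gate logits (fixed here)

    def params(self, z):
        g = sigmoid(self.gate_logits)                 # per-dimension gate in (0, 1)
        return self.theta0 + self.T @ (g * np.asarray(z, dtype=float))

transport = GatedDescriptorTransport(n_params=4, n_desc=3)
theta_src = transport.params([0.0, 0.0, 0.0])    # descriptor of a "source" domain
theta_new = transport.params([1.0, 0.5, -0.2])   # descriptor of an unseen domain
```

Because parameters vary smoothly with the descriptor, nearby descriptors yield nearby parameters, which is the property that lets the model interpolate and extrapolate to unseen descriptor combinations; driving a gate logit strongly negative would effectively remove that descriptor dimension from the transport.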