AI Summary
To address the challenges of online learning under dynamic concept drift and inter-stream heterogeneity in heterogeneous multi-data-stream settings, this paper proposes the CAMEL framework. Methodologically, CAMEL (1) assigns each stream an independent feature extractor and task head to explicitly model stream-wise heterogeneity; (2) introduces a collaborative mixture-of-experts mechanism augmented with multi-head attention for context-aware knowledge coordination and targeted transfer; and (3) incorporates an autonomous expert tuning strategy that enables dynamic expert instantiation, incremental updating, and pruning, thereby jointly enhancing adaptability to concept drift and mitigating catastrophic forgetting. Extensive experiments across diverse multi-stream benchmarks demonstrate that CAMEL consistently outperforms state-of-the-art methods, achieving significant improvements in generalization, robustness, and continual learning efficiency.
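The multi-head attention coordination described in point (2) can be illustrated with a minimal sketch: one stream's representation attends over the representations of all concurrent streams, producing per-head weights that select which streams contribute context. This is a simplified illustration, not the paper's implementation; it omits the learned query/key/value projections a full attention layer would have, and the function name `assistance_attention` is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def assistance_attention(query, contexts, num_heads=4):
    """Aggregate context from concurrent streams via multi-head attention.

    query:    (d,) representation of the assisted stream
    contexts: (S, d) one representation per concurrent stream
    Returns the aggregated context vector and per-head attention weights.
    (Sketch only: learned projection matrices are omitted.)
    """
    d = query.shape[-1]
    assert d % num_heads == 0
    dh = d // num_heads
    Q = query.reshape(num_heads, dh)                        # (H, dh)
    K = contexts.reshape(-1, num_heads, dh).transpose(1, 0, 2)  # (H, S, dh)
    scores = np.einsum('hd,hsd->hs', Q, K) / np.sqrt(dh)    # (H, S)
    weights = softmax(scores, axis=-1)                      # weights over streams
    out = np.einsum('hs,hsd->hd', weights, K)               # (H, dh)
    return out.reshape(d), weights
```

Because the weights are computed per head and per source stream, irrelevant streams can receive near-zero weight, which is one plausible way the framework's "targeted transfer while mitigating negative transfer" could be realized.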
Abstract
Learning from multiple data streams in real-world scenarios is fundamentally challenging due to intrinsic heterogeneity and unpredictable concept drifts. Existing methods typically assume homogeneous streams and employ static architectures with indiscriminate knowledge fusion, limiting their generalizability in complex dynamic environments. To bridge this gap, we propose CAMEL, a dynamic Collaborative Assistance Mixture of Experts Learning framework. It addresses heterogeneity by assigning each stream an independent system with a dedicated feature extractor and task-specific head, while a dynamic pool of specialized private experts captures stream-specific idiosyncratic patterns. Crucially, collaboration across these heterogeneous streams is enabled by a dedicated assistance expert, which employs a multi-head attention mechanism to autonomously distill and integrate relevant context from all other concurrent streams. This facilitates targeted knowledge transfer while inherently mitigating negative transfer from irrelevant sources. Furthermore, we propose an Autonomous Expert Tuner (AET) strategy that dynamically manages expert lifecycles in response to drift: it instantiates new experts for emerging concepts (freezing prior ones to prevent catastrophic forgetting) and prunes obsolete ones. This expert-level plasticity provides a robust and efficient mechanism for online adaptation of model capacity. Extensive experiments demonstrate CAMEL's superior generalizability across diverse multi-streams and its exceptional resilience to complex concept drifts.
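The expert-lifecycle behavior attributed to the AET (instantiate on drift, freeze prior experts, prune stale ones) can be sketched as a small pool manager. All class and method names below are hypothetical illustrations of the described policy, not the paper's actual API, and the staleness-based pruning rule is an assumption.

```python
class Expert:
    """Placeholder for a trainable expert tied to one concept."""
    def __init__(self, concept_id):
        self.concept_id = concept_id
        self.frozen = False  # frozen experts are no longer updated

class AutonomousExpertTuner:
    """Sketch of drift-driven expert lifecycle management.

    Policy (assumed): on detected drift, freeze all existing experts to
    preserve prior knowledge, spawn a fresh expert for the new concept,
    and prune any expert unused for `prune_after` steps.
    """
    def __init__(self, prune_after=100):
        self.experts = []
        self.prune_after = prune_after
        self.step = 0
        self.last_used = {}  # expert -> last step it was routed to

    def on_drift(self, concept_id):
        for e in self.experts:
            e.frozen = True  # prevent catastrophic forgetting of old concepts
        new_expert = Expert(concept_id)
        self.experts.append(new_expert)
        self.last_used[new_expert] = self.step
        return new_expert

    def mark_used(self, expert):
        self.last_used[expert] = self.step

    def tick(self):
        """Advance one online step and prune obsolete experts."""
        self.step += 1
        self.experts = [e for e in self.experts
                        if self.step - self.last_used[e] < self.prune_after]
```

Freezing rather than overwriting old experts trades memory for stability; the pruning rule then bounds that memory growth, which matches the abstract's framing of expert-level plasticity as an online capacity-adaptation mechanism.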