🤖 AI Summary
Existing large EEG models (LEMs) learn strong representations through pretraining but lack dedicated decoders, leaving those features underutilized. Method: We propose a decoder-centric paradigm for large-scale EEG modeling that reformulates EEG analysis as a sequence-to-sequence learning task and explicitly models the hierarchical relationships among neural signals, labels, and tasks. The framework introduces, for the first time, a discrete support-sample injection mechanism that constructs contextual cues, enabling in-context learning without parameter updates and dynamic cross-task, cross-dataset adaptation. Contribution/Results: Extensive multi-dataset experiments show that, even with basic model components, our approach significantly outperforms state-of-the-art single-task LEMs, achieving superior generalization and zero-shot transfer in multi-task settings and establishing a new foundation for adaptive, context-aware EEG intelligence.
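To make the support-sample injection concrete, below is a minimal sketch of how labeled (signal, label) pairs could be serialized ahead of an unlabeled query to form the contextual cue for a seq2seq decoder. The token ids, `tokenize_eeg`, and `build_context` are illustrative stand-ins assumed for this sketch, not ECHO's actual interface or vocabulary.

```python
from typing import List, Sequence, Tuple

# Illustrative structural token ids; ECHO's actual vocabulary is not given here.
BOS, SEP, QUERY = 0, 1, 2
LABEL_BASE = 10  # class k is serialized as the discrete token LABEL_BASE + k

def tokenize_eeg(window: Sequence[float], n_tokens: int = 4) -> List[int]:
    """Stand-in EEG tokenizer: crudely quantize a raw window into a few
    discrete tokens. A real LEM would use a learned patch encoder/codebook."""
    step = max(1, len(window) // n_tokens)
    return [100 + int(abs(x) * 10) % 50 for x in window[::step]][:n_tokens]

def build_context(support: List[Tuple[Sequence[float], int]],
                  query: Sequence[float]) -> List[int]:
    """Inject discrete (signal, label) support pairs ahead of the query so a
    frozen seq2seq decoder can infer the query's label from context alone."""
    seq = [BOS]
    for signal, label in support:
        seq += tokenize_eeg(signal) + [LABEL_BASE + label, SEP]
    return seq + [QUERY] + tokenize_eeg(query)

# Two labeled support windows cue the task; the query carries no label.
support = [([0.10, -0.20, 0.30, 0.05], 0), ([0.40, 0.50, -0.10, 0.20], 1)]
print(build_context(support, query=[0.20, -0.30, 0.10, 0.40]))
```

Under this framing, switching tasks or datasets only changes which support pairs are placed in the context, which is what allows adaptation without touching model parameters.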
📝 Abstract
Electroencephalography (EEG), with its broad range of applications, necessitates models that generalize effectively across tasks and datasets. Large EEG Models (LEMs) address this by pretraining encoder-centric architectures on large-scale unlabeled data to extract universal representations. While effective, these models lack decoders of comparable capacity, leaving the learned features underutilized. To address this issue, we introduce ECHO, a novel decoder-centric LEM paradigm that reformulates EEG modeling as sequence-to-sequence learning. ECHO captures hierarchical relationships among signals, labels, and tasks within sequence space, while incorporating discrete support samples to construct contextual cues. This design equips ECHO with in-context learning, enabling dynamic adaptation to heterogeneous tasks without parameter updates. Extensive experiments across multiple datasets demonstrate that, even with basic model components, ECHO consistently outperforms state-of-the-art single-task LEMs in multi-task settings, showing superior generalization and adaptability.
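To illustrate what "adaptation without parameter updates" means at inference time, here is a hedged sketch of in-context prediction with a frozen decoder. The `ToyDecoder` architecture and the label-token layout are assumptions made for this sketch and do not reflect ECHO's published design.

```python
import torch
import torch.nn as nn

# Stand-in decoder; ECHO's architecture is not specified in the abstract,
# so this tiny GRU next-token model is purely illustrative.
class ToyDecoder(nn.Module):
    def __init__(self, vocab_size: int = 256, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)  # (batch, seq, vocab) next-token logits

LABEL_BASE, N_CLASSES = 10, 2  # must match the context-building scheme above

@torch.no_grad()  # adaptation happens in context only: no gradient updates
def predict_label(model: nn.Module, context: list) -> int:
    logits = model(torch.tensor([context]))[0, -1]  # logits after the query
    return int(logits[LABEL_BASE:LABEL_BASE + N_CLASSES].argmax())

# An untrained toy model yields arbitrary output; the point is the frozen
# inference path. In practice `context` would come from build_context(...)
# in the earlier sketch.
model = ToyDecoder().eval()
print(predict_label(model, context=[0, 101, 12, 10, 1, 2, 103]))
```

Because prediction reduces to reading next-token logits at the label positions, moving to a new task requires only new support pairs in the context rather than fine-tuning.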