ChA-MAEViT: Unifying Channel-Aware Masked Autoencoders and Multi-Channel Vision Transformers for Improved Cross-Channel Learning

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing masked autoencoders (MAEs) based on random patch masking struggle to model cross-channel dependencies in Multi-Channel Imaging (MCI), where channels carry complementary information with low redundancy. To address this, we propose ChA-MAEViT. Its key contributions are: (1) a dynamic channel-patch masking strategy that forces the model to reconstruct missing channels in addition to masked patches, explicitly accounting for channel heterogeneity; (2) memory tokens coupled with a hybrid token fusion module, jointly integrating fine-grained local structure and global cross-channel relationships; and (3) a lightweight Channel-Aware Decoder that uses channel tokens for structure-aware reconstruction. Evaluated on satellite and microscopy benchmarks (CHAMMI, JUMP-CP, and So2Sat), ChA-MAEViT outperforms state-of-the-art multi-channel Vision Transformers by 3.0–21.5%, achieving significant gains in cross-channel reconstruction fidelity and downstream task performance.

📝 Abstract
Prior work using Masked Autoencoders (MAEs) typically relies on random patch masking based on the assumption that images have significant redundancies across different channels, allowing for the reconstruction of masked content using cross-channel correlations. However, this assumption does not hold in Multi-Channel Imaging (MCI), where channels may provide complementary information with minimal feature overlap. Thus, these MAEs primarily learn local structures within individual channels from patch reconstruction, failing to fully leverage cross-channel interactions and limiting their effectiveness in MCI. In this paper, we present ChA-MAEViT, an MAE-based method that enhances feature learning across MCI channels via four key strategies: (1) dynamic channel-patch masking, which compels the model to reconstruct missing channels in addition to masked patches, thereby enhancing cross-channel dependencies and improving robustness to varying channel configurations; (2) memory tokens, which serve as long-term memory aids to promote information sharing across channels, addressing the challenges of reconstructing structurally diverse channels; (3) a hybrid token fusion module, which merges fine-grained patch tokens with a global class token to capture richer representations; and (4) a Channel-Aware Decoder, a lightweight decoder that utilizes channel tokens to effectively reconstruct image patches. Experiments on satellite and microscopy datasets (CHAMMI, JUMP-CP, and So2Sat) show that ChA-MAEViT significantly outperforms state-of-the-art MCI-ViTs by 3.0–21.5%, highlighting the importance of cross-channel interactions in MCI.
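The dynamic channel-patch masking described above can be sketched at the mask-generation level as follows. This is a minimal illustration, not the paper's implementation: the function name, the two mask ratios, and the uniform-random sampling are all assumptions (the paper's "dynamic" scheduling of these ratios is not reproduced here).

```python
import numpy as np

def channel_patch_mask(num_channels, num_patches,
                       channel_mask_ratio, patch_mask_ratio, rng):
    """Jointly mask whole channels and individual patches.

    Returns a boolean array of shape (num_channels, num_patches):
    True marks tokens the encoder sees, False marks tokens to reconstruct.
    """
    keep = np.ones((num_channels, num_patches), dtype=bool)

    # 1) Drop entire channels, so the model must reconstruct them
    #    from the remaining ones (forcing cross-channel dependencies).
    n_drop = int(round(channel_mask_ratio * num_channels))
    dropped = rng.choice(num_channels, size=n_drop, replace=False)
    keep[dropped, :] = False

    # 2) Mask random patches within each surviving channel,
    #    as in a standard MAE.
    n_mask = int(round(patch_mask_ratio * num_patches))
    for c in range(num_channels):
        if c in dropped:
            continue
        masked = rng.choice(num_patches, size=n_mask, replace=False)
        keep[c, masked] = False
    return keep

rng = np.random.default_rng(0)
# Hypothetical settings: 8 channels, 196 patches (14x14 grid),
# 25% of channels fully dropped, 50% of patches masked elsewhere.
mask = channel_patch_mask(num_channels=8, num_patches=196,
                          channel_mask_ratio=0.25,
                          patch_mask_ratio=0.5, rng=rng)
```

Because whole channels are removed, the visible tokens alone cannot describe the dropped channels, so the reconstruction loss can only be lowered by learning cross-channel structure.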
Problem

Research questions and friction points this paper is trying to address.

Improving cross-channel learning in Multi-Channel Imaging (MCI)
Enhancing feature reconstruction across diverse MCI channels
Addressing limitations of random patch masking in MAEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic channel-patch masking enhances cross-channel dependencies
Memory tokens promote information sharing across channels
Hybrid token fusion captures richer representations
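The memory-token and fusion ideas in the bullets above can be illustrated with a shape-level sketch. The token counts, the plain concatenation used for fusion, and both function names are illustrative assumptions; the paper presumably shares information through attention and learned projections rather than raw concatenation.

```python
import numpy as np

def prepend_memory_tokens(patch_tokens, memory_tokens):
    """Prepend persistent memory tokens to the encoder input so that
    attention layers can route shared information across channels.
    (Sketch: shown as sequence concatenation only.)"""
    return np.concatenate([memory_tokens, patch_tokens], axis=0)

def hybrid_token_fusion(patch_tokens, cls_token):
    """Fuse fine-grained patch tokens with the global class token by
    attaching the class token's features to every patch token."""
    tiled = np.tile(cls_token, (patch_tokens.shape[0], 1))
    return np.concatenate([patch_tokens, tiled], axis=-1)

d = 16                                           # hypothetical embed dim
patches = np.random.default_rng(1).normal(size=(196, d))
memory = np.zeros((4, d))                        # 4 hypothetical memory tokens
cls = np.ones((1, d))                            # global class token

seq = prepend_memory_tokens(patches, memory)     # shape (200, 16)
fused = hybrid_token_fusion(patches, cls)        # shape (196, 32)
```

The fused tokens carry both local (per-patch) and global (class-token) information, which is the representational effect the hybrid fusion module aims for.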