🤖 AI Summary
This work proposes "Noisomics," a framework that redefines imaging noise not as mere interference but as a rich, multi-parameter information source. Addressing the longstanding challenge of disentangling noise from true signal under severe label scarcity, Noisomics leverages a contrastive pre-trained foundation model (CoP) guided by the manifold hypothesis and a synthetic noise genome. This paradigm shift from noise suppression to noise decoding yields exceptional data efficiency: with only 100 labeled examples, CoP surpasses supervised methods trained on 100,000 samples, breaking conventional deep learning scaling laws. Across twelve out-of-domain datasets, the approach achieves a 63.8% reduction in estimation error and an 85.1% improvement in the coefficient of determination. Critically, it supports zero-shot cross-domain generalization, reduces data and computational requirements by three orders of magnitude, and enables accurate diagnosis without device calibration.
📝 Abstract
Characterizing imaging noise is notoriously data-intensive and device-dependent, as modern sensors entangle physical signals with complex algorithmic artifacts. Current paradigms struggle to disentangle these factors without massive supervised datasets, often reducing noise to mere interference rather than treating it as an information resource. Here, we introduce "Noisomics", a framework that shifts the focus from suppression to systematic noise decoding via the Contrastive Pre-trained (CoP) foundation model. Leveraging the manifold hypothesis and a synthetic noise genome, CoP employs contrastive learning to disentangle semantic signals from stochastic perturbations. Crucially, CoP breaks traditional deep learning scaling laws: with only 100 training samples it outperforms supervised baselines trained on 100,000 samples, reducing data and computational dependency by three orders of magnitude. Extensive benchmarking across 12 diverse out-of-domain datasets confirms robust zero-shot generalization, with a 63.8% reduction in estimation error and an 85.1% improvement in the coefficient of determination over conventional training strategies. We demonstrate CoP's utility across scales, from deciphering the non-linear interplay of hardware and noise in consumer photography to optimizing photon-efficient protocols for deep-tissue microscopy. By decoding noise as a multi-parametric footprint, our work redefines stochastic degradation as a vital information resource, enabling precise imaging diagnostics without prior device calibration.
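The abstract describes contrastive pretraining that pulls together embeddings of differently perturbed views of the same underlying signal. As a minimal sketch of that idea only (the `info_nce` function, embedding sizes, and additive-Gaussian noise model below are illustrative assumptions, not the authors' CoP architecture or noise genome), an InfoNCE-style loss over matched noisy views can be written as:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: the positive for anchor i is row i of
    `positives`; every other row in the batch serves as a negative."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the diagonal (the matched pairs)
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
clean = rng.normal(size=(8, 16))                   # 8 latent "signals"
# Two noisy views of the same signal form a positive pair
view1 = clean + 0.05 * rng.normal(size=clean.shape)
view2 = clean + 0.05 * rng.normal(size=clean.shape)
loss_matched = info_nce(view1, view2)
loss_random = info_nce(view1, rng.normal(size=clean.shape))
print(loss_matched, loss_random)  # matched views should score a lower loss
```

Minimizing such a loss encourages the encoder to map same-signal views together regardless of their stochastic perturbation, which is one standard way a contrastive objective can separate semantic content from noise.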