ASTROCO: Self-Supervised Conformer-Style Transformers for Light-Curve Embeddings

📅 2025-09-28
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
To address the challenge of learning embedding representations for irregularly sampled stellar light curves, this paper proposes a Conformer-style self-supervised encoder that jointly models global temporal dependencies and local morphological features. Methodologically, it integrates multi-head attention, depthwise separable convolutions, and gated linear units to learn robust light-curve representations without labeled data. Evaluated on the MACHO R-band dataset, the approach reduces error rates by 70% and 61% relative to Astromer v1 and v2, respectively, while improving macro-F1 by approximately 7%. The learned embeddings outperform existing methods in few-shot classification and transfer across datasets. This work positions AstroCo as a strong, label-efficient foundation model for irregular time-series data in time-domain astronomy.
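The summary names the three architectural ingredients: multi-head attention for global temporal context, a depthwise separable convolution for local morphology, and gated linear units. No code accompanies this summary, so the following is a minimal PyTorch sketch of how such a Conformer-style block is typically wired; the layer sizes, pre-norm layout, and padding-mask handling are illustrative assumptions, not AstroCo's published specification.

```python
import torch
import torch.nn as nn


class ConformerStyleBlock(nn.Module):
    """Illustrative Conformer-style block: self-attention for global
    dependencies, gated depthwise-separable convolution for local features.
    Sizes and layout are assumptions, not AstroCo's actual architecture."""

    def __init__(self, d_model: int = 128, n_heads: int = 4, kernel_size: int = 7):
        super().__init__()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d_model)
        # Pointwise conv doubles the channels; the GLU gates them back to d_model.
        self.pointwise_in = nn.Conv1d(d_model, 2 * d_model, kernel_size=1)
        self.glu = nn.GLU(dim=1)
        # groups=d_model makes the convolution depthwise (one filter per channel).
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise_out = nn.Conv1d(d_model, d_model, kernel_size=1)

    def forward(self, x, pad_mask=None):
        # x: (batch, time, d_model); pad_mask flags padded steps of the
        # irregularly sampled light curve so attention ignores them.
        h = self.attn_norm(x)
        h, _ = self.attn(h, h, h, key_padding_mask=pad_mask)
        x = x + h  # residual around the attention sub-layer
        h = self.conv_norm(x).transpose(1, 2)  # (batch, d_model, time) for Conv1d
        h = self.pointwise_out(self.depthwise(self.glu(self.pointwise_in(h))))
        return x + h.transpose(1, 2)  # residual around the convolution sub-layer


block = ConformerStyleBlock()
flux = torch.randn(8, 200, 128)               # 8 curves, 200 steps, 128-dim features
mask = torch.zeros(8, 200, dtype=torch.bool)  # True where a step is padding
out = block(flux, pad_mask=mask)              # same shape as the input
```

A full encoder for irregular sampling would also inject the observation times (for example, via a learned time embedding added to the inputs) and stack several such blocks; both are omitted here for brevity.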

Technology Category

Artificial Intelligence · Deep Learning · Astroinformatics · Bioinformatics
📝 Abstract
We present AstroCo, a Conformer-style encoder for irregular stellar light curves. By combining attention with depthwise convolutions and gating, AstroCo captures both global dependencies and local features. On MACHO R-band, AstroCo outperforms Astromer v1 and v2, yielding 70% and 61% lower error, respectively, and a relative macro-F1 gain of about 7%, while producing embeddings that transfer effectively to few-shot classification. These results highlight AstroCo's potential as a strong and label-efficient foundation for time-domain astronomy.
Problem

Research questions and friction points this paper is trying to address.

How to learn self-supervised representations of irregularly sampled stellar light curves
How to capture global temporal dependencies and local features jointly in one encoder
How to produce embeddings that transfer to few-shot astronomical classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformer-style encoder combining attention, depthwise separable convolutions, and gated linear units
Jointly models global temporal dependencies and local morphological features
Self-supervised training (sketched below) yields embeddings that transfer to few-shot classification
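On the label-free training these lists emphasize: the Astromer baselines pretrain by reconstructing randomly masked magnitudes, and the sketch below assumes AstroCo follows a similar masked-reconstruction recipe (an assumption, since the exact objective is not spelled out in this summary). It reuses the hypothetical ConformerStyleBlock from the sketch above.

```python
import torch
import torch.nn as nn


class MaskedLightCurvePretrainer(nn.Module):
    """Hypothetical pretraining wrapper; the masking recipe is modeled on
    Astromer/BERT, not AstroCo's confirmed objective."""

    def __init__(self, d_model: int = 128):
        super().__init__()
        self.embed = nn.Linear(2, d_model)           # (magnitude, delta-time) -> features
        self.encoder = ConformerStyleBlock(d_model)  # from the sketch above
        self.decode = nn.Linear(d_model, 1)          # regress the hidden magnitude

    def forward(self, mag, dt, pad_mask, mask_frac=0.15):
        # mag, dt: (batch, time); pad_mask: True where the curve is padded.
        # Randomly hide a fraction of the *observed* steps (assumes at least one).
        hide = (torch.rand_like(mag) < mask_frac) & ~pad_mask
        x = torch.stack([mag.masked_fill(hide, 0.0), dt], dim=-1)
        z = self.encoder(self.embed(x), pad_mask=pad_mask)
        pred = self.decode(z).squeeze(-1)
        # Loss only on the steps we hid: reconstruct what was masked out.
        return ((pred - mag)[hide] ** 2).mean()
```

Few-shot classification would then attach a small classifier to the pretrained encoder's outputs, which is where the transferability claims above come in.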
Authors

Antony Tan
John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA
Pavlos Protopapas
John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA
M. Cádiz-Leyton
Department of Computer Science, Universidad de Concepción, Chile
Guillermo Cabrera-Vives
Department of Computer Science, Universidad de Concepción, Chile
C. Donoso-Oliva
Center for Data and Artificial Intelligence, Universidad de Concepción, Chile
I. Becker
Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile