🤖 AI Summary
In star-forming regions, dense gas obscuration and structural inhomogeneity severely bias conventional spherically symmetric dynamical mass estimates. To address this, we propose a self-supervised pretraining paradigm tailored to astrophysical imagery: large-scale Vision Transformer (ViT) pretraining within the DINOv2 framework on one million synthetic fractal images, enabling the model to learn physically meaningful, semantically rich features without ground-truth annotations. With the backbone kept frozen, a lightweight regression head trained on limited high-resolution magnetohydrodynamic (MHD) simulation data achieves competitive stellar mass predictions, slightly surpassing a fully supervised baseline trained on the same data. Principal component analysis (PCA) of the learned features uncovers low-dimensional structure strongly correlated with physical quantities such as gas density and turbulent scale, and the representations also support unsupervised spatial segmentation. This approach provides an interpretable, annotation-efficient pathway to stellar mass estimation in obscured environments.
📝 Abstract
Stellar mass is a fundamental quantity that determines the properties and evolution of stars. However, estimating stellar masses in star-forming regions is challenging: young stars are obscured by dense gas, and the regions are highly inhomogeneous, making spherically symmetric dynamical estimates unreliable. Supervised machine learning could link such complex structures to stellar mass, but it requires large, high-quality labeled datasets from high-resolution magnetohydrodynamic (MHD) simulations, which are computationally expensive. We address this by pretraining a vision transformer on one million synthetic fractal images with the self-supervised framework DINOv2, and then applying the frozen model to limited high-resolution MHD simulations. Our results demonstrate that synthetic pretraining improves frozen-feature regression for stellar mass prediction, with the pretrained model performing slightly better than a supervised model trained on the same limited simulations. Principal component analysis of the extracted features further reveals semantically meaningful structure, suggesting that the model enables unsupervised segmentation of star-forming regions without the need for labeled data or fine-tuning.
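The pipeline described above (frozen pretrained backbone, lightweight regression head, PCA of the extracted features) can be sketched as a NumPy toy. Note the heavy assumptions: a fixed random projection stands in for the frozen DINOv2-pretrained ViT, the "images" and "masses" are synthetic placeholders, and all names and shapes are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes only: 200 maps of 64x64 pixels, 128-dim embeddings.
n_images, n_pixels, n_features = 200, 64 * 64, 128

# Fake "column density maps" (stand-ins for MHD simulation snapshots).
images = rng.normal(size=(n_images, n_pixels))

# Frozen backbone stand-in: a fixed random projection whose weights are
# never updated, playing the role of the DINOv2-pretrained frozen ViT.
W_frozen = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)
features = images @ W_frozen

# Toy labels: assume log stellar mass is (noisily) linear in the embedding.
true_head = rng.normal(size=n_features)
log_mass = features @ true_head + 0.1 * rng.normal(size=n_images)

# Train only the head: ridge regression via the normal equations.
lam = 1e-2
A = features.T @ features + lam * np.eye(n_features)
w_head = np.linalg.solve(A, features.T @ log_mass)
pred = features @ w_head

# PCA of the frozen features via SVD, mirroring the paper's analysis step.
centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:3].T  # top-3 principal components per image

print(pred.shape, pcs.shape)
```

In the paper's setting, the principal components would then be inspected for correlations with physical quantities (gas density, turbulent scale) and used for unsupervised segmentation; here they are just the leading directions of variance in the toy embedding.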