Time Series, Vision, and Language: Exploring the Limits of Alignment in Contrastive Representation Spaces

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether time series share a unified latent representational structure with visual and linguistic modalities and explores the limits of multimodal alignment. By applying contrastive learning to post-hoc align frozen time-series, vision, and language encoders, the work systematically analyzes the geometry of their representations, scaling behaviors, and dependence on information density. It reveals, for the first time, the role of time series in multimodal alignment: larger model scales improve overall alignment performance; time series align more readily with vision than with text; images serve as a mediating bridge across modalities; and both textual and visual modalities exhibit saturation thresholds beyond which increased information density yields diminishing returns.

📝 Abstract
The Platonic Representation Hypothesis posits that learned representations from models trained on different modalities converge to a shared latent structure of the world. However, this hypothesis has largely been examined in vision and language, and it remains unclear whether time series participate in such convergence. We first examine this in a trimodal setting and find that independently pretrained time series, vision, and language encoders exhibit near-orthogonal geometry in the absence of explicit coupling. We then apply post-hoc alignment by training projection heads over frozen encoders using contrastive learning, and analyze the resulting representations with respect to geometry, scaling behavior, and dependence on information density and input modality characteristics. Our investigation reveals that overall alignment in contrastive representation spaces improves with model size, but this alignment is asymmetric: time series align more strongly with visual representations than with text, and images can act as effective intermediaries between time series and language. We further find that richer textual descriptions improve alignment only up to a threshold; training on denser captions does not lead to further improvement. Analogous effects are observed for visual representations. Our findings shed light on considerations for building multimodal systems involving non-conventional data modalities beyond vision and language.
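The post-hoc alignment described in the abstract (trainable projection heads over frozen encoders, optimized contrastively) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the symmetric InfoNCE objective, the embedding dimensions, and the projection matrices (`W_ts`, `W_img`) are assumptions standing in for the frozen time-series and vision encoder outputs and their heads.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    # L2-normalize so pairwise similarities are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature   # (batch, batch); positives on the diagonal
    idx = np.arange(len(z_a))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)               # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()                      # diagonal = matched pairs

    # average the two retrieval directions (a -> b and b -> a)
    return 0.5 * (xent(logits) + xent(logits.T))

# Stand-ins for frozen encoder outputs on a batch of 8 paired samples
rng = np.random.default_rng(0)
ts_emb  = rng.standard_normal((8, 32))   # hypothetical time-series embeddings
img_emb = rng.standard_normal((8, 64))   # hypothetical vision embeddings

# Trainable projection heads mapping both modalities into a shared 16-d space;
# only these would receive gradients, the encoders stay frozen
W_ts  = rng.standard_normal((32, 16)) * 0.1
W_img = rng.standard_normal((64, 16)) * 0.1

loss = info_nce(ts_emb @ W_ts, img_emb @ W_img)
```

In training, the loss would be minimized with respect to the projection weights only; the same pairwise-similarity matrix also supports the geometry analyses (e.g., measuring cross-modal alignment before and after training).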
Problem

Research questions and friction points this paper is trying to address.

time series
multimodal alignment
contrastive representation
Platonic Representation Hypothesis
cross-modal convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal alignment
contrastive representation learning
time series
Platonic Representation Hypothesis
modality asymmetry
Pratham Yashwante
Department of Computer Science and Engineering, University of California San Diego, USA
Rose Yu
Associate Professor, University of California, San Diego
Machine Learning, Computational Sustainability