eMargin: Revisiting Contrastive Learning with Margin-Based Separation

📅 2025-07-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the effect of introducing an adaptive margin, termed eMargin, into contrastive learning for time-series representation learning. To improve separation between adjacent but semantically dissimilar time steps, the authors propose a margin that adjusts dynamically based on a predefined similarity threshold and integrate it into the InfoNCE objective for unsupervised time-series representation learning. Experiments on three benchmark datasets show that eMargin consistently improves unsupervised clustering metrics (e.g., NMI, ARI) but fails to reach competitive performance in downstream classification with linear probing, exposing a mismatch between high clustering quality and discriminative representation capability. The study cautions against relying on clustering metrics alone as proxies for overall representation quality.

📝 Abstract
We revisit previous contrastive learning frameworks to investigate the effect of introducing an adaptive margin into the contrastive loss function for time-series representation learning. Specifically, we explore whether an adaptive margin (eMargin), adjusted based on a predefined similarity threshold, can improve the separation between adjacent but dissimilar time steps and subsequently lead to better performance in downstream tasks. Our study evaluates the impact of this modification on clustering and classification performance on three benchmark datasets. Our findings, however, indicate that achieving high scores on unsupervised clustering metrics does not necessarily imply that the learned embeddings are meaningful or effective in downstream tasks. Specifically, InfoNCE with eMargin consistently outperforms state-of-the-art baselines on unsupervised clustering metrics, but struggles to achieve competitive results in downstream classification with linear probing. The source code is publicly available at https://github.com/sfi-norwai/eMargin.
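The abstract describes a margin added to InfoNCE that activates when similarity crosses a predefined threshold. The paper's exact eMargin formulation is not given here, so the following is only a minimal sketch of the general idea, assuming a batch of paired time-step embeddings and a hypothetical margin that grows with how far a negative pair's cosine similarity exceeds the threshold (all function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def emargin_infonce(z_anchor, z_positive, temperature=0.1,
                    threshold=0.5, max_margin=0.2):
    """Sketch of an InfoNCE loss with a similarity-thresholded margin.

    z_anchor, z_positive: (N, D) embeddings; row i of each forms a positive pair.
    The margin term below is a hypothetical form, not the paper's definition.
    """
    # Normalize so dot products are cosine similarities.
    z_a = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    z_p = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
    sim = z_a @ z_p.T  # (N, N); diagonal entries are the positive pairs

    # Adaptive margin (assumed form): negatives more similar than `threshold`
    # get an extra penalty that scales with the excess similarity, making them
    # harder and pushing adjacent-but-dissimilar steps apart.
    margin = np.where(sim > threshold,
                      max_margin * (sim - threshold) / (1.0 - threshold),
                      0.0)
    np.fill_diagonal(margin, 0.0)  # positives are left unchanged

    logits = (sim + margin) / temperature
    # InfoNCE = softmax cross-entropy with the diagonal as the target class.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

As a sanity check, feeding identical views as anchor and positive should yield a much lower loss than random, unrelated positives, since the diagonal similarities dominate the softmax.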
Problem

Research questions and friction points this paper is trying to address.

Investigates adaptive margin in contrastive loss for time series
Evaluates impact on clustering and classification in benchmarks
Examines discrepancy between clustering metrics and downstream task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces adaptive margin in contrastive loss
Adjusts margin based on similarity threshold
Evaluates clustering and classification performance