Adaptive von Mises-Fisher Likelihood Loss for Supervised Deep Time Series Hashing

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address information loss inherent in converting real-valued embeddings to binary codes in time-series semantic hashing, this paper proposes a deep supervised hashing method based on hyperspherical embedding. The method maps features onto the unit hypersphere and models class-level semantic structure via the von Mises–Fisher (vMF) distribution. An adaptive vMF likelihood loss is introduced to explicitly maximize inter-class separation and minimize intra-class dispersion, thereby enhancing both discriminability and semantic fidelity of hash codes. By integrating deep neural networks, hyperspherical geometric constraints, and differentiable discrete optimization, the approach achieves significant improvements over state-of-the-art hashing methods across multiple time-series benchmark datasets—yielding simultaneous gains in retrieval accuracy and efficiency. The implementation is publicly available.

📝 Abstract
Indexing time series by creating compact binary representations is a fundamental task in time series data mining. Recently, deep learning-based hashing methods have proven effective for indexing time series based on semantic meaning rather than just raw similarity. The purpose of deep hashing is to map samples with the same semantic meaning to identical binary hash codes, enabling more efficient search and retrieval. Unlike other supervised representation learning methods, supervised deep hashing requires a discretization step to convert real-valued representations into binary codes, but this can induce significant information loss. In this paper, we propose a von Mises-Fisher (vMF) hashing loss. The proposed deep hashing model maps data to an M-dimensional hyperspherical space to effectively reduce information loss and models each data class as points following distinct vMF distributions. The designed loss aims to maximize the separation between each modeled vMF distribution to provide a better way to maximize the margin between each semantically different data sample. Experimental results show that our method outperforms existing baselines. The implementation is publicly available at https://github.com/jmpq97/vmf-hashing
Problem

Research questions and friction points this paper is trying to address.

Indexing time series with compact binary representations for efficient search
Reducing information loss when converting real-valued representations to binary codes
Mapping semantically similar time series to identical hash codes effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses von Mises-Fisher likelihood loss
Maps data to M-dimensional hyperspherical space
Models each class as distinct vMF distribution
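To make the idea concrete, here is a minimal numpy sketch of a vMF-style likelihood loss on the unit hypersphere. It assumes a single concentration parameter `kappa` shared across classes (the paper's loss is adaptive, so this is a simplification, not the authors' implementation); with a shared kappa the vMF normalizing constants cancel and the loss reduces to a cosine-similarity softmax cross-entropy over class mean directions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Project vectors onto the unit hypersphere.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def vmf_softmax_loss(embeddings, labels, class_means, kappa=10.0):
    """Mean negative log-likelihood of each sample under a softmax over
    per-class vMF likelihoods f(x; mu_c, kappa) ∝ exp(kappa * mu_c @ x).

    embeddings:  (N, M) real-valued features (normalized internally)
    labels:      (N,)   integer class indices
    class_means: (C, M) one mean direction per class
    """
    z = l2_normalize(embeddings)        # (N, M) points on the sphere
    mu = l2_normalize(class_means)      # (C, M) class mean directions
    logits = kappa * z @ mu.T           # (N, C) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Minimizing this pulls each embedding toward its class mean direction (low intra-class dispersion) while the softmax pushes the class means apart (inter-class separation), after which the hyperspherical coordinates can be discretized into binary codes.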
Juan Manuel Perez
The University of Texas Rio Grande Valley, Edinburg, TX, USA
Kevin Garcia
The University of Texas Rio Grande Valley, Edinburg, TX, USA
Brooklyn Berry
The University of Texas Rio Grande Valley, Edinburg, TX, USA
Dongjin Song
Associate Professor, School of Computing, University of Connecticut
Artificial Intelligence · Machine Learning · Data Mining · Time Series · Graph Learning
Yifeng Gao
The University of Texas Rio Grande Valley, Edinburg, TX, USA