Probabilistic Hash Embeddings for Online Learning of Categorical Features

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two challenges in online learning: dynamically growing, unbounded categorical vocabularies, and the performance degradation of conventional deterministic embeddings caused by order sensitivity and catastrophic forgetting. The authors propose a probabilistic hash embedding (PHE) framework that integrates randomized hashing with Bayesian online learning, ensuring order invariance, fixed-parameter scalability, and adaptation to an evolving vocabulary. The method employs an efficient, scalable incremental inference algorithm to update the posterior distributions of hash embeddings and other latent variables in real time. Extensive experiments across streaming classification, sequential modeling, and recommendation tasks demonstrate significant improvements over state-of-the-art approaches at only 2–4× the memory of a one-hot embedding table, achieving both high accuracy and computational efficiency.

📝 Abstract
We study streaming data with categorical features where the vocabulary of categorical feature values is changing and can even grow unboundedly over time. Feature hashing is commonly used as a pre-processing step to map these categorical values into a feature space of fixed size before learning their embeddings. While these methods have been developed and evaluated for offline or batch settings, in this paper we consider online settings. We show that deterministic embeddings are sensitive to the arrival order of categories and suffer from forgetting in online learning, leading to performance deterioration. To mitigate this issue, we propose a probabilistic hash embedding (PHE) model that treats hash embeddings as stochastic and applies Bayesian online learning to learn incrementally from data. Based on the structure of PHE, we derive a scalable inference algorithm to learn model parameters and infer/update the posteriors of hash embeddings and other latent variables. Our algorithm (i) can handle an evolving vocabulary of categorical items, (ii) is adaptive to new items without forgetting old items, (iii) is implementable with a bounded set of parameters that does not grow with the number of distinct observed values on the stream, and (iv) is invariant to the item arrival order. Experiments in classification, sequence modeling, and recommendation systems in online learning setups demonstrate the superior performance of PHE while maintaining high memory efficiency (consuming as low as 2~4 memory of a one-hot embedding table). Supplementary materials are at https://github.com/aodongli/probabilistic-hash-embeddings.
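The fixed-size property described above comes from the hashing step: every category string, including values never seen before, is mapped into a bounded table of embedding rows. The sketch below illustrates that pre-processing step only (it is not the paper's PHE model); the bucket count, seed, and table sizes are illustrative choices, not values from the paper.

```python
import hashlib

import numpy as np


def hash_bucket(category: str, num_buckets: int, seed: int = 0) -> int:
    """Map an arbitrary category string to a fixed bucket index.

    The table size is fixed up front, so categories that arrive later
    on the stream reuse existing rows instead of growing memory.
    """
    digest = hashlib.md5(f"{seed}:{category}".encode()).hexdigest()
    return int(digest, 16) % num_buckets


NUM_BUCKETS, DIM = 8, 4  # illustrative sizes
rng = np.random.default_rng(0)
table = rng.normal(size=(NUM_BUCKETS, DIM))  # bounded parameter set


def embed(category: str) -> np.ndarray:
    """Look up the embedding row for a (possibly unseen) category."""
    return table[hash_bucket(category, NUM_BUCKETS)]


# A brand-new value still maps into the same bounded table.
v_old = embed("user_123")
v_new = embed("never_seen_before_item")
assert v_old.shape == v_new.shape == (DIM,)
```

Because the number of rows is fixed, distinct categories can collide in a bucket; the paper's probabilistic treatment places distributions over these shared rows rather than point estimates.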
Problem

Research questions and friction points this paper is trying to address.

Handles evolving categorical vocabularies in streaming data environments
Prevents forgetting old items while adapting to new online data
Eliminates sensitivity to item arrival order in online learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic hash embeddings treat embeddings as stochastic
Bayesian online learning incrementally updates embeddings from data
Scalable inference algorithm handles evolving vocabulary without forgetting
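The Bayesian online update behind these bullets can be illustrated with the simplest conjugate case: a Gaussian posterior over one embedding dimension, updated one observation at a time. This is a hand-rolled sketch of recursive Bayesian updating, not the paper's actual inference algorithm; the prior and noise variances are assumed for illustration.

```python
import numpy as np


def bayes_update(mu: float, var: float, obs: float, obs_var: float):
    """One recursive Gaussian update (precision-weighted average).

    Yesterday's posterior becomes today's prior, so evidence from old
    items is retained rather than overwritten (no forgetting), and the
    final posterior does not depend on the order observations arrive.
    """
    new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    new_mu = new_var * (mu / var + obs / obs_var)
    return new_mu, new_var


# Broad prior over a single embedding dimension (assumed values).
stream = [1.2, 0.8, 1.0]

mu, var = 0.0, 10.0
for y in stream:
    mu, var = bayes_update(mu, var, y, obs_var=1.0)

# The same observations in reverse order yield the same posterior,
# illustrating the order-invariance property.
mu_rev, var_rev = 0.0, 10.0
for y in reversed(stream):
    mu_rev, var_rev = bayes_update(mu_rev, var_rev, y, obs_var=1.0)

assert np.isclose(mu, mu_rev) and np.isclose(var, var_rev)
```

Order invariance falls out because each update just adds a precision term and a precision-weighted mean term, and sums are commutative; the paper extends this idea to full hash-embedding posteriors with a scalable incremental inference scheme.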