UniDEC: Unified Dual Encoder and Classifier Training for Extreme Multi-Label Classification

📅 2024-05-04
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high training cost in eXtreme Multi-Label Classification (XMC) caused by computing losses over the full label space, this paper proposes an end-to-end trainable framework that unifies a dual encoder with One-vs-All (OvA) classifiers. The core innovation is the "Pick-Some-Label" (PSL) loss reduction: it dynamically selects the most discriminative subsets of positive and negative labels within each batch, making efficient use of the available supervision signal. PSL decouples the computational cost from the label-space size and, for the first time, enables joint optimization of the dual encoder and OvA classifiers without the coupling constraints imposed by loss-function design. On million-label benchmarks, the method achieves state-of-the-art (SOTA) performance while reducing training cost by 4–16×. Crucially, the entire training and inference pipeline fits on a single GPU.

๐Ÿ“ Abstract
Extreme Multi-label Classification (XMC) involves predicting a subset of relevant labels from an extremely large label space, given an input query and labels with textual features. Models developed for this problem have conventionally made use of a dual encoder (DE) to embed the queries and label texts, and one-vs-all (OvA) classifiers to rerank the labels shortlisted by the DE. While such methods have shown empirical success, a major drawback is their computational cost, often requiring up to 16 GPUs to train on the largest public dataset. Such a high cost is a consequence of calculating the loss over the entire label space. While shortlisting strategies have been proposed for classifiers, we aim to study such methods for the DE framework. In this work, we develop UniDEC, a loss-independent, end-to-end trainable framework which trains the DE and classifier together in a unified manner with a multi-class loss, while reducing the computational cost by 4-16x. This is done via the proposed pick-some-label (PSL) reduction, which aims to compute the loss on only a subset of positive and negative labels. These labels are carefully chosen in-batch so as to maximise their supervisory signals. Not only does the proposed framework achieve state-of-the-art results on datasets with labels in the order of millions, it is also computationally and resource efficient in achieving this performance on a single GPU. Code is made available at https://github.com/the-catalyst/UniDEC.
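The pick-some-label idea described above can be sketched in plain Python. This is a toy illustration only, not the authors' implementation: the function name, the uniform sampling of labels, and the per-query score dictionary are all assumptions made for the example. The point it shows is that the multi-class (softmax) loss is computed over a small picked subset of positives and in-batch negatives, so its cost does not depend on the full label-space size.

```python
import math
import random

def pick_some_label_loss(scores, positives, num_picked_pos=2, num_neg=4, seed=0):
    """Toy sketch of a pick-some-label (PSL) multi-class loss for one query.

    scores:    dict mapping label id -> query-label similarity score
               (in practice, scores for the labels present in the batch)
    positives: set of label ids relevant to the query

    Only a picked subset of positives and negatives enters the loss,
    decoupling the cost from the total number of labels.
    """
    rng = random.Random(seed)

    # Pick a subset of positive labels (here: uniformly at random;
    # the paper picks them to maximise the supervisory signal).
    pos = sorted(positives)
    picked_pos = rng.sample(pos, min(num_picked_pos, len(pos)))

    # Pick in-batch negatives: batch labels that are not positives.
    negs = sorted(l for l in scores if l not in positives)
    picked_neg = rng.sample(negs, min(num_neg, len(negs)))

    # Multi-class (softmax) loss over the picked subset:
    #   -mean over picked positives p of
    #     log( exp(s_p) / (exp(s_p) + sum over picked negatives n of exp(s_n)) )
    neg_exp = sum(math.exp(scores[n]) for n in picked_neg)
    loss = 0.0
    for p in picked_pos:
        pos_exp = math.exp(scores[p])
        loss += -math.log(pos_exp / (pos_exp + neg_exp))
    return loss / len(picked_pos)
```

For example, a query whose positive label scores far above the picked negatives yields a loss near zero, while uninformative (equal) scores yield a loss of roughly log of the subset size, regardless of how many labels exist overall.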
Problem

Research questions and friction points this paper is trying to address.

High computational cost of XMC training, caused by computing the loss over the entire label space (up to 16 GPUs on the largest public dataset).
Dual encoder and OvA classifier are conventionally trained separately, without a unified, loss-independent framework.
Reaching state-of-the-art accuracy within a single-GPU budget.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified training of dual encoder and classifier
Pick-some-label reduction for computational efficiency
State-of-the-art results on million-label datasets