🤖 AI Summary
To address the high training cost in eXtreme Multi-Label Classification (XMC) caused by computing losses over the full label space, this paper proposes an end-to-end trainable framework that unifies a dual encoder with One-vs-All (OvA) classifiers. The core innovation is the "Pick-Some-Label" (PSL) loss reduction mechanism: it dynamically selects the most discriminative subsets of positive and negative labels within each batch, making efficient use of the available supervision signals. PSL decouples the computational complexity from the label-space size and, for the first time, enables joint optimization of the dual encoder and OvA classifiers without the coupling constraints imposed by loss-function design. On million-label benchmarks, the method achieves state-of-the-art (SOTA) performance while accelerating training by 4-16x. Crucially, the entire training and inference pipeline fits on a single GPU.
📝 Abstract
Extreme Multi-label Classification (XMC) involves predicting a subset of relevant labels from an extremely large label space, given an input query and labels with textual features. Models developed for this problem have conventionally used a dual encoder (DE) to embed the queries and label texts, and one-vs-all (OvA) classifiers to rerank the labels shortlisted by the DE. While such methods have shown empirical success, a major drawback is their computational cost, often requiring up to 16 GPUs to train on the largest public dataset. Such a high cost is a consequence of calculating the loss over the entire label space. While shortlisting strategies have been proposed for classifiers, we aim to study such methods for the DE framework. In this work, we develop UniDEC, a loss-independent, end-to-end trainable framework which trains the DE and classifier together in a unified manner with a multi-class loss, while reducing the computational cost by 4-16x. This is achieved via the proposed pick-some-label (PSL) reduction, which computes the loss on only a subset of positive and negative labels. These labels are carefully chosen in-batch so as to maximise their supervisory signal. Not only does the proposed framework achieve state-of-the-art results on datasets with labels in the order of millions, it does so in a computationally and resource-efficient manner, on a single GPU. Code is made available at https://github.com/the-catalyst/UniDEC.
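To make the pick-some-label idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of a multi-class loss computed over only a chosen subset of in-batch labels. The selection heuristic (hardest positives and hardest negatives by score) and the function name `psl_loss` are illustrative assumptions; the point is that the cost depends on the subset size, not on the full label space.

```python
import math


def psl_loss(scores, positives, num_pos=2, num_neg=4):
    """Illustrative pick-some-label (PSL) style loss for one query.

    scores: dict mapping in-batch label id -> similarity score
    positives: set of label ids that are positive for this query

    Selects the lowest-scoring (hardest) positives and the
    highest-scoring (hardest) negatives, then applies a softmax
    cross-entropy over just that subset, so the cost scales with
    num_pos + num_neg rather than the label-space size.
    """
    pos = sorted((l for l in scores if l in positives),
                 key=lambda l: scores[l])[:num_pos]
    neg = sorted((l for l in scores if l not in positives),
                 key=lambda l: -scores[l])[:num_neg]
    subset = pos + neg

    # Numerically stable log-sum-exp over the selected subset only.
    m = max(scores[l] for l in subset)
    lse = m + math.log(sum(math.exp(scores[l] - m) for l in subset))

    # Multi-class loss: mean negative log-likelihood of the chosen positives.
    return sum(lse - scores[l] for l in pos) / len(pos)
```

For example, with four in-batch labels `{0: 2.0, 1: 0.5, 2: -1.0, 3: 1.5}` and positives `{0, 1}`, the loss is computed over all four selected labels; in a real batch with thousands of labels, only the selected handful would contribute.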