🤖 AI Summary
This work addresses Extreme Zero-shot Multi-label Classification (EZ-XMC): the task of efficiently and accurately retrieving the relevant labels for a document given only its raw text and a large predefined label set, with no labeled training data. We propose the first lightweight dual-encoder framework to leverage the zero-shot discriminative ability of large language models (LLMs). Instead of training on low-quality generated pseudo-labels, our method prompts an LLM to directly assess document–label relevance and distills this relevance signal into the training objective of a compact dual-encoder. At inference, the LLM is discarded entirely, enabling millisecond-scale retrieval. Our approach significantly outperforms state-of-the-art methods across multiple benchmarks while remaining scalable, combining LLM-level semantic understanding with the efficiency required for large-scale deployment.
📝 Abstract
Extreme Multi-label Classification (XMC) is the task of assigning the most relevant labels to an instance from a predefined label set. Extreme Zero-shot XMC (EZ-XMC) is a special setting of XMC in which no supervision is provided: only the instances (the raw text of the documents) and the predetermined label set are given. This setting is designed to address cold-start problems in categorization and recommendation. Traditional state-of-the-art methods extract pseudo-labels from document titles or segments and use them to train a zero-shot bi-encoder model; the main issue with these generated labels is their misalignment with the tagging task. In this work, we propose a framework that trains a small bi-encoder model with feedback from a large language model (LLM); the bi-encoder encodes documents and labels into embeddings for retrieval. Our approach leverages the zero-shot ability of the LLM to assess the relevance between a document and a label, instead of relying on the low-quality labels extracted from the document itself, and it guarantees fast inference because the LLM is not involved at test time. Our approach outperforms state-of-the-art methods on various datasets while retaining a comparable training time on large datasets.
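The pipeline described above can be sketched in a minimal, self-contained form: an LLM-as-teacher scores document–label pairs, those scores are distilled into two small encoders via a binary cross-entropy loss on dot-product similarity, and at inference the teacher disappears and retrieval reduces to a matrix–vector product over precomputed label embeddings. Everything here is illustrative, not the paper's actual method: the toy corpus, the bag-of-words features, the linear encoders, and in particular the `teacher` matrix, which stands in for the zero-shot LLM prompting so the sketch runs without any model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: three documents and three candidate labels.
docs = [
    "neural networks learn representations from data",
    "bake the cake with flour sugar and butter",
    "the team scored a late goal to win the match",
]
labels = ["machine learning", "cooking recipes", "football sports"]

# Stand-in for the zero-shot LLM judge: in the real framework an LLM is
# prompted for document-label relevance; here a fixed 0/1 matrix plays
# the teacher's role (doc i is relevant to label i only).
teacher = np.eye(len(docs))

# Bag-of-words features over the toy vocabulary (a real system would use
# text encoders; this keeps the sketch dependency-free).
vocab = sorted({w for text in docs + labels for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def feats(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        v[index[w]] += 1.0
    return v / np.linalg.norm(v)

X = np.stack([feats(d) for d in docs])    # document features
Y = np.stack([feats(l) for l in labels])  # label features

# Two small linear encoders: the "compact dual-encoder" being distilled.
D = 16
W_doc = rng.normal(scale=0.1, size=(D, len(vocab)))
W_lab = rng.normal(scale=0.1, size=(D, len(vocab)))

# Distill the teacher's relevance scores with a binary cross-entropy loss
# on the dot-product similarity; gradients are written out by hand.
lr = 0.3
for _ in range(400):
    for i in range(len(docs)):
        for j in range(len(labels)):
            e_d, e_l = W_doc @ X[i], W_lab @ Y[j]
            p = 1.0 / (1.0 + np.exp(-(e_d @ e_l)))  # predicted relevance
            g = p - teacher[i, j]                   # dBCE/dscore
            W_doc -= lr * g * np.outer(e_l, X[i])
            W_lab -= lr * g * np.outer(e_d, Y[j])

# Inference: the teacher is gone; label embeddings are precomputed once,
# and retrieval is a single matrix-vector product followed by argmax.
label_emb = Y @ W_lab.T

def retrieve(doc_text):
    q = W_doc @ feats(doc_text)
    return int(np.argmax(label_emb @ q))
```

The split between training and inference mirrors the abstract's efficiency claim: the expensive relevance judge is consulted only while fitting the encoders, so serving cost is independent of the teacher and scales with a nearest-neighbor search over label embeddings.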