Hierarchical Retrieval: The Geometry and a Pretrain-Finetune Recipe

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dual-encoder (DE) models embed queries and documents into a shared Euclidean space, but this geometry limits their expressive power for hierarchical retrieval (HR), where the relevant documents for a query are all of its ancestors in a document hierarchy. The paper first proves that DEs can solve HR provided the embedding dimension grows linearly in the depth of the hierarchy and logarithmically in the number of documents. Empirically, however, DEs trained on matching query-document pairs exhibit a lost-in-the-long-distance phenomenon: recall degrades for ancestors far from the query (only 19% on long-distance WordNet pairs). The authors propose a pretrain-finetune recipe that raises long-distance recall to 76% without sacrificing performance on nearby documents, and show it also improves retrieval of relevant products on a shopping queries dataset.

📝 Abstract
Dual encoder (DE) models, where a pair of matching query and document are embedded into similar vector representations, are widely used in information retrieval due to their simplicity and scalability. However, the Euclidean geometry of the embedding space limits the expressive power of DEs, which may compromise their quality. This paper investigates such limitations in the context of hierarchical retrieval (HR), where the document set has a hierarchical structure and the matching documents for a query are all of its ancestors. We first prove that DEs are feasible for HR as long as the embedding dimension is linear in the depth of the hierarchy and logarithmic in the number of documents. Then we study the problem of learning such embeddings in a standard retrieval setup where DEs are trained on samples of matching query and document pairs. Our experiments reveal a lost-in-the-long-distance phenomenon, where retrieval accuracy degrades for documents further away in the hierarchy. To address this, we introduce a pretrain-finetune recipe that significantly improves long-distance retrieval without sacrificing performance on closer documents. We experiment on a realistic hierarchy from WordNet for retrieving documents at various levels of abstraction, and show that pretrain-finetune boosts the recall on long-distance pairs from 19% to 76%. Finally, we demonstrate that our method improves retrieval of relevant products on a shopping queries dataset.
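To make the HR task setup concrete, here is a minimal sketch of the ground-truth labeling the abstract describes: documents form a hierarchy, and the matching documents for a query about a node are all of its ancestors. The taxonomy and names below are hypothetical, not from the paper.

```python
# Toy hierarchy: child -> parent edges (the root has no entry).
# In HR, a query about a node should retrieve every ancestor of that node.
PARENT = {
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
    "cat": "feline",
    "feline": "mammal",
}

def ancestors(node):
    """Return all ancestors of `node`, nearest first."""
    chain = []
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

print(ancestors("dog"))  # ['canine', 'mammal', 'animal']
```

The "long-distance" pairs whose recall the paper measures are the entries deep in this chain, e.g. ("dog", "animal") at distance 3, which is where trained dual encoders degrade.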
Problem

Research questions and friction points this paper is trying to address.

- Addresses limitations of dual encoder models in hierarchical retrieval tasks
- Solves the lost-in-the-long-distance phenomenon that degrades distant-document retrieval
- Improves retrieval accuracy for documents at various hierarchy levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Feasibility proof: DEs can solve hierarchical retrieval with embedding dimension linear in hierarchy depth and logarithmic in the number of documents
- Pretrain-finetune recipe that improves long-distance retrieval without hurting short-distance accuracy
- Evaluation on a WordNet hierarchy and a shopping queries dataset
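One plausible ingredient of such a recipe is balancing training pairs across ancestor distances, so that long-range (query, ancestor) pairs are seen as often as short-range ones. The sketch below illustrates distance-uniform positive sampling; it is an assumption for illustration, not the paper's exact pretraining procedure.

```python
import random

# Hypothetical distance-balanced sampling: instead of letting nearby
# ancestors dominate the training pairs, draw the ancestor distance
# uniformly, so long-distance pairs get equal exposure.
PARENT = {"dog": "canine", "canine": "mammal", "mammal": "animal"}

def ancestor_chain(node):
    """All ancestors of `node`, nearest first."""
    chain = []
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def sample_pair(node, rng=random):
    """Draw (query, positive doc, distance) with distance uniform in 1..depth."""
    chain = ancestor_chain(node)
    idx = rng.randrange(len(chain))     # uniform over ancestor distances
    return node, chain[idx], idx + 1

random.seed(0)
query, doc, dist = sample_pair("dog")
```

Under naive sampling from observed query-document clicks, distance-1 pairs would typically dominate; uniform-distance sampling is one simple way to expose the model to the long-range relations on which recall otherwise collapses.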