DeepMTL2R: A Library for Deep Multi-task Learning to Rank

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of heterogeneous and potentially conflicting relevance criteria in learning-to-rank tasks by proposing a Transformer-based multi-task ranking framework. Leveraging self-attention to model long-range dependencies, the framework enables context-aware fusion of multi-source signals and supports multi-objective optimization to identify Pareto-optimal ranking models. The authors state that this is the first study to integrate self-attention into multi-task ranking; the framework further incorporates 21 state-of-the-art multi-task learning algorithms, establishing a reproducible, multi-strategy benchmark platform. Extensive experiments on public datasets demonstrate competitive performance and reveal the trade-offs among multiple objectives. The implementation code is publicly available.
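Pareto optimality is the core idea behind the multi-objective claims above: a ranking model is Pareto-optimal when no other model matches it on every objective and beats it on at least one. The sketch below (not the library's API; the function names and example scores are illustrative) shows how a Pareto front can be extracted from a set of candidate models, each scored on two hypothetical per-task metrics where higher is better:

```python
def is_dominated(a, b):
    """True if model b dominates model a: b is at least as good on every
    objective and strictly better on at least one (higher is better)."""
    return (all(bi >= ai for ai, bi in zip(a, b))
            and any(bi > ai for ai, bi in zip(a, b)))

def pareto_front(models):
    """Return the models not dominated by any other candidate.
    Each model is a tuple of per-objective scores (e.g. NDCG per task)."""
    return [m for m in models
            if not any(is_dominated(m, other) for other in models if other != m)]

# Hypothetical per-task scores for four trained ranking models.
candidates = [(0.70, 0.50), (0.60, 0.65), (0.55, 0.60), (0.72, 0.40)]
front = pareto_front(candidates)
# (0.55, 0.60) is dominated by (0.60, 0.65); the other three are
# incomparable trade-offs, so they all lie on the Pareto front.
```

Visualizing this front is exactly the kind of objective trade-off plot the summary refers to.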

📝 Abstract
This paper presents DeepMTL2R, an open-source deep learning framework for Multi-task Learning to Rank (MTL2R), where multiple relevance criteria must be optimized simultaneously. DeepMTL2R integrates heterogeneous relevance signals into a unified, context-aware model by leveraging the self-attention mechanism of transformer architectures, enabling effective learning across diverse and potentially conflicting objectives. The framework includes 21 state-of-the-art multi-task learning algorithms and supports multi-objective optimization to identify Pareto-optimal ranking models. By capturing complex dependencies and long-range interactions among items and labels, DeepMTL2R provides a scalable and expressive solution for modern ranking systems and facilitates controlled comparisons across MTL strategies. We demonstrate its effectiveness on a publicly available dataset, report competitive performance, and visualize the resulting trade-offs among objectives. DeepMTL2R is available at https://github.com/amazon-science/DeepMTL2R.
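The abstract's central mechanism is self-attention over the candidate list, so each item's score depends on the other items in context, with one scoring head per relevance objective. A minimal NumPy sketch of that idea follows; the function names, single-head attention, and linear per-task heads are simplifying assumptions for illustration, not DeepMTL2R's actual architecture:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over a list of items.
    X: (n_items, d) item-feature matrix; returns context-aware (n_items, d)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise item interactions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ X                               # each item attends to the whole list

def multi_task_scores(X, task_heads):
    """Score the same attended item representations under several objectives.
    task_heads: dict mapping task name -> (d,) weight vector (one head per task)."""
    H = self_attention(X)
    return {task: H @ w for task, w in task_heads.items()}

rng = np.random.default_rng(0)
items = rng.normal(size=(5, 8))                      # 5 candidate items, 8 features
heads = {"relevance": rng.normal(size=8),
         "freshness": rng.normal(size=8)}
scores = multi_task_scores(items, heads)             # one score vector per objective
```

Because both heads share the attended representations, training them jointly is where the conflicting-objective problem, and the 21 MTL strategies for resolving it, come in.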
Problem

Research questions and friction points this paper is trying to address.

Multi-task Learning to Rank
Relevance Criteria
Multi-objective Optimization
Ranking Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task Learning to Rank
Transformer Self-attention
Pareto-optimal Ranking
Heterogeneous Relevance Signals
Multi-objective Optimization