MTLSI-Net: A Linear Semantic Interaction Network for Parameter-Efficient Multi-Task Dense Prediction

📅 2026-04-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently modeling global cross-task interactions in multi-task dense prediction, where standard self-attention suffers from quadratic computational complexity and becomes impractical for high-resolution features. To overcome this limitation, the authors propose a linear-complexity cross-task interaction architecture that uniquely integrates a shared global context matrix, semantic token distillation, and a dual-branch global-local fusion mechanism. The design further incorporates multi-scale query linear fusion and cross-window integrated attention to preserve spatial precision and global consistency while substantially reducing computational overhead. Extensive experiments on NYUDv2 and PASCAL-Context demonstrate state-of-the-art performance, confirming the method's efficiency and effectiveness in real-world multi-task dense prediction scenarios.
πŸ“ Abstract
Multi-task dense prediction aims to perform multiple pixel-level tasks simultaneously. However, capturing global cross-task interactions remains non-trivial due to the quadratic complexity of standard self-attention on high-resolution features. To address this limitation, we propose a Multi-Task Linear Semantic Interaction Network (MTLSI-Net), which facilitates cross-task interaction through linear attention. Specifically, MTLSI-Net incorporates three key components: a Multi-Task Multi-scale Query Linear Fusion Block, which captures cross-task dependencies across multiple scales with linear complexity using a shared global context matrix; a Semantic Token Distiller that compresses redundant features into compact semantic tokens, distilling essential cross-task knowledge; and a Cross-Window Integrated Attention Block that injects global semantics into local features via a dual-branch architecture, preserving both global consistency and spatial precision. These components collectively enable the network to capture comprehensive cross-task interactions with linear complexity and reduced parameters. Extensive experiments on NYUDv2 and PASCAL-Context demonstrate that MTLSI-Net achieves state-of-the-art performance, validating its effectiveness and efficiency in multi-task learning.
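The abstract's central trick is replacing quadratic self-attention with a shared global context matrix: instead of forming an N×N attention map over pixels, a kernelized attention precomputes a small d×d_v context from keys and values, making the cost linear in sequence length. The sketch below shows generic kernelized linear attention in this style (e.g. the elu+1 feature map from the linear-transformers literature), not the authors' MTLSI-Net implementation; all function names here are illustrative assumptions.

```python
import numpy as np

def feature_map(x):
    # Positive feature map phi(x) = elu(x) + 1, a common choice that keeps
    # attention weights non-negative without a softmax. (Illustrative choice;
    # the paper does not specify its kernel.)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    # Q, K: (N, d) queries/keys; V: (N, d_v) values, N = number of tokens/pixels.
    Qp, Kp = feature_map(Q), feature_map(K)
    # Shared global context matrix, shape (d, d_v): built once from all keys
    # and values in O(N * d * d_v), independent of N^2.
    context = Kp.T @ V
    # Per-feature normalizer, shape (d,).
    z = Kp.sum(axis=0)
    # Each query reads the shared context: out_i = phi(q_i) @ context, normalized.
    return (Qp @ context) / ((Qp @ z)[:, None] + eps)
```

Because `context` and `z` are computed once and reused by every query, the same global summary could in principle be shared across task-specific query sets, which is the flavor of cross-task interaction the abstract describes. The result is mathematically identical to row-normalizing the full N×N kernel matrix `phi(Q) @ phi(K).T`, just without ever materializing it.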
Problem

Research questions and friction points this paper is trying to address.

multi-task dense prediction
cross-task interaction
linear complexity
high-resolution features
parameter efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

linear attention
multi-task dense prediction
semantic token distillation
cross-task interaction
parameter-efficient