🤖 AI Summary
In multi-task reinforcement learning (MTRL), severe inter-task gradient conflicts critically impair both the generalization capability and the training efficiency of unified policies. Existing binary masking approaches apply coarse-grained suppression that discards critical parameters, while fixed sparsity strategies fail to adapt to the heterogeneous conflict levels across tasks. To address these limitations, the authors propose a Fisher-guided dynamic soft masking mechanism: it quantifies parameter importance via the Fisher information matrix, sets task-specific thresholds with an interquartile range (IQR)-based adaptive scheme, and evolves sparsity over training with an asymmetric cosine annealing schedule, enabling task-aware, fine-grained conflict mitigation and balanced knowledge sharing. On the Meta-World MT50 benchmark, the method outperforms the state of the art (SOTA) by 7.6%, and by up to 10.5% on the suboptimal-quality dataset, demonstrating significant advances in both generalization performance and sample efficiency.
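The masking mechanism described above can be sketched in a few lines. This is a minimal illustration of Fisher-weighted soft masking with an IQR-based conflict threshold, not the paper's implementation; the function name `soft_conflict_mask`, the `floor` parameter, and the sign-disagreement conflict score are all assumptions for illustration.

```python
import numpy as np

def soft_conflict_mask(task_grad, shared_grad, fisher, floor=0.1):
    """Illustrative sketch (not the paper's code) of Fisher-guided soft masking.

    Conflict score: sign disagreement between a task's gradient and the
    aggregated gradient. The IQR upper fence on the conflict scores picks
    which parameters to attenuate; parameters with high Fisher importance
    keep mask values near 1 instead of being hard-zeroed.
    """
    # Positive where the task gradient opposes the shared update direction.
    conflict = np.maximum(-task_grad * shared_grad, 0.0)
    q1, q3 = np.percentile(conflict, [25, 75])
    fence = q3 + 1.5 * (q3 - q1)  # IQR upper fence as the conflict threshold
    # Normalize Fisher importance to [0, 1] so suppression is graded.
    importance = fisher / (fisher.max() + 1e-12)
    mask = np.ones_like(task_grad)
    conflicting = conflict > fence
    # Soft (not binary) suppression: important parameters are attenuated less.
    mask[conflicting] = floor + (1.0 - floor) * importance[conflicting]
    return mask
```

Compared with a binary mask, conflicting but important parameters retain a nonzero weight, which is the "soft" conflict resolution the summary refers to.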
📝 Abstract
Multi-task reinforcement learning (MTRL) seeks to learn a unified policy for diverse tasks, but it often suffers from gradient conflicts across tasks. Existing masking-based methods attempt to mitigate such conflicts by assigning task-specific parameter masks. However, our empirical study shows that coarse-grained binary masks over-suppress critical conflicting parameters, hindering knowledge sharing across tasks. Moreover, different tasks exhibit varying conflict levels, yet existing methods apply a one-size-fits-all fixed sparsity strategy, which proves inadequate for maintaining training stability and performance. These limitations hinder the model's generalization and learning efficiency.
To address these issues, we propose SoCo-DT, a Soft Conflict-resolution method based on parameter importance. By leveraging Fisher information, mask values are dynamically adjusted to retain important parameters while suppressing conflicting ones. In addition, we introduce a dynamic sparsity adjustment strategy based on the interquartile range (IQR), which constructs task-specific thresholding schemes from the distributions of conflict and harmony scores observed during training. To enable adaptive sparsity evolution throughout training, we further incorporate an asymmetric cosine annealing schedule that continuously updates the threshold. Experimental results on the Meta-World benchmark show that SoCo-DT outperforms the state-of-the-art method by 7.6% on MT50 and by 10.5% on the suboptimal dataset, demonstrating its effectiveness in mitigating gradient conflicts and improving overall multi-task performance.
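The asymmetric cosine annealing idea can be illustrated with a short sketch: the threshold follows two cosine segments of different lengths, so it changes quickly in one phase of training and slowly in the other. The function name `asymmetric_cosine_threshold` and the `warm_frac` split parameter are illustrative assumptions, not the paper's API.

```python
import math

def asymmetric_cosine_threshold(step, total_steps, t_start, t_end, warm_frac=0.3):
    """Illustrative asymmetric cosine annealing of a sparsity threshold.

    A fast cosine segment over the first `warm_frac` of training moves the
    threshold halfway from t_start toward t_end; a slower cosine segment
    covers the remaining steps. The two segments have different lengths,
    hence "asymmetric".
    """
    split = int(total_steps * warm_frac)
    t_mid = t_start + 0.5 * (t_end - t_start)  # handover value between segments
    if step < split:
        # Phase 1: cosine-anneal from t_start to t_mid over `split` steps.
        p = step / max(split, 1)
        return t_mid + 0.5 * (t_start - t_mid) * (1 + math.cos(math.pi * p))
    # Phase 2: cosine-anneal from t_mid to t_end over the remaining steps.
    p = (step - split) / max(total_steps - split, 1)
    return t_end + 0.5 * (t_mid - t_end) * (1 + math.cos(math.pi * p))
```

Under this sketch, the threshold starts at `t_start`, reaches the midpoint at 30% of training, and ends at `t_end`, giving a schedule that is steep early and gentle late.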