🤖 AI Summary
Existing approaches fail to fully exploit pre-trained language models (e.g., CodeBERT) for recovering links between issue reports and code commits, primarily due to insufficient cross-modal semantic alignment and poor robustness under noisy inputs. Method: We propose a multi-template prompt-tuning framework featuring (i) a collaborative multi-template prompt mechanism that strengthens semantic alignment across heterogeneous modalities, and (ii) a lightweight FGSM-based adversarial perturbation strategy that improves robustness to noisy inputs. Additionally, we integrate contrastive learning with a dual-encoder architecture to boost generalization in low-resource settings. Contribution/Results: Evaluated on multiple open-source project datasets, our method achieves an F1 score of 89.7%, outperforming state-of-the-art methods by an average of 4.2%. The results demonstrate superior accuracy, stability, and practical applicability for issue–commit link recovery.
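The FGSM-based perturbation mentioned above can be sketched minimally: FGSM adds a small step in the sign direction of the loss gradient with respect to the input embedding. The sketch below uses NumPy and a toy logistic scorer; the names `fgsm_perturb`, the weight vector, and the loss are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def fgsm_perturb(embedding, grad, epsilon=0.05):
    """FGSM-style perturbation: x_adv = x + epsilon * sign(dL/dx).

    `embedding` and `grad` are assumed names for illustration; in the
    framework described above, the gradient would come from the
    link-prediction loss w.r.t. the token embeddings.
    """
    return embedding + epsilon * np.sign(grad)

# Toy example: binary cross-entropy loss on a linear scorer,
# with the input gradient computed analytically.
w = np.array([0.5, -1.0, 2.0])   # hypothetical scoring weights
x = np.array([1.0, 0.2, -0.5])   # illustrative input embedding
y = 1.0                          # positive issue-commit link label
p = 1.0 / (1.0 + np.exp(-(w @ x)))   # predicted link probability
grad_x = (p - y) * w                 # d(BCE)/dx for a logistic scorer
x_adv = fgsm_perturb(x, grad_x)      # perturbed embedding for training
```

Training on both `x` and `x_adv` (as in adversarial training) is what yields the robustness to noisy issue and commit text claimed above.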