🤖 AI Summary
Multilingual semantic role labeling (SRL) faces two core challenges: severe scarcity of annotated data for low-resource languages, and the tendency of large-model-driven cross-lingual projection methods (XSRL) to generate uninterpretable hallucinated labels and suffer from cross-frame semantic drift. To address these, we propose a semantic divergence-aware projection refinement framework, the first to explicitly model semantic divergence in SRL, featuring a differentiable projection layer for structured correction. Our approach integrates semantic consistency modeling, adversarial divergence estimation, constraint-aware decoding, and a self-supervised hallucination discrimination module. On the CoNLL-2005/2012 benchmarks, our method achieves absolute F1 improvements of 2.3–3.1%, reduces cross-predicate and cross-domain hallucination rates by 37%, and improves both generalization and robustness.
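The core projection-and-filtering idea can be sketched as a toy rule-based analogue: project source-language role spans to target tokens through a word alignment, and discard projections whose divergence score is too high. This is a minimal illustration only; the paper's modules (the differentiable projection layer, adversarial divergence estimation) are learned, and every name and heuristic below is hypothetical.

```python
# Hypothetical sketch of divergence-aware cross-lingual label projection.
# The divergence heuristic here (unaligned-token fraction) is a stand-in
# for the paper's learned adversarial divergence estimator.

def project_roles(src_roles, alignment, max_divergence=0.5):
    """Project source role spans onto target tokens via a word alignment,
    dropping projections whose divergence score exceeds the threshold.

    src_roles: dict mapping role label -> list of source token indices
    alignment: dict mapping source token index -> list of target indices
    """
    projected = {}
    for role, src_idxs in src_roles.items():
        # Union of target tokens aligned to any source token of the span.
        tgt_idxs = sorted({t for s in src_idxs for t in alignment.get(s, [])})
        # Divergence heuristic: fraction of source tokens with no aligned
        # target token; high unaligned mass suggests semantic drift.
        unaligned = sum(1 for s in src_idxs if not alignment.get(s))
        divergence = unaligned / len(src_idxs)
        if tgt_idxs and divergence <= max_divergence:
            projected[role] = tgt_idxs
    return projected

# Example: ARG0 projects cleanly; ARG1 is fully unaligned and is filtered
# out instead of hallucinating a target span.
src_roles = {"ARG0": [0, 1], "ARG1": [3]}
alignment = {0: [0], 1: [1], 3: []}
print(project_roles(src_roles, alignment))  # {'ARG0': [0, 1]}
```

A learned variant would replace the unaligned-fraction score with a model-estimated divergence and make the filtering soft (a gate) so the whole projection step stays differentiable.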