DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection

📅 2024-07-12
🏛️ International Conference on Applications of Natural Language to Data Bases
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multilingual semantic role labeling (SRL) faces two core challenges: severe scarcity of annotated data for low-resource languages and the propensity of large-model-driven cross-lingual projection methods (XSRL) to generate uninterpretable hallucinated labels and suffer from cross-frame semantic drift. To address these, we propose a semantic divergence-aware projection refinement framework—the first to explicitly model semantic divergence in SRL—featuring a differentiable projection layer for structured correction. Our approach integrates semantic consistency modeling, adversarial divergence estimation, constraint-aware decoding, and a self-supervised hallucination discrimination module. Evaluated on the CoNLL-2005/2012 benchmarks, our method achieves absolute F1 improvements of 2.3–3.1%, reduces cross-predicate and cross-domain hallucination rates by 37%, and simultaneously enhances generalization and robustness.

Technology Category

Application Category

Problem

Research questions and friction points this paper is trying to address.

Addresses multilingual SRL model challenges due to scarce annotated corpora.
Reduces spurious role labels in SRL projection using LLMs.
Improves SRL projection accuracy with divergence-aware remediation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages linguistically-informed alignment remediation
Uses greedy First-Come First-Assign SRL projection
Introduces divergence metric for language adaptation
🔎 Similar Papers
No similar papers found.