🤖 AI Summary
This work addresses a limitation of narrative similarity prediction: the assumption of a single ground truth, which fails to capture the inherent multiplicity of valid interpretations. To overcome this, the authors propose a role-based ensemble approach leveraging large language models, incorporating 31 distinct personas spanning both practitioner and lay perspectives. Treating diverse viewpoints as complementary rather than noisy, the method aggregates persona judgments by majority voting to enhance robustness. Evaluated on the SemEval-2026 Task 4 dataset, the approach achieves an accuracy of 0.705, with performance improving markedly as ensemble size increases. Although individual practitioner personas exhibit weaker standalone performance, their low error correlation yields substantial ensemble gains. The study further highlights the inadequacy of current evaluation benchmarks in accounting for interpretive diversity and offers a paradigm for modeling semantic similarity through multi-perspective integration.
📝 Abstract
Predicting narrative similarity can be understood as an inherently interpretive task: different, equally valid readings of the same text can produce divergent interpretations and thus different similarity judgments, posing a fundamental challenge for semantic evaluation benchmarks that encode a single ground truth. Rather than treating this multiperspectivity as noise to be eliminated, we propose to incorporate it into the decision-making process of predictive systems. To explore this strategy, we created an ensemble of 31 LLM personas, ranging from practitioners following interpretive frameworks to more intuitive, lay-style characters. Our experiments were conducted on the SemEval-2026 Task 4 dataset, where the system achieved an accuracy of 0.705. Accuracy improves with ensemble size, consistent with Condorcet Jury Theorem-like dynamics under weakened independence. Practitioner personas perform worse individually but produce less correlated errors, yielding larger ensemble gains under majority voting. Our error analysis reveals a consistent negative association between gender-focused interpretive vocabulary and accuracy across all persona categories, suggesting either attention to dimensions not relevant for the benchmark or valid interpretations absent from the ground truth. This finding underscores the need for evaluation frameworks that account for interpretive plurality.
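The Condorcet-style dynamic behind the ensemble gains can be illustrated with a minimal simulation. This sketch is not the paper's system: it assumes independent binary voters (the real personas are LLM calls with only weakly independent errors), and the per-voter accuracy of 0.62 and all function names are illustrative, not taken from the paper.

```python
import random
from collections import Counter

def majority_vote(votes):
    """Return the most frequent label; ties break toward the first-seen label."""
    return Counter(votes).most_common(1)[0][0]

def simulate_ensemble_accuracy(n_personas, p_correct, n_items=2000, seed=0):
    """Condorcet-style toy model: each of n_personas voters independently
    labels a binary item correctly with probability p_correct; the ensemble
    answers by majority vote. Returns the ensemble's empirical accuracy."""
    rng = random.Random(seed)
    n_right = 0
    for _ in range(n_items):
        truth = rng.choice([0, 1])
        votes = [truth if rng.random() < p_correct else 1 - truth
                 for _ in range(n_personas)]
        if majority_vote(votes) == truth:
            n_right += 1
    return n_right / n_items

# With voters better than chance, accuracy rises with ensemble size:
acc_single = simulate_ensemble_accuracy(1, 0.62)
acc_ensemble = simulate_ensemble_accuracy(31, 0.62)
```

Under these idealized assumptions a 31-voter majority clearly outperforms any single voter; the paper's weaker claim is that a similar (attenuated) effect persists when independence only partially holds, which is why low error correlation among practitioner personas matters more than their individual accuracy.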