🤖 AI Summary
This study addresses a systematic bias in data-to-text generation against descriptions of long-tail (rare) entities by introducing TailNLG, the first multilingual benchmark designed specifically for long-tail entities. TailNLG covers English, Italian, and Spanish, is grounded in Wikidata, and is used to evaluate state-of-the-art large language models in zero-shot settings. The work shows that models consistently underperform on long-tail entities, exhibiting lower embedding-based scores and higher uncertainty, and that the degree of degradation varies across both models and languages. It further reveals that existing automatic evaluation metrics fail to reliably capture these performance disparities. By providing a new benchmark and analytical framework, this research advances efforts to make knowledge graphs accessible to non-expert users across languages.
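To make the task concrete, here is a minimal sketch of what zero-shot verbalization of Wikidata-style triples can look like. The triples, the prompt wording, and the entity chosen are illustrative assumptions, not the benchmark's actual data or prompt templates.

```python
# Minimal sketch of a zero-shot data-to-text prompt over Wikidata-style
# triples. The prompt template and facts below are illustrative assumptions,
# not TailNLG's actual prompts or data.

def build_prompt(triples: list[tuple[str, str, str]], language: str) -> str:
    """Format (subject, predicate, object) triples into a zero-shot prompt."""
    facts = "\n".join(f"- {s} | {p} | {o}" for s, p, o in triples)
    return (
        f"Write a short, fluent {language} description of the entity "
        f"using only the facts below.\n\nFacts:\n{facts}\n\nDescription:"
    )

# An illustrative long-tail entity: few facts, low popularity.
triples = [
    ("Rifugio Torino", "instance of", "mountain hut"),
    ("Rifugio Torino", "located in", "Aosta Valley"),
    ("Rifugio Torino", "elevation above sea level", "3375 metres"),
]

print(build_prompt(triples, "English"))
# Zero-shot means this prompt is sent to each model as-is, with no
# in-context examples of triple-to-text pairs.
```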
📝 Abstract
The automatic verbalization of structured knowledge is a key task for making knowledge graphs accessible to non-expert users and supporting retrieval-augmented generation systems. Although recent advances in Data-to-Text generation have improved multilingual coverage, little attention has been paid to potential biases in the verbalization of rare entities, commonly referred to as long-tail entities. In this work, we present the first systematic study of long-tail entities in Data-to-Text generation. We introduce TailNLG, a new multilingual benchmark in English, Italian, and Spanish, built from Wikidata and covering entities with varying levels of popularity. We evaluate three families of large language models in zero-shot settings and compare their performance on rare versus common entities, as well as against the established WebNLG benchmark. Our results reveal a consistent bias against long-tail entities: embedding-based scores are lower, and model uncertainty is higher for rare entities. We further show that the impact of long-tail entities varies across models and languages, and that existing evaluation metrics do not consistently capture these differences, highlighting the need for more reliable evaluation frameworks.
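The two signals the abstract cites, embedding-based scores and model uncertainty, can be illustrated with a short sketch. The model names and the mean per-token negative log-likelihood as an uncertainty proxy are assumptions for illustration; the paper's exact metrics may differ.

```python
# Sketch of the two evaluation signals mentioned above. Model choices and
# the NLL-based uncertainty definition are assumptions, not necessarily
# the metrics used in the paper.

import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1) Embedding-based score: cosine similarity between sentence embeddings
#    of the generated description and a reference.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
generated = "Rifugio Torino is a mountain hut in the Aosta Valley."
reference = "The Rifugio Torino is a high-altitude refuge in Aosta Valley."
emb = embedder.encode([generated, reference], convert_to_tensor=True)
print("embedding score:", util.cos_sim(emb[0], emb[1]).item())

# 2) Uncertainty proxy: mean per-token negative log-likelihood of the
#    generated text under a causal LM (higher = less confident).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok(generated, return_tensors="pt").input_ids
with torch.no_grad():
    loss = lm(ids, labels=ids).loss  # mean cross-entropy over tokens
print("mean token NLL:", loss.item())
```

Under the paper's finding, a long-tail entity would tend to yield a lower embedding score against its reference and a higher NLL than a popular entity described from comparable triples.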