TailNLG: A Multilingual Benchmark Addressing Verbalization of Long-Tail Entities

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the systematic bias in data-to-text generation concerning long-tail (rare) entity descriptions by introducing TailNLG, the first multilingual benchmark specifically designed for long-tail entities. Covering English, Italian, and Spanish and grounded in Wikidata, TailNLG evaluates state-of-the-art large language models under zero-shot settings. The work systematically demonstrates that models consistently underperform on long-tail entities—exhibiting lower embedding scores and higher uncertainty—with performance degradation varying across both models and languages. Furthermore, it reveals that existing automatic evaluation metrics struggle to effectively capture these performance disparities. By providing a new benchmark and analytical framework, this research advances efforts to enhance the multilingual accessibility of knowledge graphs for non-expert users.
📝 Abstract
The automatic verbalization of structured knowledge is a key task for making knowledge graphs accessible to non-expert users and supporting retrieval-augmented generation systems. Although recent advances in Data-to-Text generation have improved multilingual coverage, little attention has been paid to potential biases in the verbalization of rare entities, frequently known as long-tail entities. In this work, we present the first systematic study of long-tail entities in Data-to-Text generation. We introduce TailNLG, a new multilingual benchmark in English, Italian, and Spanish, built from Wikidata and covering entities with varying levels of popularity. We evaluate three different families of large language models in zero-shot settings and compare their performance on rare versus common entities, as well as against the established WebNLG benchmark. Our results reveal a consistent bias against long-tail entities: embedding-based scores are lower, and model uncertainty is higher for rare entities. We further show that the impact of long-tail entities varies across models and languages, and that existing evaluation metrics do not consistently capture these differences, highlighting the need for more reliable evaluation frameworks.
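Concretely, the Data-to-Text task evaluated here takes a set of Wikidata-style triples and asks a model, zero-shot, to verbalize them as fluent text in a target language. A minimal sketch of how such a prompt can be assembled (the template wording and the example entity are illustrative assumptions, not the paper's actual prompts or data):

```python
# Illustrative sketch of zero-shot triple verbalization.
# The prompt template and example triples are hypothetical,
# not taken from the TailNLG benchmark itself.

def triples_to_prompt(triples, language="English"):
    """Build a zero-shot prompt asking an LLM to verbalize RDF-style triples."""
    facts = "\n".join(f"- {s} | {p} | {o}" for s, p, o in triples)
    return (
        f"Verbalize the following facts as one fluent {language} sentence.\n"
        f"Facts:\n{facts}\n"
        f"Text:"
    )

# Hypothetical long-tail entity in Wikidata style.
triples = [
    ("Q123_ExampleVillage", "instance of", "village"),
    ("Q123_ExampleVillage", "country", "Italy"),
]
print(triples_to_prompt(triples, language="Italian"))
```

The resulting prompt string would then be sent to each model family under identical zero-shot conditions, so that differences in output quality can be attributed to entity popularity rather than prompt design.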
Problem

Research questions and friction points this paper is trying to address:

- long-tail entities
- Data-to-Text generation
- multilingual benchmark
- verbalization bias
- knowledge graph
Innovation

Methods, ideas, or system contributions that make the work stand out:

- long-tail entities
- multilingual benchmark
- Data-to-Text generation
- zero-shot evaluation
- model bias
Authors

- Lia Draetta, University of Turin, Italy
- Michael Oliverio, University of Turin, Italy
- Virginia Ramón-Ferrer, Universidad Politécnica de Madrid, Spain
- Pier Felice Balestrucci, University of Turin, Italy
- Flaviana Corallo, University of Turin, Italy
- Carlos Badenes-Olmedo, Universidad Politécnica de Madrid, Spain
- Alessandro Mazzei, University of Turin, Italy
- Marco Antonio Stranisci, University of Turin (Natural Language Processing, Semantic Web)
- Rossana Damiano, Associate Professor, Università di Torino