🤖 AI Summary
This work proposes the Voting Information Extraction (VotIE) task, which aims to automatically extract structured voting events from heterogeneous, free-form Portuguese municipal meeting minutes. Building on the newly constructed CitiLink corpus, the study establishes the first benchmark for this task and compares fine-tuned encoder models, notably XLM-R-CRF, with few-shot large language models (LLMs) in both in-domain and cross-municipality settings. Experimental results show that XLM-R-CRF achieves a strong in-domain macro F1 of 93.2%, yet its performance degrades substantially when transferring across municipalities. In contrast, few-shot LLMs, despite higher computational costs, prove markedly more robust in cross-municipality transfer. The work thus provides a new task formulation and an empirical foundation for structured information extraction from municipal administrative texts.
📝 Abstract
Municipal meeting minutes record key decisions in local democratic processes. Unlike parliamentary proceedings, which typically adhere to standardized formats, they encode voting outcomes in highly heterogeneous, free-form narrative text that varies widely across municipalities, posing significant challenges for automated extraction. In this paper, we introduce VotIE (Voting Information Extraction), a new information extraction task aimed at identifying structured voting events in narrative deliberative records, and establish the first benchmark for this task using Portuguese municipal minutes, building on the recently introduced CitiLink corpus. Our experiments yield two key findings. First, under standard in-domain evaluation, fine-tuned encoders, specifically XLM-R-CRF, achieve the strongest performance, reaching 93.2% macro F1 and outperforming generative approaches. Second, in a cross-municipality setting that evaluates transfer to unseen administrative contexts, these models suffer substantial performance degradation, whereas few-shot LLMs prove more robust, with significantly smaller declines. Despite this generalization advantage, the high computational cost of generative models currently limits their practicality, so lightweight fine-tuned encoders remain the more viable option for large-scale, real-world deployment. To support reproducible research in administrative NLP, we publicly release our benchmark, trained models, and evaluation framework.