Towards a Perspectivist Turn in Argument Quality Assessment

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the pronounced subjectivity and multiplicity of perspectives in argument quality assessment, as well as the lack of annotator metadata and of multidimensional quality annotations in existing datasets. We propose a perspectivist framework for argument quality evaluation. Through a systematic literature review and a hierarchical taxonomy design, we establish the first unified classification scheme and evaluation standard for argument quality datasets in perspectivist NLP. We also conduct the first systematic survey of annotator metadata, including background and stance, to quantify its influence on quality judgments, and a pilot study demonstrates the importance of a controlled selection of annotators. Key contributions include: (1) a standardized, multidimensional classification framework covering quality dimensions such as coherence, relevance, evidential support, and rhetorical effectiveness; (2) identification of high-quality datasets amenable to non-aggregated, fine-grained perspectivist modeling; and (3) practical guidelines for controlling annotator diversity and for transparent reporting of perspective-dependent judgments.
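The summary ships no code, so purely as an illustration: the Python sketch below shows one way the described metadata analysis might look, grouping non-aggregated quality ratings by an annotator attribute such as stance and comparing group means. All field names, annotator IDs, and scores are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: relate annotator stance to individual quality ratings.
# Records, field names, and scores are illustrative, not from the paper.
from collections import defaultdict
from statistics import mean

# Each record keeps the individual (non-aggregated) judgment plus annotator metadata.
ratings = [
    {"annotator": "a1", "stance": "pro",     "cogency": 4},
    {"annotator": "a2", "stance": "con",     "cogency": 2},
    {"annotator": "a3", "stance": "pro",     "cogency": 5},
    {"annotator": "a4", "stance": "neutral", "cogency": 3},
    {"annotator": "a5", "stance": "con",     "cogency": 2},
]

# Group ratings by a metadata attribute and compare group means.
by_stance = defaultdict(list)
for r in ratings:
    by_stance[r["stance"]].append(r["cogency"])

for stance, scores in sorted(by_stance.items()):
    print(f"{stance:>8}: mean cogency = {mean(scores):.2f} (n={len(scores)})")
```

A real analysis would of course draw on the surveyed datasets and apply a proper significance test rather than comparing raw means.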

📝 Abstract
The assessment of argument quality depends on well-established logical, rhetorical, and dialectical properties that are unavoidably subjective: multiple valid assessments may exist, and there is no unequivocal ground truth. This aligns with recent paths in machine learning, which embrace the co-existence of different perspectives. However, this potential remains largely unexplored in NLP research on argument quality. One crucial reason seems to be the limited availability of suitable datasets. We fill this gap by conducting a systematic review of argument quality datasets. We assign them to a multi-layered categorization targeting two aspects: (a) what has been annotated: we collect the quality dimensions covered in datasets and consolidate them in an overarching taxonomy, increasing dataset comparability and interoperability; (b) who annotated: we survey what information is given about annotators, enabling perspectivist research and grounding our recommendations for future actions. On this basis, we discuss datasets suitable for developing perspectivist models (i.e., those containing individual, non-aggregated annotations), and we showcase the importance of a controlled selection of annotators in a pilot study.
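To make "individual, non-aggregated annotations" concrete, here is a minimal sketch, under assumed data structures, of the difference between the single aggregated label most datasets release and the per-annotator distribution a perspectivist model needs. The classes, the example argument, and all values are hypothetical.

```python
# Hypothetical sketch of the data shape perspectivist modeling requires:
# individual annotations with annotator IDs instead of one aggregated label.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Annotation:
    annotator_id: str  # links the judgment to annotator metadata
    dimension: str     # a quality dimension from the taxonomy, e.g. "relevance"
    score: int         # the individual, non-aggregated judgment

@dataclass
class ArgumentItem:
    text: str
    annotations: list[Annotation]

    def majority_score(self, dimension: str) -> int:
        """Aggregated view: the single label most existing datasets release."""
        scores = [a.score for a in self.annotations if a.dimension == dimension]
        return Counter(scores).most_common(1)[0][0]

    def label_distribution(self, dimension: str) -> Counter:
        """Perspectivist view: the full distribution, disagreement preserved."""
        return Counter(a.score for a in self.annotations if a.dimension == dimension)

item = ArgumentItem(
    text="School uniforms reduce bullying.",
    annotations=[
        Annotation("a1", "relevance", 4),
        Annotation("a2", "relevance", 2),
        Annotation("a3", "relevance", 4),
    ],
)
print(item.majority_score("relevance"))      # 4: disagreement erased
print(item.label_distribution("relevance"))  # Counter({4: 2, 2: 1}): disagreement kept
```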
Problem

Research questions and friction points this paper is trying to address.

Addressing subjectivity in argument quality assessment
Exploring perspectivist approaches in NLP research
Systematically reviewing and categorizing argument quality datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-layered dataset categorization
Perspectivist model development
Controlled annotator selection (see the sketch below)
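As a rough sketch of what controlled annotator selection could mean operationally, the hypothetical helper below stratifies an annotator pool by a metadata attribute and samples equally from each group. The pool, the attribute, and the name select_balanced are assumptions for illustration, not from the paper.

```python
# Hypothetical sketch: controlled annotator selection via stratified sampling.
import random
from collections import defaultdict

# Illustrative annotator pool with one metadata attribute.
pool = [
    {"id": "a1", "stance": "pro"},     {"id": "a2", "stance": "pro"},
    {"id": "a3", "stance": "con"},     {"id": "a4", "stance": "con"},
    {"id": "a5", "stance": "neutral"}, {"id": "a6", "stance": "neutral"},
]

def select_balanced(pool, attribute, per_group, seed=0):
    """Pick the same number of annotators from each value of `attribute`."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for annotator in pool:
        groups[annotator[attribute]].append(annotator)
    selected = []
    for _, members in sorted(groups.items()):
        selected.extend(rng.sample(members, min(per_group, len(members))))
    return selected

print(select_balanced(pool, "stance", per_group=1))  # one annotator per stance
```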