Can LLMs Assist Annotators in Identifying Morality Frames? -- Case Study on Vaccination Debate on Social Media

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenges of morality-frame identification in social-media vaccine discourse (data scarcity, high annotation cost, heavy cognitive load, and low inter-annotator agreement), this paper proposes an LLM-assisted, human-in-the-loop annotation workflow. The method proceeds in two stages: an LLM first generates moral concepts and accompanying psychological explanations via few-shot prompting, and human annotators then evaluate and refine these outputs using a think-aloud tool. The approach combines few-shot reasoning, interpretable prompt design, and grounding in moral psychology frameworks. Experiments show improved annotation accuracy together with reduced perceived task difficulty and cognitive load, supporting the LLM's role as a collaborative aid in complex psycholinguistic tasks and pointing toward a scalable, human-centered annotation framework for low-resource settings.
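The first stage of this pipeline can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the model name, prompt wording, in-context examples, and frame labels (borrowed loosely from Moral Foundations Theory) are all assumptions, and the OpenAI Python client stands in for whichever LLM the paper actually used.

```python
# Sketch of stage 1: few-shot prompting an LLM to propose a morality
# frame plus a short explanation for a social-media post. Model name,
# prompt wording, and the example set are assumptions; the paper's
# actual prompts and label inventory may differ.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical in-context examples: (post, frame, explanation) triples.
FEW_SHOT_EXAMPLES = [
    (
        "Vaccinate your kids. Protecting children from preventable "
        "disease is our duty.",
        "care/harm",
        "The post frames vaccination as protecting vulnerable people "
        "from harm.",
    ),
    (
        "Mandates trample on my right to decide what goes into my body.",
        "liberty/oppression",
        "The post frames the issue as personal freedom versus coercive "
        "authority.",
    ),
]

def build_prompt(post: str) -> list[dict]:
    """Assemble a few-shot chat prompt: task principles, worked examples
    with explanations, then the post to annotate."""
    messages = [{
        "role": "system",
        "content": (
            "You identify morality frames (Moral Foundations Theory) in "
            "social-media posts about vaccination. For each post, output "
            "the frame and a one-sentence explanation tying it to the post."
        ),
    }]
    for text, frame, why in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Post: {text}"})
        messages.append(
            {"role": "assistant",
             "content": f"Frame: {frame}\nExplanation: {why}"}
        )
    messages.append({"role": "user", "content": f"Post: {post}"})
    return messages

def suggest_frame(post: str) -> str:
    """Stage 1: the LLM proposes a frame and explanation for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper's choice may differ
        messages=build_prompt(post),
        temperature=0,
    )
    return response.choices[0].message.content

print(suggest_frame("I'm not risking unknown side effects for my family."))
```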

📝 Abstract
Nowadays, social media is pivotal in shaping public discourse, especially on polarizing issues like vaccination, where diverse moral perspectives influence individual opinions. In NLP, data scarcity and the complexity of psycholinguistic tasks such as identifying morality frames make relying solely on human annotators costly, time-consuming, and prone to inconsistency due to cognitive load. To address these issues, we leverage large language models (LLMs), which are adept at adapting to new tasks through few-shot learning, utilizing a handful of in-context examples coupled with explanations that connect examples to task principles. Our research explores LLMs' potential to assist human annotators in identifying morality frames within vaccination debates on social media. We employ a two-step process: generating concepts and explanations with LLMs, followed by human evaluation using a "think-aloud" tool. Our study shows that integrating LLMs into the annotation process enhances accuracy, reduces task difficulty, and lowers cognitive load, suggesting a promising avenue for human-AI collaboration in complex psycholinguistic tasks.
Problem

Research questions and friction points this paper is trying to address.

Can LLMs assist human annotators in identifying morality frames?
How can the cognitive load of complex annotation tasks be reduced?
How can annotation accuracy and consistency be improved in psycholinguistic analysis despite data scarcity?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated concepts and explanations assist morality-frame annotation
Few-shot prompting with in-context examples and explanations linking examples to task principles
Human-AI collaboration via a think-aloud evaluation step reduces cognitive load (a minimal sketch of this review loop follows below)
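For the human-in-the-loop stage, a console loop can stand in for the paper's think-aloud tool, whose actual interface is not described here. In this hypothetical sketch, the annotator sees each LLM suggestion and either accepts it or types a correction.

```python
# Minimal stand-in for stage 2: annotator review of LLM suggestions.
# The paper's "think-aloud" tool is interactive; this console loop is
# only an illustrative sketch, not the authors' interface.

def review(post: str, suggested_frame: str, explanation: str) -> str:
    """Show the LLM's suggested frame and explanation; pressing Enter
    accepts it, any other input records the annotator's correction."""
    print(f"\nPost: {post}")
    print(f"Suggested frame: {suggested_frame}")
    print(f"Explanation: {explanation}")
    corrected = input("Press Enter to accept, or type a corrected frame: ").strip()
    return corrected or suggested_frame

if __name__ == "__main__":
    # Hypothetical batch of LLM outputs awaiting human validation.
    suggestions = [
        ("Vaccines saved my grandmother's life.",
         "care/harm",
         "Frames vaccination as protection of the vulnerable from harm."),
    ]
    final_labels = [review(*item) for item in suggestions]
    print("Final labels:", final_labels)
```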