🤖 AI Summary
This study addresses a limitation of existing fact-checking approaches: they struggle to identify the implicit claims and rhetorical strategies in health-related social-media influencer content, which hinders accurate assessment of that content's real-world impact. To overcome this, the authors propose the TAIGR framework, a three-stage pipeline that first extracts the core recommendation, then constructs an argumentation graph to capture the underlying reasoning structure, and finally employs factor-graph-based probabilistic inference to validate the recommendation. TAIGR represents the first integration of structured argumentation modeling with pragmatic reasoning, moving beyond conventional flat claim-detection paradigms. Experimental results on a dataset of transcribed health-influencer videos demonstrate that explicitly modeling both the argumentative structure and the pragmatic features of discourse significantly improves the accuracy of credibility assessment.
📝 Abstract
Health influencers play a growing role in shaping public beliefs, yet their content is often conveyed through conversational narratives and rhetorical strategies rather than explicit factual claims. As a result, claim-centric verification methods struggle to capture the pragmatic meaning of influencer discourse. In this paper, we propose TAIGR (Takeaway Argumentation Inference with Grounded References), a structured framework for analyzing influencer discourse that operates in three stages: (1) identifying the core influencer recommendation, the takeaway; (2) constructing an argumentation graph that captures the influencer's justification for the takeaway; (3) performing factor-graph-based probabilistic inference to validate the takeaway. We evaluate TAIGR on a content-validation task over health-influencer video transcripts, showing that accurate validation requires modeling the discourse's pragmatic and argumentative structure rather than treating transcripts as flat collections of claims.
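To make the three-stage shape of the pipeline concrete, here is a minimal, purely illustrative Python sketch. All names (`extract_takeaway`, `build_argument_graph`, `validate_takeaway`) and the trivial heuristics inside them are assumptions for exposition, not the paper's actual models; stage 3 only gestures at factor-graph inference by multiplying per-premise evidence factors.

```python
from dataclasses import dataclass

@dataclass
class ArgumentGraph:
    """Hypothetical stand-in: a star-shaped graph of premises supporting one takeaway."""
    takeaway: str
    premises: list  # sentences offered in support of the takeaway

def extract_takeaway(transcript: str) -> str:
    # Stage 1 (stand-in heuristic): treat the final sentence as the
    # core recommendation; the real system would infer it pragmatically.
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return sentences[-1]

def build_argument_graph(transcript: str, takeaway: str) -> ArgumentGraph:
    # Stage 2 (stand-in heuristic): every other sentence becomes a
    # premise node supporting the takeaway.
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return ArgumentGraph(takeaway, [s for s in sentences if s != takeaway])

def validate_takeaway(graph: ArgumentGraph, evidence: dict) -> float:
    # Stage 3 (toy factor product): multiply one unary evidence factor
    # per premise, then normalize against the "takeaway false" hypothesis.
    p_true, p_false = 1.0, 1.0
    for premise in graph.premises:
        score = evidence.get(premise, 0.5)  # grounded support in [0, 1]
        p_true *= score
        p_false *= 1.0 - score
    return p_true / (p_true + p_false)

transcript = "Seed oils cause inflammation. My clients felt better. Avoid seed oils"
takeaway = extract_takeaway(transcript)            # "Avoid seed oils"
graph = build_argument_graph(transcript, takeaway)
credibility = validate_takeaway(
    graph,
    {"Seed oils cause inflammation": 0.2, "My clients felt better": 0.5},
)
```

The point of the sketch is structural: validation operates on the takeaway via its supporting argument graph, not on isolated sentences, which is the contrast the abstract draws with flat claim-level verification.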