🤖 AI Summary
This study addresses two key challenges in public relations (PR) research: the scarcity of labeled data for training models and the heavy reliance of large language models (LLMs) on expert knowledge coupled with limited interpretability. To this end, we propose OPRA-Vis, a novel, annotation-light visual analytics system for Organization–Public Relationship Assessment (OPRA). Methodologically, we integrate domain-specific PR expertise into Chain-of-Thought prompting to guide LLMs in analyzing public sentiment from digital media; results are rendered via an interactive visualization system that transparently displays reasoning pathways and evidentiary support. Our contributions are twofold: (1) the integration of structured PR knowledge into LLM prompts to enable lightweight, interpretable OPRA without extensive labeled data; and (2) a human-in-the-loop visual analytics interface that enhances model controllability and expert trust. Evaluation on two real-world case studies demonstrates that OPRA-Vis outperforms baseline LLMs and prompting strategies in accuracy, usability, and expert acceptance.
📝 Abstract
Analysis of public opinion collected from digital media helps organizations maintain positive relationships with the public. Such public relations (PR) analysis often involves assessing opinions, for example, measuring how strongly people trust an organization. Pre-trained Large Language Models (LLMs) hold great promise for supporting Organization-Public Relationship Assessment (OPRA) because they can map unstructured public text to OPRA dimensions and articulate rationales through prompting. However, adapting LLMs for PR analysis typically requires fine-tuning on large labeled datasets, which is both labor- and knowledge-intensive, making it difficult for PR researchers to apply these models. In this paper, we present OPRA-Vis, a visual analytics system that leverages LLMs for OPRA without requiring extensive labeled data. Our framework employs Chain-of-Thought prompting to guide LLMs in analyzing public opinion data, incorporating PR expertise directly into the reasoning process. Furthermore, OPRA-Vis provides visualizations that reveal the clues and reasoning paths used by LLMs, enabling users to explore, critique, and refine model decisions. We demonstrate the effectiveness of OPRA-Vis through two real-world use cases and evaluate it quantitatively, through comparisons with alternative LLMs and prompting strategies, and qualitatively, through assessments of usability and effectiveness and through expert feedback.