Analyzing Sustainability Messaging in Large-Scale Corporate Social Media

📅 2025-11-03
🤖 AI Summary
This study addresses the challenge of aligning multimodal, ambiguous, and evolving sustainability content in corporate social media with the United Nations' 17 Sustainable Development Goals (SDGs). To this end, we propose a cross-modal analytical framework built on vision-language models (VLMs). Methodologically, we employ large language models (LLMs) as zero-shot annotators for automatic textual SDG labeling, and combine VLMs with semantics-driven visual clustering to jointly model explicit and implicit SDG expressions in images and text. Evaluated on large-scale corporate social media data, the framework uncovers inter-industry disparities in SDG communication patterns and provides first empirical evidence of significant negative correlations between corporate SDG discourse, ESG risk exposure, and user engagement intensity. The framework is highly scalable and generalizes well, offering a new approach to automated monitoring and assessment of sustainability communication.

📝 Abstract
In this work, we introduce a multimodal analysis pipeline that leverages large foundation models in vision and language to analyze corporate social media content, with a focus on sustainability-related communication. Addressing the challenges of evolving, multimodal, and often ambiguous corporate messaging on platforms such as X (formerly Twitter), we employ an ensemble of large language models (LLMs) to annotate a large corpus of corporate tweets on their topical alignment with the 17 Sustainable Development Goals (SDGs). This approach avoids the need for costly, task-specific annotations and explores the potential of such models as ad-hoc annotators for social media data that can efficiently capture both explicit and implicit references to sustainability themes in a scalable manner. Complementing this textual analysis, we utilize vision-language models (VLMs) within a visual understanding framework that uses semantic clusters to uncover patterns in visual sustainability communication. This integrated approach reveals sectoral differences in SDG engagement, temporal trends, and associations between corporate messaging, environmental, social, and governance (ESG) risks, and consumer engagement. Our methods, automatic label generation and semantic visual clustering, are broadly applicable to other domains and offer a flexible framework for large-scale social media analysis.
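The ensemble annotation step described above can be sketched as majority voting over the SDG labels proposed by each LLM. The voting threshold and the per-model outputs below are hypothetical illustrations, not the paper's actual configuration:

```python
from collections import Counter

def ensemble_sdg_labels(model_outputs, min_votes=2):
    """Aggregate SDG labels proposed by several LLM annotators for one tweet.

    model_outputs: one set of SDG numbers (1-17) per LLM in the ensemble.
    A label is kept only if at least `min_votes` models agree on it.
    """
    votes = Counter(label for labels in model_outputs for label in set(labels))
    return sorted(label for label, n in votes.items() if n >= min_votes)

# Hypothetical example: three LLMs label the same tweet.
labels = ensemble_sdg_labels([{7, 13}, {13}, {7, 12, 13}])
```

Requiring agreement between models is one simple way to trade recall for precision when the individual zero-shot annotators are noisy.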
Problem

Research questions and friction points this paper is trying to address.

Analyzing corporate social media content for sustainability-related communication using multimodal models
Identifying SDG alignment in corporate tweets without costly manual annotations
Revealing sectoral differences and trends in sustainability messaging and ESG risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal pipeline using vision-language foundation models
LLM ensemble annotates tweets for Sustainable Development Goals
Semantic visual clustering uncovers sustainability communication patterns
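The semantic visual clustering idea can be sketched as grouping VLM image embeddings by similarity; a minimal pure-NumPy k-means over precomputed feature vectors, with toy 2-D "embeddings" standing in for real VLM features:

```python
import numpy as np

def kmeans(embeddings, k, iters=50, seed=0):
    """Minimal k-means over image embeddings.

    embeddings: (n, d) array of feature vectors (e.g. from a VLM encoder).
    Returns per-image cluster assignments and the final centroids.
    """
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its members;
        # an empty cluster keeps its previous centroid.
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = embeddings[assign == j].mean(axis=0)
    return assign, centroids

# Toy example: two well-separated groups of 2-D vectors.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
assign, _ = kmeans(X, k=2)
```

Inspecting which images land in the same cluster is then a way to surface recurring visual themes without predefined labels.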
Ujjwal Sharma
University of Amsterdam, The Netherlands
Stevan Rudinac
Associate Professor, University of Amsterdam
multimedia, computer vision, information retrieval, machine learning
Ana Mićković
University of Amsterdam, The Netherlands
Willemijn van Dolen
University of Amsterdam, The Netherlands
Marcel Worring
University of Amsterdam, The Netherlands