Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals

📅 2024-04-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study identifies systematic misalignments between large language models (LLMs) and human attitudes across all 17 UN Sustainable Development Goals (SDGs), revealing gaps in conceptual understanding, imbalances in affective valence, cultural and regional bias, and mismatched decision logic: risks that may exacerbate social inequity, racial prejudice, and environmental harm. Method: the authors propose the first SDG-oriented LLM–human attitude alignment framework, integrating cross-cultural semantic analysis, bias auditing, risk root-cause tracing, and an SDG knowledge graph to establish a multidimensional attitude annotation schema. Contribution/Results: empirical analysis uncovers significant attenuation of support, or value drift, in LLMs for key SDGs, including Climate Action (SDG 13), Gender Equality (SDG 5), and Zero Hunger (SDG 2). Based on these findings, the authors formulate a data–training–evaluation co-optimization pathway and actionable alignment governance strategies, offering an SDG-grounded methodological foundation for ethical LLM alignment.
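
To make the summary's core comparison concrete, here is a minimal, hypothetical Python sketch of measuring an LLM–human attitude gap. The baseline numbers, the model ratings, and the function names are invented for illustration; this is not the authors' actual pipeline.

```python
# Hypothetical sketch of the attitude-gap idea described above: compare a
# 1-5 agreement rating elicited from an LLM for each SDG statement against
# a human survey baseline. All numbers are invented for illustration.
from statistics import mean

# Toy human baseline: mean 1-5 agreement per SDG (invented numbers).
HUMAN_BASELINE = {
    "SDG 2 (Zero Hunger)": 4.6,
    "SDG 5 (Gender Equality)": 4.5,
    "SDG 13 (Climate Action)": 4.4,
}

# Toy model ratings standing in for parsed LLM Likert responses.
LLM_RATINGS = {
    "SDG 2 (Zero Hunger)": 4.1,
    "SDG 5 (Gender Equality)": 3.8,
    "SDG 13 (Climate Action)": 4.0,
}

def alignment_gaps(llm: dict, human: dict) -> dict:
    """Signed gap per SDG: negative means the model expresses weaker
    support than humans (the 'attenuation of support' noted above)."""
    return {sdg: llm[sdg] - human[sdg] for sdg in human}

if __name__ == "__main__":
    gaps = alignment_gaps(LLM_RATINGS, HUMAN_BASELINE)
    for sdg, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
        print(f"{sdg}: gap {gap:+.2f}")
    print(f"Mean absolute misalignment: {mean(abs(g) for g in gaps.values()):.2f}")
```

In a real audit, LLM_RATINGS would come from prompting the model for Likert responses across paraphrases and languages and parsing the numeric answers, rather than from a hard-coded table.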

📝 Abstract
Large Language Models (LLMs) have emerged as potent tools for advancing the United Nations' Sustainable Development Goals (SDGs). However, attitudinal disparities between LLMs and humans towards these goals can pose significant challenges. This study conducts a comprehensive review and analysis of the existing literature on the attitudes of LLMs towards the 17 SDGs, comparing their attitudes toward, and support for, each goal with those of humans. We examine the potential disparities, focusing primarily on understanding and emotions, cultural and regional differences, task objective variations, and the factors considered in decision-making. These disparities arise from underrepresentation and imbalance in LLM training data, historical biases, quality issues, a lack of contextual understanding, and the skewed ethical values such data can reflect. The study also investigates the risks and harms that may arise from neglecting the attitudes of LLMs towards the SDGs, including the exacerbation of social inequalities, racial discrimination, environmental destruction, and resource wastage. To address these challenges, we propose strategies and recommendations to guide and regulate the application of LLMs, ensuring their alignment with the principles and goals of the SDGs and thereby creating a more just, inclusive, and sustainable future.
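
As an illustration of the cultural-and-regional-disparity probe the abstract mentions, the following hedged Python sketch rates the same SDG statement under different regional framings and flags a large spread as a possible bias signal. The elicit_rating function is a seeded-random placeholder for a real model call, and the framings and threshold are assumptions, not the paper's protocol.

```python
# Hedged sketch of a cultural/regional disparity probe: pose the same SDG
# statement under different regional framings and treat a large spread in
# ratings as a bias signal. elicit_rating simulates model output with
# seeded randomness; a real audit would call an LLM and parse its reply.
import random
from statistics import pstdev

random.seed(0)

REGION_FRAMES = [
    "Answer as a survey respondent from Western Europe.",
    "Answer as a survey respondent from Sub-Saharan Africa.",
    "Answer as a survey respondent from South Asia.",
]
STATEMENTS = [
    "Urgent climate action should be a top policy priority.",
    "Ending hunger deserves more funding than economic growth.",
]

def elicit_rating(frame: str, statement: str) -> float:
    """Placeholder for prompting an LLM with '{frame} Rate 1-5: {statement}'
    and parsing the numeric reply; here we simulate the rating."""
    return random.uniform(3.0, 5.0)

if __name__ == "__main__":
    for statement in STATEMENTS:
        ratings = [elicit_rating(f, statement) for f in REGION_FRAMES]
        spread = pstdev(ratings)  # std. dev. of ratings across framings
        flag = "possible regional bias" if spread > 0.5 else "stable"
        print(f"{statement[:48]:48s} spread={spread:.2f} ({flag})")
```
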
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Human Differences
Social Inequity

Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Sustainable Development Goals
Bias and Fairness

👥 Authors
Qingyang Wu
Together AI
Text Generation, Dialog Systems, Multimodal

Ying Xu
Washington University in St. Louis

Tingsong Xiao
University of Florida

Yunze Xiao
Language Technology Institute, Carnegie Mellon University
Natural Language Processing, Computational Social Science, Anthropomorphism

Yitong Li
Stanford University

Tianyang Wang
University of Alabama at Birmingham
Machine Learning (Deep Learning), Computer Vision

Yichi Zhang
Fudan University

Shenghai Zhong
Independent Researcher

Yuwei Zhang
Northeastern University

Wei Lu
Reinsurance Group of America

Yifan Yang
Texas A&M University