Evaluating Proactive Risk Awareness of Large Language Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inability of large language models (LLMs) to proactively anticipate potentially severe yet unintended risks in everyday decision-making. To this end, we propose the first ecologically responsible framework for evaluating proactive risk awareness and introduce Butterfly, a novel dataset comprising 1,094 queries spanning environmental and ecological scenarios. Through comprehensive evaluations—including large-scale model benchmarking, cross-lingual analysis, and multimodal assessment—we reveal systematic blind spots in mainstream LLMs, particularly under output length constraints, in multilingual settings, and in species conservation tasks. Our findings demonstrate a marked deficiency in proactive risk warning capabilities, exposing critical limitations in current safety alignment mechanisms and establishing a new benchmark and direction for enhancing models’ foresight in identifying prospective ecological risks.

Technology Category

Application Category

📝 Abstract
As large language models (LLMs) are increasingly embedded in everyday decision-making, their safety responsibilities extend beyond reacting to explicit harmful intent toward anticipating unintended but consequential risks. In this work, we introduce a proactive risk awareness evaluation framework that measures whether LLMs can anticipate potential harms and provide warnings before damage occurs. We construct the Butterfly dataset to instantiate this framework in the environmental and ecological domain. It contains 1,094 queries that simulate ordinary solution-seeking activities whose responses may induce latent ecological impact. Through experiments across five widely used LLMs, we analyze the effects of response length, languages, and modality. Experimental results reveal consistent, significant declines in proactive awareness under length-restricted responses, cross-lingual similarities, and persistent blind spots in (multimodal) species protection. These findings highlight a critical gap between current safety alignment and the requirements of real-world ecological responsibility, underscoring the need for proactive safeguards in LLM deployment.
Problem

Research questions and friction points this paper is trying to address.

proactive risk awareness
large language models
ecological impact
safety alignment
latent risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

proactive risk awareness
large language models
safety alignment
ecological impact
Butterfly dataset
🔎 Similar Papers
No similar papers found.