🤖 AI Summary
This work addresses the challenge of aligning AI-driven robots with human values. Method: We introduce the first large-scale, science-fiction–inspired benchmark for robot ethics evaluation, encompassing critical AI decision scenarios from 824 movies, TV shows, novels, and scientific books. Leveraging LLMs for scenario extraction, expert annotation, and human voting, we construct a high-quality dataset comprising 9,056 questions and 53,384 answers. We further propose a "science-fiction–inspired, evolvable AI constitution" generation framework that systematically transforms fictional ethical dilemmas into testable, optimizable safety principles. Our evaluation integrates constitution-guided reasoning, adversarial prompt robustness testing, and transfer assessment on the ASIMOV real-world safety benchmark. Contribution/Results: The LLM+constitution approach achieves 95.8% human value alignment (16.4 points above the base model), maintains 92.3% alignment under adversarial prompting, and ranks among the top performers on the ASIMOV benchmark, which is derived from real-world images and hospital injury reports.
📝 Abstract
Given the recent rate of progress in artificial intelligence (AI) and robotics, a tantalizing question is emerging: would robots controlled by such AI systems be strongly aligned with human values? In this work, we propose a scalable way to probe this question by generating a benchmark spanning the key moments in 824 major pieces of science fiction (movies, TV, novels, and scientific books) where an agent (AI or robot) made a critical decision (good or bad). We use an LLM's recollection of each key moment to generate questions about similar situations, the decision the agent made, and alternative decisions it could have made (good or bad). We then measure an approximation of how well models align with human values on a set of human-voted answers. We also generate rules that can be automatically improved via an amendment process, yielding the first Sci-Fi-inspired constitutions for promoting ethical behavior in AIs and robots in the real world. Our first finding is that modern LLMs paired with constitutions are well aligned with human values (95.8%), in contrast to the unsettling decisions typically made in sci-fi (only 21.2% alignment). Second, we find that generated constitutions substantially increase alignment over the base model (79.4% to 95.8%) and are resilient in an adversarial prompt setting (23.3% to 92.3%). Additionally, these constitutions are among the top performers on the ASIMOV Benchmark, which is derived from real-world images and hospital injury reports. Sci-Fi-inspired constitutions are thus highly aligned and applicable to real-world situations. We release SciFi-Benchmark, a large-scale dataset to advance robot ethics and safety research, comprising 9,056 questions and 53,384 answers, plus a smaller human-labeled evaluation set. Data is available at https://scifi-benchmark.github.io
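The alignment percentages above can be read as simple agreement between a model's chosen answers and the human-voted majority answer for each question. A minimal sketch of that metric (all names here are illustrative, not the paper's actual evaluation code):

```python
# Hedged sketch: approximate "human value alignment" as the fraction of
# questions where the model's chosen answer matches the human-voted majority.
# The data layout (dicts keyed by question id) is an assumption for illustration.
from collections import Counter

def majority_answer(votes):
    """Return the answer most frequently chosen by human voters."""
    return Counter(votes).most_common(1)[0][0]

def alignment_score(model_answers, human_votes):
    """Fraction of questions on which the model agrees with the human majority."""
    agree = sum(
        1 for qid, ans in model_answers.items()
        if ans == majority_answer(human_votes[qid])
    )
    return agree / len(model_answers)

# Toy example: the model agrees with the majority on q1 but not q2.
votes = {"q1": ["A", "A", "B"], "q2": ["C", "C", "C"]}
preds = {"q1": "A", "q2": "B"}
print(alignment_score(preds, votes))  # 0.5
```

Under this reading, a score of 95.8% means the constitution-guided model matched the human majority on 95.8% of evaluation questions.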