AI Summary
This work proposes SP-ABCBench, the first standardized benchmark for evaluating large language model (LLM) agents' ability to simulate human attitudes and behaviors under security and privacy (S&P) threats. Comprising 30 tests grounded in validated real-world user studies, the benchmark quantifies alignment between LLM and human responses across three dimensions: attitude, behavior, and coherence. The authors systematically assess 12 prominent LLMs, four persona-construction strategies, and two prompting methods, finding that prompts incorporating bounded rationality and cost-benefit trade-offs can substantially improve simulation fidelity on some tasks. Notably, certain behavioral tests achieve alignment scores exceeding 95 out of 100, whereas models average only 50 to 64 overall. The SP-ABCBench dataset and evaluation framework are publicly released to advance research in this area.
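To make the prompting finding concrete, here is a minimal sketch of how a persona prompt incorporating bounded rationality and cost-benefit trade-offs might be assembled. The template wording and the `build_persona_prompt` helper are hypothetical illustrations, not the benchmark's actual prompts.

```python
def build_persona_prompt(persona: str, scenario: str) -> str:
    """Compose a simulation prompt that asks the agent to reason with bounded
    rationality and to weigh privacy costs against perceived benefits.
    (Hypothetical template; SP-ABCBench's real prompts may differ.)
    """
    return (
        f"You are {persona}.\n"
        "You have limited time, attention, and technical knowledge, so reason "
        "with bounded rationality rather than perfect optimization.\n"
        "Before acting, briefly weigh the privacy and security costs of each "
        "option against its perceived benefits.\n\n"
        f"Scenario: {scenario}\n"
        "State your attitude toward the threat and the action you would take."
    )

# Illustrative usage with an invented persona and scenario
print(build_persona_prompt(
    persona="a 34-year-old nurse who uses a shared family laptop",
    scenario="An app requests access to your contacts to 'find friends'.",
))
```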
Abstract
A growing body of research assumes that large language model (LLM) agents can serve as proxies for how people form attitudes toward, and behave in response to, security and privacy (S&P) threats. If correct, these simulations could offer a scalable way to forecast S&P risks in products prior to deployment. We interrogate this assumption using SP-ABCBench, a new benchmark of 30 tests derived from validated S&P human-subject studies, which measures alignment between simulations and the underlying human-subject studies on a 0-100 scale (higher indicates better alignment) across three dimensions: Attitude, Behavior, and Coherence. Evaluating twelve LLMs, four persona-construction strategies, and two prompting methods, we find substantial room for improvement: all models score between 50 and 64 on average. Newer, larger, and more capable models do not reliably perform better and sometimes perform worse. Some configurations, however, do yield high alignment: for example, scores above 95 on some Behavior tests when agents are prompted to apply bounded rationality and weigh privacy costs against perceived benefits. We release SP-ABCBench to enable reproducible evaluation as methods improve.
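As a rough illustration of the scoring described above, the sketch below aggregates per-test alignment scores across the three dimensions into 0-100 summary scores. The `AlignmentScore` schema, `aggregate_alignment` helper, and equal-weight averaging are all assumptions for illustration; the paper's actual scoring procedure may differ.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AlignmentScore:
    """Per-test alignment between simulated and human responses (hypothetical schema)."""
    attitude: float   # 0-100: agreement with human attitudes
    behavior: float   # 0-100: agreement with human behaviors
    coherence: float  # 0-100: internal consistency of the simulation

def aggregate_alignment(scores: list[AlignmentScore]) -> dict[str, float]:
    """Average each dimension over all tests, plus an equal-weight overall score.

    Equal weighting is an illustrative assumption, not the paper's method.
    """
    per_dim = {
        "attitude": mean(s.attitude for s in scores),
        "behavior": mean(s.behavior for s in scores),
        "coherence": mean(s.coherence for s in scores),
    }
    per_dim["overall"] = mean(per_dim.values())
    return per_dim

# Example: two hypothetical tests for one model/persona/prompting configuration
print(aggregate_alignment([
    AlignmentScore(attitude=58, behavior=96, coherence=61),
    AlignmentScore(attitude=52, behavior=47, coherence=55),
]))
```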