WARBENCH: A Comprehensive Benchmark for Evaluating LLMs in Military Decision-Making

📅 2026-03-22
🤖 AI Summary
This work addresses critical gaps in existing military decision-making evaluation benchmarks, which often overlook constraints imposed by international humanitarian law, edge-computing limitations, robustness under the fog of war, and explicit reasoning capabilities—leading to an overestimation of large language models’ (LLMs’) real-world performance. To bridge this gap, we introduce WARBENCH, the first comprehensive evaluation framework that integrates legal compliance, edge deployment feasibility, partial observability, and explicit reasoning. Leveraging 136 high-fidelity historical combat scenarios, we systematically assess nine prominent LLMs using 4-bit quantization simulation, information masking, automated compliance verification, and chain-of-thought analysis. Our findings reveal that while closed-source models largely adhere to legal norms, compact edge-optimized models exhibit violation rates approaching 70%. Performance degrades significantly under complex terrain, force asymmetry, model compression, and information scarcity, yet explicit reasoning substantially mitigates non-compliance risks.

📝 Abstract
Large Language Models (LLMs) are increasingly being considered for deployment in safety-critical military applications. However, current benchmarks suffer from structural blind spots that systematically overestimate model capabilities in real-world tactical scenarios. Existing frameworks typically ignore strict legal constraints based on International Humanitarian Law (IHL), omit edge-computing limitations, lack robustness testing under the fog of war, and inadequately evaluate explicit reasoning. To address these vulnerabilities, we present WARBENCH, a comprehensive evaluation framework that establishes a foundational tactical baseline alongside four distinct stress-testing dimensions. Through a large-scale empirical evaluation of nine leading models on 136 high-fidelity historical scenarios, we reveal severe structural flaws. First, baseline tactical reasoning systematically collapses under complex terrain and high force asymmetry. Second, while state-of-the-art closed-source models maintain functional compliance, edge-optimized small models expose extreme operational risks, with legal violation rates approaching 70%. Furthermore, models experience catastrophic performance degradation under 4-bit quantization and systematic information loss. Conversely, explicit reasoning mechanisms serve as highly effective structural safeguards against inadvertent violations. Ultimately, these findings demonstrate that current models remain fundamentally unready for autonomous deployment in high-stakes tactical environments.
Problem

Research questions and friction points this paper is trying to address.

military decision-making
large language models
benchmarking
International Humanitarian Law
edge computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

military decision-making
International Humanitarian Law (IHL)
edge computing constraints
fog of war robustness
explicit reasoning