EdgeRunner 20B: Military Task Parity with GPT-5 while Running on the Edge

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying high-security, low-latency AI models for military applications on resource-constrained edge devices poses significant challenges in balancing performance, sovereignty, and real-time inference. Method: The authors propose a military-task-oriented lightweight large language model (LLM) optimization framework built on the open-source 20B-parameter gpt-oss-20b architecture. The model is fine-tuned on 1.6 million high-quality military-domain samples and integrates efficient inference optimizations, including quantization, kernel fusion, and memory-efficient attention. Contribution/Results: The resulting 20B-parameter model matches or exceeds GPT-5's performance, with 95%+ statistical significance, across four critical military tasks (combat arms, combat medic, cyber operations, and general military knowledge) while showing no statistically significant regression on mainstream general benchmarks (e.g., MMLU Pro, GSM8k). Fully offline, air-gapped deployment is supported; empirical evaluation confirms efficient operation on typical edge hardware (<16 GB RAM), ensuring data sovereignty, sub-second latency, and robust generalization under operational constraints.
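As a rough illustration of why a quantized 20B-parameter model can run within the <16 GB RAM edge budget mentioned above, here is a back-of-the-envelope memory estimate. The bits-per-weight figures and the flat runtime-overhead allowance are illustrative assumptions, not numbers from the paper:

```python
# Back-of-the-envelope memory estimate for a quantized 20B-parameter model.
# All figures are illustrative assumptions, not measurements from the paper.

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead_gb: float = 2.0) -> float:
    """Approximate RAM needed: weight storage plus a flat allowance
    for activations, KV cache, and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

fp16 = model_memory_gb(20, 16)   # full precision: far beyond a 16 GB device
q4   = model_memory_gb(20, 4.5)  # ~4.5 bits/weight is typical for 4-bit + scales

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

At 16-bit precision the weights alone need about 40 GB, while a ~4-bit quantization brings the total estimate to roughly 13 GB, which is consistent with the summary's claim of fitting under 16 GB of RAM.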

📝 Abstract
We present EdgeRunner 20B, a fine-tuned version of gpt-oss-20b optimized for military tasks. EdgeRunner 20B was trained on 1.6M high-quality records curated from military documentation and websites. We also present four new test sets: (a) combat arms, (b) combat medic, (c) cyber operations, and (d) mil-bench-5k (general military knowledge). On these military test sets, EdgeRunner 20B matches or exceeds GPT-5 task performance with 95%+ statistical significance, except for the high reasoning setting on the combat medic test set and the low reasoning setting on the mil-bench-5k test set. Versus gpt-oss-20b, there is no statistically-significant regression on general-purpose benchmarks like ARC-C, GPQA Diamond, GSM8k, IFEval, MMLU Pro, or TruthfulQA, except for GSM8k in the low reasoning setting. We also present analyses on hyperparameter settings, cost, and throughput. These findings show that small, locally-hosted models are ideal solutions for data-sensitive operations such as in the military domain, allowing for deployment in air-gapped edge devices.
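Parity claims like the abstract's "95%+ statistical significance" are typically established with a paired test over per-item outcomes on a shared test set. A minimal sketch using an exact McNemar-style binomial test follows; the choice of test and the counts are assumptions for illustration, not details taken from the paper:

```python
# Paired comparison of two models on the same test items via an exact
# McNemar-style binomial test on the discordant pairs.
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact p-value for discordant pairs:
    b = items only model A answered correctly,
    c = items only model B answered correctly.
    Under the null, discordant outcomes follow Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    # One tail: P(X <= k) under Binomial(n, 0.5); double for two-sided.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical: of 5000 items the models disagree on 120;
# model A wins 70 of those, model B wins 50.
p = mcnemar_exact_p(70, 50)
print(f"p = {p:.3f}")  # p > 0.05 -> no significant difference at the 95% level
```

Only the discordant items (where exactly one model is correct) carry information about which model is stronger, which is why the concordant counts drop out of the test.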
Problem

Research questions and friction points this paper is trying to address.

Optimizing language models for military task performance on edge devices
Achieving GPT-5 parity in military domains while maintaining general capabilities
Enabling secure deployment of AI in data-sensitive military operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned gpt-oss-20b for military tasks
Trained on 1.6M military documentation records
Achieves GPT-5 parity on edge devices
Authors

Jack FitzGerald, EdgeRunner AI
Aristotelis Lazaridis, EdgeRunner AI
Dylan Bates, EdgeRunner AI
Aman Sharma, PhD Student, KTH Royal Institute of Technology
Jonnathan Castillo, EdgeRunner AI
Yousif Azami, EdgeRunner AI
Sean Bailey, EdgeRunner AI
Jeremy Cao, EdgeRunner AI
Peter Damianov, EdgeRunner AI
Kevin de Haan, EdgeRunner AI
Luke Kerbs, EdgeRunner AI
Vincent Lu, EdgeRunner AI
Joseph Madigan, EdgeRunner AI
Jeremy McLaurin, EdgeRunner AI
Jonathan Tainer, EdgeRunner AI
Dave Anderson, EdgeRunner AI
Jonathan Beck, EdgeRunner AI
Jamie Cuticello, EdgeRunner AI
Colton Malkerson, EdgeRunner AI
Tyler Saltsman, EdgeRunner AI