🤖 AI Summary
Current large language model (LLM) systems are constrained by handcrafted harnesses, the code modules that decide what information is stored, retrieved, and presented to the model, and they lack automated methods for optimizing this code. This work proposes Meta-Harness, the first outer-loop system for end-to-end automatic optimization of harness code. Meta-Harness employs an agentic proposer to explore candidate harnesses, iteratively refining them using the source code, execution traces, and performance scores of all prior candidates recorded on a filesystem; this avoids the severe information loss inherent in conventional text-based feedback compression. Experimental results show that Meta-Harness outperforms a state-of-the-art context management system by 7.7 points on online text classification while reducing context token usage by 75%, achieves an average gain of 4.7 points on International Mathematical Olympiad (IMO)-level reasoning problems, and surpasses handcrafted baselines on TerminalBench-2 coding tasks.
📝 Abstract
The performance of large language model (LLM) systems depends not only on model weights, but also on their harness: the code that determines what information to store, retrieve, and present to the model. Yet harnesses are still designed largely by hand, and existing text optimizers are poorly matched to this setting because they compress feedback too aggressively. We introduce Meta-Harness, an outer-loop system that searches over harness code for LLM applications. It uses an agentic proposer that accesses the source code, scores, and execution traces of all prior candidates through a filesystem. On online text classification, Meta-Harness improves over a state-of-the-art context management system by 7.7 points while using 4x fewer context tokens. On retrieval-augmented math reasoning, a single discovered harness improves accuracy on 200 IMO-level problems by 4.7 points on average across five held-out models. On agentic coding, discovered harnesses surpass the best hand-engineered baselines on TerminalBench-2. Together, these results show that richer access to prior experience can enable automated harness engineering.
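The outer loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: `propose` and `evaluate` are toy stand-ins for the LLM-agent proposer and the benchmark runner, and the filenames and record fields are invented. The point it demonstrates is the archive design: every candidate's source, score, and trace is persisted to a filesystem so that later proposals can read the full history rather than a compressed text summary.

```python
# Hypothetical sketch of an outer-loop harness search. In the real system,
# propose() would be an LLM agent reading the full archive from disk, and
# evaluate() would run the candidate harness on a benchmark.
import json
import pathlib


def evaluate(harness_src: str) -> float:
    """Toy stand-in for scoring a harness on a benchmark."""
    return float(len(harness_src) % 10)  # placeholder score


def propose(archive: list) -> str:
    """Toy stand-in for the agentic proposer: a real proposer would be
    handed the source, scores, and traces of ALL prior candidates."""
    best = max(archive, key=lambda c: c["score"], default=None)
    seed = best["source"] if best else "def harness(q): return q"
    return seed + "  # revised"  # placeholder mutation


def outer_loop(root: pathlib.Path, steps: int = 5) -> dict:
    archive = []
    for i in range(steps):
        src = propose(archive)
        cand = {"source": src, "score": evaluate(src), "trace": f"run-{i}"}
        # Persist every candidate so future proposals see the full history,
        # not a lossy text compression of it.
        (root / f"cand_{i}.json").write_text(json.dumps(cand))
        archive.append(cand)
    return max(archive, key=lambda c: c["score"])
```

The design choice the sketch highlights is that the archive on disk is the proposer's only feedback channel: nothing is summarized away between iterations.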