AI Summary
This study evaluates the ability of large language model agents to maintain strategic coherence, handle delayed feedback, and mitigate error accumulation in long-horizon tasks. To this end, we introduce a benchmark environment simulating a startup's year-long operations, featuring partially observable states, adversarial customers, and multidimensional decisions such as staffing and contract selection. The framework combines multi-turn interactive simulation, persistent Scratchpad memory, adversarial client design, and cross-model evaluation across multiple random seeds. Experimental results show that only 3 of 12 models achieved consistent profitability: Claude Opus 4.6 led with an average final valuation of $1.27 million, while GLM-5 attained $1.21 million at approximately one-eleventh the inference cost. Persistent Scratchpad usage emerged as the strongest predictor of success, and 47% of bankruptcies were attributable to failure to identify adversarial customers.
Abstract
As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce $\texttt{YC-Bench}$, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of \$200K: Claude Opus 4.6 achieves the highest average final funds at \$1.27 M, followed by GLM-5 at \$1.21 M with 11$\times$ lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and adversarial client detection is the primary failure mode, accounting for $47\%$ of bankruptcies. Our analysis further shows that frontier models fail through distinct failure modes such as over-parallelization, revealing persistent capability gaps in long-horizon performance. $\texttt{YC-Bench}$ is open-source, reproducible, and configurable.