🤖 AI Summary
This study addresses a catastrophic failure mode of autonomous agents in iterative optimization, where conventional metrics like accuracy mask poor performance on low-prevalence tasks—producing "zero-detection, high-accuracy" scenarios. Building on Pythia, an open-source framework for automated prompt optimization, the authors investigate validation-sensitivity instability across three clinical symptom detection tasks with varying prevalence rates. They evaluate two intervention strategies: a guiding agent and a selector agent. Experimental results show that the selector agent, which retrospectively chooses the best-performing iteration, effectively stabilizes the optimization process. Notably, with only a single natural language term as input, this approach substantially improves detection of low-prevalence symptoms: F1 scores for "brain fog" detection surpass those of an expert-curated dictionary by 331%, and chest pain detection improves by 7%.
📝 Abstract
Autonomous agentic workflows that iteratively refine their own behavior hold considerable promise, yet their failure modes remain poorly characterized. We investigate optimization instability, a phenomenon in which continued autonomous improvement paradoxically degrades classifier performance, using Pythia, an open-source framework for automated prompt optimization. Evaluating three clinical symptoms with varying prevalence (shortness of breath at 23%, chest pain at 12%, and Long COVID brain fog at 3%), we observed that validation sensitivity oscillated between 1.0 and 0.0 across iterations, with severity inversely proportional to class prevalence. At 3% prevalence, the system achieved 95% accuracy while detecting zero positive cases, a failure mode obscured by standard evaluation metrics. We evaluated two interventions: a guiding agent that actively redirected optimization, and a selector agent that retrospectively identified the best-performing iteration. The guiding agent amplified overfitting rather than correcting it, whereas the selector agent successfully prevented catastrophic failure. With selector agent oversight, the system outperformed expert-curated lexicons on brain fog detection by 331% (F1) and chest pain by 7%, despite requiring only a single natural language term as input. These findings characterize a critical failure mode of autonomous AI systems and demonstrate that retrospective selection outperforms active intervention for stabilization in low-prevalence classification tasks.
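The "zero-detection, high-accuracy" failure mode the abstract describes follows directly from class imbalance: at low prevalence, a degenerate classifier that never predicts a positive still scores near-perfect accuracy. A minimal sketch with hypothetical data (not drawn from the study; 1,000 synthetic documents at ~3% prevalence) makes the arithmetic concrete:

```python
# Hypothetical illustration: accuracy masks zero sensitivity at 3% prevalence.
n = 1000
n_pos = int(n * 0.03)                 # ~3% positives, as for brain fog
labels = [1] * n_pos + [0] * (n - n_pos)
preds = [0] * n                       # degenerate classifier: detects nothing

tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
accuracy = sum(1 for y, p in zip(labels, preds) if y == p) / n
sensitivity = tp / (tp + fn)          # recall on the positive class

print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}")
# → accuracy=0.97, sensitivity=0.00
```

Monitoring sensitivity (or F1) on the minority class, rather than accuracy alone, is what exposes this collapse during iterative optimization.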