🤖 AI Summary
This work addresses the challenge non-expert users face in supervising large language models during complex, long-horizon tasks such as vibe coding: limited domain knowledge, difficulty articulating intent, and the inability to validate complex outputs. To overcome this, the authors propose a scalable interactive supervision framework that recursively decomposes user intent into a decision tree, transforming high-level goals into manageable sub-decisions. By combining low-burden node-level feedback with a supervision-signal aggregation algorithm, the framework turns these local signals into precise global guidance. It introduces, for the first time, a recursive decision-tree structure to amplify human supervision signals, enabling reinforcement-learning optimization driven solely by online user feedback. This approach both preserves human control as AI scales and substantially improves non-expert effectiveness: on web development tasks, the specifications produced by non-experts show a 54% improvement in alignment with expert-quality requirements.
📝 Abstract
As Large Language Models increasingly automate complex, long-horizon tasks such as \emph{vibe coding}, a supervision gap has emerged. While models excel at execution, users often struggle to guide them effectively due to insufficient domain expertise, the difficulty of articulating precise intent, and the inability to reliably validate complex outputs. This gap presents a critical challenge in scalable oversight: enabling humans to responsibly steer AI systems on tasks that surpass their own ability to specify or verify. To tackle this, we propose Scalable Interactive Oversight, a framework that decomposes complex intent into a recursive tree of manageable decisions to amplify human supervision. Rather than relying on open-ended prompting, our system elicits low-burden feedback at each node and recursively aggregates these signals into precise global guidance. Validated on web development tasks, our framework enables non-experts to produce expert-level Product Requirement Documents, achieving a 54\% improvement in alignment. Crucially, we demonstrate that this framework can be optimized via Reinforcement Learning using only online user feedback, offering a practical pathway for maintaining human control as AI scales.
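The core mechanism described above — decompose intent into a tree of sub-decisions, elicit a low-burden signal at each node, and recursively aggregate those signals into global guidance — can be sketched as follows. This is a minimal illustration under assumed names and an assumed averaging rule; the paper's actual aggregation algorithm is not specified here.

```python
# Illustrative sketch only: the DecisionNode structure, the [0, 1] feedback
# scale, and the simple mean-blend aggregation rule are assumptions, not the
# authors' actual algorithm.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DecisionNode:
    question: str                      # one manageable sub-decision
    feedback: Optional[float] = None   # user signal in [0, 1]; None if skipped
    children: List["DecisionNode"] = field(default_factory=list)

def aggregate(node: DecisionNode) -> float:
    """Bottom-up aggregation: blend a node's own feedback with the mean
    score of its children (simple illustrative rule)."""
    if not node.children:
        # Leaf: use the user's signal, or a neutral prior if they skipped it.
        return node.feedback if node.feedback is not None else 0.5
    child_score = sum(aggregate(c) for c in node.children) / len(node.children)
    if node.feedback is None:
        return child_score
    return 0.5 * node.feedback + 0.5 * child_score

# Toy intent tree for a web development task (hypothetical example).
root = DecisionNode("Build a personal blog", children=[
    DecisionNode("Choose layout", feedback=1.0),
    DecisionNode("Pick color scheme", feedback=0.0,
                 children=[DecisionNode("Dark mode?", feedback=1.0)]),
])
print(aggregate(root))  # one global guidance score from local node feedback
```

The bottom-up recursion is what lets a handful of small, local judgments stand in for one high-burden global specification.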