🤖 AI Summary
This work addresses a critical challenge in AI-assisted programming: as AI agents operate at increasingly abstract levels, developers often unwittingly relinquish control over pivotal design decisions. To counter this, the paper introduces "decision-oriented programming," a paradigm that restructures human–AI collaboration around explicitly structured design decisions, interactive co-authoring of those decisions, and traceable links from each decision to executable test suites. The authors implement this approach in Aporia, a design probe that combines decision tracking, question-driven elicitation, and automatic translation of decisions into executable tests. A user study shows that this method substantially increases developer engagement in the design process and leaves developers with a more accurate picture of their code: their mental models were five times less likely to disagree with the implementation than with a baseline AI coding agent.
📝 Abstract
AI agents allow developers to express computational intent abstractly, reducing cognitive effort and helping achieve flow during programming. Increased abstraction, however, comes at a cost: developers cede decision-making authority to agents, often without realizing that important design decisions are being made without them. We aim to bring these decisions to the foreground in a paradigm we dub decision-oriented programming (DOP). In DOP, (1) decisions are explicit and structured, serving as the shared medium between the programmer and the agent; (2) decisions are co-authored interactively, with the agent proactively eliciting them from the programmer; and (3) each decision is traceable to code. As a step towards this vision, we have built Aporia, a design probe that tracks decisions in a persistent, editable Decision Bank; elicits them by asking programmers design questions; and encodes each decision as an executable test suite that can be used to validate the implementation.
In a user study of 14 programmers, Aporia increased engagement in the design process and scaffolded both exploration and validation. Participants also gained a more accurate understanding of their implementations: their mental models were 5x less likely to disagree with the code than when using a baseline coding agent.
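To make the core mechanism concrete, here is a minimal sketch of how a design decision might be recorded as a structured object traceable to an executable check. This is an illustrative assumption, not Aporia's actual API: the `Decision` record, the FIFO-cache example, and the "Decision Bank" list are all hypothetical constructs invented for this sketch.

```python
# Hypothetical sketch (not Aporia's real API): a design decision recorded as a
# structured object, linked to an executable check that validates the
# implementation against it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    question: str               # the design question the agent raised
    choice: str                 # the option the programmer committed to
    check: Callable[[], bool]   # executable test encoding the decision

# Hypothetical implementation under test: a bounded cache whose eviction
# policy was an explicit design decision.
cache: dict[str, int] = {}
MAX_SIZE = 3

def put(key: str, value: int) -> None:
    if len(cache) >= MAX_SIZE and key not in cache:
        cache.pop(next(iter(cache)))  # evict oldest insertion (FIFO)
    cache[key] = value

def fifo_eviction_holds() -> bool:
    """Encodes the decision 'use FIFO eviction at capacity 3' as a test."""
    cache.clear()
    for i, key in enumerate("abcd"):
        put(key, i)
    return "a" not in cache and "d" in cache  # oldest entry was evicted

# A sketch of a "Decision Bank": decisions persist alongside the code
# and can be re-run at any time to validate the implementation.
bank = [Decision("Which eviction policy?", "FIFO, capacity 3",
                 fifo_eviction_holds)]
assert all(d.check() for d in bank)
```

The point of the sketch is the traceability: if a later refactor switches the eviction policy, the stored check fails, surfacing a design decision that would otherwise have been silently overwritten.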