🤖 AI Summary
In LLM-driven incremental synthesis of network configurations (e.g., route maps and ACLs), ambiguous user intent frequently produces overlapping rule headers whose relative priority the model cannot resolve on its own; measurements in a large cloud find complex ACLs with hundreds of overlaps. To address this, the authors propose Clarify, a system that augments an LLM with a lightweight Disambiguator module, which interactively elicits user intent to resolve ambiguities, and couples synthesis with formal verification for verifiable incremental configuration updates. Unlike end-to-end black-box generation, Clarify models intent disambiguation as an explicit collaborative reasoning step between user and system, improving both correctness guarantees and policy accuracy. On a small synthetic workload, Clarify incrementally synthesizes routing policies after disambiguation and then verifies them; this treatment of ambiguity applies more broadly wherever individually correct updates integrate ambiguously and can yield different global behaviors.
📝 Abstract
Beyond hallucinations, another problem in program synthesis using LLMs is ambiguity in user intent. We illustrate the ambiguity problem in a networking context for LLM-based incremental configuration synthesis of route-maps and ACLs. These structures frequently overlap in header space, making the relative priority of actions impossible for the LLM to infer without user interaction. Measurements in a large cloud identify complex ACLs with hundreds of overlaps, showing that ambiguity is a real problem. We propose a prototype system, Clarify, which uses an LLM augmented with a new module called a Disambiguator that helps elicit user intent. On a small synthetic workload, Clarify incrementally synthesizes routing policies after disambiguation and then verifies them. Our treatment of ambiguities is useful more generally whenever the intent of individual updates can be correctly synthesized by LLMs, but their integration is ambiguous and can lead to different global behaviors.
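To make the overlap problem concrete, here is a minimal sketch (not from the paper; the rule fields and function names are illustrative assumptions) of how two ACL rules can match the same packets in header space. When overlapping rules carry different actions, their relative order determines behavior, which is exactly the priority question an LLM cannot answer without asking the user:

```python
# Illustrative sketch: header-space overlap between ACL rules.
# Rule structure and names are assumptions, not the paper's implementation.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class AclRule:
    src: str      # source prefix, e.g. "10.0.0.0/8"
    dst: str      # destination prefix
    action: str   # "permit" or "deny"

def overlaps(a: AclRule, b: AclRule) -> bool:
    """Two rules overlap if some packet matches both, i.e. their
    source prefixes intersect AND their destination prefixes intersect."""
    return (ip_network(a.src).overlaps(ip_network(b.src)) and
            ip_network(a.dst).overlaps(ip_network(b.dst)))

def ambiguous_pairs(rules):
    """Pairs of overlapping rules with conflicting actions: their
    relative priority changes global behavior, so intent is ambiguous."""
    return [(i, j)
            for i, r1 in enumerate(rules)
            for j, r2 in enumerate(rules[i + 1:], start=i + 1)
            if overlaps(r1, r2) and r1.action != r2.action]

rules = [
    AclRule("10.0.0.0/8",  "0.0.0.0/0",    "deny"),
    AclRule("10.1.0.0/16", "192.0.2.0/24", "permit"),  # subset of rule 0's match
]
print(ambiguous_pairs(rules))  # [(0, 1)]
```

Placing rule 1 before rule 0 permits traffic from 10.1.0.0/16 to 192.0.2.0/24; the reverse order denies it. A Disambiguator-style module would surface exactly such pairs and ask the user which ordering reflects their intent.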