Bridging Natural Language and Interactive What-If Interfaces via LLM-Generated Declarative Specification

📅 2026-04-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing tools struggle to efficiently support interactive "What-If" analysis: traditional business intelligence (BI) systems require cumbersome configuration, while large language model (LLM)-driven chat interfaces suffer from semantic fragility, inaccurate intent understanding, and inconsistent outputs. This work proposes a two-stage approach that first uses an LLM to translate natural language queries into a declarative intermediate representation based on the Praxa Specification Language (PSL), and then compiles that representation into an interactive visualization interface featuring parameterized controls and coordinated views. Because the intermediate representation is both verifiable and repairable, it substantially improves translation reliability. Experiments on 405 user queries show an initial generation accuracy of 52.42%; applying few-shot repairs guided by an error taxonomy raises the success rate to 80.42%, demonstrating the critical role of a declarative intermediate representation in improving system robustness.
๐Ÿ“ Abstract
What-if analysis (WIA) is an iterative, multi-step process where users explore and compare hypothetical scenarios by adjusting parameters, applying constraints, and scoping data through interactive interfaces. Current tools fall short of supporting effective interactive WIA: spreadsheet and BI tools require time-consuming and laborious setup, while LLM-based chatbot interfaces are semantically fragile, frequently misinterpret intent, and produce inconsistent results as conversations progress. To address these limitations, we present a two-stage workflow that translates natural language (NL) WIA questions into interactive visual interfaces via an intermediate representation, powered by the Praxa Specification Language (PSL): first, LLMs generate PSL specifications from NL questions capturing analytical intent and logic, enabling validation and repair of erroneous specifications; and second, the specifications are compiled into interactive visual interfaces with parameter controls and linked visualizations. We benchmark this workflow with 405 WIA questions spanning 11 WIA types, 5 datasets, and 3 state-of-the-art LLMs. The results show that across models, about half of the specifications (52.42%) are generated correctly without intervention. We analyze the failure cases and derive an error taxonomy spanning non-functional errors (specifications fail to compile) and functional errors (specifications compile but misrepresent intent). Based on the taxonomy, we apply targeted repairs to the failure cases using few-shot prompts and improve the success rate to 80.42%. Finally, we show how undetected functional errors propagate through compilation into plausible but misleading interfaces, demonstrating that the intermediate specification is critical for reliably bridging NL and interactive WIA interfaces in LLM-powered WIA systems.
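The two-stage workflow in the abstract (LLM generates a declarative spec, the spec is validated and repaired if erroneous, then compiled into an interactive interface) can be sketched as a minimal pipeline. This is an illustrative sketch only: the required keys, the repair rules, and the stand-in "compiler" are assumptions for demonstration, not the paper's actual PSL schema or repair prompts.

```python
# Hedged sketch of generate → validate → repair → compile.
# A plain dict stands in for an LLM-generated PSL specification;
# REQUIRED_KEYS, repair(), and compile_interface() are hypothetical.

REQUIRED_KEYS = {"dataset", "parameters", "views"}

def validate(spec: dict) -> list:
    """Return detected non-functional errors (here: missing required keys)."""
    return sorted(REQUIRED_KEYS - spec.keys())

def repair(spec: dict, errors: list) -> dict:
    """Fill defaults for each detected error: a toy stand-in for the
    paper's few-shot, error-taxonomy-guided LLM repair step."""
    fixed = dict(spec)
    for key in errors:
        fixed[key] = "unknown" if key == "dataset" else []
    return fixed

def compile_interface(spec: dict) -> str:
    """Stand-in compiler: summarize the interface the spec would yield."""
    return (f"interface over '{spec['dataset']}' with "
            f"{len(spec['parameters'])} parameter control(s) and "
            f"{len(spec['views'])} linked view(s)")

# A spec missing the 'views' key, as a model might emit it.
raw = {"dataset": "sales",
       "parameters": [{"name": "discount", "range": [0.0, 0.5]}]}
errors = validate(raw)
spec = repair(raw, errors) if errors else raw
print(compile_interface(spec))
```

The key property this sketch illustrates is that errors are caught at the specification level, before compilation, so a repaired spec (rather than an opaque chat answer) is what reaches the interface compiler.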
Problem

Research questions and friction points this paper is trying to address.

What-if analysis
interactive interfaces
natural language understanding
LLM-based systems
specification generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

declarative specification
what-if analysis
natural language interface
LLM-generated code
interactive visualization
🔎 Similar Papers
No similar papers found.