🤖 AI Summary
Existing LLM-based database query tools suffer from frequent user intent misinterpretation, hallucinated SQL generation, and inadequate integration of human feedback—compromising reliability and controllability. To address these issues, we propose an interactive, progressive query generation framework that integrates incremental reasoning, real-time semantic validation, and responsive human-in-the-loop interaction, enabling users to dynamically monitor, detect, and correct semantic mismatches and logical errors across multi-turn dialogues. Our core contribution lies in closing the loop among natural language understanding, program synthesis, and explainable verification—thereby achieving robust intent alignment and hallucination suppression. Experimental results demonstrate significant reductions in erroneous query rate and hallucination incidence, alongside measurable improvements in query accuracy, user trust, and operational controllability. This work establishes a novel paradigm for trustworthy, human-centered database interaction.
📝 Abstract
Conversational user interfaces powered by large language models (LLMs) have significantly lowered the technical barriers to database querying. However, existing tools still face several challenges, such as misinterpretation of user intent, generation of hallucinated content, and the absence of effective mechanisms for human feedback, all of which undermine their reliability and practical utility. To address these issues and promote a more transparent and controllable querying experience, we propose QueryGenie, an interactive system that enables users to monitor, understand, and guide the LLM-driven query generation process. Through incremental reasoning, real-time validation, and responsive interaction mechanisms, users can iteratively refine query logic and ensure alignment with their intent.
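To make the interaction pattern concrete, the following is a minimal, hypothetical sketch of the kind of loop the abstract describes: a query is assembled clause by clause, each clause is checked against the schema (a simple stand-in for real-time semantic validation), and a human feedback callback corrects any hallucinated reference before it enters the query. All names and logic here are illustrative assumptions, not QueryGenie's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class QueryDraft:
    """Query under construction, built incrementally across dialogue turns."""
    clauses: list = field(default_factory=list)

    def sql(self):
        return " ".join(self.clauses)

def validate(clause, schema):
    """Return tokens that reference identifiers absent from the schema.

    A toy stand-in for real-time semantic validation: numbers, SQL
    keywords, and quoted string literals are accepted; any other token
    must be a known table or column name.
    """
    keywords = {"SELECT", "FROM", "WHERE", "=", ">", "<"}
    known = keywords | schema["tables"] | schema["columns"]
    unknown = []
    for token in clause.replace(",", " ").split():
        if token.isdigit() or token.strip("'") != token:  # literal, skip
            continue
        if token not in known:
            unknown.append(token)
    return unknown

def generate_interactively(proposed_clauses, schema, feedback):
    """Assemble a query clause by clause; on a validation failure, hand
    the clause to the user (here, a callback) for correction."""
    draft = QueryDraft()
    for clause in proposed_clauses:
        unknown = validate(clause, schema)
        if unknown:
            clause = feedback(clause, unknown)  # human fixes hallucinated reference
        draft.clauses.append(clause)
    return draft.sql()
```

For example, if the model proposes `WHERE salary > 30` against a schema whose only numeric column is `age`, validation flags `salary` and the feedback callback (in a real system, the user via the UI) substitutes the correct column before the clause is committed.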