🤖 AI Summary
Database normalization has traditionally relied on manual effort, resulting in low efficiency and high error rates. This paper proposes a self-optimizing framework in which two large language models (LLMs) collaborate to automate normalization end to end: a generative LLM designs relational schemas, while a verification LLM checks them for logical consistency and normal-form compliance; the two models refine the schema through feedback-driven, closed-loop iteration. The authors introduce a task-specific zero-shot prompting strategy that eliminates the need for costly fine-tuning while balancing accuracy, inference efficiency, and cost. Experimental results demonstrate 92.4% accuracy on complex schemas and over 98% completeness in anomaly elimination, significantly reducing human intervention. The framework exhibits robust performance and scalability, indicating readiness for industrial deployment.
📝 Abstract
Database normalization is crucial for preserving data integrity. However, it is time-consuming and error-prone, as it is typically performed manually by data engineers. To this end, we present Miffie, a database normalization framework that leverages the capabilities of large language models. Miffie enables automated schema normalization without human effort while preserving high accuracy. The core of Miffie is a dual-model self-refinement architecture that combines the best-performing models for normalized schema generation and verification, respectively. The generation module eliminates anomalies based on feedback from the verification module until the output schema satisfies the requirements of normalization. We also carefully design task-specific zero-shot prompts that guide the models toward both high accuracy and cost efficiency. Experimental results show that Miffie can normalize complex database schemas while maintaining high accuracy.
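The generate-verify loop described above can be sketched in a few lines. This is a minimal illustration, not Miffie's implementation: `generate_schema` and `verify_schema` are hypothetical stand-ins for the two LLM calls (here stubbed with a hard-coded 3NF example of a transitive dependency), and the loop structure is an assumption based on the abstract's description of feedback-driven refinement.

```python
# Hypothetical sketch of a dual-model self-refinement loop.
# generate_schema / verify_schema stand in for the generative and
# verification LLMs; real calls and prompts are not shown in the paper text.

def generate_schema(requirements, feedback=None):
    """Stub for the generative LLM: propose a candidate schema."""
    if feedback is None:
        # First attempt: one flat table with a transitive dependency
        # (customer_city depends on order_id only via customer_id).
        return {"Orders": ["order_id", "customer_id", "customer_city"]}
    # Refined attempt after feedback: split out the dependent attribute.
    return {
        "Orders": ["order_id", "customer_id"],
        "Customers": ["customer_id", "customer_city"],
    }

def verify_schema(schema):
    """Stub for the verification LLM: check normal-form compliance."""
    if "customer_city" in schema.get("Orders", []):
        return False, ("3NF violation: customer_city transitively "
                       "depends on order_id via customer_id")
    return True, "schema satisfies 3NF"

def normalize(requirements, max_rounds=5):
    """Iterate generation and verification until the schema passes."""
    feedback = None
    for _ in range(max_rounds):
        schema = generate_schema(requirements, feedback)
        ok, feedback = verify_schema(schema)
        if ok:
            return schema
    raise RuntimeError("failed to converge: " + feedback)

result = normalize("orders with customer city")
print(sorted(result))
```

In this toy run the first candidate is rejected, the feedback triggers one refinement, and the loop terminates with a decomposed two-table schema; a bounded round count guards against non-convergence.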