🤖 AI Summary
This work addresses inherent limitations of large language models (LLMs) in semantic understanding and behavioral reliability. We propose a hierarchical dialogue architecture that tightly couples LLMs with Answer Set Programming (ASP), wherein the LLM serves *exclusively* as a bidirectional parser between natural language and formal logic, while all core logical inference is delegated to ASP. This strict separation ensures semantically interpretable and behaviorally verifiable dialogue reasoning. Our key contribution is the first principled decoupling framework—LLM+ASP—that explicitly disentangles linguistic interpretation from symbolic reasoning, thereby overcoming critical bottlenecks in end-to-end LLM-based dialogue systems: pervasive hallucination, ill-defined operational boundaries, and non-auditable reasoning traces. Empirical evaluation on both task-oriented and social dialogue prototypes demonstrates significant improvements in reasoning traceability, hallucination resistance, and behavioral controllability.
📝 Abstract
Efforts have been made over the past few decades to make machines converse like humans. Recent techniques of Large Language Models (LLMs) make human-like conversations with machines possible, but LLMs' flaws of lacking understanding and reliability are well documented. We believe the best way to eliminate this problem is to use LLMs only as parsers that translate text to knowledge and vice versa, and to carry out the conversation by reasoning over this knowledge using Answer Set Programming (ASP). I have been developing a framework based on LLMs and ASP to realize reliable chatbots that "understand" human conversation. This framework has been used to develop task-specific chatbots as well as socialbots. My future research focuses on making these chatbots scalable and trainable.
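The pipeline described above (LLM parses text into facts, a symbolic engine reasons over them, the LLM renders conclusions back as text) can be sketched as a toy program. This is an illustrative mock, not the paper's actual framework: `llm_parse`, `llm_realize`, the lexicon, and the rule set are all hypothetical stand-ins, and a naive forward-chaining loop substitutes for a real ASP solver such as clingo.

```python
# Toy sketch of the LLM+ASP pipeline. The parse/realize functions stand
# in for LLM calls; the forward-chaining loop stands in for ASP solving.
# All names and rules here are illustrative assumptions.

def llm_parse(utterance: str) -> set[str]:
    """Stand-in for the LLM translating natural language into facts."""
    lexicon = {"I want to book a flight": {"intent(book_flight)"}}
    return lexicon.get(utterance, set())

RULES = [
    # (body facts required, head fact derived) -- mock "ASP" rules.
    ({"intent(book_flight)"}, "ask(destination)"),
]

def reason(facts: set[str]) -> set[str]:
    """Naive forward chaining as a placeholder for ASP inference."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def llm_realize(facts: set[str]) -> str:
    """Stand-in for the LLM translating conclusions back into text."""
    if "ask(destination)" in facts:
        return "Where would you like to fly to?"
    return "Could you rephrase that?"

facts = llm_parse("I want to book a flight")
reply = llm_realize(reason(facts))
print(reply)  # -> Where would you like to fly to?
```

Because every conclusion is derived from explicit facts and rules, the reasoning trace is inspectable, which is the controllability property the framework targets; a real system would hand the facts to an ASP solver instead of this loop.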