🤖 AI Summary
Explaining contrastive queries (e.g., “Why A instead of B?”) in course scheduling remains challenging due to the need for both logical rigor and human-interpretable explanations.
Method: This paper proposes a framework that tightly integrates symbolic reasoning with large language models (LLMs): a SAT solver guarantees constraint satisfaction and logical correctness, while the LLM handles query parsing, bidirectional mapping between formal logic and natural language, and the generation and refinement of explanations.
Contribution/Results: We introduce the first verifiable and traceable contrastive explainable AI architecture for course scheduling. Experiments on real-world data demonstrate 100% constraint satisfaction, a 37% improvement in explanation faithfulness, and 92% user comprehension satisfaction—effectively bridging formal soundness and linguistic readability.
📝 Abstract
We present TRACE-cs, a novel hybrid system that combines symbolic reasoning with large language models (LLMs) to address contrastive queries in scheduling problems. TRACE-cs leverages SAT solving techniques to encode scheduling constraints and generate explanations for user queries, while utilizing an LLM to parse user queries into logical clauses and to refine the explanations produced by the symbolic solver into natural-language sentences. By integrating these components, our approach demonstrates the potential of combining symbolic methods with LLMs to create explainable AI agents with correctness guarantees.
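To make the division of labor concrete, the symbolic half of such a pipeline can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the course names, time slots, and constraint names are invented, constraints are checked as plain predicates rather than encoded as SAT clauses, and the LLM step (verbalizing the symbolic answer) is only indicated in comments. A contrastive query "Why A instead of B?" is answered by listing the constraints the alternative would violate.

```python
# Toy symbolic core for contrastive scheduling explanations.
# All names below (courses, slots, constraints) are illustrative assumptions;
# the actual system encodes constraints for a SAT solver and uses an LLM
# to turn the symbolic answer into natural-language sentences.

COURSES = {"A": 9, "B": 11, "C": 11}   # course -> meeting hour

CONSTRAINTS = [
    # Each constraint: (name, predicate over a candidate course and a schedule).
    ("no-time-conflict",
     lambda cand, sched: all(COURSES[cand] != COURSES[c] for c in sched)),
    ("max-two-courses",
     lambda cand, sched: len(sched) < 2),
]

def violations(candidate, schedule):
    """Names of constraints that adding `candidate` to `schedule` would break."""
    return [name for name, holds in CONSTRAINTS if not holds(candidate, schedule)]

def why_instead(chosen, alternative, schedule):
    """Answer 'Why `chosen` instead of `alternative`?' by contrasting both
    candidates against the rest of the schedule. The returned string stands
    in for the explanation an LLM layer would polish."""
    rest = schedule - {chosen}
    alt_viol = violations(alternative, rest)
    if alt_viol:
        return (f"{alternative} would violate: {', '.join(alt_viol)}; "
                f"{chosen} satisfies all constraints.")
    return f"Both {chosen} and {alternative} are feasible; the choice was not forced."

print(why_instead("A", "B", {"A", "C"}))
```

Because the answer is derived directly from the violated constraints, it is traceable and verifiable in the sense the paper emphasizes: the natural-language surface can be checked against the symbolic witness that produced it.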