Improving Procedural Skill Explanations via Constrained Generation: A Symbolic-LLM Hybrid Architecture

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Procedural skill instruction requires explanations that address both *how* to perform a task and *why* each step is necessary; however, current large language models (LLMs) typically generate step-by-step instructions that lack causal reasoning, goal hierarchy, and structural decomposition. Method: We propose Ivy, the first system to integrate a symbolic Task-Method-Knowledge (TMK) model as a structural constraint on LLM generation, explicitly encoding task causality and goal abstraction, and to employ constrained decoding to produce pedagogically effective, multi-step instructional explanations. Contribution/Results: This neuro-symbolic architecture significantly improves explanation structure and reasoning depth. In expert-evaluation and independent-annotation experiments, Ivy outperforms GPT-based and retrieval-augmented baselines on three dimensions (causality, goal-directedness, and compositional logical coherence), demonstrating the critical role of symbolic priors in improving instructional explanation quality.

📝 Abstract
In procedural skill learning, instructional explanations must convey not just steps, but the causal, goal-directed, and compositional logic behind them. Large language models (LLMs) often produce fluent yet shallow responses that miss this structure. We present Ivy, an AI coaching system that delivers structured, multi-step explanations by combining symbolic Task-Method-Knowledge (TMK) models with a generative interpretation layer: an LLM that constructs explanations while constrained by TMK structure. TMK encodes causal transitions, goal hierarchies, and problem decompositions, and guides the LLM within explicit structural bounds. We evaluate Ivy's responses against GPT and retrieval-augmented GPT baselines using expert and independent annotations across three inferential dimensions. Results show that symbolic constraints consistently improve the structural quality of explanations for "how" and "why" questions. This study demonstrates a scalable AI-for-education approach that strengthens the pedagogical value of AI-generated explanations in intelligent coaching systems.
Problem

Research questions and friction points this paper is trying to address.

LLMs produce shallow explanations lacking causal and goal-directed structure
Instructional explanations require conveying compositional logic behind procedural steps
AI coaching systems need structured explanations for how and why questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symbolic TMK models constrain LLM generation
LLM constructs explanations within structural bounds
Hybrid architecture improves explanation structural quality
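The constraint idea above can be sketched in a few lines: serialize a symbolic TMK model into explicit structural bounds for the prompt, and reject generations that step outside the encoded decomposition. This is a minimal illustrative sketch; the class names, fields, and check below are assumptions for exposition, not Ivy's actual schema or decoding procedure.

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    subtasks: list  # ordered subtask names (the problem decomposition)

@dataclass
class TMKTask:
    name: str
    goal: str  # the goal the task achieves (supports "why" explanations)
    methods: list = field(default_factory=list)  # alternative methods ("how")

def build_constrained_prompt(task: TMKTask, question: str) -> str:
    """Serialize the TMK model into explicit structural bounds for an LLM prompt."""
    lines = [f"Task: {task.name}", f"Goal: {task.goal}"]
    for m in task.methods:
        lines.append(f"Method {m.name}: {' -> '.join(m.subtasks)}")
    lines.append("Explain each step's causal role toward the goal. "
                 "Do not introduce steps outside the listed decomposition.")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

def violates_structure(explanation_steps, task: TMKTask) -> bool:
    """Reject a generation that mentions steps absent from every TMK method."""
    allowed = {s for m in task.methods for s in m.subtasks}
    return any(step not in allowed for step in explanation_steps)
```

For example, a `TMKTask("BrewTea", ...)` with one method `["boil water", "steep leaves", "pour"]` would yield a prompt listing that decomposition, and `violates_structure(["add sugar first"], task)` would flag the off-model step, giving the hybrid loop a symbolic check on the LLM's output.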
Rahul Dass
Georgia Institute of Technology
Thomas Bowlin
Georgia Institute of Technology
Zebing Li
Georgia Institute of Technology
Xiao Jin
CUHK
Ashok Goel
Georgia Institute of Technology