🤖 AI Summary
To address the challenge of generating high-quality AI-assisted assessment items from unstructured lecture notes in educational settings, this paper proposes a knowledge graph (KG) optimization framework grounded in rate-distortion theory. Methodologically, we introduce Fused Gromov-Wasserstein optimal transport into KG modeling to quantify semantic and structural similarity between metric-measure spaces, and jointly optimize semantic embeddings with five refinement operations (addition, merging, splitting, deletion, and rewiring) to achieve Pareto-optimal trade-offs between KG size (rate) and semantic fidelity (distortion). The resulting compact KG exhibits an interpretable rate-distortion curve. Empirical evaluation on data science lecture notes demonstrates significant improvements: multiple-choice questions generated from the refined KG outperform those derived directly from raw notes across all fifteen quality criteria. This work provides a scalable, interpretable, and structurally grounded foundation for AI-driven educational content understanding and assessment.
📝 Abstract
Task-oriented knowledge graphs (KGs) enable AI-powered learning assistant systems to automatically generate high-quality multiple-choice questions (MCQs). Yet converting unstructured educational materials, such as lecture notes and slides, into KGs that capture key pedagogical content remains difficult. We propose a framework for knowledge graph construction and refinement grounded in rate-distortion (RD) theory and optimal transport geometry. In this framework, lecture content is modeled as a metric-measure space that captures semantic and relational structure, while candidate KGs are aligned to it via Fused Gromov-Wasserstein (FGW) couplings to quantify semantic distortion. The rate term, expressed via the size of the KG, reflects complexity and compactness. Refinement operators (add, merge, split, remove, rewire) minimize the rate-distortion Lagrangian, yielding compact, information-preserving KGs. Applied to data science lectures, our prototype yields interpretable RD curves and shows that MCQs generated from refined KGs consistently surpass those from raw notes on fifteen quality criteria. This study establishes a principled foundation for information-theoretic KG optimization in personalized and AI-assisted education.
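The refinement loop the abstract describes, i.e. applying operators that greedily lower a rate-distortion Lagrangian L = D + λ·R, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the distortion function below is a toy stand-in for the FGW coupling cost (it just measures missing reference concepts), and all function and variable names (`rate`, `distortion`, `refine`, `drop_misc`) are hypothetical.

```python
# Toy sketch of rate-distortion-guided KG refinement.
# A KG is (nodes, edges); distortion is a placeholder for the FGW cost.

def rate(kg):
    """Rate term R: KG size = number of nodes plus number of edges."""
    nodes, edges = kg
    return len(nodes) + len(edges)

def distortion(kg, reference):
    """Toy distortion D: fraction of reference concepts absent from the KG.
    (The paper uses a Fused Gromov-Wasserstein coupling cost instead.)"""
    nodes, _ = kg
    return sum(1 for c in reference if c not in nodes) / len(reference)

def lagrangian(kg, reference, lam):
    """Rate-distortion Lagrangian L = D + lambda * R."""
    return distortion(kg, reference) + lam * rate(kg)

def refine(kg, reference, lam, candidate_ops):
    """Greedily apply any refinement op that lowers the Lagrangian."""
    best = kg
    improved = True
    while improved:
        improved = False
        for op in candidate_ops:
            cand = op(best)
            if lagrangian(cand, reference, lam) < lagrangian(best, reference, lam):
                best, improved = cand, True
    return best

# Hypothetical usage: remove a node that carries no reference content.
reference = {"regression", "classification", "clustering"}
kg = ({"regression", "classification", "clustering", "misc"},
      {("regression", "classification")})

def drop_misc(kg):
    nodes, edges = kg
    return (nodes - {"misc"}, edges)

refined = refine(kg, reference, lam=0.01, candidate_ops=[drop_misc])
```

Here the "remove" operator is accepted because deleting the off-topic node lowers the rate without increasing distortion; an operator that deleted a reference concept would raise D and be rejected, which is the trade-off the Lagrangian encodes.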