🤖 AI Summary
This work addresses the limitation of existing large language model (LLM)-based tutoring systems, which typically employ a single instructional strategy and thus fail to capture the diverse teaching styles observed in real-world pedagogy and their impact on student interaction. The authors propose a novel approach that leverages authentic teacher–student dialogues to directly learn interpretable activation-space steering vectors from human teaching data. By integrating these vectors with an enhanced Bidirectional Preference Optimization (BiPO) framework, the method enables fine-grained, prompt-free control over the model’s tutoring behavior. Experimental results demonstrate that this approach significantly improves semantic alignment between model outputs and real teacher utterances, as well as human preference ratings, while maintaining high lexical similarity—thereby validating its effectiveness and interpretability.
📝 Abstract
With the emergence of large language models (LLMs) as a powerful class of generative artificial intelligence (AI), their use in tutoring has become increasingly prominent. Prior work on LLM-based tutoring typically learns a single tutor policy and does not capture the diversity of tutoring styles. In real-world tutor-student interactions, pedagogical intent is realized through adaptive instructional strategies, with tutors varying the level of scaffolding, instructional directiveness, feedback, and affective support in response to learners' needs. These differences can all impact dialogue dynamics and student engagement. In this paper, we explore how tutor personas embedded in human tutor-student dialogues can be used to guide LLM behavior without relying on explicitly prompted instructions. We modify Bidirectional Preference Optimization (BiPO) to learn a steering vector, an activation-space direction that steers model responses towards certain tutor personas. We find that this steering vector captures tutor-specific variation across dialogue contexts, improving semantic alignment with ground-truth tutor utterances and increasing preference-based evaluation scores, while largely preserving lexical similarity. Analysis of the learned directional coefficients further reveals interpretable structure across tutors, corresponding to consistent differences in tutoring behavior. These results demonstrate that activation steering offers an effective and interpretable way to control tutor-specific variation in LLMs using signals derived directly from human dialogue data.
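The core mechanism described above can be illustrated with a toy sketch. This is not the paper's implementation: the vector dimensions, the random "persona direction," and the function names are all illustrative assumptions. The sketch only shows the basic activation-steering operation, adding a scaled direction to a hidden-state vector, where the sign of the directional coefficient captures the bidirectional idea (steer toward or away from a persona).

```python
import numpy as np

def steer_activation(hidden, direction, alpha):
    """Shift a hidden-state vector along a unit steering direction.

    hidden:    activation vector at some transformer layer (toy stand-in)
    direction: learned steering vector (here random, for demonstration only)
    alpha:     directional coefficient; positive steers toward the persona,
               negative steers away
    """
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
hidden = rng.normal(size=16)        # stand-in for a layer activation
persona_dir = rng.normal(size=16)   # stand-in for a learned steering vector

steered = steer_activation(hidden, persona_dir, alpha=4.0)
# A positive alpha increases alignment with the persona direction
print(cosine(hidden, persona_dir), "->", cosine(steered, persona_dir))
```

In practice such a vector would be added to the residual stream at a chosen layer during generation (e.g. via a forward hook in a real model), with alpha tuned per tutor persona; here the effect is simply that cosine similarity to the steering direction increases with alpha.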