The Impact of Steering Large Language Models with Persona Vectors in Educational Applications

πŸ“… 2026-04-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study addresses the underexplored impact of activation-based persona steering on large language models' answer generation and automated scoring in educational contexts. For the first time, we systematically investigate this technique by applying seven distinct persona vectors to both dense and Mixture-of-Experts (MoE) architectures on the ASAP-SAS benchmark, evaluating performance in short-answer generation and scoring tasks. Our findings reveal that persona steering generally degrades generation quality, with English Language Arts (ELA) tasks exhibiting greater sensitivity than science tasks. Moreover, automated scoring shows calibration shifts aligned with the induced persona traits, with MoE models exhibiting shifts up to six times larger than dense models. These results show that responses to persona steering differ markedly by both task type and model architecture.
πŸ“ Abstract
Activation-based steering can personalize large language models at inference time, but its effects in educational settings remain unclear. We study persona vectors for seven character traits in short-answer generation and automated scoring on the ASAP-SAS benchmark across three models spanning two architectures. Persona steering lowers answer quality overall, with much larger effects on open-ended English Language Arts (ELA) prompts than on factual science prompts; interpretive and argumentative tasks are up to 11x more sensitive. On the scoring side, we observe predictable valence-aligned calibration shifts: evil and impolite scorers grade more harshly, while good and optimistic scorers grade more leniently. ELA tasks are 2.5-3x more susceptible to scorer personalization than science tasks, and the Mixture-of-Experts model shows roughly 6x larger calibration shifts than the dense models. To our knowledge, this is the first study to systematically examine the effects of activation-steered persona traits in educational generation and scoring, and the results highlight the need for task-aware and architecture-aware calibration when deploying steered models in educational settings.
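The paper does not spell out the steering mechanics, but activation-based persona steering is commonly implemented as activation addition: at inference time, a fixed trait direction is added to a layer's hidden states. A minimal sketch of that recipe, assuming a unit persona direction `v` and a steering strength `alpha` (both names are illustrative, not from the paper):

```python
import numpy as np

def steer_activations(hidden_states, persona_vector, alpha=1.0):
    """Shift every token's hidden state along a persona direction.

    hidden_states: (seq_len, d_model) activations from one layer.
    persona_vector: (d_model,) direction for a trait (e.g. 'optimistic').
    alpha: steering strength; positive pushes activations toward the trait.
    """
    unit = persona_vector / np.linalg.norm(persona_vector)
    return hidden_states + alpha * unit

# Toy example: 4 tokens with 8-dimensional hidden states.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
v = rng.normal(size=8)

steered = steer_activations(h, v, alpha=2.0)

# Each token's projection onto the persona direction grows by exactly alpha.
unit = v / np.linalg.norm(v)
print(np.allclose(steered @ unit - h @ unit, 2.0))  # True
```

In a real model this addition would typically be applied inside a forward hook on one or more transformer layers, with `alpha` tuned per trait; the sketch only shows the vector arithmetic the calibration-shift results depend on.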
Problem

Research questions and friction points this paper is trying to address.

persona steering
large language models
educational applications
automated scoring
answer generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

persona steering
activation-based control
automated scoring
educational NLP
model calibration
Yongchao Wu
Department of Computer and Systems Sciences, Stockholm University
Aron Henriksson
Associate Professor in Computer and Systems Sciences, Stockholm University
Natural Language Processing