🤖 AI Summary
This work proposes a training-free, noise-guided approach to enhance narrative diversity in Arabic early-grade reading assessment stories while preserving strict educational constraints, such as vocabulary control, readability level, and narrative structure. The method injects calibrated Gaussian perturbations into internal representations of Transformer models, specifically targeting the residual stream and attention entropy. Evaluation across five Arabic-specific models (7-9B parameters) demonstrates that residual stream noise significantly boosts narrative diversity with minimal impact on text quality or constraint adherence, while attention entropy noise injection (AENI) effectively stabilizes logical coherence and recovers textual fluency. Unlike conventional high-temperature sampling, which often induces readability drift and quality degradation, the proposed technique successfully balances diversity with pedagogical fidelity.
📝 Abstract
Generating diverse, pedagogically valid stories for Arabic early-grade reading assessments requires balancing tight constraints on vocabulary, reading level, and narrative structure against the need to avoid repetitive plots that undermine assessment validity. We investigate noise steering, injecting calibrated Gaussian perturbations into the internal representations of transformer models at inference time, as a training-free diversity method evaluated across five small Arabic-centric language models (7-9B parameters). We compare four injection strategies against high-temperature sampling baselines, measuring diversity, quality, constraint adherence, and reading grade level. Residual stream noise consistently improves narrative diversity with minimal quality or constraint cost and preserves early-grade reading level across all models. Attention entropy noise injection (AENI) stabilizes the otherwise unreliable attention-logit noise while recovering quality. High-temperature sampling inflates reading grade level and causes catastrophic collapse on several models. We find internal representation-level perturbation to be a more suitable diversity strategy than output-level stochasticity for constrained educational content generation.