🤖 AI Summary
This work addresses a limitation of conventional cross-entropy training under teacher forcing: it optimizes only token-level predictions and fails to align with sequence-level behavior during autoregressive generation. To overcome this, the authors propose Energy-Based Fine-Tuning (EBFT), a novel approach that integrates energy-based modeling with semantic feature matching. EBFT generates multiple candidate rollouts concurrently via strided block-parallel sampling from nested prefixes, extracts feature embeddings in batches over these rollouts, and performs on-policy policy-gradient updates, augmented with KL regularization, to directly shape sequence-level statistical properties. Notably, it provides dense semantic feedback without requiring task-specific verifiers or preference models. Experiments across question answering, unstructured code generation, and machine translation demonstrate that EBFT achieves higher downstream accuracy than supervised fine-tuning (SFT), matches the performance of RLVR, and yields a lower validation cross-entropy than both methods.
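The KL-regularized feature-matching objective sketched above can be written in a standard form; the notation here ($\pi_\theta$ for the policy, $\pi_{\mathrm{ref}}$ for the frozen reference model, $\phi$ for the feature extractor, $\mu^\star$ for the target sequence-level statistics, $\beta$ for the KL coefficient) is our illustrative choice, not necessarily the paper's:

$$
\max_{\theta}\; \mathbb{E}_{y \sim \pi_\theta}\!\left[-\left\lVert \phi(y) - \mu^{\star} \right\rVert^2\right] \;-\; \beta\, \mathrm{KL}\!\left(\pi_\theta \,\Vert\, \pi_{\mathrm{ref}}\right)
$$

The "energy-based" reading follows from the standard result for KL-regularized objectives: the optimum is $\pi^\star(y) \propto \pi_{\mathrm{ref}}(y)\,\exp\!\big(-E(y)/\beta\big)$ with energy $E(y) = \lVert \phi(y) - \mu^{\star} \rVert^2$, i.e. the reference model reweighted toward completions whose semantic features match the target statistics.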
📝 Abstract
Cross-entropy (CE) training provides dense and scalable supervision for language models, but it optimizes next-token prediction under teacher forcing rather than sequence-level behavior under model rollouts. We introduce a feature-matching objective for language-model fine-tuning that targets sequence-level statistics of the completion distribution, providing dense semantic feedback without requiring a task-specific verifier or preference model. To optimize this objective efficiently, we propose energy-based fine-tuning (EBFT), which uses strided block-parallel sampling to generate multiple rollouts from nested prefixes concurrently, batches feature extraction over these rollouts, and uses the resulting embeddings to perform an on-policy policy-gradient update. We present a theoretical perspective connecting EBFT to KL-regularized feature-matching and energy-based modeling. Empirically, across Q&A coding, unstructured coding, and translation, EBFT matches RLVR and outperforms SFT on downstream accuracy while achieving a lower validation cross-entropy than both methods.
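The abstract's training loop (sample rollouts on-policy, batch feature extraction, feature-matching reward, KL-regularized policy-gradient update) can be illustrated on a toy problem. This is a minimal sketch with a single-token categorical "policy" standing in for a language model; all names (`feature_matrix`, `target_stats`, `kl_coef`) are our own illustrative assumptions, and strided block-parallel sampling over nested prefixes is elided since the toy "sequences" are one token long:

```python
import numpy as np

rng = np.random.default_rng(0)

V = 5                       # toy vocabulary size
logits = np.zeros(V)        # trainable policy logits
ref_logits = np.zeros(V)    # frozen reference policy (KL anchor)

# Stand-in for semantic embeddings: a fixed feature per token, and target
# sequence-level statistics we want the completion distribution to match.
feature_matrix = rng.normal(size=(V, 3))
target_stats = 0.7 * feature_matrix[1] + 0.3 * feature_matrix[3]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

kl_coef, lr, n_rollouts = 0.1, 0.5, 256
for step in range(200):
    p, p_ref = softmax(logits), softmax(ref_logits)
    # 1) sample rollouts on-policy
    tokens = rng.choice(V, size=n_rollouts, p=p)
    # 2) batched "feature extraction" over the rollouts
    feats = feature_matrix[tokens]                      # (n_rollouts, 3)
    # 3) dense semantic reward: closeness of features to the target stats,
    #    with a pointwise KL penalty toward the reference policy
    reward = -np.linalg.norm(feats - target_stats, axis=1)
    reward -= kl_coef * (np.log(p[tokens]) - np.log(p_ref[tokens]))
    # 4) REINFORCE update with a mean baseline
    adv = reward - reward.mean()
    grad = np.zeros(V)
    for t, a in zip(tokens, adv):
        g = -p.copy()
        g[t] += 1.0                                     # d log p(t) / d logits
        grad += a * g
    logits += lr * grad / n_rollouts

p_final = softmax(logits)   # mass shifts toward feature-matching tokens
```

After training, `p_final` concentrates on the token whose features lie nearest `target_stats`, while the KL term keeps the policy from collapsing arbitrarily far from the reference. The paper's method replaces the toy pieces with real model rollouts, embedding-based features, and sequence-level statistics.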