🤖 AI Summary
Existing LLM-based text encoders predominantly rely on contrastive losses, treating the model as a black box and discarding its generative and reasoning capabilities, yielding only static, uninterpretable embeddings.
Method: We propose GRACE, the first framework to reformulate contrastive learning objectives as reward signals for policy gradient optimization, training the LLM to generate interpretable natural language rationales; high-quality embeddings are then derived via mean pooling over the generated tokens.
Contribution/Results: GRACE unifies generative reasoning with representation learning by leveraging contrastive signals to steer controllable, interpretable generation, endowing encoders with both semantic expressiveness and intrinsic interpretability. On the MTEB benchmark, GRACE yields substantial gains across four backbone models: +11.5% in the supervised setting and +6.9% in the unsupervised variant, without compromising general-purpose capabilities.
📝 Abstract
Prevailing methods for training Large Language Models (LLMs) as text encoders rely on contrastive losses that treat the model as a black-box function, discarding its generative and reasoning capabilities in favor of static embeddings. We introduce GRACE (Generative Representation Learning via Contrastive Policy Optimization), a novel framework that reimagines contrastive signals not as losses to be minimized but as rewards that guide a generative policy. In GRACE, the LLM acts as a policy that produces explicit, human-interpretable rationales: structured natural language explanations of its semantic understanding. These rationales are then encoded into high-quality embeddings via mean pooling. Using policy gradient optimization, we train the model with a multi-component reward function that maximizes similarity between queries and their positives while minimizing similarity with negatives. This transforms the LLM from an opaque encoder into an interpretable agent whose reasoning process is transparent and inspectable. On the MTEB benchmark, GRACE yields broad cross-category gains: averaged over four backbones, the supervised setting improves the overall score by 11.5% over base models, and the unsupervised variant adds 6.9%, while preserving general capabilities. This work treats contrastive objectives as rewards over rationales, unifying representation learning with generation to produce stronger embeddings and transparent rationales. The model, data, and code are available at https://github.com/GasolSun36/GRACE.
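The core loop described in the abstract can be sketched compactly: pool the hidden states of a generated rationale into an embedding, score it with a contrastive reward (closer to the positive than to any negative), and scale the rationale's log-likelihood by that reward in a REINFORCE-style update. The sketch below is illustrative only: it assumes cosine similarity, a single hardest negative, and a simple baseline-subtracted REINFORCE loss; the paper's actual multi-component reward and optimizer may differ.

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray) -> np.ndarray:
    # token_embeddings: (num_tokens, dim) hidden states of the generated rationale;
    # the rationale's embedding is the mean over its token representations.
    return token_embeddings.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_reward(query_emb, pos_emb, neg_embs, margin=0.0) -> float:
    # Reward is high when the query embedding sits closer to the positive
    # than to the hardest negative (a margin-style contrastive signal).
    pos_sim = cosine(query_emb, pos_emb)
    hardest_neg_sim = max(cosine(query_emb, n) for n in neg_embs)
    return pos_sim - hardest_neg_sim - margin

def reinforce_loss(log_probs, reward, baseline=0.0) -> float:
    # REINFORCE: weight the rationale's total log-likelihood by the
    # baseline-subtracted reward; minimizing this pushes the policy toward
    # rationales whose embeddings score well under the contrastive reward.
    return -(reward - baseline) * sum(log_probs)

# Toy example: a query aligned with its positive and orthogonal to a negative.
q = np.array([1.0, 0.0])
pos = np.array([1.0, 0.1])
neg = np.array([0.0, 1.0])
r = contrastive_reward(q, pos, [neg])   # positive: query prefers the positive
loss = reinforce_loss([-1.0, -2.0], r)  # scales the sequence log-prob by r
```

Because the reward is computed on sampled text, the gradient flows through the policy's log-probabilities rather than through the (non-differentiable) generation step, which is why a policy-gradient estimator is used instead of backpropagating a contrastive loss directly.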