🤖 AI Summary
This work addresses the challenges of efficiently converting causal language models into bidirectional encoders, which include ambiguous training objectives, catastrophic forgetting at scale, and difficulties in integrating specialized generative models. The authors propose a dual-strategy adaptation framework that operates without access to the original pretraining data: it first stabilizes representation learning through a prior masking phase, then enables effective transfer via linear weight merging combined with lightweight multi-domain data mixing. Notably, this approach achieves the first seamless injection of modality-specific capabilities from dedicated causal models into bidirectional encoders. The resulting five open-source BidirLM encoders consistently outperform existing methods across textual, visual, and audio representation benchmarks.
📝 Abstract
Transforming causal generative language models into bidirectional encoders offers a powerful alternative to BERT-style architectures. However, current approaches remain limited: they lack consensus on optimal training objectives, suffer from catastrophic forgetting at scale, and fail to flexibly integrate the vast ecosystem of specialized generative models. In this work, through systematic ablations on the Gemma3 and Qwen3 families, we identify the key factors driving successful adaptation, highlighting the critical role of an often-omitted prior masking phase. To scale this process without access to the original pretraining data, we introduce a dual strategy combining linear weight merging with a lightweight multi-domain data mixture that mitigates catastrophic forgetting. Finally, we augment our encoders by merging them with specialized causal models, seamlessly transferring modality- and domain-specific capabilities. This open-source recipe, designed for any causal decoder LLM, yields BidirLM, a family of five encoders that outperform alternatives on text, vision, and audio representation benchmarks.
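The linear weight merging mentioned above is, at its core, parameter-wise interpolation between two checkpoints that share an architecture. A minimal sketch of that idea, assuming same-shaped parameter dicts (the function name `linear_merge` and the toy dicts below are illustrative, not the paper's implementation):

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Interpolate two same-architecture parameter dicts:
    each merged value is alpha * state_a[k] + (1 - alpha) * state_b[k]."""
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy example with scalars standing in for weight tensors:
encoder = {"layer.0.w": 1.0, "layer.1.w": 2.0}
specialist = {"layer.0.w": 3.0, "layer.1.w": 4.0}
merged = linear_merge(encoder, specialist, alpha=0.5)
# merged["layer.0.w"] == 2.0, merged["layer.1.w"] == 3.0
```

In practice the same loop would run over full model state dicts (e.g. PyTorch tensors), with `alpha` controlling how much of the specialized causal model's capability is injected into the adapted encoder.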