🤖 AI Summary
Multi-objective discrete optimization tasks such as molecular design face exponentially large combinatorial search spaces and are prone to getting trapped in local optima.
Method: This paper proposes a collaborative evolutionary framework that pairs a frozen, closed-source large language model (LLM) with a trainable, lightweight open-source model. A shared trajectory memory couples knowledge-driven exploration, which draws on the LLM's pretrained priors and logical reasoning, with experience-driven learning, in which the small model is continually refined via reinforcement learning. The result is a bidirectional, mutually reinforcing loop rather than conventional one-way knowledge distillation.
Contribution/Results: On multi-objective drug-design benchmarks, the method substantially improves Pareto front quality across diverse objectives, consistently outperforming state-of-the-art baselines in both the diversity and the optimality of generated molecules.
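The summary describes the loop only at a high level; the paper's actual prompts, RL algorithm, and molecular representation are not given here. The following is a toy Python sketch of the co-evolution pattern using stand-ins: bitstrings as "molecules", random mutation for the frozen LLM's knowledge-driven proposals, and a reward-weighted flip-probability vector as the trainable small model, with both sharing a trajectory memory and a Pareto archive. All names and update rules are illustrative assumptions, not the paper's method.

```python
import random

random.seed(0)
N = 12  # toy "molecule" encoded as a bitstring of length N

def objectives(x):
    """Two toy objectives in tension: count of 1-bits, count of 01/10 alternations."""
    return (sum(x), sum(1 for a, b in zip(x, x[1:]) if a != b))

def dominates(a, b):
    """a Pareto-dominates b (maximization): >= everywhere, > somewhere."""
    return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

def update_pareto(archive, cand, score):
    """Insert candidate, keeping only mutually non-dominated entries."""
    if any(dominates(s, score) for _, s in archive):
        return archive
    archive = [(c, s) for c, s in archive if not dominates(score, s)]
    return archive + [(cand, score)]

def frozen_llm_propose(parent):
    """Stand-in for the closed-source LLM: broad, untrainable exploration (3 random flips)."""
    child = parent[:]
    for i in random.sample(range(N), k=3):
        child[i] ^= 1
    return child

class SmallModel:
    """Stand-in for the trainable model: per-bit flip probabilities refined from experience."""
    def __init__(self):
        self.p = [0.5] * N

    def propose(self, parent):
        return [b ^ (random.random() < pi) for b, pi in zip(parent, self.p)]

    def reinforce(self, memory, lr=0.1):
        # Crude reward-weighted update (not the paper's RL algorithm): raise the
        # flip probability of bits whose flips improved the score, lower it otherwise.
        for parent, child, gain in memory[-8:]:
            for i in range(N):
                if parent[i] != child[i]:
                    target = 1.0 if gain > 0 else 0.0
                    self.p[i] += lr * (target - self.p[i])

small, memory, archive = SmallModel(), [], []
parent = [random.randint(0, 1) for _ in range(N)]
for step in range(200):
    # Alternate proposers: both feed the same trajectory memory and archive.
    proposer = frozen_llm_propose if step % 2 == 0 else small.propose
    child = proposer(parent)
    gain = sum(objectives(child)) - sum(objectives(parent))  # scalarized signal
    memory.append((parent, child, gain))
    archive = update_pareto(archive, child, objectives(child))
    small.reinforce(memory)  # experience-driven learning from the shared memory
    if gain >= 0:
        parent = child
```

The key structural point the sketch preserves is that neither model distills into the other: the frozen proposer keeps supplying exploratory moves, while the trainable one internalizes the shared trajectory memory.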
📝 Abstract
Multi-objective discrete optimization problems, such as molecular design, pose significant challenges due to their vast and unstructured combinatorial spaces. Traditional evolutionary algorithms often get trapped in local optima, while expert knowledge can provide crucial guidance for accelerating convergence. Large language models (LLMs) offer powerful priors and reasoning ability, making them natural optimizers when expert knowledge matters. However, closed-source LLMs, though strong in exploration, cannot update their parameters and thus cannot internalize experience. Conversely, smaller open models can be continually fine-tuned but lack broad knowledge and reasoning strength. We introduce Multi-LLM Collaborative Co-evolution (MCCE), a hybrid framework that unites a frozen closed-source LLM with a lightweight trainable model. The system maintains a trajectory memory of past search processes; the small model is progressively refined via reinforcement learning, with the two models jointly supporting and complementing each other in global exploration. Unlike model distillation, this process enhances the capabilities of both models through mutual inspiration. Experiments on multi-objective drug design benchmarks show that MCCE achieves state-of-the-art Pareto front quality and consistently outperforms baselines. These results highlight a new paradigm for enabling continual evolution in hybrid LLM systems, combining knowledge-driven exploration with experience-driven learning.
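The abstract reports results in terms of Pareto front quality but does not state the indicator used. Hypervolume is a standard choice for such comparisons; as an illustrative aside (not necessarily the paper's metric), here is a minimal 2-D hypervolume computation for maximization with respect to a reference point:

```python
def hypervolume_2d(front, ref=(0.0, 0.0)):
    """Area dominated by a 2-D maximization front, relative to reference point `ref`.

    Sorts points by the first objective in descending order, then sweeps the
    second objective, accumulating the non-overlapping rectangle strips.
    Dominated points contribute nothing.
    """
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv
```

For example, the front `[(3, 1), (2, 2), (1, 3)]` with reference `(0, 0)` dominates an area of 6, and adding a dominated point leaves the value unchanged.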