Towards Community-Driven Agents for Machine Learning Engineering

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing ML agents typically operate in isolation on a given research problem, lacking mechanisms for exchanging knowledge with the broader scientific community. Method: This paper introduces MLE-Live, a live evaluation framework that simulates a Kaggle-style research community, and CoMind, an LLM-driven ML agent designed to communicate with and leverage collective knowledge from that community. CoMind exchanges insights with other participants, assimilates community knowledge, and iterates on solutions in a cooperative/competitive setting. Contribution/Results: CoMind achieves state-of-the-art performance on MLE-Live and, on average, outperforms 79.2% of human competitors across four ongoing Kaggle competitions. The framework broadens the openness, collaborative capacity, and practical applicability of automated machine learning research.

📝 Abstract
Large language model-based machine learning (ML) agents have shown great promise in automating ML research. However, existing agents typically operate in isolation on a given research problem, without engaging with the broader research community, where human researchers often gain insights and contribute by sharing knowledge. To bridge this gap, we introduce MLE-Live, a live evaluation framework designed to assess an agent's ability to communicate with and leverage collective knowledge from a simulated Kaggle research community. Building on this framework, we propose CoMind, a novel agent that excels at exchanging insights and developing novel solutions within a community context. CoMind achieves state-of-the-art performance on MLE-Live and outperforms 79.2% of human competitors on average across four ongoing Kaggle competitions. Our code is released at https://github.com/comind-ml/CoMind.
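To make the community loop the abstract describes more concrete, here is a minimal, hypothetical sketch of agents posting results to a shared forum and reading peers' insights before their next attempt. The names `CommunityBoard`, `Insight`, and `Agent` are invented for illustration and are not CoMind's actual API:

```python
from dataclasses import dataclass


@dataclass
class Insight:
    """A single shared finding: who posted it, what it says, how well it scored."""
    author: str
    text: str
    score: float


class CommunityBoard:
    """Shared insight pool, standing in for the simulated Kaggle community."""

    def __init__(self) -> None:
        self.insights: list[Insight] = []

    def post(self, insight: Insight) -> None:
        self.insights.append(insight)

    def top(self, k: int = 3) -> list[Insight]:
        # Highest-scoring insights first, so agents read the best ideas.
        return sorted(self.insights, key=lambda i: i.score, reverse=True)[:k]


class Agent:
    """A competitor that reads peers' top insights, then shares its own result."""

    def __init__(self, name: str) -> None:
        self.name = name

    def iterate(self, board: CommunityBoard, my_score: float) -> str:
        # Read the community's best insights (excluding our own posts) ...
        peers = [i.text for i in board.top() if i.author != self.name]
        # ... then contribute this round's result back to the pool.
        board.post(Insight(self.name, f"{self.name}: score={my_score}", my_score))
        return "; ".join(peers) if peers else "no peer insights yet"
```

In this toy version the "insight" is just a score string; the actual system presumably exchanges richer artifacts (solution write-ups, code, leaderboard feedback), but the read-then-contribute cycle is the core idea.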
Problem

Research questions and friction points this paper is trying to address.

Enhancing ML agents' community interaction for knowledge sharing
Assessing agent performance in simulated research communities
Developing collaborative agents outperforming human competitors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Live evaluation framework for ML agents
Community-driven agent for knowledge exchange
State-of-the-art performance in Kaggle competitions