🤖 AI Summary
This work addresses the underutilization of high-performing yet overlooked fine-tuned models in public repositories, a consequence of the research community's disproportionate focus on popular base models that leaves "hidden gems" undiscovered. To tackle this issue, the authors formulate model discovery as a multi-armed bandit problem and propose an accelerated Sequential Halving algorithm that combines a shared query set with an aggressive elimination schedule for efficient retrieval. Under stringent evaluation constraints, as few as 50 queries per candidate model, the method identifies a previously neglected model within the Llama-3.1-8B family that boosts mathematical reasoning accuracy from 83.2% to 96.0%, achieving over a 50-fold improvement in search efficiency compared to baseline methods.
📝 Abstract
Public repositories host millions of fine-tuned models, yet community usage remains disproportionately concentrated on a small number of foundation checkpoints. We investigate whether this concentration reflects efficient market selection or whether superior models are systematically overlooked. Through an extensive evaluation of over 2,000 models, we show the prevalence of "hidden gems": unpopular fine-tunes that significantly outperform their popular counterparts. Notably, within the Llama-3.1-8B family, we find rarely downloaded checkpoints that improve math performance from 83.2% to 96.0% without increasing inference costs. However, discovering these models through exhaustive evaluation of every uploaded checkpoint is computationally infeasible. We therefore formulate model discovery as a Multi-Armed Bandit problem and accelerate the Sequential Halving search algorithm by using shared query sets and aggressive elimination schedules. Our method retrieves top models with as few as 50 queries per candidate, accelerating discovery by over 50x.
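The search procedure described above can be sketched as follows. This is a minimal illustration of Sequential Halving with a shared query set and a per-candidate query cap, not the paper's actual implementation; all function and variable names (`score_fn`, `keep_frac`, the budget split) are assumptions for the sketch.

```python
import math

def sequential_halving(models, score_fn, queries_per_model=50, keep_frac=0.5):
    """Illustrative Sequential Halving over candidate models.

    `models` is a list of candidate identifiers; `score_fn(model, query_ids)`
    returns the model's mean accuracy on the given shared query subset.
    Every surviving model is scored on the SAME query ids each round, so
    comparisons between candidates are paired rather than noisy-vs-noisy.
    """
    rounds = max(1, math.ceil(math.log2(len(models))))
    budget_per_round = queries_per_model // rounds  # spread the per-model cap
    query_pool = list(range(1000))  # hypothetical shared query set
    survivors = list(models)
    used = 0
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        batch = query_pool[used:used + budget_per_round]
        used += budget_per_round
        ranked = sorted(survivors, key=lambda m: score_fn(m, batch), reverse=True)
        # aggressive elimination: keep only the top fraction each round
        survivors = ranked[:max(1, int(len(ranked) * keep_frac))]
    return survivors[0]
```

Lowering `keep_frac` below 0.5 makes the elimination schedule more aggressive, trading robustness to query noise for fewer total evaluations.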