🤖 AI Summary
This work addresses the challenge of efficiently retrieving rare yet safety-critical scenarios in autonomous driving by introducing the first large-scale benchmark dataset tailored for long-tailed distributions and retrieval-driven tasks. The dataset comprises 423k image frames, 90 rare object categories, and 513k high-quality human-annotated bounding boxes. It supports diverse tasks including text-to-image and image-to-image retrieval, few-shot learning, and multimodal fine-tuning, and is accompanied by standardized data splits and a public test server. Experimental results demonstrate that text-based semantic retrieval outperforms purely visual approaches, with spatially aligned vision-language models achieving the best performance in zero-shot settings. While fine-tuning substantially improves retrieval accuracy, overall performance still leaves considerable room for improvement.
📝 Abstract
Retrieving rare and safety-critical driving scenarios from large-scale datasets is essential for building robust autonomous driving (AD) systems. As dataset sizes continue to grow, the key challenge shifts from collecting more data to efficiently identifying the most relevant samples. We introduce SearchAD, a large-scale rare image retrieval dataset for AD containing over 423k frames drawn from 11 established datasets. SearchAD provides high-quality manual annotations of more than 513k bounding boxes covering 90 rare categories. It specifically targets the needle-in-a-haystack problem of locating extremely rare classes, some of which appear fewer than 50 times across the entire dataset. Unlike existing benchmarks, which focus on instance-level retrieval, SearchAD emphasizes semantic image retrieval with a well-defined data split, enabling text-to-image and image-to-image retrieval, few-shot learning, and fine-tuning of multimodal retrieval models. Comprehensive evaluations show that text-based methods outperform image-based ones due to stronger inherent semantic grounding. Models that directly align spatial visual features with language achieve the best zero-shot results, and our fine-tuning baseline significantly improves performance, yet absolute retrieval capabilities remain unsatisfactory. With a held-out test set on a public benchmark server, SearchAD establishes the first large-scale dataset for retrieval-driven data curation and long-tail perception research in AD: https://iis-esslingen.github.io/searchad/
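The text-to-image retrieval task evaluated here can be illustrated with a minimal sketch: a vision-language model embeds the text query and the candidate images into a shared space, and retrieval reduces to ranking images by cosine similarity to the query. The code below is not from the paper; the function names and toy embeddings are hypothetical stand-ins for CLIP-style features.

```python
# Illustrative sketch only: zero-shot text-to-image retrieval as
# nearest-neighbour search in a shared embedding space. The
# embeddings below are toy placeholders, not real model outputs.
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(text_emb, image_embs, k=5):
    """Return indices of the top-k images ranked by similarity to the query."""
    ranked = sorted(range(len(image_embs)),
                    key=lambda i: cosine(text_emb, image_embs[i]),
                    reverse=True)
    return ranked[:k]

# Toy query and gallery embeddings (hypothetical).
text_emb = [0.9, 0.1, 0.0]           # e.g. "deer on the road"
image_embs = [
    [0.1, 0.9, 0.0],                 # unrelated scene
    [0.8, 0.2, 0.1],                 # scene matching the query
    [0.0, 0.0, 1.0],                 # unrelated scene
]

print(retrieve(text_emb, image_embs, k=1))  # → [1]
```

In a real benchmark setting, recall@k over such rankings is a common retrieval metric: a query counts as a hit if any relevant image appears among the top k returned indices.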