🤖 AI Summary
This work addresses the challenge of open-set learning and novel class discovery in text classification, particularly the lack of a unified benchmark in multilingual settings. To this end, we introduce MOSLD-Bench, the first multilingual benchmark specifically designed for topic classification in open-set scenarios, encompassing 12 languages and approximately 960,000 samples. The benchmark is constructed by restructuring existing datasets and by collecting new data from the news domain. We further propose an ensemble framework built upon pretrained language models that supports multi-stage continual discovery and learning of emerging classes. Extensive experiments evaluate the performance of various language models and demonstrate the effectiveness of our approach. Both the code and the benchmark are publicly released to establish a reliable foundation for future research.
📝 Abstract
Open-set learning and discovery (OSLD) is a challenging machine learning task in which samples from new (unknown) classes can appear at test time. It can be seen as a generalization of zero-shot learning in which the new classes are not known a priori, hence requiring the active discovery of new classes. While zero-shot learning has been extensively studied in text classification, especially with the emergence of pre-trained language models, open-set learning and discovery is a comparatively new setup for the text domain. To fill this gap, we introduce the first multilingual open-set learning and discovery (MOSLD) benchmark for text categorization by topic, comprising 960K data samples across 12 languages. To construct the benchmark, we (i) rearrange existing datasets and (ii) collect new data samples from the news domain. Moreover, we propose a novel framework for the OSLD task, which integrates multiple stages to continuously discover and learn new classes. We evaluate several language models, including our own, to obtain results that can serve as a reference for future work. We release our benchmark at https://github.com/Adriana19Valentina/MOSLD-Bench.