🤖 AI Summary
This study addresses the poor performance of zero-shot multi-label classification (ZS-MLC) for Bengali, a low-resource, morphologically rich, agglutinative language. We introduce the first unified benchmark for Bengali ZS-MLC and systematically evaluate 32 state-of-the-art models, including decoder-based LLMs (e.g., LLaMA, DeepSeek) and classic encoder architectures. Methodologically, we propose a Bengali-specific label mapping and inference protocol, a zero-shot prompt engineering framework, and an unsupervised evaluation strategy based on semantic similarity. Results reveal that all SOTA models achieve accuracy below 42%, substantially underperforming their English counterparts, which highlights shared bottlenecks in both decoder- and encoder-based paradigms for such languages. Our core contributions are: (1) establishing the first dedicated Bengali ZS-MLC benchmark; (2) empirically demonstrating fundamental limitations of current LLMs on morphologically complex low-resource languages; and (3) providing concrete directions for model adaptation and data curation.
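The summary mentions an unsupervised, semantic-similarity-based evaluation strategy. The paper's exact protocol is not specified here, but the general idea can be sketched as follows: embed the document and each candidate label with the same encoder, then assign every label whose cosine similarity to the document exceeds a threshold. The vectors, label names, and threshold below are purely hypothetical stand-ins for real encoder outputs.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_multilabel(doc_vec, label_vecs, threshold=0.5):
    # Multi-label decision: keep every label whose embedding is
    # close enough to the document embedding (no softmax, so any
    # number of labels -- zero, one, or many -- can be selected).
    return [label for label, vec in label_vecs.items()
            if cosine(doc_vec, vec) >= threshold]

# Toy 3-d vectors standing in for sentence-encoder outputs (hypothetical).
doc = [0.9, 0.1, 0.8]
labels = {
    "রাজনীতি (politics)": [0.85, 0.05, 0.75],  # near the document
    "খেলাধুলা (sports)":  [0.05, 0.95, 0.10],  # far from the document
    "অর্থনীতি (economy)": [0.80, 0.20, 0.70],  # also near
}
print(zero_shot_multilabel(doc, labels))
# → ['রাজনীতি (politics)', 'অর্থনীতি (economy)']
```

In a real setup the toy vectors would come from a multilingual sentence encoder, and the threshold would be tuned or replaced by a top-k rule; the sketch only shows why this evaluation needs no labeled Bangla training data.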
📝 Abstract
Bangla, a language spoken by over 300 million native speakers and ranked as the sixth most spoken language worldwide, presents unique challenges for natural language processing (NLP) due to its complex morphology and limited resources. While recent decoder-based Large Language Models (LLMs), such as GPT, LLaMA, and DeepSeek, have demonstrated excellent performance across many NLP tasks, their effectiveness on Bangla remains largely unexplored. In this paper, we establish the first benchmark comparing decoder-based LLMs with classic encoder-based models on the Zero-Shot Multi-Label Classification (Zero-Shot-MLC) task in Bangla. Our evaluation of 32 state-of-the-art models reveals that even powerful existing encoders and decoders struggle to achieve high accuracy on the Bangla Zero-Shot-MLC task, underscoring the need for more research and resources for Bangla NLP.