🤖 AI Summary
To address the input-length and computational constraints of Transformer models (e.g., BERT) in long-document classification (LDC), this paper proposes a **zero-shot context reduction method** that requires no architectural modification. It uses TF-IDF–driven sentence ranking to select discriminative local contexts, retaining only the most informative sentences, and three lightweight reduction strategies enable plug-and-play transfer of pretrained short-text classifiers to LDC tasks. Experiments on the MahaNews dataset of long Marathi news articles show that retaining only the top 50% of ranked sentences matches full-document classification accuracy while reducing inference time by up to 35%. The core contribution is the first application of unsupervised sentence ranking to zero-shot LDC, offering an efficient, scalable solution for resource-constrained settings.
📝 Abstract
Transformer-based models like BERT excel at short text classification but struggle with long document classification (LDC) due to input length limitations and computational inefficiencies. In this work, we propose an efficient, zero-shot approach to LDC that leverages sentence ranking to reduce input context without altering the model architecture. Our method enables the adaptation of models trained on short texts, such as headlines, to long-form documents by selecting the most informative sentences using a TF-IDF-based ranking strategy. Using the MahaNews dataset of long Marathi news articles, we evaluate three context reduction strategies that prioritize essential content while preserving classification accuracy. Our results show that retaining only the top 50% ranked sentences maintains performance comparable to full-document inference while reducing inference time by up to 35%. This demonstrates that sentence ranking is a simple yet effective technique for scalable and efficient zero-shot LDC.
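The ranking-and-reduction idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it scores each sentence by the mean TF-IDF weight of its tokens (treating each sentence as a "document" for IDF purposes), keeps the top-scoring fraction, and reassembles them in original order. The sentence splitter, tokenizer, and scoring details here are simplifying assumptions; the paper's pipeline for Marathi text may differ.

```python
import math
import re
from collections import Counter


def rank_and_reduce(document: str, keep_ratio: float = 0.5) -> str:
    """Keep the top `keep_ratio` fraction of sentences, ranked by mean
    TF-IDF weight of their tokens, preserving original order."""
    # Naive sentence split on terminal punctuation (a simplifying
    # assumption; real Marathi news text would need a proper splitter).
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    n = len(sentences)

    # Document frequency over sentences -> smoothed IDF per token.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}

    # Sentence score = mean TF-IDF of its distinct tokens.
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        total = len(tokens) or 1
        score = sum((tf[w] / total) * idf[w] for w in tf) / (len(tf) or 1)
        scores.append(score)

    # Keep the highest-scoring sentences, restoring document order.
    keep = max(1, round(n * keep_ratio))
    top = sorted(sorted(range(n), key=lambda i: scores[i], reverse=True)[:keep])
    return " ".join(sentences[i] for i in top)
```

The reduced string can then be fed directly to a classifier trained on short texts, which is what makes the approach zero-shot: only the input changes, never the model.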