Efficient Zero-Shot Long Document Classification by Reducing Context Through Sentence Ranking

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the input-length and computational constraints of Transformer models (e.g., BERT) in long-document classification (LDC), this paper proposes a **zero-shot context reduction method** that requires no architectural modification. It leverages TF-IDF–driven sentence ranking to select discriminative local contexts, retaining only the most informative sentences. Three lightweight reduction strategies enable plug-and-play transfer of pretrained short-text classifiers to LDC tasks. Experiments on the MahaNews dataset of long Marathi news articles show that retaining only the top 50% highest-scoring sentences matches full-text classification accuracy, with no performance degradation, while accelerating inference by up to 35%. The core contribution is the first application of unsupervised sentence ranking to zero-shot LDC, offering an efficient, scalable solution for resource-constrained settings.

📝 Abstract
Transformer-based models like BERT excel at short text classification but struggle with long document classification (LDC) due to input length limitations and computational inefficiencies. In this work, we propose an efficient, zero-shot approach to LDC that leverages sentence ranking to reduce input context without altering the model architecture. Our method enables the adaptation of models trained on short texts, such as headlines, to long-form documents by selecting the most informative sentences using a TF-IDF-based ranking strategy. Using the MahaNews dataset of long Marathi news articles, we evaluate three context reduction strategies that prioritize essential content while preserving classification accuracy. Our results show that retaining only the top 50% ranked sentences maintains performance comparable to full-document inference while reducing inference time by up to 35%. This demonstrates that sentence ranking is a simple yet effective technique for scalable and efficient zero-shot LDC.
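The abstract's core mechanism can be sketched in a few lines: score each sentence of a long document by TF-IDF, keep the top-ranked fraction (e.g., 50%) in original order, and feed the reduced text to an unmodified short-text classifier. The sketch below uses a mean-TF-IDF sentence score computed over the document's own sentences; the paper's exact scoring and sentence segmentation may differ, and the function name and parameters here are illustrative assumptions.

```python
import math
import re
from collections import Counter

def rank_and_reduce(document: str, keep_ratio: float = 0.5) -> str:
    """Keep the top `keep_ratio` fraction of sentences by mean TF-IDF
    score, preserving original sentence order.

    Illustrative sketch only: TF is computed per sentence, and IDF
    treats each sentence as a "document" within the article.
    """
    # Naive sentence split on terminal punctuation; a real pipeline
    # would use a language-aware sentence tokenizer.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    n = len(sentences)

    # Document frequency: number of sentences containing each term.
    df = Counter(term for toks in tokenized for term in set(toks))

    def score(toks):
        if not toks:
            return 0.0
        tf = Counter(toks)
        # Mean TF-IDF over the sentence's distinct terms.
        return sum((cnt / len(toks)) * math.log(n / df[t])
                   for t, cnt in tf.items()) / len(tf)

    k = max(1, int(n * keep_ratio))
    top = sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)[:k]
    # Re-sort kept indices so the reduced text reads in document order.
    return " ".join(sentences[i] for i in sorted(top))
```

The reduced string can then be passed unchanged to a classifier trained on short texts such as headlines, which is what makes the approach zero-shot with respect to long documents.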
Problem

Research questions and friction points this paper is trying to address.

Reducing long document context for classification efficiency
Adapting short-text models to long documents without retraining
Maintaining accuracy while significantly cutting inference time
Innovation

Methods, ideas, or system contributions that make the work stand out.

TF-IDF-based sentence ranking strategy
Reduces input context without architecture changes
Maintains accuracy with 50% sentence reduction
Prathamesh Kokate
Pune Institute of Computer Technology, Pune
Mitali Sarnaik
Pune Institute of Computer Technology, Pune
Manavi Khopade
Pune Institute of Computer Technology, Pune
Mukta Takalikar
Pune Institute of Computer Technology, Pune
Raviraj Joshi
Indian Institute of Technology Madras
computer science · machine learning · natural language processing