Post-Training Quantization of Generative and Discriminative LSTM Text Classifiers: A Study of Calibration, Class Balance, and Robustness

📅 2025-07-13
🤖 AI Summary
This study investigates the post-training quantization (PTQ) robustness of generative versus discriminative LSTM text classifiers on edge devices under realistic challenges—including out-of-distribution data, input noise, and class imbalance. Methodologically, we implement PTQ using Brevitas and employ nonparametric statistical tests to rigorously quantify distributional shifts in weights and activations. Our results reveal that generative LSTMs exhibit pronounced sensitivity to class imbalance in the calibration set, suffering severe accuracy degradation at low bit-widths (≤4-bit) due to insufficient weight adjustment during quantization; in contrast, discriminative models demonstrate significantly greater robustness. Crucially, calibration-set balance emerges as a decisive bottleneck for generative model quantization robustness—its impact on accuracy vastly exceeds that observed in discriminative counterparts. To our knowledge, this is the first work to systematically characterize and attribute such asymmetry in PTQ behavior between generative and discriminative sequence models, providing actionable insights for deploying quantized LSTMs on resource-constrained edge platforms.
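The low-bitwidth sensitivity described above can be illustrated with a generic symmetric uniform quantizer. This is a minimal NumPy sketch of per-tensor fake quantization, not the paper's actual Brevitas pipeline; the function name and the Gaussian stand-in for LSTM weights are illustrative assumptions:

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Symmetric uniform post-training quantization of a weight array.

    Generic per-tensor fake quantization: round to a signed integer grid,
    then de-quantize back to floats. Illustrative only (not Brevitas code).
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax    # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                    # de-quantized ("fake-quant") weights

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=10_000)   # stand-in for an LSTM weight matrix
for bits in (8, 4, 2):
    mse = np.mean((w - quantize_symmetric(w, bits)) ** 2)
    print(f"{bits}-bit quantization MSE: {mse:.2e}")
```

The reconstruction error grows sharply as the bitwidth drops, which is consistent with the accuracy degradation the paper reports at ≤4-bit precision.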

📝 Abstract
Text classification plays a pivotal role in edge computing applications such as industrial monitoring, health diagnostics, and smart assistants, where low latency and high accuracy are both key requirements. Generative classifiers in particular have been shown to be robust to out-of-distribution and noisy data, a critical consideration for deployment in such real-time edge environments. However, deploying these models on edge devices is constrained by limited compute and memory. Post-Training Quantization (PTQ) reduces model size and compute cost without retraining, making it well suited to edge deployment. In this work, we present a comprehensive comparative study of generative and discriminative Long Short-Term Memory (LSTM) text classification models under PTQ using the Brevitas quantization library. We evaluate both types of classifiers across multiple bitwidths and assess their robustness under clean and noisy input conditions. We find that while discriminative classifiers remain robust, generative ones are more sensitive to bitwidth, to the calibration data used during PTQ, and to input noise during quantized inference. We further study the influence of class imbalance in the calibration data for both classifier types, comparing evenly and unevenly distributed class samples and their effect on weight adjustments and activation profiles during PTQ. Using test statistics derived from nonparametric hypothesis testing, we identify that class-imbalanced calibration data induces insufficient weight adaptation at lower bitwidths for generative LSTM classifiers, leading to degraded performance. This study underscores the role of calibration data in PTQ and clarifies when generative classifiers succeed or fail under noise, aiding deployment in edge environments.
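The nonparametric tests the abstract mentions can be sketched with a two-sample Kolmogorov-Smirnov statistic, a standard nonparametric measure of distributional shift (whether the paper uses exactly this statistic is an assumption; the variable names and the shifted Gaussian stand-ins for weight profiles are illustrative):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b, evaluated over their union."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(1)
# Hypothetical weight samples after calibration on balanced vs. imbalanced data
w_balanced = rng.normal(0.00, 0.10, size=5_000)
w_imbalanced = rng.normal(0.02, 0.12, size=5_000)

print(ks_statistic(w_balanced, w_balanced.copy()))  # identical samples: 0
print(ks_statistic(w_balanced, w_imbalanced))       # shifted profile: larger
```

A larger statistic flags a stronger shift between the two weight (or activation) profiles, which is the kind of evidence the study uses to attribute the generative models' low-bitwidth failures to imbalanced calibration sets.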
Problem

Research questions and friction points this paper is trying to address.

Study PTQ impact on generative and discriminative LSTM text classifiers
Evaluate robustness under noise and class imbalance in calibration data
Analyze sensitivity of generative classifiers to bitwidth and calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies Post-Training Quantization via Brevitas to shrink LSTM classifiers without retraining
Shows generative LSTM classifiers are markedly sensitive to bitwidth and calibration data
Attributes low-bitwidth degradation to class imbalance in the calibration set via nonparametric tests
Md Mushfiqur Rahaman
School of Mathematical and Data Sciences, West Virginia University, Morgantown, WV, 26505, USA
Elliot Chang
School of Mathematical and Data Sciences, West Virginia University, Morgantown, WV, 26505, USA
Tasmiah Haque
Department of Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV, 26505, USA
Srinjoy Das
West Virginia University
Time Series · Generative Models