🤖 AI Summary
To address the performance limitations of LSTM models in fine-grained sentiment analysis, this paper proposes an enhanced LSTM architecture integrating TF-IDF feature optimization with a lightweight multi-head attention mechanism. Specifically, TF-IDF-weighted word embeddings are directly injected into the LSTM input layer, and a parameter-efficient multi-head attention module is co-designed to dynamically emphasize discriminative sentiment words and improve contextual modeling. Experiments on benchmark datasets demonstrate that the proposed model achieves 80.28% accuracy—outperforming the standard LSTM by 12.0%—along with substantial gains in recall and F1 score. Ablation studies confirm the indispensable contributions of both the TF-IDF input enhancement and the attention module. This work advances the design of low-overhead, highly interpretable sentiment analysis models, offering a novel framework that balances computational efficiency with semantic expressiveness.
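The TF-IDF input enhancement described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the exact TF-IDF smoothing scheme and how the weights are injected into the embedding layer are assumptions. Here each token's embedding would simply be scaled by its document-level TF-IDF weight before entering the LSTM.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Per-token TF-IDF weights for each tokenized document.

    docs: list of token lists, e.g. [["good", "movie"], ...].
    Returns one {token: weight} dict per document. The idf formula
    (plain log(n/df)) is a common choice, not taken from the paper.
    """
    n = len(docs)
    df = Counter()                      # document frequency per token
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weighted.append({
            w: (tf[w] / total) * math.log(n / df[w])
            for w in tf
        })
    return weighted

def scale_embedding(vec, weight):
    """Scale a word embedding by its TF-IDF weight before the LSTM input."""
    return [weight * x for x in vec]
```

In this scheme, words that appear in every document receive weight zero and contribute little to the LSTM input, while rare, discriminative sentiment words are emphasized, which matches the stated goal of highlighting discriminative sentiment terms.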
📝 Abstract
This work proposes an LSTM-based sentiment classification model with a multi-head attention mechanism and TF-IDF optimization. By integrating TF-IDF feature extraction with multi-head attention, the model significantly improves text sentiment analysis performance. Experimental results on public datasets demonstrate that the proposed method achieves substantial improvements over baseline models on key metrics including accuracy, recall, and F1-score. Specifically, the model reaches an accuracy of 80.28% on the test set, an improvement of about 12% over standard LSTM models. Ablation experiments confirm the necessity of each module, with multi-head attention contributing most to the performance gain. This research provides an effective approach to sentiment analysis that can be applied to public opinion monitoring, product recommendation, and related tasks.
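The attention module that the ablation study identifies as the largest contributor can be illustrated with a minimal single-head sketch. This is a simplified stand-in, not the paper's parameter-efficient multi-head design: it shows scaled dot-product attention pooling over a sequence of LSTM hidden states, which is the core operation a multi-head variant would run in parallel per head.

```python
import math

def attention_pool(hidden_states, query):
    """Attention-weighted pooling over LSTM hidden states.

    hidden_states: list of d-dimensional vectors (one per time step).
    query: a d-dimensional query vector (e.g. a learned parameter).
    Returns (pooled_vector, attention_weights).
    """
    d = len(query)
    # Scaled dot-product scores: q . h / sqrt(d)
    scores = [sum(q * h for q, h in zip(query, hs)) / math.sqrt(d)
              for hs in hidden_states]
    # Numerically stable softmax over time steps
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of hidden states
    pooled = [sum(w * hs[i] for w, hs in zip(weights, hidden_states))
              for i in range(d)]
    return pooled, weights
```

The attention weights sum to one and concentrate on the time steps most aligned with the query, which is what lets the model emphasize discriminative sentiment words; the same weights also make the model's focus inspectable, consistent with the interpretability claim.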