🤖 AI Summary
This paper addresses the Model Variability Problem (MVP), a newly formalized challenge in LLM-based sentiment analysis arising from stochastic reasoning, prompt sensitivity, and training data bias. We systematically characterize how the temperature parameter governs output uncertainty and propose an interpretability-centered approach to enhance model trustworthiness. Methodologically, we integrate sensitivity analysis, prompt-engineering evaluation, uncertainty quantification, and eXplainable AI (XAI) techniques into a unified assessment framework that jointly ensures stability, reproducibility, and interpretability. Our contributions are threefold: (i) the first rigorous definition and empirical characterization of MVP; (ii) principled guidance on temperature tuning for uncertainty control; and (iii) a holistic evaluation protocol enabling robust, transparent sentiment analysis. Results demonstrate significant mitigation of classification inconsistency and output polarization, facilitating the deployment of trustworthy sentiment models in high-stakes domains such as finance, healthcare, and public policy.
📝 Abstract
Large Language Models (LLMs) have significantly advanced sentiment analysis, yet their inherent uncertainty and variability pose critical challenges to achieving reliable and consistent outcomes. This paper systematically explores the Model Variability Problem (MVP) in LLM-based sentiment analysis, characterized by inconsistent sentiment classification, polarization, and uncertainty arising from stochastic inference mechanisms, prompt sensitivity, and biases in training data. We analyze the core causes of MVP, presenting illustrative examples and a case study to highlight its impact. We also examine key challenges and mitigation strategies, paying particular attention to temperature as a driver of output randomness and to the crucial role of explainability in improving transparency and user trust. By providing a structured perspective on stability, reproducibility, and trustworthiness, this study supports the development of more reliable, explainable, and robust sentiment analysis models, facilitating their deployment in high-stakes domains such as finance, healthcare, and policymaking.
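The abstract identifies temperature as the key driver of output randomness but does not spell out the mechanism. A minimal sketch, assuming the standard temperature-scaled softmax used by most LLM samplers and hypothetical logits for three sentiment labels, shows why low temperature yields near-deterministic classifications while high temperature flattens the distribution and invites the inconsistency the paper calls MVP:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into a probability distribution,
    dividing by the sampling temperature first."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three sentiment labels
# (positive, neutral, negative) -- illustrative values only.
logits = [2.0, 1.0, 0.2]

low_t = softmax_with_temperature(logits, 0.1)   # near-deterministic
high_t = softmax_with_temperature(logits, 2.0)  # flatter, more random

# At T=0.1 almost all probability mass sits on the top label;
# at T=2.0 the distribution moves toward uniform, so repeated
# sampling can return different sentiment labels for the same input.
print(low_t)
print(high_t)
```

Under this standard view, "temperature tuning for uncertainty control" amounts to choosing how sharply the label distribution is peaked before sampling.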