Leveraging Large Language Models for Predictive Analysis of Human Misery

πŸ“… 2025-08-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This paper addresses the problem of quantitatively predicting human-perceived misery from natural-language scene descriptions using large language models (LLMs), formulated as a fine-grained regression task over a 0–100 scale. Methodologically, it evaluates three prompting strategies: zero-shot prompting; fixed-context few-shot prompting; and retrieval-based few-shot prompting that selects in-context examples via BERT sentence embeddings. A key contribution is the "Misery Game Show", a gamified, feedback-driven evaluation framework with structured rounds of ordinal comparison, binary classification, and scalar estimation, enabling iterative, context-aware assessment of affective perception and of the model's ability to adapt to corrective feedback. Experimental results show that few-shot prompting substantially outperforms zero-shot baselines, supporting the promise of LLMs for situationally grounded affective prediction.

πŸ“ Abstract
This study investigates the use of Large Language Models (LLMs) for predicting human-perceived misery scores from natural language descriptions of real-world scenarios. The task is framed as a regression problem, where the model assigns a scalar value from 0 to 100 to each input statement. We evaluate multiple prompting strategies, including zero-shot, fixed-context few-shot, and retrieval-based prompting using BERT sentence embeddings. Few-shot approaches consistently outperform zero-shot baselines, underscoring the value of contextual examples in affective prediction. To move beyond static evaluation, we introduce the "Misery Game Show", a novel gamified framework inspired by a television format. It tests LLMs through structured rounds involving ordinal comparison, binary classification, scalar estimation, and feedback-driven reasoning. This setup enables us to assess not only predictive accuracy but also the model's ability to adapt based on corrective feedback. The gamified evaluation highlights the broader potential of LLMs in dynamic emotional reasoning tasks beyond standard regression. Code and data link: https://github.com/abhi1nandy2/Misery_Data_Exps_GitHub
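The retrieval-based prompting the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the labeled pool is invented, and the bag-of-characters embedding is a stand-in for the BERT sentence embeddings the authors actually use.

```python
import math

# Toy labeled pool of (scenario, misery score 0-100). These examples are
# invented for illustration; the paper uses its own dataset.
POOL = [
    ("You stub your toe on the door frame.", 30.0),
    ("Your flight is delayed by six hours.", 55.0),
    ("You lose your wallet on vacation.", 70.0),
    ("You get a paper cut.", 15.0),
]

def embed(text):
    # Stand-in bag-of-characters embedding; real retrieval would use BERT
    # sentence embeddings (e.g. via the sentence-transformers library).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_few_shot_prompt(query, k=2):
    # Retrieve the k labeled scenarios most similar to the query and
    # prepend them as in-context examples before asking for a score.
    q = embed(query)
    ranked = sorted(POOL, key=lambda ex: cosine(embed(ex[0]), q), reverse=True)
    lines = ["Rate the misery of each situation from 0 to 100."]
    for text, score in ranked[:k]:
        lines.append(f"Situation: {text}\nMisery: {score:.0f}")
    lines.append(f"Situation: {query}\nMisery:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Your luggage is lost at the airport.")
print(prompt)
```

The resulting prompt would be sent to the LLM, whose completion is parsed as the scalar misery estimate; zero-shot prompting corresponds to `k=0`, i.e. the instruction and query with no retrieved examples.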
Problem

Research questions and friction points this paper is trying to address.

Predict human misery scores from text using LLMs
Compare prompting strategies for affective prediction
Test LLMs in dynamic emotional reasoning via gamification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs for misery score prediction
Evaluates few-shot prompting strategies
Introduces gamified Misery Game Show
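The Misery Game Show's round structure might be orchestrated roughly as below. The model call here is a stub, and the round names, threshold, and feedback rule are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch of the three round types in a game-show-style evaluation.
# `predict_misery` stands in for an LLM call returning a 0-100 score.

def predict_misery(text, bias=0.0):
    # Stub predictor with an adjustable bias term standing in for
    # feedback-conditioned re-prompting of the model.
    baseline = {"burnt toast": 20.0, "missed flight": 60.0}
    return min(100.0, max(0.0, baseline.get(text, 50.0) + bias))

def ordinal_round(a, b):
    # Ordinal comparison: which of two situations is more miserable?
    return a if predict_misery(a) >= predict_misery(b) else b

def binary_round(text, threshold=50.0):
    # Binary classification: is the situation above the misery threshold?
    return predict_misery(text) >= threshold

def scalar_round(text, gold, bias=0.0):
    # Scalar estimation: direct 0-100 score, plus the signed error that a
    # feedback-driven round would reveal to the model.
    pred = predict_misery(text, bias)
    return pred, gold - pred

# Feedback-driven step: feed the corrective error back into the next attempt.
pred, err = scalar_round("burnt toast", gold=25.0)
pred2, _ = scalar_round("burnt toast", gold=25.0, bias=err)
print(ordinal_round("missed flight", "burnt toast"), pred, pred2)
```

The point of the loop is that accuracy can be measured per round type, and the second scalar attempt tests whether the model actually moves toward the corrected answer rather than repeating its first estimate.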
πŸ”Ž Similar Papers
No similar papers found.
Bishanka Seal
Indian Institute of Technology Kharagpur
Rahul Seetharaman
UMass Amherst
Aman Bansal
UMass Amherst
Abhilash Nandy
Prime Minister's Research Fellow (PMRF) and Microsoft Research India PhD Awardee, CSE, IIT Kharagpur
LLMs · Domain-Specific NLP · Pre-training · Multimodal · Computer Vision