Adaptive LLM Agents: Toward Personalized Empathetic Care

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing mental-health dialogue systems rely on static interaction paradigms and lack real-time adaptation to users' evolving psychological states, limiting personalization and clinical efficacy. Method: This study proposes an adaptive psychological dialogue framework powered by large language models (LLMs). It first quantifies users' psychological states via the Acceptance of Illness Scale (AIS); it then implements an L-M-H (Low–Medium–High) hierarchical agent architecture that integrates clinical knowledge and design-fiction principles to dynamically modulate empathic strategies and deliver narrative-based interventions; finally, it embeds the architecture in contextualized narrative scenarios to examine the societal impact and ethical responsibilities of LLM-based psychological companions. Contribution/Results: Through this design-fiction analysis, the framework is argued to improve empathic expression quality and therapeutic-alliance formation, establishing a paradigm for accessible, trustworthy, and sustainable AI-enabled mental-health support grounded in clinical rigor and ethical accountability.

📝 Abstract
Current mental-health conversational systems are usually built on fixed, generic dialogue patterns. This paper proposes an adaptive framework based on large language models (LLMs) that personalizes therapeutic interaction according to a user's psychological state, quantified with the Acceptance of Illness Scale (AIS). The framework defines three specialized agents, L, M, and H, each linked to a different level of illness acceptance, and adjusts conversational behavior over time using continuous feedback signals. The AIS-stratified architecture is treated as a diegetic prototype placed in a plausible near-future setting and examined through the method of design fiction. By embedding the architecture in narrative scenarios, the study explores how such agents might influence access to care and the therapeutic relationship. The goal is to show how clinically informed personalization, technical feasibility, and speculative scenario analysis can together inform the responsible design of LLM-based companions for mental-health support.
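The AIS-stratified routing described in the abstract can be sketched minimally as a score-to-agent lookup. The cutoffs and agent "styles" below are illustrative assumptions, not values from the paper: the standard AIS total ranges from 8 to 40, and the low/medium/high bands shown here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical band boundaries; the paper does not publish its cutoffs.
# The Acceptance of Illness Scale (AIS) total ranges from 8 to 40.
LOW_MAX = 18
MEDIUM_MAX = 29

@dataclass
class AgentProfile:
    name: str
    style: str  # illustrative description of the agent's empathic strategy

AGENTS = {
    "L": AgentProfile("L", "high-support, validation-focused responses"),
    "M": AgentProfile("M", "balanced empathic and psychoeducational responses"),
    "H": AgentProfile("H", "growth-oriented, autonomy-supportive responses"),
}

def route_agent(ais_total: int) -> AgentProfile:
    """Map an AIS total (8-40) to one of the L/M/H specialized agents."""
    if not 8 <= ais_total <= 40:
        raise ValueError("AIS total must be between 8 and 40")
    if ais_total <= LOW_MAX:
        return AGENTS["L"]
    if ais_total <= MEDIUM_MAX:
        return AGENTS["M"]
    return AGENTS["H"]
```

In practice the selected profile would condition the LLM (for example, via the system prompt), so the same underlying model serves all three bands while the conversational strategy shifts with the user's measured acceptance level.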
Problem

Research questions and friction points this paper is trying to address.

Personalizing therapeutic interactions using psychological state metrics
Adapting conversational behavior through continuous feedback mechanisms
Exploring responsible design of LLM companions via speculative scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive LLM framework personalizes therapeutic interactions
Three specialized agents adjust behavior using feedback signals
AIS-stratified architecture enables clinically informed personalization
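The "continuous feedback signals" mentioned above imply that the system's estimate of the user's state is updated between turns rather than fixed at intake. One simple way to realize this, offered purely as an assumption since the paper does not specify its update rule, is an exponential moving average over per-turn signals mapped onto the AIS range:

```python
def update_state(prev_estimate: float, feedback_signal: float,
                 alpha: float = 0.3) -> float:
    """Blend the latest per-turn feedback signal (e.g. a self-report or
    sentiment score mapped to the 8-40 AIS range) into the running state
    estimate. alpha is a hypothetical smoothing factor, not from the paper."""
    return (1 - alpha) * prev_estimate + alpha * feedback_signal
```

Re-routing the conversation whenever the smoothed estimate crosses a band boundary would give the adaptive behavior the paper describes, while the smoothing keeps a single noisy turn from abruptly switching agents.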