🤖 AI Summary
This study addresses the challenge of balancing adaptability and auditability in interventions for elderly loneliness by proposing a three-layer decoupled framework. In this architecture, a large language model (LLM) is confined strictly to a diagnostic role, assessing group-level states and generating structured risk reports, while a deterministic, bounded control policy translates these reports into traceable parameter updates. Because the LLM is explicitly separated from the decision-making mechanism, every policy decision remains fully auditable without sacrificing adaptability. Simulation experiments in eldercare scenarios demonstrate an 11.7% performance improvement over end-to-end black-box LLM strategies, supporting the efficacy of this decoupled design for intervention settings.
📝 Abstract
Mitigating elderly loneliness requires policy interventions that achieve both adaptability and auditability. Existing methods struggle to reconcile these objectives: traditional agent-based models suffer from static rigidity, while direct large language model (LLM) controllers lack essential traceability. This work proposes a three-layer framework that separates diagnosis from control to achieve both properties simultaneously. LLMs operate strictly as diagnostic instruments that assess population state and generate structured risk evaluations, while deterministic formulas with explicit bounds translate these assessments into traceable parameter updates. This separation ensures that every policy decision can be attributed to inspectable rules while maintaining adaptive response to emergent needs. We validate the framework through systematic ablation across five experimental conditions in elderly care simulation. Results demonstrate that explicit control rules outperform end-to-end black-box LLM approaches by 11.7% while preserving full auditability, confirming that transparency need not compromise adaptive performance.
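The diagnosis/control separation described above can be sketched in a few lines of Python. All names here (`RiskReport`, `BoundedPolicy`, the specific update rule and gains) are hypothetical illustrations, not the paper's actual implementation: the point is only that the LLM's output is reduced to a structured report, while a deterministic, explicitly bounded rule performs the parameter update and logs it for audit.

```python
from dataclasses import dataclass, field

@dataclass
class RiskReport:
    """Structured risk assessment (stand-in for the diagnostic LLM's output)."""
    loneliness_risk: float  # group-level risk score, assumed in [0, 1]
    trend: float            # recent change in risk (e.g. week-over-week delta)

@dataclass
class BoundedPolicy:
    """Deterministic controller: clamped, fully logged parameter updates."""
    visit_frequency: float = 1.0   # intervention parameter (visits/week), illustrative
    gain: float = 0.5              # fixed update gain, illustrative
    bounds: tuple = (0.5, 3.0)     # explicit hard bounds on the parameter
    audit_log: list = field(default_factory=list)

    def update(self, report: RiskReport) -> float:
        # Inspectable rule: step proportional to risk above baseline plus trend.
        delta = self.gain * (report.loneliness_risk - 0.5) + 0.25 * report.trend
        proposed = self.visit_frequency + delta
        lo, hi = self.bounds
        self.visit_frequency = max(lo, min(hi, proposed))  # enforce bounds
        # Every decision is attributable: record inputs, raw step, and clamping.
        self.audit_log.append({
            "report": report,
            "delta": delta,
            "clamped": proposed != self.visit_frequency,
            "new_value": self.visit_frequency,
        })
        return self.visit_frequency

policy = BoundedPolicy()
policy.update(RiskReport(loneliness_risk=0.9, trend=0.1))  # high risk -> more visits
policy.update(RiskReport(loneliness_risk=0.9, trend=0.1))
```

Under this sketch, the LLM can never move a parameter directly; even an adversarial or hallucinated report can only nudge `visit_frequency` within its declared bounds, and the audit log reconstructs exactly why each update happened.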