🤖 AI Summary
This paper addresses the challenges of response latency and insufficient accuracy in early risk detection (ERD) for online mental health monitoring. The authors propose "temporal fine-tuning", a paradigm that explicitly models user posting chronology during Transformer training, enabling dynamic decision adjustment. Unlike conventional classification frameworks, the approach jointly optimizes precision and latency using time-sensitive evaluation metrics, particularly ERDE(θ). Experiments on Spanish-language datasets for depression and eating disorders, benchmarked on MentalRiskES 2023, achieve results competitive with the best systems. To the authors' knowledge, this is the first work achieving coordinated optimization of accuracy and latency within a single-objective setting. The proposed framework establishes a scalable, temporally aware modeling paradigm for multilingual mental health early-warning systems.
📝 Abstract
Early Risk Detection (ERD) on the Web aims to promptly identify users facing social and health issues. Users are analyzed post by post, and correct and quick answers must be guaranteed, which is particularly challenging in critical scenarios. ERD thus involves optimizing classification precision while minimizing detection delay. Standard classification metrics may not suffice, so the field resorts to specific metrics, such as ERDE(θ), that explicitly consider precision and delay. Current research focuses on applying a multi-objective approach, prioritizing classification performance and establishing a separate criterion for decision time. In this work, we propose a completely different strategy, temporal fine-tuning, which tunes transformer-based models by explicitly incorporating time into the learning process. Our method allows us to analyze complete user post histories, tune models considering different contexts, and evaluate training performance using temporal metrics. We evaluated our proposal on the depression and eating-disorders tasks for the Spanish language, achieving competitive results compared to the best models of MentalRiskES 2023. We found that temporal fine-tuning optimizes decisions considering both context and the progress of time. In this way, by properly taking advantage of the power of transformers, it is possible to address ERD by combining precision and speed as a single objective.
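Since the abstract centers on ERDE(θ) as the metric that couples precision and delay, a minimal sketch of the per-user ERDE(θ) cost may help; this follows the definition commonly used in the ERD literature (Losada and Crestani), not code from this paper, and the cost values (`c_fp = 0.1`, `c_fn = 1.0`, `c_tp = 1.0`) are illustrative assumptions — `c_fp` is usually set to the proportion of at-risk users in the collection.

```python
import math

def erde(decision, truth, k, theta, c_fp=0.1, c_fn=1.0, c_tp=1.0):
    """Per-user Early Risk Detection Error (ERDE), as in Losada & Crestani.

    decision, truth: 1 = at-risk, 0 = not at-risk.
    k: number of posts observed before emitting the decision (the delay).
    theta: delay threshold after which a correct alarm loses most of its value.
    Cost constants are illustrative; c_fp is typically the positive-class prior.
    """
    if decision == 1 and truth == 0:          # false positive: fixed cost
        return c_fp
    if decision == 0 and truth == 1:          # false negative: fixed cost
        return c_fn
    if decision == 1 and truth == 1:          # true positive: cost grows with delay
        latency_cost = 1.0 - 1.0 / (1.0 + math.exp(k - theta))
        return latency_cost * c_tp
    return 0.0                                # true negative: no cost

# A correct alarm well before theta costs almost nothing; the same alarm
# far past theta costs nearly as much as missing the user entirely.
early = erde(decision=1, truth=1, k=5, theta=50)
late = erde(decision=1, truth=1, k=100, theta=50)
```

The sigmoid latency factor is what lets a single scalar reward both correctness and speed, which is precisely the single-objective view the paper advocates: systems are ranked by the mean ERDE(θ) over all test users.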