Counterfactual Modeling with Fine-Tuned LLMs for Health Intervention Design and Sensor Data Augmentation

πŸ“… 2026-01-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the dual challenges of generating interpretable, actionable counterfactual explanations for health intervention design and augmenting sensor data under label scarcity. We propose a solution based on fine-tuning large language models (LLMs), specifically BioMistral-7B and LLaMA-3.1-8B, on the multimodal AI-READI clinical dataset. Our systematic evaluation demonstrates that the fine-tuned LLaMA-3.1-8B produces counterfactual explanations with 99% plausibility and 0.99 validity, while recovering an average of 20% in F1 score across three label-scarce scenarios. These results significantly outperform baseline methods such as DiCE and CFNOW, highlighting the model's dual utility in delivering clinically actionable guidance and effective data augmentation.
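The F1 recovery under label scarcity described above can be sketched with a toy experiment: drop most minority-class labels, measure the F1 drop, then rebalance with synthetic minority samples. Everything here (the data, the 5% scarcity ratio, and the Gaussian-jitter augmentation standing in for the paper's LLM-generated counterfactuals) is invented for illustration.

```python
# Toy sketch of the label-scarcity augmentation setup: jittered copies of
# surviving minority samples stand in for LLM-generated counterfactuals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)

def make_data(n):
    # Synthetic stand-in for tabular sensor features; class 1 is the minority.
    X = rng.normal(size=(n, 4))
    y = (X @ np.array([1.0, -0.5, 0.8, 0.2]) > 1.0).astype(int)
    return X, y

X_tr, y_tr = make_data(2000)
X_te, y_te = make_data(2000)

def f1_of(X, y):
    return f1_score(y_te, LogisticRegression(max_iter=1000).fit(X, y).predict(X_te))

full_f1 = f1_of(X_tr, y_tr)

# Simulate label scarcity: keep only ~5% of minority-class examples.
pos = np.flatnonzero(y_tr == 1)
keep = np.concatenate([np.flatnonzero(y_tr == 0), pos[: max(5, len(pos) // 20)]])
scarce_f1 = f1_of(X_tr[keep], y_tr[keep])

# Augment: jitter the surviving minority points (the CF-generation stand-in).
minority = X_tr[keep][y_tr[keep] == 1]
synth = minority[rng.integers(len(minority), size=300)] + 0.05 * rng.normal(size=(300, 4))
X_aug = np.vstack([X_tr[keep], synth])
y_aug = np.concatenate([y_tr[keep], np.ones(300, dtype=int)])
aug_f1 = f1_of(X_aug, y_aug)

print(f"full: {full_f1:.2f}  scarce: {scarce_f1:.2f}  augmented: {aug_f1:.2f}")
```

In the paper, the synthetic minority points come from fine-tuned LLM counterfactuals rather than jitter, which is what makes the recovered samples plausible and behaviorally meaningful rather than merely rebalancing.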

πŸ“ Abstract
Counterfactual explanations (CFEs) provide human-centric interpretability by identifying the minimal, actionable changes required to alter a machine learning model's prediction. CFEs can therefore be used as (i) interventions for abnormality prevention and (ii) augmented data for training robust models. We conduct a comprehensive evaluation of CFE generation using large language models (LLMs), including GPT-4 (zero-shot and few-shot) and two open-source models (BioMistral-7B and LLaMA-3.1-8B), in both pretrained and fine-tuned configurations. Using the multimodal AI-READI clinical dataset, we assess CFEs across three dimensions: intervention quality, feature diversity, and augmentation effectiveness. Fine-tuned LLMs, particularly LLaMA-3.1-8B, produce CFEs with high plausibility (up to 99%), strong validity (up to 0.99), and realistic, behaviorally modifiable feature adjustments. When used for data augmentation under controlled label-scarcity settings, LLM-generated CFEs substantially restore classifier performance, yielding an average 20% F1 recovery across three scarcity scenarios. Compared with optimization-based baselines such as DiCE, CFNOW, and NICE, LLMs offer a flexible, model-agnostic approach that generates more clinically actionable and semantically coherent counterfactuals. Overall, this work demonstrates the promise of LLM-driven counterfactuals for both interpretable intervention design and data-efficient model training in sensor-based digital health. Impact: SenseCF fine-tunes an LLM to generate valid, representative counterfactual explanations and to supplement the minority class in an imbalanced dataset, improving model training and boosting model robustness and predictive performance.
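The core idea of a CFE (the minimal change that flips a model's prediction) and the validity metric can be illustrated with a small sketch. The greedy coefficient-following search below is a simplified, DiCE-style stand-in, not the paper's LLM-based generator; the data, feature count, and step size are invented for illustration.

```python
# Minimal sketch: generate a counterfactual for a linear classifier and
# check validity (did the prediction actually flip to the target class?).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for two behaviorally modifiable "sensor" features.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = desired outcome
clf = LogisticRegression().fit(X, y)

def counterfactual(x, model, target=1, step=0.1, max_iter=200):
    """Nudge features along the model's coefficients until the prediction
    flips: a gradient-free greedy search, not the paper's method."""
    cf = x.copy()
    direction = np.sign(model.coef_[0]) * (1 if target == 1 else -1)
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            break
        cf = cf + step * direction
    return cf

# Take the instance the model scores lowest and generate its CFE.
x0 = X[np.argmin(clf.decision_function(X))]
cf = counterfactual(x0, clf)

# Validity: the counterfactual must land in the target class.
valid = bool(clf.predict(cf.reshape(1, -1))[0] == 1)
print("feature delta:", np.round(cf - x0, 2), "valid:", valid)
```

Plausibility, by contrast, asks whether the perturbed instance looks like real data (e.g. lies within observed feature ranges), which is where LLM-generated counterfactuals are reported to outperform purely optimization-based searches.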
Problem

Research questions and friction points this paper is trying to address.

counterfactual explanations
health intervention design
sensor data augmentation
label scarcity
model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual Explanations
Fine-tuned LLMs
Health Intervention Design
Sensor Data Augmentation
Model-Agnostic Interpretability
πŸ”Ž Similar Papers
No similar papers found.