🤖 AI Summary
This work addresses the vulnerability of clinical large language models (LLMs) to jailbreak attacks by proposing an automated method for detecting unsafe or task-deviant user inputs that requires no manual labeling at inference time. The approach fine-tunes both medical-domain and general-domain BERT models to automatically extract four categories of expert-defined linguistic features, which are then fed into a second-tier classification stage combining tree-based, linear, probabilistic, and ensemble models. This architecture enables scalable and interpretable jailbreak detection without per-input manual annotation. Experimental results demonstrate strong performance in both cross-validation and hold-out test settings, providing the first validation of the effectiveness and practicality of LLM-derived linguistic features for clinical jailbreak detection.
📝 Abstract
Detecting jailbreak attempts in large language models (LLMs) used for clinical training requires accurate modeling of the linguistic deviations that signal unsafe or off-task user behavior. Prior work on the 2-Sigma clinical simulation platform showed that manually annotated linguistic features could support jailbreak detection. However, reliance on manual annotation limited both scalability and expressiveness. In this study, we extend that framework by using experts' annotations of four core linguistic features (Professionalism, Medical Relevance, Ethical Behavior, and Contextual Distraction) and training multiple general-domain and medical-domain BERT-based models to predict these features directly from text. The most reliable feature regressor for each dimension was selected and used as the feature extractor for a second layer of classifiers. We evaluate a suite of predictive models, including tree-based, linear, probabilistic, and ensemble methods, to determine jailbreak likelihood from the extracted features. Across cross-validation and held-out evaluations, the system achieves strong overall performance, indicating that LLM-derived linguistic features provide an effective basis for automated jailbreak detection. Error analysis further highlights key limitations in current annotations and feature representations, pointing toward future improvements such as richer annotation schemes, finer-grained feature extraction, and methods that capture the evolving risk of jailbreak behavior over the course of a dialogue. This work demonstrates a scalable and interpretable approach for detecting jailbreak behavior in safety-critical clinical dialogue systems.
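The second-tier stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the four feature scores (Professionalism, Medical Relevance, Ethical Behavior, Contextual Distraction) have already been produced by the BERT-based regressors, and simulates them here with synthetic data; the specific models and their hyperparameters are placeholders for the tree-based, linear, probabilistic, and ensemble methods the abstract names.

```python
# Hypothetical sketch of the second-tier jailbreak classifier.
# The four LLM-derived feature scores are simulated with synthetic data;
# in the paper's pipeline they would come from the selected BERT regressors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Each row: [professionalism, medical_relevance, ethical_behavior, contextual_distraction]
safe = rng.normal(loc=[0.8, 0.8, 0.9, 0.2], scale=0.1, size=(n // 2, 4))
jail = rng.normal(loc=[0.4, 0.3, 0.4, 0.8], scale=0.1, size=(n // 2, 4))
X = np.vstack([safe, jail])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 1 = jailbreak attempt

# Tree-based, linear, and probabilistic models combined in a soft-voting ensemble
ensemble = VotingClassifier(
    estimators=[
        ("tree", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("linear", LogisticRegression(max_iter=1000)),
        ("prob", GaussianNB()),
    ],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=5, scoring="f1")
print(f"5-fold CV F1: {scores.mean():.3f}")
```

Because the classifier operates on four named, expert-defined features rather than raw text embeddings, its decisions remain interpretable: feature importances or coefficients directly indicate which linguistic dimension drove a jailbreak flag.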