Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the robustness of small language models (SLMs) in generating structured outputs for open-ended attribute-value extraction from clinical notes, particularly in privacy-sensitive settings where the output serialization format critically affects parseability. Method: We systematically evaluate JSON, YAML, and XML parsing success rates across multiple SLM sizes using fine-grained prompt engineering on real-world clinical documents, analyzing how model scale, prompt design, and document length and note type affect output stability. Contribution/Results: JSON achieves significantly higher parsing success than YAML or XML. Increasing model size and optimizing prompts improve robustness, yet challenges persist for long documents and specific note types (e.g., operative reports). This work identifies, for the first time, characteristic failure modes of serialization formats in clinical NLP and proposes evidence-based guidelines for format selection and prompt optimization tailored to privacy-preserving environments, providing an empirical foundation for deploying lightweight medical LMs in practice.

📝 Abstract
We present a comparative analysis of the parseability of structured outputs generated by small language models for open attribute-value extraction from clinical notes. We evaluate three widely used serialization formats: JSON, YAML, and XML, and find that JSON consistently yields the highest parseability. Structural robustness improves with targeted prompting and larger models, but declines for longer documents and certain note types. Our error analysis identifies recurring format-specific failure patterns. These findings offer practical guidance for selecting serialization formats and designing prompts when deploying language models in privacy-sensitive clinical settings.
Problem

Research questions and friction points this paper is trying to address.

Evaluate parseability of structured outputs from small language models
Compare JSON, YAML, XML for clinical note attribute-value extraction
Identify format-specific failure patterns in clinical data extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compare JSON, YAML, XML for clinical data extraction
Use targeted prompting to improve robustness
Analyze format-specific errors for better deployment
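
The comparison above hinges on whether a model's raw output parses under each format's strict rules. A minimal sketch of such a parseability check, using only Python's standard-library JSON and XML parsers (YAML would require a third-party library such as PyYAML and is omitted here; the `parse_success` helper and the fenced-output example are illustrative assumptions, not the paper's actual evaluation harness):

```python
import json
import xml.etree.ElementTree as ET

def parse_success(text: str, fmt: str) -> bool:
    """Return True if `text` parses cleanly in the given serialization format."""
    try:
        if fmt == "json":
            json.loads(text)
        elif fmt == "xml":
            ET.fromstring(text)
        else:
            raise ValueError(f"unsupported format: {fmt}")
        return True
    except (json.JSONDecodeError, ET.ParseError):
        return False

# A common failure mode for LM outputs: valid JSON wrapped in Markdown fences,
# which breaks strict parsing until the fences are stripped.
raw = '```json\n{"attribute": "blood pressure", "value": "120/80"}\n```'
print(parse_success(raw, "json"))       # False: the fences are not valid JSON
stripped = raw.strip("`").removeprefix("json").strip()
print(parse_success(stripped, "json"))  # True once the payload is isolated
```

Counting `parse_success` over a corpus of model outputs yields the per-format parsing success rates the paper reports.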