Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical Lay Language Generation (MLLG) faces the challenge of simultaneously preserving semantic fidelity and accommodating stylistic diversity under multi-source heterogeneous data. To address this, we propose an asymmetric LoRA architecture that decouples semantic representation from stylistic control via a shared low-rank matrix A and multiple task-specific matrices B. We introduce a semantic invariance constraint to ensure medical accuracy and design a recommendation-guided switching mechanism for personalized lay-style adaptation. The method supports external prompting interfaces to enhance controllability. Experiments on three real-world medical datasets demonstrate that our approach significantly outperforms standard prompting, vanilla LoRA, and its variants, achieving higher semantic accuracy while maintaining high readability. Moreover, it reduces trainable parameters by 31.66%, offering both computational efficiency and practical applicability.

📝 Abstract
Medical Lay Language Generation (MLLG) plays a vital role in improving the accessibility of complex scientific content for broader audiences. Recent approaches to MLLG commonly employ parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA) to fine-tune large language models (LLMs) on paired expert-lay language datasets. However, LoRA struggles with the challenges posed by multi-source heterogeneous MLLG datasets. Specifically, through a series of exploratory experiments, we reveal that standard LoRA fails to meet the requirements for semantic fidelity and diverse lay-style generation in the MLLG task. To address these limitations, we propose Magical, an asymmetric LoRA architecture tailored for MLLG under heterogeneous data scenarios. Magical employs a shared matrix $A$ for abstractive summarization, along with multiple isolated matrices $B$ for diverse lay-style generation. To preserve semantic fidelity during the lay language generation process, Magical introduces a Semantic Invariance Constraint to mitigate semantic subspace shifts on matrix $A$. Furthermore, to better adapt to diverse lay-style generation, Magical incorporates the Recommendation-guided Switch, an external interface that prompts the LLM to switch between different matrices $B$. Experimental results on three real-world lay language generation datasets demonstrate that Magical consistently outperforms prompt-based methods, vanilla LoRA, and its recent variants, while also reducing trainable parameters by 31.66%.
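The asymmetric design described in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration (not the paper's code): a frozen weight $W$ is adapted as $W + B_t A$, where $A$ is shared across tasks and each lay style $t$ owns its own low-rank matrix $B_t$. The style names and dimensions below are invented for the example.

```python
# Hypothetical sketch of asymmetric LoRA: one shared "down" matrix A,
# one "up" matrix B per lay style. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 8, 2

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # shared low-rank projection (semantic side)
B = {                                     # one isolated matrix B per lay style (illustrative names)
    "patient": np.zeros((d_out, rank)),
    "caregiver": np.zeros((d_out, rank)),
}

def forward(x, style):
    """Frozen weight plus the style-specific low-rank update B_style @ A."""
    return W @ x + B[style] @ (A @ x)

x = rng.normal(size=d_in)
# With each B initialized to zero (standard LoRA practice), every style
# starts out identical to the base model.
assert np.allclose(forward(x, "patient"), W @ x)
```

Sharing $A$ is also where the parameter saving comes from: with $k$ styles, vanilla multi-adapter LoRA trains $k \cdot r(d_\text{in} + d_\text{out})$ parameters, while the asymmetric variant trains $r \, d_\text{in} + k \cdot r \, d_\text{out}$.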
Problem

Research questions and friction points this paper is trying to address.

Improving accessibility of complex scientific content for broader audiences
Addressing semantic fidelity and diverse lay-style generation challenges
Reducing trainable parameters while enhancing performance in MLLG tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric LoRA architecture for heterogeneous data
Semantic Invariance Constraint for fidelity
Recommendation-guided Switch for style adaptation
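The Recommendation-guided Switch is described only as an external interface that prompts the LLM to choose among the matrices $B$. A minimal routing sketch, under the assumption that the recommendation arrives as a style tag in the prompt (the tag format and style names here are invented for illustration):

```python
# Hypothetical adapter router: an external recommendation, encoded as a
# "[style=...]" tag in the prompt, selects which matrix B to activate.
# The tag syntax is an assumption, not the paper's interface.
import re

def select_adapter(prompt: str, adapters: dict, default: str = "patient") -> str:
    """Return the adapter named in a '[style=...]' tag, falling back to a default."""
    m = re.search(r"\[style=(\w+)\]", prompt)
    style = m.group(1) if m else default
    return style if style in adapters else default

adapters = {"patient": None, "caregiver": None}
assert select_adapter("[style=caregiver] Explain hypertension.", adapters) == "caregiver"
assert select_adapter("Explain hypertension.", adapters) == "patient"  # no tag: default
```

In practice the recommendation could come from any external signal (user profile, retrieval, a classifier); the point of the sketch is only that style selection is decoupled from the shared semantic matrix $A$.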