🤖 AI Summary
This work addresses the limited interpretability and irreversibility of clinical trial text embeddings, which hinder transparent and creative applications in biomedicine. We introduce ctELM, an open-source, domain-agnostic framework built on the Embedding Language Model (ELM) approach and instantiated here for clinical trial texts. By integrating expert-validated synthetic data, concept-vector manipulation, and domain-adaptive training strategies, ctELM enables accurate parsing, semantically coherent generation, and controllable editing of clinical trial abstracts, such as adjusting attributes like subject age or sex, directly within the embedding space. Experiments demonstrate that ctELM generalizes to unseen trials and supports precise directional semantic control over generated content. The code and model are publicly released to foster reproducibility and further research.
📝 Abstract
Text embeddings have become an essential part of a variety of language applications. However, methods for interpreting, exploring, and reversing embedding spaces are limited, reducing transparency and precluding potentially valuable generative use cases. In this work, we align Large Language Models to embeddings of clinical trials using the recently reported Embedding Language Model (ELM) method. We develop an open-source, domain-agnostic ELM architecture and training framework, design training tasks for clinical trials, and introduce an expert-validated synthetic dataset. We then train a series of ELMs exploring the impact of tasks and training regimes. Our final model, ctELM, can accurately describe and compare unseen clinical trials from embeddings alone and produce plausible clinical trials from novel vectors. We further show that generated trial abstracts are responsive to moving embeddings along concept vectors for the age and sex of study subjects. Our public ELM implementation and experimental results will aid the alignment of Large Language Models to embedding spaces in the biomedical domain and beyond.
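The concept-vector editing described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the helper name `shift_along_concept`, the difference-of-means construction of the concept vector, the embedding dimensionality, and the random placeholder embeddings are all assumptions for demonstration.

```python
import numpy as np

def shift_along_concept(embedding, concept_vector, alpha, renormalize=True):
    """Move an embedding along a concept direction with strength alpha.

    Illustrative sketch only -- the paper's actual concept-vector
    construction and ELM-based decoding are not reproduced here.
    """
    direction = concept_vector / np.linalg.norm(concept_vector)
    shifted = embedding + alpha * direction
    if renormalize:
        # Many text-embedding spaces assume unit-norm vectors.
        shifted = shifted / np.linalg.norm(shifted)
    return shifted

# A concept vector is commonly estimated as the difference between the
# mean embeddings of two contrasting groups (e.g. trials with older vs.
# younger study populations). Random vectors stand in for real embeddings.
rng = np.random.default_rng(0)
older_group = rng.normal(size=(50, 768))
younger_group = rng.normal(size=(50, 768))
age_vector = older_group.mean(axis=0) - younger_group.mean(axis=0)

trial_embedding = rng.normal(size=768)
trial_embedding /= np.linalg.norm(trial_embedding)

# Nudge the trial embedding toward the "older population" direction;
# an ELM decoder would then generate text from the edited vector.
edited = shift_along_concept(trial_embedding, age_vector, alpha=0.5)
```

Varying `alpha` (including negative values to move in the opposite direction) gives graded control over the targeted attribute while leaving the rest of the embedding's content largely intact.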