I-ETL: an interoperability-aware health (meta) data pipeline to enable federated analyses

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Healthcare faces significant challenges in cross-institutional federated analytics due to strong data heterogeneity and insufficient standardization, which severely limit interoperability. To address this, we propose I-ETL, a decentralized federated analytics framework that enables privacy-preserving collaborative integration of heterogeneous multi-center data, including phenotypic, clinical, imaging, and genomic modalities. Our key innovation is advancing interoperability to the ETL source: we design two generic, extensible conceptual models and integrate metadata modeling with standardization techniques to systematically embed cross-institutional semantic alignment directly into the ETL pipeline, the first such approach in the literature. Experiments demonstrate that I-ETL substantially improves unified data representation across sources and enhances cross-center collaboration efficiency. By establishing an interoperable, privacy-aware data foundation, I-ETL enables high-quality, trustworthy federated learning across institutional boundaries.

📝 Abstract
Clinicians are interested in better understanding complex diseases, such as cancer or rare diseases, so they need to produce and exchange data to pool their sources and join forces. To do so while ensuring privacy, a natural approach is to use a decentralized architecture and Federated Learning algorithms. This ensures that data stays in the organization in which it was collected, but requires data to be collected in similar settings and with similar models. In practice, this is often not the case because healthcare institutions work individually with different representations and raw data; they lack the means to normalize their data, and even more so to do it across centers. For instance, clinicians have at hand phenotypic, clinical, imaging, and genomic data (each individually collected) and want to better understand some diseases by analyzing them together. This example highlights the needs and challenges of a cooperative use of this wealth of information. We designed and implemented a framework, named I-ETL, for integrating highly heterogeneous hospital healthcare datasets into interoperable databases. Our proposal is twofold: (i) we devise two general and extensible conceptual models for representing both data and metadata, and (ii) we propose an Extract-Transform-Load (ETL) pipeline that ensures and assesses interoperability from the start. Through experiments on open-source datasets, we show that I-ETL succeeds in representing various health datasets in a unified way thanks to our two general conceptual models. We then demonstrate the importance of treating interoperability as a first-class citizen in integration pipelines, making collaboration between different centers possible.
Problem

Research questions and friction points this paper is trying to address.

Integrating heterogeneous healthcare data from multiple hospitals
Ensuring interoperability across diverse clinical data representations
Enabling federated analysis while maintaining data privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interoperability-aware ETL pipeline for federated analyses
Two general conceptual models for data and metadata
Ensures interoperability from the start in integration pipelines
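The paper's two conceptual models and its interoperability-aware ETL are not reproduced here; the following is only a minimal illustrative sketch of the general idea, in which each center's local field names are mapped to a shared vocabulary during the transform step, and provenance metadata (source center and original field) is recorded alongside every value. All names (`UnifiedRecord`, `FIELD_MAPS`, `etl_transform`, the hospital identifiers) are hypothetical, not taken from I-ETL.

```python
from dataclasses import dataclass

@dataclass
class UnifiedRecord:
    """One observation in a hypothetical unified data model."""
    patient_id: str
    feature: str    # standardized feature name from the shared vocabulary
    value: object
    metadata: dict  # provenance: contributing center and original field name

# Hypothetical per-center mappings from local field names to a shared vocabulary.
FIELD_MAPS = {
    "hospital_a": {"sex": "biological_sex", "dob": "birth_date"},
    "hospital_b": {"gender": "biological_sex", "birthdate": "birth_date"},
}

def etl_transform(center, raw_rows):
    """Transform one center's local rows into unified records.

    Semantic alignment happens here, inside the ETL step: every loaded value
    carries the standardized feature name plus metadata describing where it
    came from, so downstream federated analyses can rely on a common schema.
    """
    fmap = FIELD_MAPS[center]
    out = []
    for row in raw_rows:
        for local_field, value in row.items():
            if local_field == "id":
                continue
            std = fmap.get(local_field)
            if std is None:
                continue  # unmapped local fields are skipped, not loaded silently
            out.append(UnifiedRecord(
                patient_id=row["id"],
                feature=std,
                value=value,
                metadata={"center": center, "source_field": local_field},
            ))
    return out

# Two centers with different local schemas end up in one representation.
rows_a = [{"id": "p1", "sex": "F", "dob": "1990-01-01"}]
rows_b = [{"id": "p2", "gender": "F", "birthdate": "1985-05-05"}]
unified = etl_transform("hospital_a", rows_a) + etl_transform("hospital_b", rows_b)
features = {r.feature for r in unified}
```

After the transform, both centers' records share the same feature vocabulary (`biological_sex`, `birth_date`) while the metadata preserves each value's local origin, which is the property that makes cross-center analysis possible without centralizing the raw data.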
Nelly Barret
Department of Electronics, Information and Bioengineering – Politecnico di Milano, Italy.
Anna Bernasconi
Department of Electronics, Information and Bioengineering – Politecnico di Milano, Italy.
Boris Bikbov
Department of Electronics, Information and Bioengineering – Politecnico di Milano, Italy.
Pietro Pinoli
Research Fellow, Politecnico di Milano
Bioinformatics · Machine Learning · Big Data