🤖 AI Summary
This work addresses the challenge of enforcing governance in cross-institutional medical research under stringent privacy regulations, where existing federated learning frameworks lack mechanisms to prevent unauthorised participation and ensure accountability. The authors propose FLA³, a platform that integrates enforceable Authentication, Authorisation, and Accounting (AAA) directly into the federated learning orchestration layer. By combining XACML-compliant attribute-based access control (ABAC), cryptographic accounting, and study-scoped federation, FLA³ enables runtime policy enforcement that preserves data locality while ensuring regulatory compliance. The platform's infrastructure was deployed across five institutions in four countries, and its clinical utility was assessed on a simulated federation of 54,446 full blood count samples from 25 centres, achieving model performance comparable to centralised training while strictly adhering to governance constraints.
📝 Abstract
Collaborative healthcare research across multiple institutions increasingly requires diverse clinical datasets, but cross-border data sharing is strictly constrained by privacy regulations. Federated learning (FL) enables model training while keeping data local; however, many existing frameworks remain proof-of-concept and do not adequately address governance risks such as unauthorised participation, misuse, and lack of accountability. In particular, enforceable mechanisms for authentication, authorisation, and accounting (AAA) are often missing, limiting real-world clinical deployment. This paper presents FLA$^3$ (Federated Learning with Authentication, Authorisation, and Accounting), a governance-aware federated learning platform that operationalises regulatory obligations through runtime policy enforcement. FLA$^3$ integrates eXtensible Access Control Markup Language (XACML) compliant attribute-based access control (ABAC), cryptographic accounting, and study-scoped federation directly into the federated learning orchestration layer to enforce institutional sovereignty and protocol adherence. We evaluate FLA$^3$ through two complementary studies. First, we demonstrate operational feasibility by deploying the platform infrastructure across five BloodCounts! Consortium institutions in four countries: the United Kingdom, the Netherlands, India, and The Gambia. Second, we assess clinical utility using simulated federation of full blood count (FBC) data comprising 54,446 samples from 35,315 subjects across 25 centres in the INTERVAL study. Results show that FLA$^3$ achieves predictive performance comparable to centralised training while strictly enforcing governance constraints. These findings demonstrate that enforceable governance can function as a first-class privacy-preserving control, improving trustworthiness for scalable artificial intelligence (AI) in cross-jurisdictional healthcare deployments.
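To make the runtime-policy-enforcement idea concrete, the sketch below shows an XACML-style policy decision point in miniature: attribute-based rules are evaluated against a request and combined with the standard deny-overrides algorithm. This is an illustrative toy, not FLA$^3$'s implementation; all names (`Rule`, `evaluate`, the attribute keys, and the `"bloodcounts"` study label) are hypothetical.

```python
# Hypothetical sketch of an ABAC policy decision point (PDP) in the XACML
# style. None of these identifiers come from the FLA^3 platform itself.
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One ABAC rule: the effect applies when every attribute match holds."""
    effect: str                      # "Permit" or "Deny"
    match: dict = field(default_factory=dict)

    def applies(self, request: dict) -> bool:
        # A rule matches only if all of its attribute constraints are met.
        return all(request.get(k) == v for k, v in self.match.items())

def evaluate(rules, request, combining="deny-overrides"):
    """Combine matching rules; deny-overrides is a standard XACML algorithm:
    any applicable Deny wins over any Permit."""
    decisions = [r.effect for r in rules if r.applies(request)]
    if not decisions:
        return "NotApplicable"
    if combining == "deny-overrides" and "Deny" in decisions:
        return "Deny"
    return decisions[0]

# Illustrative policy: only credentialed sites enrolled in a given study
# may join a training round; uncredentialed requesters are always denied.
rules = [
    Rule("Permit", {"role": "site", "study": "bloodcounts", "credentialed": True}),
    Rule("Deny",   {"credentialed": False}),
]
print(evaluate(rules, {"role": "site", "study": "bloodcounts", "credentialed": True}))
print(evaluate(rules, {"role": "site", "study": "bloodcounts", "credentialed": False}))
```

In a federation-aware orchestrator, a decision like this would gate each join and each model-update submission at runtime, with every decision also written to an audit log to support the accounting half of AAA.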