Leveraging Imperfection with MEDLEY: A Multi-Model Approach Harnessing Bias in Medical AI

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical AI bias is conventionally treated as a defect to be eliminated, yet human clinical reasoning inherently relies on education- and culture-shaped “beneficial biases.” Method: We propose MEDLEY—a framework that reframes bias and model diversity as exploitable resources rather than targets for suppression. MEDLEY integrates over 30 large language models, deliberately preserving output divergence and latent biases; it recasts hallucinations as testable clinical hypotheses and explicitly documents bias origins and uncertainty. Results: Evaluated on synthetic cases, MEDLEY concurrently presents consensus diagnoses and minority viewpoints under physician supervision, enhancing diagnostic transparency, reasoning depth, and decision traceability. This work shifts medical AI from pursuing model consistency toward a structured diversity paradigm, advancing bias-aware human-AI collaborative reasoning.

📝 Abstract
Bias in medical artificial intelligence is conventionally viewed as a defect requiring elimination. However, human reasoning inherently incorporates biases shaped by education, culture, and experience, suggesting their presence may be inevitable and potentially valuable. We propose MEDLEY (Medical Ensemble Diagnostic system with Leveraged diversitY), a conceptual framework that orchestrates multiple AI models while preserving their diverse outputs rather than collapsing them into a consensus. Unlike traditional approaches that suppress disagreement, MEDLEY documents model-specific biases as potential strengths and treats hallucinations as provisional hypotheses for clinician verification. A proof-of-concept demonstrator was developed using over 30 large language models, creating a minimum viable product that preserved both consensus and minority views in synthetic cases, making diagnostic uncertainty and latent biases transparent for clinical oversight. While not yet a validated clinical tool, the demonstration illustrates how structured diversity can enhance medical reasoning under clinician supervision. By reframing AI imperfection as a resource, MEDLEY offers a paradigm shift that opens new regulatory, ethical, and innovation pathways for developing trustworthy medical AI systems.
Problem

Research questions and friction points this paper is trying to address.

Addresses bias in medical AI as a resource rather than a defect
Proposes a multi-model framework to preserve diagnostic diversity
Transforms AI imperfections into transparent clinical decision support
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-model ensemble preserving diverse outputs
Treating model biases as potential strengths
Documenting hallucinations as provisional hypotheses
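The core idea behind the points above — aggregating multiple models while preserving, rather than collapsing, their disagreement — can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the model names and the majority-vote consensus threshold are assumptions for the example, and the model outputs are hard-coded stand-ins for real LLM responses.

```python
from collections import Counter

def aggregate_diagnoses(model_outputs, consensus_threshold=0.5):
    """Aggregate diagnoses from multiple models without discarding
    disagreement: diagnoses proposed by at least `consensus_threshold`
    of the models form the consensus set; all others are retained as
    minority views, each tagged with the models that proposed it."""
    counts = Counter()
    provenance = {}
    for model, diagnoses in model_outputs.items():
        for dx in set(diagnoses):  # count each diagnosis once per model
            counts[dx] += 1
            provenance.setdefault(dx, []).append(model)

    n_models = len(model_outputs)
    consensus, minority = [], []
    for dx, c in counts.most_common():
        entry = {"diagnosis": dx,
                 "support": c / n_models,   # fraction of models proposing it
                 "models": provenance[dx]}  # provenance for clinician review
        (consensus if entry["support"] >= consensus_threshold
         else minority).append(entry)
    return {"consensus": consensus, "minority_views": minority}

# Hypothetical outputs from three models on one synthetic case
outputs = {
    "model_a": ["pulmonary embolism", "pneumonia"],
    "model_b": ["pulmonary embolism"],
    "model_c": ["pulmonary embolism", "pericarditis"],
}
report = aggregate_diagnoses(outputs)
```

Here the minority views (pneumonia, pericarditis) survive aggregation alongside the consensus diagnosis, each with its supporting models listed, mirroring the framework's goal of making divergence and its origins transparent for clinician oversight.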