From Black Box to Bedside: Explainable Machine Learning and the Clinical Uptake of Diagnostic Support Systems
Students & Supervisors
Student Authors
Supervisors
Abstract
Explainable Artificial Intelligence (XAI) can bridge the gap between high-performance machine learning models and the interpretability required for their safe deployment in clinical practice. Adoption of AI systems in healthcare has been limited by model opacity, variable clinician trust, and inconsistent regulation. This paper synthesizes evidence from 2015–2024 through a scoping review of regulatory databases, clinician surveys, and bibliometric trends, and proposes a four-layer conceptual framework for the uptake of XAI in clinical environments. The framework comprises performance, explainability, clinical trust, and regulatory governance, and demonstrates that adoption occurs when performance, explainability, regulatory compliance, and clinical trust are aligned, rather than when accuracy alone is prioritized. Regulatory milestones, such as FDA approvals of AI/ML-based devices, and emerging explainability techniques such as LIME and SHAP correlate with tipping points in measures of clinician trust. The review identifies persistent barriers related to training and accountability, and suggests directions for developing context-aware, lightweight explainability methods. This framing of the evidence provides a basis for usable diagnostic AI systems that are validated for trustworthiness, effectiveness, and compliance.
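The abstract refers to post-hoc explainability techniques such as LIME and SHAP. As a purely illustrative sketch, not drawn from the paper itself, the snippet below shows how SHAP feature attributions might be generated for a generic diagnostic classifier; the dataset, model, and feature names are placeholder assumptions rather than the study's materials.

```python
# Illustrative sketch only: post-hoc SHAP attributions for a hypothetical
# diagnostic classifier. Dataset, model, and features are placeholders.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Public diagnostic dataset used as a stand-in for clinical data.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# A high-performance but otherwise opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes per-feature SHAP contributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])

# Depending on the SHAP version, binary-classification output is either a list
# of per-class arrays or a single array with a trailing class dimension.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Report the features contributing most to the prediction for one patient.
contributions = sorted(
    zip(data.feature_names, np.ravel(vals)), key=lambda t: abs(t[1]), reverse=True
)
for name, value in contributions[:5]:
    print(f"{name}: {value:+.3f}")
```

Such per-case attributions are one example of the explanation outputs that the framework's explainability layer is intended to surface alongside model predictions.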
Keywords
Publication Details
- Type of Publication:
- Conference Name: IEEE International Conference on Biomedical Engineering, Computer and Information Technology for Health 2025 (BECITHCON 2025)
- Date of Conference: 29/11/2025 - 29/11/2025
- Venue: Eastern University, Dhaka, Bangladesh
- Organizer: IEEE Bangladesh Section and IEEE Engineering in Medicine and Biology Society Bangladesh