
Bridging the Trust Gap: Machine Learning and Explainable AI for Enhancing Clinical Adoption of Diagnostic Support Systems

Students & Supervisors

Student Authors
Tawfiqul Hasan
Bachelor of Science in Computer Science & Engineering, FST
Md. Nafis Khan
Bachelor of Science in Computer Science & Engineering, FST
Md Shakib Hasan Shafin
Bachelor of Science in Computer Science & Engineering, FST
Md Junayed Rahman
Bachelor of Science in Computer Science & Engineering, FST
Supervisors
Md. Mortuza Ahmmed
Associate Professor, Faculty, FST

Abstract

Background: The acceleration of AI-powered diagnostic support systems has revolutionized healthcare, with deployments increasing from zero in 2000 to 143 in 2024, alongside marked improvements in diagnostic accuracy (60% to 93%) and clinical uptake rates (0% to 88%). Yet clinical adoption remains low because many algorithms operate as "black boxes". This lack of explainability undermines clinical trust and creates a gap between AI's potential and real-world application. This study provides evidence of how explainable AI (XAI) can foster trust, increase adoption, and support compliance with emerging regulatory standards such as the EU AI Act.

Methodology: The study applied K-means clustering to visualize trends in AI implementation, accuracy, and adoption. SHAP (SHapley Additive exPlanations) was used to improve model interpretability by generating usable clinical insights from model decisions and cultivating clinician trust.

Results: The results indicate a strong relationship between explainability features and clinical adoption. Models with SHAP explanations achieved faster regulatory approvals and were integrated more quickly into healthcare systems globally. Visual clustering also highlights important milestones in the adoption timeline.

Conclusion: This research highlights the opportunity to close the trust gap in clinical settings by addressing interpretability gaps with explainable AI. Future studies must emphasize transparency to better fulfill the societal potential of AI-based diagnosis.
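The sketch below illustrates the two methods named in the abstract: K-means clustering of deployment/accuracy/adoption trends, and SHAP explanations for a diagnostic model. The synthetic yearly data, clinical feature names, and choice of classifier are illustrative assumptions, not the study's actual dataset or pipeline.

```python
# Minimal sketch of the abstract's methodology: K-means clustering of adoption
# trends plus SHAP explanations for a diagnostic classifier. All data here is
# synthetic and the feature names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# --- Hypothetical yearly trend data (deployments, accuracy %, adoption %) ---
years = np.arange(2000, 2025)
trends = pd.DataFrame({
    "year": years,
    "deployments": np.linspace(0, 143, len(years)) + rng.normal(0, 3, len(years)),
    "accuracy": np.linspace(60, 93, len(years)) + rng.normal(0, 1, len(years)),
    "adoption": np.linspace(0, 88, len(years)) + rng.normal(0, 2, len(years)),
})

# K-means groups the years into adoption phases (e.g. early, growth, maturity);
# features are standardized first so no single scale dominates the clustering.
features = StandardScaler().fit_transform(trends[["deployments", "accuracy", "adoption"]])
trends["phase"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(trends.groupby("phase")["year"].agg(["min", "max"]))

# --- SHAP on a toy diagnostic classifier with hypothetical clinical features ---
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["age", "biomarker_a", "biomarker_b", "imaging_score"])
y = (X["biomarker_a"] + 0.5 * X["imaging_score"] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)      # per-prediction feature attributions
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)          # which features drive positive diagnoses
```

The summary plot is one way such explanations can be surfaced to clinicians; the study's actual explanation workflow may differ.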

Keywords

Explainable AI (XAI), Clinical Adoption, Diagnostic Support Systems, SHAP (SHapley Additive exPlanations), K-Means Clustering, Interpretability, Trust Gap, AI in Healthcare, EU AI Act.

Publication Details

  • Type of Publication:
  • Conference Name: Bangladesh’s 1st National BioMed Health ResearchCon (NBHRC) 2025
  • Date of Conference: 28/08/2025 - 28/08/2025
  • Venue: Dr. Milon Auditorium, Dhaka Medical College
  • Organizer: Dhaka Medical College Research and Academic Club (DMC-RAC).