Transparency in Medical Recommendations: A Comprehensive Methodology for Explainable AI Techniques in Healthcare
Keywords:
Explainable AI (XAI), healthcare recommender systems, transparency, trust, EHR, LIME, SHAP, medical decision-making.

Abstract
Explainable AI (XAI) techniques are increasingly important in healthcare for enhancing the transparency of, and trust in, medical recommendations. XAI refers to methodologies and tools that make the decision-making process of AI systems transparent and comprehensible to humans. This paper reviews existing XAI techniques and proposes a comprehensive methodology for improving the explainability and trustworthiness of healthcare recommender systems. Key components include advanced machine learning models, such as Convolutional Neural Networks and Restricted Boltzmann Machines, integrated with explainable AI techniques such as LIME and SHAP. The system combines collaborative, content-based, and graph-based filtering for personalized recommendations, supported by robust UI/UX design principles. The methodology aims to bridge the gap between AI-driven recommendations and user trust, thereby improving patient outcomes and the efficiency of healthcare delivery.
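As a concrete illustration of the kind of transparency the methodology targets, the sketch below shows how a local explanation from LIME could accompany an individual prediction from a tabular recommendation model. It is a minimal example under stated assumptions: the classifier, feature names, and synthetic data are illustrative placeholders, not the system described in this paper.

```python
# Minimal sketch (illustrative only): attaching a LIME explanation to a single
# recommendation from a tabular classifier. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical features
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 2] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "recommend therapy" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a local linear surrogate around one patient record, yielding
# per-feature weights that can be shown to the clinician alongside the
# recommendation itself.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["no_therapy", "therapy"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

SHAP would play an analogous role, providing additive per-feature attributions for each recommendation; either form of local explanation can be surfaced through the UI layer described in the methodology.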