Integrating Explainable AI in Public Health: A Responsible Approach to Mitigating Vision Health Disparities
Keywords:
Explainable Artificial Intelligence, Eye Disease Detection, Public Health, Deep Learning Models, Decision Tree and Random Forest.

Abstract
Prompt diagnosis and effective management of eye diseases are essential for preventing visual impairment and reducing vision health disparities. This paper presents a novel paradigm that unifies explainable artificial intelligence (XAI) with public health strategies to address these challenges. The proposed methodology integrates pre-trained deep learning models with XAI techniques to facilitate differential diagnosis of various eye conditions, including cataracts, foreign bodies, subconjunctival hemorrhage, and viral conjunctivitis. The study also employed Decision Tree and Random Forest models to support transparent and interpretable decision-making. All models were evaluated on accuracy, precision, recall, F1 score, and Matthews correlation coefficient (MCC). The proposed model, with an accuracy of 93%, outperformed MobileNet, ResNet50, InceptionV3, VGG19, and NASNetMobile. Beyond model performance, the paper highlights the integration of XAI, which adds interpretability and thus enables responsible application in clinical settings. The transparency afforded by XAI models gives clinicians insight into the reasoning behind AI-driven decisions, engendering trust and accountability. This approach supports real-time ophthalmological observation while ensuring the equitable and ethical use of AI in public health, ultimately bridging gaps in healthcare access and working toward better outcomes for underserved populations.
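For readers unfamiliar with the evaluation metrics named above, the following is a minimal, self-contained sketch of how accuracy, precision, recall, F1 score, and MCC are computed from a binary confusion matrix. The label vectors are made-up placeholders for illustration, not data from the study.

```python
# Compute accuracy, precision, recall, F1, and Matthews correlation
# coefficient (MCC) from binary ground-truth and predicted labels.
import math

y_true = [0, 1, 1, 0, 1, 1, 0, 0]   # hypothetical ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]   # hypothetical model predictions

# Confusion-matrix counts
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
# MCC ranges from -1 to 1 and is robust to class imbalance.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(accuracy, precision, recall, f1, mcc)  # 0.75 0.75 0.75 0.75 0.5
```

MCC is often preferred over accuracy for imbalanced medical datasets because it only rewards models that perform well on both classes.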