Enhancing Interpretability in Diverse Recommendation Systems through Explainable AI Techniques
Keywords:
Recommendation systems, Matrix factorization, Content-based filtering, Collaborative filtering, XAI, SHAP, GPT-4
Abstract
This paper explores the application of Explainable AI (XAI) methodologies, focusing on the SHapley Additive exPlanations (SHAP) framework, and applies them to three distinct recommendation systems: matrix factorization, content-based filtering, and collaborative filtering. Through a novel combination of SHAP values and a multimodal Large Language Model (LLM), namely GPT-4, we present a methodology for understanding the decision-making processes underlying recommendation algorithms. The SHAP values reveal granular insights into the factors that influence individual recommendations, enhancing users' understanding of the suggestions these algorithms provide. Leveraging a multimodal LLM further improves interpretability by producing detailed yet succinct explanations of the SHAP-derived insights. By laying bare the inner workings of the chosen recommendation models, our research seeks to foster transparency and increased user control in the domain of recommendation systems.
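To make the described pipeline concrete, the following is a minimal sketch of the SHAP-to-GPT-4 explanation flow. The synthetic rating model, feature names, and prompt wording are illustrative assumptions rather than the paper's exact implementation; the sketch assumes the `shap`, `scikit-learn`, and `openai` Python packages and an `OPENAI_API_KEY` environment variable.

```python
# Illustrative sketch only: a synthetic rating predictor stands in for the
# paper's recommendation models, and the prompt wording is an assumption.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from openai import OpenAI

rng = np.random.default_rng(0)
feature_names = ["user_avg_rating", "item_popularity", "genre_match"]
X = rng.random((200, 3))
# Synthetic ratings driven mostly by genre match, partly by the user's mean rating.
y = 2 * X[:, 2] + X[:, 0] + 0.1 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Step 1: model-agnostic SHAP attributions for one user-item pair.
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:1])

# Step 2: serialize the per-feature attributions as text for the LLM prompt.
summary = "\n".join(
    f"{name}: {value:+.3f}"
    for name, value in zip(feature_names, shap_values[0])
)

# Step 3: ask GPT-4 to turn the raw attributions into a succinct,
# user-facing explanation of the recommendation.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Given SHAP feature attributions for a recommendation "
                    "score, explain the recommendation to an end user."},
        {"role": "user", "content": summary},
    ],
)
print(response.choices[0].message.content)
```

In the three systems the paper studies, the `model.predict` hook would be replaced by the matrix-factorization, content-based, or collaborative-filtering scorer, respectively; the SHAP and LLM stages are otherwise unchanged.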