ISSN: 3048-6939

Towards Transparency in AI: A Review of Explainable AI (XAI) Approaches and Research Opportunities

Abstract

As Artificial Intelligence (AI) is adopted across sectors from healthcare to finance, the ability to trust AI-driven decisions becomes crucial. Machine learning (ML) models, though highly accurate, often operate as "black boxes," making their decision-making processes difficult to understand. This lack of transparency poses significant challenges in critical areas such as medical diagnosis and financial transactions, where the reasoning behind a decision must be understood. Ensemble methods such as Random Forests and deep learning models, in particular, improve prediction accuracy but further reduce interpretability. This paper reviews the current challenges in explaining ML predictions and surveys existing approaches to Explainable Artificial Intelligence (XAI). Through an extensive review of the literature, we identify key gaps in current methods and highlight opportunities for future development. While some algorithms, such as Decision Trees and k-Nearest Neighbours (KNN), are interpretable by design, there is no universal solution for explaining the outcomes of complex models. The paper proposes a conceptual framework for a common approach to XAI that addresses these challenges, providing clarity and consistency in decision explanations. Finally, it outlines future research directions to improve the interpretability and adoption of AI models across sectors.
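
The contrast the abstract draws can be illustrated with a short, self-contained sketch (not taken from the paper itself): a Random Forest, accurate but opaque, is explained post hoc by attributing an individual prediction to its input features with SHAP values. The dataset, model settings, and use of the `shap` library here are illustrative assumptions rather than the authors' methodology.

```python
# Illustrative sketch: post-hoc explanation of a "black-box" ensemble model
# with SHAP values. Assumes scikit-learn and the `shap` package are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an accurate but hard-to-interpret ensemble model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer assigns each feature an additive contribution (SHAP value)
# to an individual prediction, making the ensemble's output inspectable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, binary-classification output is either a
# list with one array per class or a single (samples, features, classes) array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Report the top contributing features for the first test instance.
contributions = vals[0]
ranked = sorted(zip(data.feature_names, contributions),
                key=lambda p: abs(p[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.4f}")
```

By contrast, a Decision Tree fitted to the same data could be read directly from its learned rules; that is the built-in interpretability the abstract sets against post-hoc explanation of complex models.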


How to Cite

Nafea, R. (2025). Towards Transparency in AI: A Review of Explainable AI (XAI) Approaches and Research Opportunities. JANOLI International Journal of Applied Engineering and Management, Issue 2.