Explainable AI (XAI): A Survey on Interpretability Techniques and Real-World Applications
Abstract
As artificial intelligence systems become increasingly complex, the demand for explainability and transparency has grown. This survey provides a comprehensive analysis of Explainable AI (XAI) techniques, including feature attribution methods, surrogate models, and inherently interpretable architectures. We discuss the importance of interpretability in high-stakes applications such as healthcare, finance, and law enforcement. The paper also examines open challenges, including the trade-off between accuracy and explainability, regulatory compliance, and user trust. We conclude with future research directions for enhancing the transparency and accountability of AI systems.
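To make two of the technique families named in the abstract concrete, the sketch below illustrates a global surrogate model combined with a coarse feature attribution: an opaque gradient-boosted classifier is approximated by a shallow decision tree trained on the black box's own predictions, and the tree's feature importances are read as attributions. This is an illustrative sketch only, not code from the surveyed paper; the synthetic dataset, model choices, and the fidelity measure are assumptions made for this example.

# Illustrative sketch (assumption, not the paper's method): a global surrogate
# that approximates a black-box classifier with a shallow decision tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic data standing in for a real high-stakes dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)

# 1) Train the "black box" whose behaviour we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2) Fit an interpretable surrogate on the black box's predictions,
#    not on the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) Fidelity: how closely the surrogate mimics the black box on the data.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# 4) A coarse global feature attribution read off the surrogate tree.
for i, importance in enumerate(surrogate.feature_importances_):
    print(f"feature_{i}: importance={importance:.3f}")

A high fidelity score indicates the interpretable tree is a faithful stand-in for the black box, so its splits and importances can be inspected in place of the opaque model; a low score means the surrogate's explanation should not be trusted.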
Published
2020-08-10
How to Cite
Kiran, D. B. (2020). Explainable AI (XAI): A Survey on Interpretability Techniques and Real-World Applications. Brazilian Journal of Computational Intelligence, 1(2). Retrieved from https://journals.jmlai.in/index.php/BJCI/article/view/2
Section
Articles