EXPLAINABLE AI MODELS FOR HIGH-STAKES DECISION-MAKING IN FINANCE

Authors
  • Dr. A. Sharma, Institute of Digital Intelligence, India
  • Dr. R. Miller, Center for Financial Computing, UK
  • Prof. L. Tan, Asian Institute of Data Innovation, Singapore
Abstract

The rapid adoption of Artificial Intelligence (AI) in the financial sector has enabled faster, more accurate, and more scalable decision-making. However, the high-risk nature of financial activities—including credit scoring, fraud detection, market prediction, and insurance underwriting—demands transparent and interpretable AI systems. Recent advances in Explainable Artificial Intelligence (XAI) provide new methodologies to understand, validate, and govern complex machine-learning models. This paper analyzes the key challenges of deploying AI in high-stakes finance, reviews state-of-the-art explainability techniques, and proposes a hybrid framework combining global and local interpretability. Experimental results on real-world financial datasets demonstrate that integrating explainability improves regulatory compliance, user trust, and model robustness without significantly compromising accuracy. Recommendations for responsible AI governance in financial systems are also provided.
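The hybrid global-plus-local interpretability idea described in the abstract can be illustrated with a minimal sketch. The dataset, model choice, and perturbation parameters below are illustrative assumptions, not the authors' actual experimental setup: a gradient-boosted classifier stands in for a credit-scoring model, permutation importance gives a global feature ranking, and a LIME-style linear surrogate fitted around one applicant gives a local explanation.

```python
# Hedged sketch: hybrid global + local interpretability on a synthetic
# credit-scoring task. All names and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global view: permutation importance ranks features across the whole dataset.
glob = permutation_importance(model, X, y, n_repeats=10, random_state=0)
global_ranking = np.argsort(glob.importances_mean)[::-1]

# Local view: a LIME-style linear surrogate fitted to the model's predicted
# probabilities on small perturbations around a single applicant x0.
x0 = X[0]
perturbed = x0 + rng.normal(scale=0.1, size=(200, X.shape[1]))
preds = model.predict_proba(perturbed)[:, 1]
surrogate = Ridge(alpha=1.0).fit(perturbed, preds)
local_weights = surrogate.coef_  # per-feature effect near x0
```

In a governance setting, the global ranking supports model validation and documentation, while the per-applicant local weights support adverse-action explanations; the abstract's framework presumably combines both views in a similar spirit.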

Published: 2025-11-20
Section: Articles
License

This work is licensed under a Creative Commons Attribution 4.0 International License.

How to Cite

EXPLAINABLE AI MODELS FOR HIGH-STAKES DECISION-MAKING IN FINANCE. (2025). Eureka Journal of Artificial Intelligence and Data Innovation, 1(1), 7-16. https://eurekaoa.com/index.php/11/article/view/46