EXPLAINABLE AI MODELS FOR HIGH-STAKES DECISION-MAKING IN FINANCE
- Authors
Dr. A. Sharma, Institute of Digital Intelligence, India
Dr. R. Miller, Center for Financial Computing, UK
Prof. L. Tan, Asian Institute of Data Innovation, Singapore
- Abstract
The rapid adoption of Artificial Intelligence (AI) in the financial sector has enabled faster, more accurate, and scalable decision-making. However, the high-risk nature of financial activities—including credit scoring, fraud detection, market prediction, and insurance underwriting—demands transparent and interpretable AI systems. Recent advances in Explainable Artificial Intelligence (XAI) provide new methodologies to understand, validate, and govern complex machine-learning models. This paper analyzes the key challenges of deploying AI in high-stakes finance, reviews state-of-the-art explainability techniques, and proposes a hybrid framework combining global and local interpretability. Experimental results on real-world financial datasets demonstrate that integrating explainability improves regulatory compliance, user trust, and model robustness without significantly compromising accuracy. Recommendations for responsible AI governance in financial systems are also provided.
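The global/local distinction at the heart of the proposed hybrid framework can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual framework or data: for a linear credit-scoring model, the fitted weights give a *global* view of feature influence, while the additive per-applicant terms x_i · w_i give a *local* explanation of one decision. The feature names and weight values below are assumed for illustration only.

```python
import numpy as np

# Hypothetical linear credit scorer (illustrative only; weights are assumed,
# not learned from real data).
feature_names = ["income", "debt_ratio", "late_payments"]
w = np.array([1.2, -0.8, -1.5])  # global interpretability: signed weights
b = 0.1

def score(x):
    """Probability-like credit score: sigmoid of the linear model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def local_contributions(x):
    """Local interpretability: additive per-feature contributions
    to the pre-sigmoid score for one applicant."""
    return x * w

applicant = np.array([0.9, 0.4, 2.0])
for name, c in zip(feature_names, local_contributions(applicant)):
    print(f"{name}: {c:+.2f}")
```

For nonlinear models the same additive decomposition is what attribution methods such as SHAP approximate; here the linearity makes the contributions exact, so they sum to the pre-sigmoid score minus the bias.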
- Published
- 2025-11-20
- Issue
- Vol. 1 No. 1 (2025)
- Section
- Articles
- License

This work is licensed under a Creative Commons Attribution 4.0 International License.
