Leveraging Advanced Data Science Pipelines and Explainable AI to Identify Systemic Vulnerabilities in Financial Transactions and Medical Billing Systems
Keywords: Systemic risk, Explainable AI, Fraud detection, Medical billing, Anomaly detection, Model interpretability

Abstract
Systemic vulnerabilities in financial and healthcare transaction systems present significant operational and compliance risks, particularly in increasingly digitized and automated environments. This research introduces a unified framework that integrates advanced data science pipelines with explainable artificial intelligence (XAI) to detect anomalies and reveal latent structural weaknesses in both financial and medical billing systems.
The study employs machine learning models with embedded interpretability mechanisms to enhance transparency in anomaly detection and fraud analytics. Through case-based analysis and model evaluation, the framework demonstrates improved identification of transaction irregularities and non-compliant patterns, offering both predictive accuracy and interpretive clarity. The findings suggest that incorporating XAI improves system auditability and supports regulatory compliance by giving stakeholders actionable insight into model behavior and the underlying risk factors. This approach contributes to more trustworthy and resilient digital infrastructure in high-stakes domains.
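For illustration, the following is a minimal sketch of the kind of pipeline the abstract describes: an unsupervised anomaly detector paired with a post-hoc explainer so that each flag comes with a feature-level justification. The choice of Isolation Forest and SHAP, the synthetic feature matrix, and all parameter values are assumptions made for this sketch; the abstract does not specify which models the framework uses.

    import numpy as np
    from sklearn.ensemble import IsolationForest
    import shap

    # Hypothetical transaction features; a real pipeline would use engineered
    # billing/transaction attributes (amounts, timing, provider codes, etc.).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    X[:20] += 6  # inject a small cluster of anomalous records

    # Unsupervised anomaly detection (model choice is an assumption).
    model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
    model.fit(X)
    flags = model.predict(X)  # -1 = flagged anomaly, 1 = normal

    # Post-hoc explanation: SHAP attributes each flagged record's anomaly
    # score to individual input features, which is what supports the
    # auditability claim above.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[flags == -1])
    print(shap_values.shape)  # (n_flagged_records, n_features)

Per-record attributions of this kind let an auditor or compliance officer see which features drove a flag on a given claim or transaction, rather than only that the record was flagged.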
License
Copyright (c) 2024 Fritz Otto Noah (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


