Explainable Fraud and Health Anomaly Detection via SMOTE-Enhanced Deep Models

Authors

  • Max Bannett, University of Toronto
  • Felix Wagner, Department of Computer Science, Stanford University, Stanford, California

Keywords

Fraud Detection, Healthcare Anomaly Detection, SMOTE, Deep Learning, Explainable AI, Cross-Domain Modeling

Abstract

The rapid growth of digital finance, energy markets, and healthcare systems has increased the need for robust anomaly detection models that are both accurate and transparent. Traditional deep learning approaches face two critical limitations: severe class imbalance, which reduces sensitivity to rare but high-risk events, and their black-box nature, which impedes trust in high-stakes decision-making. This study proposes a unified framework that combines the Synthetic Minority Over-sampling Technique (SMOTE) with deep neural architectures and Explainable Artificial Intelligence (XAI) methods to address these challenges across two distinct domains: fraud detection in finance and energy transactions, and health anomaly prediction for conditions such as stroke and Alzheimer’s disease. The framework applies SMOTE variants to rebalance datasets, then trains hybrid deep models that combine convolutional and recurrent structures with attention mechanisms. Interpretability is achieved through SHAP values, LIME analysis, and Grad-CAM visualizations for medical imaging tasks. Evaluation is conducted with widely accepted metrics, including AUC, precision, recall, F1-score, and explanation fidelity, across multiple real-world datasets from financial transactions, energy market operations, and healthcare records. Results demonstrate that SMOTE-enhanced deep models outperform conventional baselines, improving recall by up to 18% in fraud detection and 15% in healthcare anomaly prediction without compromising precision. Explainability assessments reveal domain-relevant insights: transaction patterns and ESG-related anomalies in finance, and clinically interpretable risk factors in healthcare. Cross-domain transferability tests further show that anomaly detection strategies learned on financial datasets can inform healthcare models and vice versa, highlighting the potential of shared approaches to rare-event prediction. The study concludes that integrating oversampling, deep learning, and explainability yields a transparent and generalizable framework applicable across anomaly detection domains.
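
The pipeline's first step is rebalancing each dataset with SMOTE variants. A minimal sketch of that step in Python using the imbalanced-learn library; the synthetic stand-in data, feature count, imbalance ratio, and k_neighbors value are illustrative assumptions, not the paper's settings:

    import numpy as np
    from imblearn.over_sampling import SMOTE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))            # 1,000 transactions, 16 features
    y = (rng.random(1000) < 0.03).astype(int)  # ~3% minority (fraud) class

    # Rebalance the training split only; running SMOTE before the
    # train/test split would leak synthetic points into evaluation.
    X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
    print(f"minority share: {y.mean():.3f} -> {y_res.mean():.3f}")  # ~0.03 -> 0.50

    # Variants drop in the same way, e.g.:
    # from imblearn.over_sampling import BorderlineSMOTE
    # X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)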
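
The rebalanced data then trains hybrid deep models that combine convolutional and recurrent structures with attention. A hedged Keras sketch of one such architecture; the layer types follow the abstract's description, but every size and hyperparameter below is a placeholder rather than the authors' configuration:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_hybrid_model(seq_len: int, n_features: int) -> Model:
        """CNN -> LSTM -> self-attention -> sigmoid anomaly score."""
        inp = layers.Input(shape=(seq_len, n_features))
        # Convolutional block: local patterns within short windows.
        x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
        # Recurrent block: longer-range temporal dependencies.
        x = layers.LSTM(64, return_sequences=True)(x)
        # Self-attention over the recurrent states.
        x = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
        x = layers.GlobalAveragePooling1D()(x)
        out = layers.Dense(1, activation="sigmoid")(x)  # anomaly probability
        model = Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc"),
                               tf.keras.metrics.Recall(name="recall")])
        return model

    model = build_hybrid_model(seq_len=30, n_features=16)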
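
The SHAP attribution step can be sketched as follows. To keep the example self-contained it explains a gradient-boosting stand-in rather than the deep models above; for a neural network, shap.KernelExplainer or shap.DeepExplainer would play the same role, and the decision rule here is purely hypothetical:

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] > 1.8).astype(int)  # hypothetical rule: feature 0 drives the rare class

    clf = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X[:50])  # per-sample, per-feature attributions
    print(np.abs(shap_values).mean(axis=0))      # global importance ranking

For the medical imaging tasks the abstract mentions, Grad-CAM heatmaps over convolutional feature maps would replace these tabular attributions.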

Published

2025-08-08