An Explainable AI Approach to Intrusion Detection Using Interpretable Machine Learning Models
Keywords:
Explainable AI (XAI), Intrusion Detection System (IDS), Interpretable Machine Learning, SHAP, Explainable Boosting Machine, Cybersecurity, Network Traffic Analysis

Abstract
Intrusion Detection Systems (IDS) are integral to cybersecurity, especially as cyber threats grow in complexity and frequency. While deep learning models have demonstrated high accuracy in identifying malicious activity, their black-box nature limits their adoption in sensitive domains that require transparency. This study introduces an Explainable Artificial Intelligence (XAI) framework that leverages interpretable machine learning models to detect intrusions in network traffic. We implement and evaluate Decision Trees, Random Forests augmented with SHAP (SHapley Additive exPlanations) analysis, and Explainable Boosting Machines (EBMs) on the benchmark NSL-KDD and CICIDS2017 datasets. Our methodology emphasizes both predictive performance and interpretability. Experimental results show that the proposed approach achieves a strong balance between detection accuracy and model transparency, making it suitable for operational environments where human analysts must understand and trust automated decisions. Furthermore, our analysis highlights the key features influencing predictions and demonstrates how interpretability can aid forensic analysis and regulatory compliance. This paper contributes a structured, explainable approach to intrusion detection that advances the field toward more trustworthy and accountable AI-based cybersecurity solutions.
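To make the methodology concrete, the sketch below illustrates the kind of pipeline the abstract describes: a Random Forest explained post hoc with SHAP, alongside a glass-box EBM from the interpret library. This is a minimal illustration under stated assumptions, not the paper's exact implementation; the file name nslkdd_preprocessed.csv, the label column, and all hyperparameters are placeholders.

```python
# Minimal sketch of the pipeline outlined in the abstract; NOT the
# paper's exact code. Assumptions: a preprocessed, fully numeric
# NSL-KDD-style CSV with a binary "label" column (0 = benign,
# 1 = attack); hyperparameters are illustrative placeholders.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

df = pd.read_csv("nslkdd_preprocessed.csv")  # hypothetical file name
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Black-box ensemble, explained post hoc with SHAP.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train, y_train)
explainer = shap.TreeExplainer(rf)           # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X_test)  # per-feature attributions per alert
shap.summary_plot(shap_values, X_test)       # global feature-importance overview

# Glass-box alternative: an EBM exposes per-feature shape functions
# directly, so no post-hoc explainer is required.
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)
global_explanation = ebm.explain_global()    # learned per-feature curves
```

In this setup, the SHAP summary plot and the EBM's global explanation play the role the abstract assigns to interpretability: surfacing which traffic features drive each detection so analysts can audit and trust the model's decisions.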