Explainable AI in Intrusion Detection Systems: Enhancing Transparency and Interpretability
Keywords:
Explainable AI, Intrusion Detection Systems, Transparency, Interpretability, LIME, SHAP
Abstract
Intrusion detection systems (IDS) are essential for protecting networks and systems against cyber-attacks. However, traditional IDS lack interpretability and transparency, which makes it difficult to understand how they reach their detection decisions. This article explores the use of Explainable Artificial Intelligence (XAI) techniques to improve the interpretability and transparency of intrusion detection systems. We propose an approach that trains a model on the NSL-KDD dataset to distinguish attacks from normal traffic and then applies XAI techniques in the post-modeling phase to explain the model's decisions. The LIME algorithm is used to provide understandable explanations of the simulation results. The results show that the model predicts attacks with an accuracy of 94% and normal traffic with an accuracy of 95%, while XAI improves the interpretability of the IDS. By integrating XAI into intrusion detection systems, we can give analysts the information they need to make informed decisions and increase confidence in network security.
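To make the post-modeling explanation step concrete, the following is a minimal sketch of a LIME-style local surrogate explanation, implemented directly rather than with the `lime` package. All specifics here are illustrative assumptions: synthetic tabular data stands in for NSL-KDD, and a random forest stands in for the IDS classifier described in the abstract.

```python
# Illustrative sketch of a LIME-style explanation for one prediction.
# Assumptions (not from the paper): synthetic data instead of NSL-KDD,
# a RandomForestClassifier instead of the actual IDS model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Synthetic stand-in for preprocessed NSL-KDD features (class 1 = "attack")
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(x, model, X_ref, n_samples=1000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x."""
    rng = np.random.default_rng(0)
    scale = X_ref.std(axis=0)
    # 1) Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    # 2) Query the black-box model for the "attack" probability
    p = model.predict_proba(Z)[:, 1]
    # 3) Weight perturbed points by proximity to x (RBF kernel)
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4) The surrogate's coefficients are the local feature attributions
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

weights = explain_instance(X[0], clf, X)
for i, w_i in enumerate(weights):
    print(f"feature_{i}: {w_i:+.3f}")
```

In practice one would use the `lime` library's tabular explainer on the trained IDS model, which additionally handles categorical NSL-KDD features (protocol type, service, flag) via discretization; the sketch above only conveys the core perturb-query-weight-fit idea.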