Explainable AI in Intrusion Detection Systems: Enhancing Transparency and Interpretability

Authors

  • Abdur Rehman, Punjab University College of Information Technology (PUCIT), Lahore, Pakistan
  • Amina Farrakh
  • Shan Khan, NCBA&E

Keywords

Explainable AI, Intrusion Detection Systems, Transparency, Interpretability, LIME, SHAP

Abstract

Intrusion detection systems (IDS) are essential for protecting networks and systems against cyber-attacks. However, traditional IDS lack transparency and interpretability, which makes it difficult to understand how they reach detection decisions. This article explores the use of Explainable Artificial Intelligence (XAI) techniques to improve the transparency and interpretability of intrusion detection systems. We propose an approach that trains a model on the NSL-KDD dataset to distinguish attacks from normal traffic and applies XAI techniques in the post-modeling phase to explain its decisions. The LIME algorithm is used to produce understandable explanations of the simulation results. The results show that the model detects attacks with 94% accuracy and normal traffic with 95% accuracy, and that XAI improves the interpretability of the IDS. By integrating XAI into intrusion detection systems, we can give analysts the information they need to make informed decisions and increase confidence in network security.
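To illustrate the post-modeling explanation step the abstract describes, the sketch below implements a LIME-style local surrogate from scratch in NumPy: perturb an instance, query the black-box classifier, weight perturbations by proximity, and fit a weighted linear model whose coefficients act as local feature importances. This is not the authors' exact pipeline; the `ids_predict_proba` function and its two features are hypothetical stand-ins for a classifier trained on NSL-KDD, used here only to make the technique concrete.

```python
import numpy as np

# Toy stand-in for a trained IDS classifier: returns P(attack) from a
# weighted combination of two features (e.g. connection count, error rate).
# In the paper's setting this would be the model trained on NSL-KDD.
def ids_predict_proba(X):
    score = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0
    return 1.0 / (1.0 + np.exp(-score))

def lime_style_explanation(x, predict_proba, n_samples=5000, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    Returns one coefficient per feature: the local importance of that
    feature for the black-box model's attack probability near x.
    """
    rng = np.random.default_rng(seed)
    # 1) Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    # 2) Query the black-box model on the perturbed samples.
    y = predict_proba(Z)
    # 3) Weight each sample by its proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # 4) Weighted least squares fit of the local linear surrogate.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature weights (intercept dropped)

x = np.array([0.8, 0.3])  # one connection flagged as an attack
weights = lime_style_explanation(x, ids_predict_proba)
print(weights)  # feature 0 should dominate the local explanation
```

In practice the `lime` package (LimeTabularExplainer) performs these steps for tabular data like NSL-KDD, handling categorical features and discretization; the hand-rolled version above only shows the core idea of explaining individual attack/normal verdicts.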

Published

2023-06-30

Issue

Section

Articles

How to Cite

Explainable AI in Intrusion Detection Systems: Enhancing Transparency and Interpretability. (2023). International Journal of Advanced Sciences and Computing, 2(1), 7-20. http://ijasc.com/index.php/ijasc/article/view/37
