Abstract
Machine learning and deep learning are widely used to assist or even replace human reasoning in a variety of applications. For instance, a machine-learning-based intrusion detection system (IDS) monitors a network for malicious activity or policy violations. We propose that an IDS should attach a sufficiently understandable report to each alert so that the operator can review alerts more efficiently. This work complements an IDS with a framework for creating such explanations. The explanations support the human operator in understanding alerts and in revealing potential false positives. The focus lies on counterfactual instances and on explanations based on locally faithful decision boundaries.
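The counterfactual idea mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the toy IDS classifier, the feature names, and the search strategy (walking from the flagged instance toward a benign baseline until the predicted label flips) are all illustrative assumptions.

```python
def classify(features):
    """Hypothetical black-box IDS: flags a flow as malicious (1) when a
    weighted combination of packet rate and failed logins is high."""
    pkt_rate, failed_logins = features
    return 1 if (0.02 * pkt_rate + 0.5 * failed_logins) > 2.0 else 0


def counterfactual_toward(x, baseline, classify, steps=100):
    """Naive counterfactual search: interpolate from the alert instance x
    toward a known benign baseline and return the first point whose
    predicted label flips. That point answers 'what would have to change
    for this alert not to be raised?'."""
    for k in range(steps + 1):
        t = k / steps
        cand = [(1 - t) * xi + t * bi for xi, bi in zip(x, baseline)]
        if classify(cand) == 0:  # label flipped: counterfactual found
            return cand
    return None  # no flip along this path


alert = [120.0, 3.0]                 # flagged flow (hypothetical values)
benign_baseline = [0.0, 0.0]         # an instance known to be benign
cf = counterfactual_toward(alert, benign_baseline, classify)
```

Comparing `cf` with `alert` shows the operator which feature changes would suppress the alert; real counterfactual methods additionally minimize the distance of the change and keep the features plausible.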
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
About this paper
Cite this paper
Burkart, N., Franz, M., Huber, M.F. (2021). Explanation Framework for Intrusion Detection. In: Beyerer, J., Maier, A., Niggemann, O. (eds) Machine Learning for Cyber Physical Systems. Technologien für die intelligente Automation, vol 13. Springer Vieweg, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-62746-4_9
Publisher Name: Springer Vieweg, Berlin, Heidelberg
Print ISBN: 978-3-662-62745-7
Online ISBN: 978-3-662-62746-4
eBook Packages: Engineering (R0)