Towards Standardising the Evaluation and Communication of Machine Learning Cyber Security Solutions

  • Omar Alshaikh

Student thesis: Doctoral Thesis

Abstract

Machine learning (ML) utilisation has achieved a vast global impact. This is evident in the cybersecurity sector, where ML has wide-ranging applications, such as identifying and blocking threats, uncovering unusual software and user behaviour, and many others. However, the increase in successful cyberattacks raises questions about the true effectiveness of ML in cybersecurity applications. Although the attacks may be new, ML is often adopted for its ability to handle diverse and often unforeseen situations – a capability not possible with traditional rule-based security mechanisms. As both the rate of attacks and the adoption of ML solutions are increasing, there is a need to determine whether ML-based security solutions are meeting the expectations of businesses and whether businesses are genuinely aware of ML's capabilities and limitations. This PhD thesis addresses the critical issue of evaluating and communicating the capabilities of Machine Learning in Cybersecurity (MLCS) applications, acknowledging a gap between optimistic portrayals and real-world limitations. Employing a systematic approach, the study aims to investigate AI/ML implementation trends, evaluate the orientation of MLCS applications, and propose a key set of criteria to enhance the communication of MLCS application capabilities. The study's methodology involves an extensive literature review and a two-phase primary data collection. Through interviews with diverse participants, the research constructs an initial framework. The findings reveal discrepancies in understanding ML capabilities, emphasising the need for a standardised approach to communicating MLCS application capabilities. Analysis of focus group discussions refines the initial framework, establishing effective communication criteria for MLCS applications.
This research contributes by summarising existing knowledge on MLCS metrics, developing a communication pattern to align business expectations, and enhancing awareness among beneficiaries for an optimal return on investment. Additionally, it proposes a standard to prevent misinterpretation of AI capabilities and offers a framework of critical indicators for MLCS communication and implementation. A limitation of the study is the limited extent of in-field implementation, which may involve additional requirements. This study sets the stage for continued exploration of MLCS communication, providing valuable insights for both cybersecurity practitioners and researchers. Future research aims to collaborate with professional cybersecurity entities to test the framework in real-life scenarios and implement possible changes in multiple aspects.
Date of Award: 20 Dec 2024
Original language: English
Supervisors: Simon Parkinson (Main Supervisor) & Andrew Crampton (Co-Supervisor)
