A contextual framework to standardise the communication of machine learning cyber security characteristics

Omar Alshaikh, Simon Parkinson, Saad Khan

Research output: Contribution to journal › Article › peer-review

Abstract

The widespread integration of machine learning (ML) across diverse application domains has substantially impacted businesses and personnel. Notably, ML applications in cybersecurity have become increasingly prominent, reflecting a discernible trend towards adoption. However, decisions surrounding ML adoption are susceptible to external influences, potentially resulting in misinterpretation of ML capabilities. The communication used when incorporating ML into cybersecurity applications lacks standardisation and is shaped by factors such as personal experience, organisational reputation, and marketing strategies. Furthermore, the metrics used to assess model performance are context-dependent, inconsistent, and subjective, introducing uncertainty and the potential for misinterpretation. The variety of metrics allows capability to be communicated in different ways, often tied to a narrow use case, leading to a lack of certainty in their interpretation. Previous research has highlighted the need for a standardised approach. Building upon our earlier work, this paper first validates beneficiaries' perceptions of Machine Learning Cybersecurity (MLCS) capabilities and then consults domain experts through a focus group to elucidate a prototype standard for comprehending MLCS capabilities. The outcome offers a roadmap and an initial framework for understanding and effectively communicating MLCS capabilities in practical implementations.
Original language: English
Article number: 104015
Number of pages: 27
Journal: Computer Standards and Interfaces
Volume: 94
Early online date: 30 Apr 2025
Publication status: E-pub ahead of print, 30 Apr 2025