Risk Metrics for Cyber Inference Assessment

[Image: cyber threat graphic. Source: NCUA, https://www.ncua.gov/sites/default/files/inline-images/cyber-threat.jpg]

Presented: November 11, 2014 12:00 pm
Presented by: Dr. Kenric P. Nelson

The ability to accurately forecast the probability of potential cyber threats is critical for making decisions about appropriate defenses. Unfortunately, the typical assessment tools focus on the accuracy of the decisions rather than the accuracy of the models. Common examples include the confusion matrix of correct and incorrect decisions and the receiver operating characteristic (ROC) curve. Neither of these popular metrics measures the distance between the statistical model and the distribution of the test data used to assess the cyber defense system. The correct information-theoretic metric, the cross-entropy between the test data and the model, is unfortunately both unintuitive and extremely sensitive to outliers. Equivalent to the cross-entropy, but easier to understand and interpret, is the geometric mean of the probabilities the model reported for the events that actually occurred. Furthermore, the generalized mean can be used to modify the sensitivity to outliers, providing a spectrum that trades performance against tolerance of risk. Lowering the sensitivity is equivalent to increasing the tolerance for risk required to make a decision; raising the sensitivity is equivalent to reducing that tolerance in order to ensure the system is robust. A well-designed cyber defense system requires balancing decisiveness in reporting potential threats, accuracy in forecasting the probability of a threat, and robustness so that unforeseen outliers can be managed.
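The connection between these metrics is direct: the geometric mean of the N reported probabilities, (p_1 * ... * p_N)^(1/N), equals exp(-H), where H is the average cross-entropy -(1/N) * sum(ln p_i). The generalized (power) mean with exponent r recovers the geometric mean as r approaches 0; negative r emphasizes low-probability outliers (lower risk tolerance), while positive r discounts them (higher risk tolerance). A minimal Python sketch of this idea follows; the function name and sample probabilities are illustrative assumptions, not material from the presentation:

```python
import math

def generalized_mean(probs, r):
    """Generalized (power) mean of the probabilities a model assigned
    to the events that actually occurred.

    r = 0 gives the geometric mean, which equals exp(-cross-entropy).
    r < 0 weights low-probability outliers more heavily (robustness view).
    r > 0 discounts outliers (decisiveness view).
    """
    n = len(probs)
    if r == 0:
        # Geometric mean computed via logs for numerical stability
        return math.exp(sum(math.log(p) for p in probs) / n)
    return (sum(p ** r for p in probs) / n) ** (1.0 / r)

# Hypothetical probabilities reported for the true outcome of each test event,
# including one badly misjudged outlier (0.05).
reported = [0.9, 0.8, 0.85, 0.05]

print(generalized_mean(reported, 0))    # ~0.42: geometric mean (cross-entropy view)
print(generalized_mean(reported, -1))   # ~0.17: outlier dominates (low risk tolerance)
print(generalized_mean(reported, 1))    # ~0.65: outlier discounted (high risk tolerance)
```

With the outlier present, the r = -1 score falls well below the geometric mean while the r = +1 score barely registers it, which is the spectrum of performance versus risk tolerance described in the abstract.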

