The Cyber Security Metrics Workshop is a one-day review of current trends in the policy, tools, and techniques that managers and cyber security professionals can use to measure their organization’s cyber security vulnerability. The workshop provides an overview of the current state of understanding of what is presently available and focuses on the research needed to develop robust, easy-to-use techniques for assessing and monitoring cyber preparedness and vulnerability.
Areas of particular interest to DoD are:
The workshop will feature technical papers, a mid-day panel of experts, and participants from the Air Force Research Laboratory (AFRL/Rome).
Presentations will be provided by representatives from OSD, AFRL, PACOM, MIT Lincoln Laboratory, the Cyber Research Institute, Raytheon, and SRC.
Cyber Security Metrics Workshop Agenda
November 12, 2014
Dr. Paul Losiewicz
0840-0915 Keynote: Making Cyber Metrics Measure Up
Chet Maciag, AFRL
In recent years the DoD has placed increasing emphasis on the need for quantifiable and measurable means of assessing the impact of its cyber science and technology investment. The 2011 Budget Control Act and Better Buying Power 3.0 emphasize Department fiscal austerity and Enhanced Reliance while employing effective processes for developing, acquiring, testing, evaluating, and maintaining systems over their entire life cycle. Carefully developed cyber metrics will play a key role in this task. Metric development has been a considerable challenge for government and industry alike, but we are now starting to make meaningful progress. This talk will explore the journey undertaken by the DoD and AFRL to develop metrics, provide some intermediate findings, and describe some useful ways to bound the application space for specific threats and capabilities.
Richard Aldrich, Booz Allen Hamilton
Too often, cybersecurity metrics reports are based either on whatever data was most readily available or on low-level data of a very technical nature. In such cases the recipient’s likely response is, “So what?” This presentation will address four key issues that help anticipate and answer that question.
First, know your audience. You can’t answer the “so what?” if you don’t understand to whom you’re presenting the metrics. Senior leaders need a different, usually much higher-level set of metrics than the staff who actually apply patches to systems. To develop meaningful metrics, one must understand what is important to one’s audience. If your data is low level but you’re delivering to a senior audience, you need to roll it up. If your audience has budgetary authority, you need to incorporate dollars into the metrics. If your audience has directive authority, you need to show what changes in policy are needed to respond to the identified problems.
Second, know your data. Much data comes with significant limitations. Ensure you understand those limitations so you don’t overplay your hand. Also understand that cybersecurity’s value accrues over time, so point-in-time metrics are generally less meaningful than trending metrics.
Third, know the key types of metrics. Metrics can be broadly grouped into four types: time, risk, ROI, and effectiveness. Each may be important to a particular audience or at a particular time. The first answers the question, “Are we going to make it to the goal line in time?” The second answers the question, “Are we adequately responding to the threat, vulnerability and potential impact?” The third answers the question, “If I had more money to spend, where should I put my next dollars?” or “If I have to save money by cutting a program, where will I be hurt the least?” The last type answers the question, “Are our mitigation and remediation efforts successfully producing the desired result?”
Fourth, keep the message clear. Once you understand all of the above, make sure the presentation is clear. Don’t muddle it with distracting colors, overly complex charts, techno-babble or overly complex math. Make the data speak for itself with clear, intuitive visualizations that plainly make the point.
Ben Pokines and Matt Sweeney, SRC, Inc.
Cybersecurity metrics often focus on compliance and exposure to risk based on factors such as the number of attack vectors and the duration of exposure to vulnerabilities. Based on trends published in reports such as the 2013 Verizon Data Breach Investigations Report (DBIR), current cybersecurity metrics practice needs to improve in order to detect cyber-attacks quickly and drive business action. Strong consensus needs to be built regarding adoption of a metrics-based continuous cybersecurity monitoring approach that closes the gap between security operations intelligence and business risk. We will discuss current gaps in cybersecurity metrics practice, identify potential methods to close the metrics gap, and share preliminary results gathered from deploying these methods internally at our organization.
Leonard Popyack, PhD, Associate Professor for Cyber Security, Utica College
This talk will summarize the findings of two workshops held in 2014 focused on cyber security hard problems, the associated research challenges, and select industry needs. Invited distinguished researchers and subject matter experts from industry briefed cyber security challenges in diverse fields such as defense/military, government, intelligence, financial, healthcare, medical equipment, transportation, and industrial control and power. Workshop participants produced detailed challenges for the cyber security field.
1100-1130 Cyber Assessment
Dr. Paula Donovan, MIT Lincoln Laboratory
The field of cyber assessments is developing quickly to meet the needs of the many new cyber technologies that are emerging, as well as of the users who must quickly adopt them. The presentation will begin with a discussion of the many challenges faced in cyber assessments, along with processes and approaches to assessment. Next, an overview of cyber ranges will be presented, including a snapshot of range capabilities and the process of designing and executing an assessment on a cyber range. Finally, the CYBERCOM-led effort to equip the new Cyber Protection Teams through a series of exercises on a cyber range will be presented.
Dr. Kenric P. Nelson, Raytheon Company
The ability to accurately forecast the probability of potential cyber threats is critical for making decisions regarding appropriate defenses. Unfortunately, assessment tools typically focus on the accuracy of the decisions rather than the accuracy of the models. Common examples include the confusion matrix of correct and incorrect decisions and the receiver operating characteristic (ROC) curve. Neither of these popular metrics measures the distance between the statistical models and the distribution of the test data used to assess the cyber defense system. Unfortunately, the correct information-theoretic metric – the cross-entropy between the test data and the model – is both unintuitive and extremely sensitive to outliers. Equivalent to the cross-entropy, but easier to understand and interpret, is the geometric mean of the probabilities reported for the actual event. Furthermore, the generalized mean can be used to modify the sensitivity to outliers, providing a spectrum of performance against tolerance of risk. Lowering the sensitivity is equivalent to increasing the tolerance of risk required to make a decision. Increasing the sensitivity is equivalent to reducing the tolerance for risk in order to ensure the system is robust. Well-designed cyber defense systems require a balance of decisiveness in reporting potential threats, accuracy in forecasting the probability of threat, and robustness so that unforeseen outliers can be managed.
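The relationships the abstract describes can be sketched in a few lines: the geometric mean of the reported probabilities equals exp(-cross-entropy), and the generalized (power) mean provides a tunable sensitivity to outliers. This is a minimal illustration of those standard identities, not an implementation from the talk; the sample probabilities are invented for demonstration.

```python
import math

def geometric_mean(probs):
    # exp of the mean log-probability; identical to exp(-cross_entropy)
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

def generalized_mean(probs, r):
    # Power mean of the reported probabilities.
    # r -> 0 recovers the geometric mean;
    # r < 0 weights outliers (low probabilities) more heavily (less risk tolerance);
    # r > 0 is more tolerant of outliers (more risk tolerance).
    if r == 0:
        return geometric_mean(probs)
    return (sum(p ** r for p in probs) / len(probs)) ** (1.0 / r)

# Illustrative probabilities a classifier reported for the events that actually occurred
probs = [0.9, 0.8, 0.85, 0.05]

cross_entropy = -sum(math.log(p) for p in probs) / len(probs)
assert abs(math.exp(-cross_entropy) - geometric_mean(probs)) < 1e-12

# Sweeping r traces out the spectrum of performance vs. tolerance of risk
for r in (-1, 0, 1):
    print(r, generalized_mean(probs, r))
```

Note how the single outlier (0.05) drags the r = -1 score far below the arithmetic mean (r = 1), which is exactly the sensitivity-versus-tolerance trade-off the abstract describes.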
Ross Roley, PACOM Energy Innovation Office Lead, SPIDERS Operational Manager
Cyber experimenters have faced a perplexing challenge when it comes to performance metrics: there is no commonly accepted standard for comparing systems’ resilience to a cyber-attack. Through a series of red-team experiments on DoD microgrids, both in labs and against live systems, researchers from U.S. Pacific Command have developed a metric that has proven useful and could be adopted as an industry standard. In this session, the metric will be introduced and explained via an example, followed by an honest description of its limitations.
John S. Bay, PhD, Executive Director Cyber Research Center
The need and desire for metrics on cybersecurity have been a priority request from OSD leadership for ten years. When “cyber” became a quasi-official warfighting “domain” a decade ago, major programs of record were categorized as “cyber” programs. As such, the programs needed quantitative program parameters so that DoD leadership could track financial progress, technical performance, and capability milestones. Those program parameters, though, surpassed what the science and the state of the art could provide. Eventually, the definition and standardization of workable cybersecurity metrics became a subject of study in itself. This talk will summarize the speaker’s experience with DoD needs for cybersecurity metrics, the S&T community’s suggestions, the current state of practice, and speculation on additional metrics for the future. In particular, metrics will be proposed that track capabilities, maturity, mission support, cost, and adversarial advantage.
1430-1520 Panel Discussions
Moderator: Dr. Paul Losiewicz, Quanterion Solutions, Inc.
Panelists: Richard Aldrich (Booz Allen Hamilton), Dr. Kenric Nelson (Raytheon), Dr. John Bay (Cyber Research Center), Dr. Len Popyack (Utica College)
1530-1600 Final Discussions/Action Items/Conclusion