Analyzing risk is critical throughout the software acquisition lifecycle. System risk is typically assessed by conducting a penetration test, in which ethical hackers portray realistic threats against real systems by exploiting vulnerabilities. These tests are costly, limited in duration, and do not provide stakeholders with “what-if” analyses. To alleviate these issues, system models are used in emulation, simulation, and attack graph generators to enhance test preparation, execution, and supplementary post-test analyses.
This article describes a method for developing models that can be used to analyze risk in mixed tactical and strategic networks, which are common in the military domain.
Execution-Based Model Generation
ARL has developed a model creation methodology that uses data collected from several black-box emulation executions. The generated models take the form of decision trees or other algorithmic representations and formulas. This approach differs from traditional workflows, where models are created and tested before or alongside system development. The traditional approach suffers from lack of synchronization between the actual system and its models as requirements change, the high cost of manually developing high-accuracy models, and model unavailability (whether for legal reasons or because the models were never created). The novel approach instead starts with the end product (i.e., the executable code). This also makes it possible to extract additional, incidental behaviors, such as resilience to adversarial attacks. The methodology has been used to develop models that predict the success or failure of traffic hijacking attacks in mobile ad hoc networks (MANETs). Unlike previous work on MANET evaluation techniques, the developed models generalize across scenarios and can be used to assess risk in attack graphs.
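To make the idea concrete, the sketch below shows a one-level decision tree (a “stump”) learned from features logged across emulation runs and then used to predict attack success in an unseen scenario. The features, labels, and threshold-search logic are illustrative assumptions, not ARL’s actual dataset or algorithm.

```python
# Minimal sketch (not ARL's implementation): learn a one-level decision
# tree from per-run features and labels, then predict new scenarios.

def train_stump(samples, labels):
    """Find the feature/threshold split that best separates the labels."""
    best = None
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            # Predict success (1) when feature value <= threshold.
            preds = [1 if s[f] <= t else 0 for s in samples]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return f, t

# Each row: (attacker's hop distance to victim, attacker's neighbor count);
# label 1 means the traffic hijacking attack succeeded in that run.
runs = [(1, 3), (1, 2), (2, 5), (3, 4), (4, 1), (5, 2)]
outcomes = [1, 1, 1, 0, 0, 0]

feature, threshold = train_stump(runs, outcomes)

def predict(run):
    """Predicted attack outcome (1 = success) for an unseen scenario."""
    return 1 if run[feature] <= threshold else 0
```

In this toy data the stump learns that attacks launched close to the victim (hop distance ≤ 2) succeed; a real model would combine many more features and deeper trees.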
MANET Security Evaluation
Apart from penetration testing, simulation and emulation are commonly used to evaluate MANETs. Both consist of executing multiple scenarios with varying conditions, such as topology and routing protocol. During scenario execution, an attack is launched and performance data (delay, throughput, goodput, etc.) are measured (Pathan, 2016). Although this approach is non-exhaustive, well-designed scenarios may be credible sources for evaluating security (Andel & Yasinsac, 2006). Simulation is usually faster, but it is more prone to inaccuracies because its performance gains often rely on abstracted or otherwise incomplete models of the network stack and other processes. Emulation is capable of executing real binaries in real runtime environments (operating system, network stack, etc.), but only in real time. In either case, in addition to being non-exhaustive, the results do not generalize to untested scenarios.
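The distinction between the measured metrics can be made precise. The sketch below computes throughput and goodput from a per-packet log of an emulated scenario; the log format and field names are our illustrative assumptions (throughput counts all delivered bytes, goodput only useful application payload, excluding retransmissions and headers).

```python
# Illustrative metric computation over a per-packet scenario log.
# Each entry: (bytes_on_wire, payload_bytes, is_retransmission).

def throughput_and_goodput(packets, duration_s):
    """Return (throughput, goodput) in bytes/second over the scenario."""
    total = sum(p[0] for p in packets)                    # all delivered bytes
    useful = sum(p[1] for p in packets if not p[2])       # payload, no retransmits
    return total / duration_s, useful / duration_s

log = [
    (1500, 1448, False),
    (1500, 1448, False),
    (1500, 1448, True),   # retransmitted segment: throughput only
    (552, 500, False),
]
tput, gput = throughput_and_goodput(log, duration_s=1.0)
```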
Other evaluation techniques include formal methods and machine learning. The former are exhaustive approaches that describe systems mathematically and then, through rigorous analysis, prove or disprove security goals; these methods are tractable only for small, non-mobile networks (Andel & Yasinsac, 2007). Prior machine learning approaches focus only on high-level survivability metrics, such as the average number of critical links and surviving paths over multiple executions (Alsheikh, Lin, Niyato, & Tan, 2014; Wang, 2010). The novel methodology developed by ARL also uses machine learning, but it focuses on providing low-level details that are used to create attack graphs, assess risk, and identify mitigations.
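One way model outputs could feed an attack graph is to attach a model-estimated success probability to each attack step and score a path as the product of its steps. The graph, node names, and probabilities below are invented for illustration; the sketch also assumes an acyclic graph.

```python
# Hedged sketch: edges carry model-estimated probabilities that an attack
# step succeeds; path risk is the product of step probabilities.
# Graph structure and values are illustrative, not from the article.

graph = {
    "entry":    [("router_A", 0.8)],
    "router_A": [("server", 0.5), ("router_B", 0.9)],
    "router_B": [("server", 0.6)],
    "server":   [],
}

def max_path_risk(node, target):
    """Highest success probability over all attack paths node -> target
    (assumes the graph is acyclic)."""
    if node == target:
        return 1.0
    best = 0.0
    for nxt, p in graph[node]:
        best = max(best, p * max_path_risk(nxt, target))
    return best
```

Here the indirect path through router_B (0.8 × 0.9 × 0.6) outranks the direct one (0.8 × 0.5), the kind of “what-if” comparison penetration tests alone cannot provide.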
Figure 1. The experimentation workflow.
for attackNode in 1…10
    for topology in “chain” “connected_grid” “cycle” “star” “tree” “two-centroid” “wheel”
        for protocol in “OLSR” “OSPFv3MDR”
            for attack in “forwarding” “spoofing”
                runScenario($attackNode, $topology, $protocol, $attack)
Figure 2. Pseudocode for scenario executions.
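The nested loops of Figure 2 can be expressed compactly in Python; run_scenario here is a stand-in for the actual emulation driver, which the article does not specify, and the attack-node range 1…10 is assumed to be inclusive.

```python
# Runnable equivalent of the Figure 2 scenario sweep (280 scenarios total).
from itertools import product

ATTACK_NODES = range(1, 11)  # assuming 1..10 inclusive
TOPOLOGIES = ["chain", "connected_grid", "cycle", "star", "tree",
              "two-centroid", "wheel"]
PROTOCOLS = ["OLSR", "OSPFv3MDR"]
ATTACKS = ["forwarding", "spoofing"]

def run_all(run_scenario):
    """Invoke run_scenario once per scenario in the full parameter grid."""
    for node, topo, proto, attack in product(
            ATTACK_NODES, TOPOLOGIES, PROTOCOLS, ATTACKS):
        run_scenario(node, topo, proto, attack)
```

Enumerating the grid this way makes the scenario count explicit: 10 × 7 × 2 × 2 = 280 emulation executions per sweep.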