The Cyber Security Collaborative Research Alliance: Unifying Detection, Agility, and Risk in Mission-Oriented Cyber Decision Making

Published in the Journal of Cyber Security and Information Systems
Volume 5, Number 1: Cyber Science & Technology at the Army Research Laboratory (ARL)

Authors: Patrick McDaniel and Ananthram Swami
Posted: 01/23/2017

Detection

The goal of the detection thrust is to develop theories and models that relate the properties and capabilities of cyber threat detection and recognition processes/mechanisms to the properties of a malicious activity and to the properties of Army networks. More concretely, the goal is to determine whether there is an ongoing cyber threat that can negatively affect the operation and to provide assessments of: (i) the most likely threat; (ii) the impact it will have on the operation (e.g., leakage of data, system breakdown) in terms of increased cost or decreased payout; and (iii) the confidence in the assessment (based on the evidence collected). Detection is influenced by (i) the actions of the attacker and (ii) the dynamics of the environment (which can itself influence the attacker to behave in certain ways). The CRA’s efforts in detection to date have focused on three areas addressing the needs of operations: (a) advancing traditional intrusion detection, (b) understanding defenders’ decision processes, and (c) developing a science of evidence collection.
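
To make the three assessments concrete, the sketch below shows one way they could be computed from collected evidence with a simple Bayesian model. It is an illustration only: the threat hypotheses, priors, likelihoods, and impact values are invented placeholders and do not come from the CRA.

```python
# Illustrative sketch (not from the CRA): a minimal Bayesian assessment that,
# given collected evidence, reports (i) the most likely threat hypothesis,
# (ii) its expected impact on the operation, and (iii) a confidence value.
# All priors, likelihoods, and impact figures below are made-up placeholders.
import math

threats = {
    # hypothesis: (prior probability, assumed impact as fractional payout loss)
    "none":        (0.90, 0.00),
    "data_leak":   (0.06, 0.30),
    "system_down": (0.04, 0.70),
}

# P(observation | threat) for two hypothetical sensors -- placeholder values.
likelihood = {
    "ids_alert":     {"none": 0.05, "data_leak": 0.60, "system_down": 0.40},
    "high_exfil_bw": {"none": 0.02, "data_leak": 0.70, "system_down": 0.10},
}

def assess(evidence):
    """Return the most likely threat, its posterior, expected impact, and confidence."""
    post = {}
    for threat, (prior, _) in threats.items():
        p = prior
        for obs, seen in evidence.items():
            l = likelihood[obs][threat]
            p *= l if seen else (1.0 - l)
        post[threat] = p
    z = sum(post.values())
    post = {t: p / z for t, p in post.items()}
    expected_impact = sum(post[t] * threats[t][1] for t in post)
    entropy = -sum(p * math.log(p) for p in post.values() if p > 0)
    confidence = 1.0 - entropy / math.log(len(post))   # 1 when posterior is concentrated
    best = max(post, key=post.get)
    return best, post[best], expected_impact, confidence

print(assess({"ids_alert": True, "high_exfil_bw": True}))
```

In this framing, the confidence value falls as the posterior spreads across competing hypotheses, mirroring item (iii) above.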

Note that collecting data and transmitting it to a central fusion center can place demands on the already constrained communications media in tactical networks. Even local collection and processing may pose challenges in an environment that is often energy- and computing-constrained. Strategic networks may have greater resources, but they support a larger diversity of operations. One of the key investigations within the CRA is the calibration of the detection apparatus to the resource cost of the target network (tactical or strategic), based on an understanding of operational requirements.
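
The sketch below illustrates one possible form such calibration could take: choosing a detector's alert threshold by weighing the cost of shipping alerts over the target network's constrained links against the cost of missed detections. The score distributions and cost figures are assumptions made for the example, not CRA measurements.

```python
# Illustrative sketch: calibrating a detector's alert threshold against the
# resource budget of the target network. Reporting an alert consumes scarce
# tactical bandwidth, so the per-alert cost is higher there than in a
# strategic network; missed detections carry a fixed (assumed) mission cost.
# All distributions and cost figures are placeholders, not CRA results.
import numpy as np

rng = np.random.default_rng(0)
benign_scores = rng.normal(0.2, 0.1, 10_000)     # hypothetical detector scores
malicious_scores = rng.normal(0.7, 0.15, 500)

def expected_cost(threshold, alert_cost, miss_cost, base_rate=0.01):
    """Expected per-event cost of alerting on scores above `threshold`."""
    fp_rate = np.mean(benign_scores >= threshold)
    fn_rate = np.mean(malicious_scores < threshold)
    return (1 - base_rate) * fp_rate * alert_cost + base_rate * fn_rate * miss_cost

def calibrate(alert_cost, miss_cost):
    thresholds = np.linspace(0.0, 1.0, 101)
    costs = [expected_cost(t, alert_cost, miss_cost) for t in thresholds]
    return thresholds[int(np.argmin(costs))]

# Tactical: each alert is expensive to ship over the constrained link.
print("tactical threshold: ", calibrate(alert_cost=5.0, miss_cost=100.0))
# Strategic: alerts are cheap, so the operating point shifts toward sensitivity.
print("strategic threshold:", calibrate(alert_cost=0.5, miss_cost=100.0))
```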

The operational model uses inputs from intrusion detection systems to infer the model state. However, current systems are limited in their accuracy and false-positive rates [25]. The team is examining several alternate models and scientific challenges to traditional detection. One alternate model developed within the CRA is diagnosis-enabling intrusion detection (DEID) [20]. Departing substantially from traditional signature- and anomaly-based detection, DEID infers high-level attacks and effects using correlations, automated reasoning, and forensic techniques. In DEID: (i) a large volume of data encompassing all levels of operation at each node (human actions, sensors, applications, OS, network behaviors) is collected across a multitude of monitors; (ii) the observed, correlated evidence is examined and mapped onto expected correlated behaviors derived from models of both the system and human actors; these mappings allow normal and attack behaviors to be distinguished with high accuracy (diagnosis); and (iii) if the system is unable to map the observed correlated behaviors to known attacks (e.g., a possible zero-day attack), the appropriate information is exported to the human defenders.
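
As a rough illustration of the mapping step in (ii) and (iii), the sketch below matches a set of observed, correlated events against expected behavior patterns and escalates anything it cannot explain. The event names and patterns are hypothetical and are not drawn from the cited DEID work.

```python
# Illustrative sketch of a DEID-style mapping step: correlated evidence from
# several observation levels (human, application, OS, network) is matched
# against expected behavior patterns; anything that matches no known pattern
# is exported to the human defenders. Event names and patterns are invented
# for illustration only.

# Known correlated-behavior patterns: each is a set of events that, observed
# together, diagnoses a (benign or malicious) behavior with high confidence.
PATTERNS = {
    "benign_software_update": {"user_initiated_download", "signed_binary_write", "service_restart"},
    "credential_theft":       {"phishing_click", "lsass_read", "outbound_auth_burst"},
    "data_exfiltration":      {"bulk_file_read", "archive_created", "large_outbound_transfer"},
}

def diagnose(observed_events):
    """Map a correlated evidence set onto known behaviors, or escalate it."""
    observed = set(observed_events)
    matches = [name for name, pattern in PATTERNS.items() if pattern <= observed]
    if matches:
        return {"diagnosis": matches, "escalate": False}
    # No known mapping (possibly a zero-day): export the evidence to defenders.
    return {"diagnosis": None, "escalate": True, "evidence": sorted(observed)}

print(diagnose({"phishing_click", "lsass_read", "outbound_auth_burst", "dns_lookup"}))
print(diagnose({"unsigned_driver_load", "kernel_callback_install"}))
```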

Another effort seeks to expand and formalize the science of detection by exploring the vulnerabilities and countermeasures inherent to the underlying machine learning algorithms upon which most detection systems are based. In particular, we are developing intrusion detection techniques that will be robust in the face of adversaries, work with limited information, and greatly reduce the attack surface that adversaries may leverage undetected. For example, we have developed novel algorithms and defenses for adversarial samples, i.e., adversarially crafted detection sensor inputs that use model error to bypass detection [39][30]. In defining the new science of this area, we introduced a taxonomy formalizing the space of adversaries targeting deep neural networks used for classification tasks [35]. We then investigated the case of source-target misclassification: forcing the targeted classifier to misclassify samples from any source class into any chosen adversarial target class. Our algorithms exploit a precise understanding of the sensitivity of the mapping between inputs and outputs using the forward derivative (the Jacobian of the model learned by the DNN classifier). Adversarial saliency maps build on the forward derivative to compute a score indicating the likelihood that each input component contributes to the adversarial goal of source-target misclassification. Perturbations are iteratively selected using adversarial saliency maps and added to the sample until it becomes adversarial, i.e., misclassified by the deep neural network.

We have extended that work to create defenses against such attacks [36]. The intuition is to make the models learned by deep neural networks smoother, increasing the average minimal perturbation magnitude an attacker must introduce to craft adversarial samples. This minimal perturbation characterizes a neighborhood around each point in which the model's decision is constant, which in turn defines a robustness metric for detection models. We proposed the use of defensive distillation to increase model robustness. Distilled models are thus harder to attack: adversarial samples must be perturbed more heavily before the model misclassifies them. We are applying similar approaches and validation techniques to other machine learning techniques and measuring the resilience of detection systems against these attacks.
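
The sketch below illustrates the mechanics of the forward-derivative/saliency-map approach on a tiny softmax model rather than a deep neural network: the Jacobian of the model's outputs with respect to its inputs is used to score input components, and the most salient component is perturbed until the prediction flips. The model weights are random placeholders; this is a sketch of the idea, not a reproduction of the cited algorithms.

```python
# Illustrative sketch of a Jacobian-based saliency attack on a tiny softmax
# model. Weights are random stand-ins for a "trained" detector.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_features))    # stand-in "trained" model
b = np.zeros(n_classes)

def predict_proba(x):
    z = W @ x + b
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian(x):
    """Forward derivative: d p_c / d x_i for the softmax model (analytic form)."""
    p = predict_proba(x)
    return p[:, None] * (W - (p @ W)[None, :])  # shape (n_classes, n_features)

def saliency_attack(x, target, max_steps=50, theta=0.1):
    """Perturb the most salient feature each step until the target class wins."""
    x = x.copy()
    for _ in range(max_steps):
        if predict_proba(x).argmax() == target:
            break
        J = jacobian(x)
        d_target = J[target]
        # Saliency: for a softmax output the per-feature derivatives across
        # classes sum to zero, so raising the target class necessarily lowers
        # the others; the score reduces to the positive target derivative.
        scores = np.where(d_target > 0, d_target, 0.0)
        scores[x >= 1.0] = 0.0                  # skip features already saturated at 1
        if scores.max() <= 0:
            break
        i = int(scores.argmax())
        x[i] = min(x[i] + theta, 1.0)
    return x

x0 = rng.uniform(size=n_features)
source = int(predict_proba(x0).argmax())
x_adv = saliency_attack(x0, target=(source + 1) % n_classes)
print("source class:", source, "-> now classified as:", int(predict_proba(x_adv).argmax()))
print("total perturbation (L1):", float(np.abs(x_adv - x0).sum()))
```

The total perturbation printed at the end corresponds to the minimal-perturbation notion used above as a robustness metric: a smoother (e.g., distilled) model would force this quantity to grow before the prediction flips.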

A challenge for intrusion detection systems is the integration of hard-earned information that is relevant to an attack but not measurable at run time. Recent advances by CRA PIs in Learning Using Privileged Information (LUPI) [40] provide some insights.
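
As a rough sketch of the LUPI idea, the example below trains a "teacher" with access to a privileged feature that is unavailable at run time and then has a run-time-only "student" regress onto the teacher's soft outputs. This is a generalized-distillation flavor of LUPI, not the SVM+ formulation of the cited work, and the synthetic data and feature split are assumptions made purely for illustration.

```python
# Illustrative sketch of the LUPI idea: a "teacher" trained with a privileged
# feature (known only at training time, e.g., from post-incident forensics)
# guides a "student" that must operate on run-time features alone.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(2)
n = 2000
runtime = rng.normal(size=(n, 5))                        # measurable at run time
privileged = (runtime[:, :2] @ np.array([1.5, -2.0])     # training-time-only detail
              + rng.normal(scale=0.1, size=n)).reshape(-1, 1)
y = (privileged[:, 0] + 0.3 * runtime[:, 2] > 0).astype(int)

# Teacher sees run-time AND privileged features (possible only at training time).
teacher = LogisticRegression().fit(np.hstack([runtime, privileged]), y)
soft_targets = teacher.predict_proba(np.hstack([runtime, privileged]))[:, 1]

# Student sees only run-time features but regresses onto the teacher's soft scores.
student = Ridge().fit(runtime, soft_targets)
baseline = LogisticRegression().fit(runtime, y)          # no privileged information

x_new = rng.normal(size=(3, 5))                          # deployment: run-time only
print("student scores: ", np.round(student.predict(x_new), 2))
print("baseline scores:", np.round(baseline.predict_proba(x_new)[:, 1], 2))
```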

Beyond the detection algorithms, the CRA is exploring the “evidence” collection processes and systems; the quality of any detection system is critically defined by the completeness and accuracy of its sensor inputs. The CRA addresses several challenges in configuring evidence collection in military systems. First, monitor placement is often ad hoc and accidental. Large, complex environments can contain thousands of devices and services with subtle interactions and behaviors. How one places sensors in these environments is key to obtaining an accurate view of the environment’s state. The team is studying measures of coverage and developing algorithms for sensor placement (minimal number, optimal locations), e.g., [26]. The team is exploring several placement strategies, including max-coverage, min-resource, and game-theoretic approaches. Such algorithms are being developed for both static and dynamic placement; the latter is a form of system agility discussed below.
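
As one example of the max-coverage strategy mentioned above, the sketch below greedily selects k monitor locations that together observe the most assets; because coverage is submodular, the greedy choice carries the classic (1 - 1/e) approximation guarantee. The candidate locations and assets are invented placeholders, not taken from [26].

```python
# Illustrative sketch of a max-coverage monitor-placement heuristic: greedily
# pick k locations that together observe the most assets.

def greedy_placement(coverage, k):
    """coverage: {candidate_location: set of assets it observes}; choose k locations."""
    chosen, covered = [], set()
    candidates = dict(coverage)
    for _ in range(k):
        best = max(candidates, key=lambda loc: len(candidates[loc] - covered), default=None)
        if best is None or not (candidates[best] - covered):
            break                         # nothing left adds coverage
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen, covered

# Hypothetical candidate monitor locations and the assets each can observe.
coverage = {
    "gateway_tap":  {"srv1", "srv2", "ws1", "ws2"},
    "core_switch":  {"srv1", "srv3", "db1"},
    "dmz_sensor":   {"web1", "web2", "srv2"},
    "host_agent_3": {"ws3"},
}
print(greedy_placement(coverage, k=2))
```

A dynamic variant would rerun the same selection as the asset set, topology, or threat picture changes, which is the agility angle noted above.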


References

  1. K.B. Alexander. Warfighting in cyberspace. Joint Force Quarterly, Issue 46, July 2007.
  2. T. Azim, I. Neamtiu, and L. Marvel. Towards self-healing smartphone software via automated patching. Proc. 29th IEEE/ACM International Conference on Automated Software Engineering (New Ideas Track), ASE 2014, September 2014.
  3. N. Ben-Asher, A. Oltramari, R. Erbacher, and C. Gonzalez. Ontology-based Adaptive Systems of Cyber Defense. Proc. 10th International Conference on Semantic Technology for Intelligence, Defense, and Security (STIDS), 2015.
  4. D.P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2005.
  5. D.P. Bertsekas and S.E. Shreve. Stochastic Optimal Control: The Discrete Time Case. Academic Press, 2007.
  6. M. Cains, D. Henshel, B. Hoffman, and C. Sample. Integrating Cultural Factors into Human Factors Framework for Cyber Attackers. Proc. 7th Intl. Conf. on Applied Human Factors and Ergonomics (AHFE), 2016.
  7. Y. Cao, Z. Qian, Z. Wang, T. Dao, S.V. Krishnamurthy, and L.M. Marvel. Off-Path TCP Exploits: Global Rate Limit Considered Dangerous (CVE-2016-5696). Proc. USENIX Security 2016, Austin, TX, 2016.
  8. Z.B. Celik, N. Hu, Y. Li, N. Papernot, P. McDaniel, J. Rowe, R. Walls, K. Levitt, N. Bartolini, T.F. La Porta, and R. Chadha. Mapping Sample Scenarios to Operational Models. Proc. IEEE Military Communications Conference (MILCOM), Baltimore, MD, Nov 2016.
  9. J.-H. Cho, H. Cam, and A. Oltramari. Effect of personality traits on trust and risk to phishing vulnerability: Modeling and analysis. Proc. IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA 2016), San Diego, CA, March 21-25, 2016.
  10. Cyber-Security Collaborative Research Alliance. Webpage, August 2016.
  11. Cyber virtual ad hoc network (CyberVan). http://www.appcomsci.com/research/tools/cybervan [Online; accessed 5-September-2016].
  12. G. Deckard and L.J. Camp. Measuring efficacy of a classroom training week for a military cybersecurity training exercise. Proc. IEEE International Conference on Technologies for Homeland Security, Waltham, MA, 10-16 May 2016.
  13. K. Durkota, V. Lisy, B. Bosansky, and C. Kiekintveld. Optimal network security hardening using attack graph games. Proc. IJCAI, 2015.
  14. X. Feng, Z. Zheng, D. Cansever, A. Swami, and P. Mohapatra. Stealthy Attacks with Insider Information: A Game Theoretic Model with Asymmetric Feedback. Proc. IEEE MILCOM 2016, Baltimore, MD, Nov 2016.
  15. X. Feng, Z. Zheng, P. Hu, D. Cansever, and P. Mohapatra. Stealthy Attacks Meets Insider Threats: A Three-Player Game Model. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  16. C. Gonzalez, N. Ben-Asher, J. Martin, and V. Dutt. A cognitive model of dynamic cooperation with varied interdependency information. Cognitive Science, 39, 457-495, 2015.
  17. C. Gonzalez, N. Ben-Asher, A. Oltramari, and C. Lebiere. Cognition and technology. In Cyber Defense and Situational Awareness, pp. 93-117. Springer International Publishing, 2014.
  18. D. Henshel, A. Alexeev, M. Cains, B. Hoffman, I. Neamtiu, and J. Rowe. Modeling cybersecurity risks: Proof of concept of a holistic approach for integrated risk quantification. Proc. IEEE Intl. Symp. on Technologies for Homeland Security (HST), 2016.
  19. D. Henshel, M. Cains, B. Hoffman, and T. Kelley. Trust as a human factor in cyber security risk assessment. Proc. 6th Intl. Conf. on Applied Human Factors and Ergonomics (AHFE), July 2015.
  20. D. Henshel, G. Deckard, B. Lufkin, N. Buchler, B. Hoffman, L. Marvel, S. Cannello, and P. Rajivan. Predicting Proficiency in Cyber Defense Team Exercises. Submitted to IEEE Military Communications Conference (MILCOM), 2016.
  21. C. Jackson, R. Erbacher, S. Krishnamurthy, K. Levitt, L. Marvel, J. Rowe, and A. Swami. A Diagnosis-based Approach to Intrusion Detection. Proc. 20th European Symposium on Research in Computer Security (ESORICS 2015), Vienna, Austria, 2015.
  22. S. Jajodia, A.K. Ghosh, V. Swarup, C. Wang, and X.S. Wang, Eds. Moving Target Defense: Creating Asymmetric Uncertainty for Cyber Threats, volume 54. Springer Science & Business Media, 2011.
  23. S. Jajodia, A.K. Ghosh, V.S. Subrahmanian, V. Swarup, C. Wang, and X.S. Wang, Eds. Moving Target Defense II: Application of Game Theory and Adversarial Modeling. Springer Science & Business Media, 2013.
  24. D.N. Jones and D.L. Paulhus. Introducing the Short Dark Triad (SD3): A brief measure of dark personality traits. Assessment, 21(1):28-41, 2014.
  25. T. Kelley and B. Bertenthal. Attention and past behavior, not security knowledge, modulate users’ decisions to login to insecure websites. Information and Computer Security, 24(2), 2016.
  26. R.A. Kemmerer and G. Vigna. Intrusion Detection: A Brief History and Overview. IEEE Security & Privacy, 2002.
  27. K. Khalil, Z. Qian, P. Yu, S. Krishnamurthy, and A. Swami. Optimal Monitor Placement for Detection of Persistent Threats. Proc. IEEE GLOBECOM, Washington, DC, 4-8 Dec 2016.
  28. A. Kusum, I. Neamtiu, and R. Gupta. Adapting graph application performance via alternate data structure representation. Proc. 5th International Workshop on Adaptive Self-tuning Computing Systems, 2015.
  29. L. Marvel, S. Brown, I. Neamtiu, R. Harang, D. Harman, and B. Henz. A framework to evaluate cyber agility. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  30. P. McDaniel, T. Jaeger, T.F. La Porta, N. Papernot, R. Walls, A. Kott, I. Neamtiu, L. Marvel, A. Swami, P. Mohapatra, and S. Krishnamurthy. Security and Science of Agility. Proc. First ACM Workshop on Moving Target Defense, 2014.
  31. P. McDaniel, N. Papernot, and Z.B. Celik. Machine learning in adversarial settings. IEEE Security & Privacy, 2016.
  32. P. McDaniel, B. Rivera, and A. Swami. Toward a Science of Secure Environments. IEEE Security & Privacy Magazine, 12(5), July-August 2014.
  33. A. Oltramari, L.F. Cranor, R. Walls, and P. McDaniel. Computational Ontology of Network Operations. Proc. IEEE Military Communications Conference (MILCOM), Tampa, FL, October 2015.
  34. A. Oltramari, D. Henshel, M. Cains, and B. Hoffman. Towards a Human Factors Ontology for Cyber Security. Proc. Semantic Technology for Intelligence, Defense, and Security (STIDS), 2015.
  35. N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z.B. Celik, and A. Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
  36. N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. Proc. IEEE European Symposium on Security and Privacy, Mar 2016.
  37. N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. Proc. IEEE Symposium on Security and Privacy, May 2016.
  38. C. Sample. Cyber + Culture Early Warning Study. CMU/SEI-2015-SR-025, 2015. Online at: http://resources.sei.cmu.edu/asset_files/SpecialReport/2015_003_001_449739.pdf
  39. Z. Shan, I. Neamtiu, Z. Qian, and D. Torrieri. Proactive restart as cyber maneuver for Android. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  40. R. Shay, L. Bauer, N. Christin, L.F. Cranor, A. Forget, S. Komanduri, M.L. Mazurek, W. Melicher, S.M. Segreti, and B. Ur. A Spoonful of Sugar?: The Impact of Guidance and Feedback on Password-Creation Behavior. Proc. 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2903-2912. ACM, 2015.
  41. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. Proc. 2014 International Conference on Learning Representations. Computational and Biological Learning Society, 2014.
  42. V. Vapnik and R. Izmailov. Learning using privileged information: Similarity control and knowledge transfer. Journal of Machine Learning Research, pp. 2023-2049, 2015.
  43. H. Zhang, D. She, and Z. Qian. Android root and its providers: A double-edged sword. Proc. 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, pp. 1093-1104, 2015.
  44. B. Zhou, I. Neamtiu, and R. Gupta. How Do Bug Characteristics Differ Across Severity Classes: A Multi-platform Study. Proc. 26th IEEE International Symposium on Software Reliability Engineering, November 2015.

Authors

Patrick McDaniel
Patrick McDaniel is a Distinguished Professor in the School of Electrical Engineering and Computer Science at Pennsylvania State University, Fellow of the IEEE and ACM, and Director of the Institute for Networking and Security Research. Professor McDaniel is also the program manager and lead scientist for the Army Research Laboratory's Cyber-Security Collaborative Research Alliance. Patrick's research focuses on a wide range of topics in computer and network security and technical public policy. Prior to joining Penn State in 2004, he was a senior research staff member at AT&T Labs-Research.
Ananthram Swami
Ananthram Swami is with the US Army Research Laboratory and is the Army's Senior Research Scientist (ST) for Network Science. Prior to joining ARL, he held positions with Unocal Corporation, USC, CS-3, and Malgudi Systems. He was a Statistical Consultant to the California Lottery and developed a MATLAB-based toolbox for non-Gaussian signal processing. He has held visiting faculty positions at INP Toulouse and currently holds one at Imperial College London. Swami's work is in the broad area of network science, including network security. He is an ARL Fellow and a Fellow of the IEEE.
