Machine learning is revolutionizing many industries and forever changing how we interact with computer systems and each other. The revolution is particularly relevant to cyber security, where machine learning helps human analysts make sense of increasingly large amounts of data to identify threats and mitigate vulnerabilities. While this symbiotic relationship is improving our security posture, it is fundamentally limited by humans making decisions at human speed. Significant gains in security can be realized if autonomy is embraced and “trusted” machine learning is given license to make actionable decisions at machine speed. Despite the potential benefits, there have been relatively few research efforts and applications of autonomous machine learning in cyber security. This webinar describes the benefits of machine learning-based approaches for autonomous control in the cyber domain. We discuss emerging autonomous machine learning technologies and their recent successes, the technical and non-technical challenges to overcome, and potential near-term applications to cyber security.
Read the companion CSIAC Report: https://www.csiac.org/csiac-report/learning-to-win-making-the-case-for-autonomous-cyber-security-solutions/
This question was asked during the webinar:
Is policy always decided by humans, or is it autonomously updated?
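To illustrate the distinction the question draws (a minimal Python sketch, not material from the webinar): in reinforcement learning, humans typically fix the objective through the reward signal, while the policy itself is updated autonomously from experience. The defender actions and hyperparameters below are assumptions chosen purely for illustration.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2       # assumed learning hyperparameters
ACTIONS = ["allow", "block", "quarantine"]  # assumed defender actions

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def choose_action(state):
    """Epsilon-greedy policy derived from the current value table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)       # occasionally explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update_policy(state, action, reward, next_state):
    """One Q-learning step: the policy improves with no human edits."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In this framing a human never edits the policy directly; each call to update_policy nudges the value table, and the policy derived from it, based solely on observed reward.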
This question was asked during the webinar:
Why is it that reinforcement learning can greatly improve cyber security? Can you comment further on the connection between the two?
This question was asked during the webinar:
Many cyber security concerns are not well defined in practice; take, for example, monitoring an organization's ecosystem. How do you think the autonomous cyber security problem can be framed?
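One common way to frame such an ill-defined monitoring problem (a sketch under assumed definitions, not the presenters' formulation) is as a Markov decision process: the defender observes a summary of the ecosystem's state, chooses a response action, and receives a reward encoding its security and availability objectives. The state features, dynamics, and reward weights below are all hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class EcosystemState:
    # Assumed, highly simplified view of an organization's ecosystem
    open_alerts: int = 0
    hosts_compromised: int = 0
    services_degraded: int = 0

class CyberDefenseEnv:
    """Toy MDP framing: state -> action -> (next state, reward)."""

    ACTIONS = ["monitor", "isolate_host", "patch", "reset_credentials"]

    def reset(self):
        self.state = EcosystemState()
        return self.state

    def step(self, action):
        s = self.state
        # Hypothetical dynamics: attacks arrive at random, defenses help.
        attacked = random.random() < 0.3
        compromised = s.hosts_compromised + (1 if attacked else 0)
        if action in ("isolate_host", "patch") and compromised > 0:
            compromised -= 1
        self.state = EcosystemState(
            open_alerts=s.open_alerts + (1 if attacked else 0),
            hosts_compromised=compromised,
            services_degraded=1 if action == "isolate_host" else 0,
        )
        # Reward trades security off against availability (assumed weights).
        reward = -10 * self.state.hosts_compromised - self.state.services_degraded
        return self.state, reward
```

The hard part in practice is exactly what the question implies: choosing state features, actions, and reward weights that faithfully capture an organization's real objectives.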
This question was asked during the webinar:
Is there a limit to the learning capacity of these algorithms? For example, if you pit a “Blue Team” algorithm against a “Red Team” algorithm, it sounds as if they would eventually exhaust all actions until a clear winner emerged, at which point learning would end. Can those algorithms then be transferred to a different “scenario” and continue their learning?
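A rough sketch of the self-play setup behind this question (assuming a zero-sum, two-agent training loop like those used in game-playing reinforcement learning; BlueAgent, RedAgent, and make_scenario are hypothetical interfaces, not a real library's API):

```python
# Hypothetical self-play loop; agent and scenario interfaces are
# assumed for illustration only.

def self_play(blue, red, scenario, episodes=1000):
    """Train a defender and an attacker against each other."""
    for _ in range(episodes):
        state = scenario.reset()
        done = False
        while not done:
            # Both agents act on the shared state; the environment
            # resolves the interaction and scores it for the defender.
            state, blue_reward, done = scenario.step(blue.act(state), red.act(state))
            blue.learn(blue_reward)
            red.learn(-blue_reward)  # zero-sum: red's gain is blue's loss
    return blue, red

# Transfer: reuse the trained agents in a different scenario and keep
# learning, rather than stopping once a clear winner emerges.
# blue, red = self_play(BlueAgent(), RedAgent(), make_scenario("enterprise_it"))
# blue, red = self_play(blue, red, make_scenario("industrial_network"))
```

In this framing, learning need not end when one side dominates: the trained agents' parameters can be carried into a new scenario as the starting point for further training, which is the transfer the question asks about.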
From Presenter:
We tried to keep the slides fairly high-level to provide an introduction to these topics, so if you would like to discuss anything in more depth, want some resources to learn more, or have any further questions, please reach out by submitting a comment here!