Synergistic Architecture for Human-Machine Intrusion Detection


Posted: January 26, 2017 | By: Dr. Noam Ben-Asher, Paul Yu

Synergistic Analyst-in-the-Loop Cyber Detection

To formulate and demonstrate the synergistic architecture for intrusion detection, we consider the following simplified detection process. Cyber activity impacts the environment. Monitors measure the state of the environment and yield evidence. The evidence is provided to a detection mechanism (Inference), which evaluates hypotheses by assigning them weights.
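To ground this process, the following minimal Python sketch traces the chain from environment state, through monitoring and evidence, to weighted hypotheses. Everything here (the Evidence type, the infer rule, the syn_rate threshold) is a hypothetical illustration, not part of the architecture itself.

```python
from dataclasses import dataclass

# Hypothetical illustration of the simplified detection process:
# activity -> environment state -> monitors -> evidence -> weighted hypotheses.

@dataclass
class Evidence:
    source: str   # which monitor produced the measurement
    value: float  # the measured quantity (e.g., SYN packets/sec)

def monitor_environment(state: dict) -> list[Evidence]:
    """Monitors measure the state of the environment and yield evidence."""
    return [Evidence(source=k, value=v) for k, v in state.items()]

def infer(evidence: list[Evidence]) -> dict[str, float]:
    """The inference engine assigns a weight to each hypothesis.
    Here: a toy rule that weights a 'SYN flood' hypothesis by SYN rate."""
    syn_rate = sum(e.value for e in evidence if e.source == "syn_rate")
    return {"syn_flood": min(1.0, syn_rate / 10_000.0),
            "benign": max(0.0, 1.0 - syn_rate / 10_000.0)}

# Cyber activity impacts the environment; monitors observe its state.
state = {"syn_rate": 8_500.0, "cpu_load": 0.4}
weights = infer(monitor_environment(state))
print(weights)  # {'syn_flood': 0.85, 'benign': 0.15}
```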

Based on existing research on intrusion detection and human data interaction, our hypothesis is that having each component interact and share relevant information with the other components enables faster, better decisions about both new and old threats. We first introduce the components of the detection process and then discuss the interactions between them.

Components

The three central components of the detection framework are the Evidence Collection Mechanism (denoted by E), the Detection/Inference Engine (D), and the human analyst (A) (see Figure 3). The ultimate decision about threats is made by A, with support from D and E. These roles are sketched as code interfaces after the list below.

  • E, the evidence collector, manages the monitors that report information about the behavior of observable activities for use by the detection engine. The monitors can be deployed at the network level to inspect traffic (e.g., deep packet inspection) or at the host level to monitor processes. The information is then processed into the evidence requested by D or A.
  • E is aware of the types of evidence it can collect, as well as the cost of doing so. The evidence has many properties, such as update frequency, bit rate, variance, and reliability. E may adjust the frequency of evidence collection to trade accuracy against storage or bandwidth requirements. It may also track the reliability of the evidence, a metric that can vary in real time (e.g., due to interfering processes or network traffic). These capabilities allow E a variety of behaviors: for example, it can calculate the cost of deploying a proposed set of monitors, or it can notify the analyst when the reliability of the collected evidence has degraded.
  • D, the detection engine, processes the available evidence and generates likelihoods for possible threats. D's key capability is handling vast quantities of information. It can refine its detection and adapt to changes in the environment through human interaction (e.g., supervised machine learning [13]).
  • For a particular threat, D understands to some degree the relationship between the likelihood and evidence, e.g., as necessary or sufficient conditions for a threat. It is able to provide the human with an understanding of how the threat likelihoods are calculated. This implies that its internal logic can be shared with the human in a legible format.
  • A, the human analyst, decides which threats are occurring, generally with consideration given to the likelihoods computed by the detection engine.
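To make these roles concrete, the following Python sketch renders E, D, and A as abstract interfaces. This is a hypothetical rendering for illustration only; the method names (deploy, cost_of, reliability, infer, explain, update, decide) are assumptions, not part of the framework.

```python
from abc import ABC, abstractmethod

class EvidenceCollector(ABC):
    """E: manages monitors and turns their readings into evidence."""
    @abstractmethod
    def deploy(self, monitors: list[str]) -> None:
        """Deploy network- or host-level monitors."""
    @abstractmethod
    def cost_of(self, monitors: list[str]) -> float:
        """Estimate the cost of deploying a proposed set of monitors."""
    @abstractmethod
    def reliability(self, evidence_type: str) -> float:
        """Report the current reliability of an evidence stream,
        which can vary in real time."""

class DetectionEngine(ABC):
    """D: processes evidence into per-threat likelihoods."""
    @abstractmethod
    def infer(self, evidence: dict) -> dict[str, float]:
        """Generate likelihoods for possible threats from evidence."""
    @abstractmethod
    def explain(self, threat: str) -> str:
        """Expose, in legible form, how a threat's likelihood was computed."""
    @abstractmethod
    def update(self, threat: str, confirmed: bool) -> None:
        """Incorporate analyst confirmations or dismissals
        (e.g., as supervised-learning labels)."""

class Analyst(ABC):
    """A: decides which threats are occurring."""
    @abstractmethod
    def decide(self, likelihoods: dict[str, float]) -> dict[str, bool]:
        """Decide per threat, informed by D's likelihoods."""
```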

Rather than focusing on the structure of D and E, this note focuses on how the human analyst can interact with them. In the following, we introduce the supporting framework for these interactions.

Interactive Detection Workflow

Figure 3 illustrates the detection flow, where solid arrows indicate the direction of information flow from source to destination. The detection flow starts with network and host activities that are observed by a set of monitors. The evidence collector E deploys the monitors and converts the gathered information into evidence. The detection engine D infers which threats are likely given the provided evidence. The likelihoods are presented to the analyst as a set of weights.

The goal of the analyst is to observe the set of hypotheses and weights and decide what types of activities are occurring. To improve the performance (accuracy, efficiency, etc.) of the overall detection flow, the analyst can interact with E and D, as illustrated by the dashed lines in Figure 3. E and D can also interact directly; such interactions can propagate the analyst's interaction with one component to the other, or arise from the ongoing execution of a process. We formalize the interactions between the components of the detection process in terms of queries and operations.

Figure 3: Synergistic detection process flow of information (solid lines) and interactions (dashed lines)

Interactions

Interactions between detection processes fall into one of two possible categories:

  1. Queries – Any request for information that does not change the operation of a process. A query is always followed by an answer.
  2. Operations – Any request that changes how a process performs. An operation is always followed by feedback indicating, at a minimum, whether the operation completed successfully (both message types are sketched in code after this list).
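A minimal Python sketch of the two categories as message types, assuming a simple request/response discipline; the class and field names are illustrative assumptions, not part of the framework's specification.

```python
from dataclasses import dataclass

@dataclass
class Query:
    """A request for information; never changes how a process operates."""
    target: str    # e.g., "D" or "E"
    question: str  # e.g., "which evidence supported the alert for a3?"

@dataclass
class Answer:
    """Every query is followed by an answer."""
    query: Query
    payload: object

@dataclass
class Operation:
    """A request that changes how a process performs."""
    target: str    # e.g., "E"
    action: str    # e.g., "raise the sampling frequency of monitor m1"

@dataclass
class Feedback:
    """Every operation is followed by feedback on whether it completed."""
    operation: Operation
    success: bool
    detail: str = ""
```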

Within a synergistic detection framework, queries and operations can take many forms. Instead of enumerating all of them, we use the following simplified example to illustrate the flow of interaction and some of the most significant queries and operations; a code sketch of the full exchange follows the list.

  1. E deploys a set of monitors to track network activities.
  2. Based on the collected evidence, D provides A with alerts that activities a1 and a2 are likely, and that activity a3 is unlikely. Here, a1 represents an unauthorized privilege escalation; a2, an ongoing SYN flood attack in which a large volume of SYN packets is observed by E; and a3, malware beaconing to a control server.
  3. (a1) Based on experience, A knows that D generates accurate alerts for a1 and therefore confirms that a1 is true. This update interaction provides feedback to D that reinforces the mechanism that yielded a1 with high likelihood.
  4. (a2) A is aware of the current state of the network and the ephemeral tasks the network supports. As such, A can use this contextual information to dismiss the alert regarding a2. Again, this interaction provides feedback to D and influences its future operation.
  5. (a3) With respect to a3, A queries D for details about the type of evidence D used when making the decision. Given the high risk of overlooking malware activity, A can submit a further query asking D what additional evidence is required to state with higher confidence whether a3 is occurring. Based on the response, A can instruct E, through an operation, to collect targeted evidence that resolves the ambiguity around a3. E processes the operation request and modifies the monitors to accommodate it. When complete, E informs A of the successful execution of the request and informs D of the modifications to the stream of evidence. Now, with the additional evidence, D can provide A a more confident alert regarding activity a3.
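The exchange above can be traced in a short Python sketch. All classes, methods, and numbers below (SimpleCollector, SimpleEngine, the likelihood values) are hypothetical stand-ins, used only to make the query/operation/feedback pattern concrete.

```python
class SimpleCollector:                      # E: the evidence collector
    def __init__(self):
        self.monitors = ["netflow"]
    def deploy(self, monitor: str) -> bool: # operation -> feedback
        self.monitors.append(monitor)
        return True                         # feedback: operation succeeded

class SimpleEngine:                         # D: the detection engine
    def __init__(self):
        self.likelihoods = {"a1": 0.9, "a2": 0.8, "a3": 0.5}
    def alerts(self) -> dict:
        return dict(self.likelihoods)
    def evidence_for(self, a: str) -> list:     # query -> answer
        return {"a3": ["beacon-interval timing"]}.get(a, [])
    def needed_evidence(self, a: str) -> list:  # query -> answer
        return ["dns-query logs"] if a == "a3" else []
    def feedback(self, a: str, confirmed: bool) -> None:
        # Analyst confirmation/dismissal reinforces or discounts
        # the mechanism that produced the alert.
        self.likelihoods[a] = 1.0 if confirmed else 0.0

E, D = SimpleCollector(), SimpleEngine()
alerts = D.alerts()                         # step 2: D alerts A
D.feedback("a1", True)                      # step 3: A confirms a1
D.feedback("a2", False)                     # step 4: A dismisses a2 from context
used = D.evidence_for("a3")                 # step 5: A queries D (evidence used)
needed = D.needed_evidence("a3")            # step 5: A asks what else is needed
ok = all(E.deploy(m) for m in needed)       # step 5: A operates on E
print(alerts, used, needed, ok)
```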
