BEC: Applying Behavioral Economics to Harden Cyberspace


Posted: February 10, 2016 | By: Victoria Fineberg

3 BEC Framework Structure

Human actors play various roles in cyberspace, and their biases affect different areas of risk. For the purpose of BEC, the three principal categories of cyberactors are Users, Defenders and Attackers. Users utilize cyberspace services and functions. Defenders create, operate and strengthen cyberspace. Attackers aim to penetrate cyber defenses and to exploit Users’ systems and data. The irrationalities of Users and Defenders represent cyber-system vulnerabilities; the irrationalities of Attackers are potential opportunities for mitigating threats. Table 1 summarizes the threats and vulnerabilities posed by cyberactors, as well as their mitigation with BEC-based countermeasures.

Table 1. Risk components in BEC


BEC affects two aspects of Risk Management: Risk Assessment and Risk Response. In Risk Assessment, BEC adds a specific focus on human vulnerabilities; in Risk Response, BEC controls human-related risk by using all three components of defense in depth (DID), i.e., people, operations, and technologies. In fact, when the causes of human decision-making biases are understood, operational and technical countermeasures are frequently more effective than purely human countermeasures such as training people or raising their awareness. Table 2 provides examples of human, operational and technical countermeasures that control various vulnerabilities and threats, emphasizing those caused by people.

This paper refines the BEC approach further by presenting it similarly to Zachman’s (1997) framework that organizes modeling artifacts along two dimensions, representative stakeholders and a set of perspectives. In the BEC framework, the stakeholders are Users, Defenders and Attackers, and the perspectives on their irrationality are the security services they undermine, i.e., Confidentiality, Integrity and Availability. Table 3 shows the structure of this framework.

Table 3. Structure of the BEC framework


The BEC framework will help decision makers prioritize the security services that humans put at risk and select corresponding countermeasures. The focus may differ among government agencies, whose highest priority is protecting the Confidentiality of their information; utilities, primarily concerned with maintaining the Integrity of their Supervisory Control And Data Acquisition (SCADA) systems; and online businesses, for whom Availability is a matter of survival. Section 4 provides an expanded approach to BEC, and Figure 3 shows the overall view of the BEC framework.


Figure 3.  Complete BEC framework

The BEC framework can be applied flexibly. Some organizations may focus on the threat entry points where human biases manifest themselves. For example, Users jeopardizing system Integrity by choosing weak passwords would be treated as a vulnerability of the User-Integrity (U-I) block of the framework. Others may consider the risk of human vulnerabilities primarily in terms of their impacts. For example, if Users’ weak passwords ultimately put information Confidentiality at risk, the focus would be on mitigation within the User-Confidentiality (U-C) block.
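As a minimal illustration (the data structure, names, and example entries below are assumptions for exposition, not part of the published framework), the actor-by-service grid of Table 3 can be captured in a few lines of code, with the weak-password example recorded either at its U-I entry point or at its U-C impact:

```python
# A minimal sketch of the BEC actor-by-service grid (Table 3).
# The dictionary layout and example entries are illustrative assumptions.

ACTORS = ("User", "Defender", "Attacker")
SERVICES = ("Confidentiality", "Integrity", "Availability")

# Each block collects the human vulnerabilities (or, for Attackers,
# exploitable biases) assigned to it during risk assessment.
bec_blocks = {(a, s): [] for a in ACTORS for s in SERVICES}

# Entry-point view: weak passwords recorded in the User-Integrity (U-I) block.
bec_blocks[("User", "Integrity")].append("weak passwords chosen by Users")

# Impact view: the same bias tracked where it ultimately hurts, U-C.
bec_blocks[("User", "Confidentiality")].append(
    "disclosure enabled by weak passwords")

for (actor, service), issues in sorted(bec_blocks.items()):
    if issues:
        print(f"{actor[0]}-{service[0]}: {'; '.join(issues)}")
```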

Table 2.  Human vulnerabilities and corresponding countermeasures

4 Refinement of the BEC Framework

The general principles of BEC can be further refined by illustrating how known cyberactor biases map into this framework. Table 4 expands Table 3 with examples of Users’ and Defenders’ vulnerabilities and of Attackers’ goals.

Table 4.  Examples of BEC applicability


In the Table 4 examples, the primary types of BEC actors (Users, Defenders and Attackers) are further classified into subcategories with the following characteristics.

  • Users. Vulnerabilities are associated with the use of cyber resources as Users carry out their business and leisure activities.
    • Individual. These users include individuals and small businesses in which personal and business technologies are not separated. Individuals choose and install their own hardware and software and provide limited defense of their own cyber environment. Their vulnerabilities are self-inflicted.
    • Organizational. These users differ from individual users in that their biases produce vulnerabilities for their organizations, thus creating a principal-agent problem.
    • External. Vulnerabilities (externalities) that individual and organizational users create for third parties, e.g., user devices joining botnets used to attack external targets.
  • Defenders. Vulnerabilities are associated with the protection of cyber resources.
    • Developer. Vulnerabilities are introduced before a system becomes operational.
    • Operator. Vulnerabilities are created in the process of defending an operational system. These vulnerabilities also include unintended consequences of organizational policies and procedures.
  • Attackers. Among various types of attackers, the following entities present distinct threats and may be controlled by different human-focused countermeasures: foreign government, foreign military, non-state combatant, business, criminal, hacker, and terrorist.
  • Insiders. From the cybersecurity perspective, insiders act as attackers; however, their behavior is influenced and mitigated by their organizations, and thus they are treated as Organizational Users rather than as a separate primary type.

5 Examples of Applying BE to BEC

5.1 Defender biases in Risk Assessment and Risk Response

In the seminal Behavioral Economics paper Judgment under Uncertainty: Heuristics and Biases (1974), Amos Tversky and Daniel Kahneman described the heuristic of adjustment from an anchor and the resulting biases of overestimating the probability of conjunctive events and underestimating the probability of disjunctive events. Influenced by the first of these biases, subjects who were given a high probability (90%) of a certain event consistently bet that the probability of this event occurring seven times in succession was still high. The second bias is complementary: after subjects were given a low probability (10%) of a certain event, they consistently bet that the probability of this event occurring at least once in seven consecutive experiments was still low.
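The arithmetic behind these bets is easy to verify; the short sketch below (not from the original paper, simply a check of the numbers it reports) computes both probabilities:

```python
# Conjunctive event: a 90%-likely event must occur 7 times in a row.
p_conjunctive = 0.9 ** 7
print(f"P(7 successes in a row) = {p_conjunctive:.3f}")    # 0.478, below 50%

# Disjunctive event: a 10%-likely event occurs at least once in 7 trials.
p_disjunctive = 1 - (1 - 0.1) ** 7
print(f"P(at least one occurrence) = {p_disjunctive:.3f}")  # 0.522, above 50%
```

Anchored on the 90% figure, subjects bet on a conjunction whose true probability is below one half; anchored on the 10% figure, they bet against a disjunction whose true probability is above one half.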

By extension, in cyber Risk Assessment, underestimating the probability of disjunctive events may lead to understating the risk when low-probability threats are introduced multiple times in a disjunctive fashion. Likewise, in cyber Risk Response, a series of conjunctive controls may be psychologically anchored on the high effectiveness of individual controls, without recognition of the lower level of protection they provide collectively. Underestimation of risks and overestimation of controls are likely to influence intuitive security decisions. However, these errors of judgment may also affect formal Risk Management processes in which decision makers hold preconceived notions. This usually happens due to confirmation bias, the human tendency to seek and emphasize information that supports existing beliefs.
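To make the cyber analogy concrete, the following sketch uses hypothetical numbers (the threat counts and control-effectiveness figures are illustrative assumptions, not measured values):

```python
# Risk Assessment: many independent low-probability threats combine
# disjunctively, so a breach is far more likely than any single threat.
n_threats, p_threat = 50, 0.02
p_breach = 1 - (1 - p_threat) ** n_threats
print(f"P(at least one of {n_threats} threats succeeds) = {p_breach:.3f}")  # ~0.636

# Risk Response: a chain of conjunctive controls protects only while every
# link holds, so collective protection falls below the per-control anchor.
n_controls, p_control = 7, 0.95
p_all_hold = p_control ** n_controls
print(f"P(all {n_controls} controls hold) = {p_all_hold:.3f}")  # ~0.698
```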

5.2 Defender and User biases undermining Integrity

In the book The (Honest) Truth About Dishonesty, Dan Ariely (2012) describes the Behavioral Economics of honesty and dishonesty, some of which may have a profound impact on cybersecurity. For example, the experimental finding that people cheat less when they are given timely reminders of ethical standards (p.41) could be used to prevent some Insider actions and to warn Organizational Users against violating Acceptable Use Policies (AUP).
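One way such timely reminders could be operationalized (a hypothetical sketch; the trigger point and wording are assumptions rather than a prescription from Ariely or any standard) is to require an explicit AUP acknowledgment at the moment of a sensitive action instead of only at annual training:

```python
# Hypothetical pre-action AUP reminder: surface the ethical standard at the
# moment of temptation, when reminders are most effective (Ariely, 2012, p.41).

AUP_NOTICE = ("Reminder: this action is governed by the Acceptable Use "
              "Policy you signed. Proceed? [y/N] ")

def confirm_with_aup_reminder(action_name: str) -> bool:
    """Return True only if the user explicitly acknowledges the AUP."""
    answer = input(f"[{action_name}] {AUP_NOTICE}").strip().lower()
    return answer == "y"

if confirm_with_aup_reminder("bulk-export of customer records"):
    print("Action logged and executed.")
else:
    print("Action cancelled.")
```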

Financial conflicts of interest are exacerbated when financial instruments are complex, people are not dealing with real money, and their colleagues are committing similar offenses (Ariely, 2012, p.84). Some of these situations can be prevented by standard measures such as the separation of duties between financial advisers and financial managers (p.94). A similar potential for conflict of interest exists in cyberspace, and the International Information Systems Security Certification Consortium recommends administrative controls such as separating the duties of the individuals who request and authorize critical actions and expenditures; those who perform backups and restoration; application developers working in the development, testing and production environments; and personnel in other areas where abuse is likely ((ISC)2®, 2010, pp.12-13). In the context of the BEC framework, current (ISC)2 recommendations can be fine-tuned for specific cognitive biases.
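A minimal sketch of how the requester/authorizer separation might be enforced in software follows; the function and exception names are invented for illustration and do not reflect (ISC)2’s wording:

```python
# Hypothetical separation-of-duties check: the same individual may not
# both request and authorize a critical action.

class SeparationOfDutiesError(Exception):
    """Raised when one person holds two conflicting duties."""

def authorize(requester: str, approver: str, action: str) -> None:
    if requester == approver:
        raise SeparationOfDutiesError(
            f"{approver!r} cannot approve an action they requested: {action!r}")
    print(f"{action!r} requested by {requester}, approved by {approver}.")

authorize("alice", "bob", "restore production database")   # allowed
try:
    authorize("alice", "alice", "authorize expenditure")   # blocked
except SeparationOfDutiesError as err:
    print("Blocked:", err)
```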

People frequently break rules under excessive cognitive load, when they have so much on their minds that there is little room for resisting temptation (Ariely, 2012, p.99). Experiments show that “when our deliberative reasoning ability is occupied, the impulsive system gains more control over our behavior” (p.100). In cyberspace, this phenomenon is particularly dangerous for Defenders, whose cognition may be depleted by long shifts and continuous vigilance, leading to errors in judgment such as missing alarms or opening phishing messages. When these scenarios are well understood, it may turn out that the most appropriate cyber controls are not additional education and training but technical and operational reminders, checks, and constraints.

Altruistic cheating is a paradoxical phenomenon making “it easy for group-based processes to turn collaborations into cheating opportunities in which individuals cheat to a higher degree because they realize that their actions can benefit people they like and care about” (Ariely, 2012, p.222). In cybersecurity, this bias may work against standard anti-collusion measures. Furthermore, people’s desire to be nice and social is at the heart of many successful social-engineering attacks. Ariely points out that “those in the spotlight: politicians, public servants, celebrities, and CEOs” should be particularly diligent in setting the right example, but in reality they “are too often rewarded with lighter punishments for their crimes than the rest of the population” (p.215). Edward Amoroso, the Chief Security Officer of AT&T, observes a similar phenomenon in cybersecurity, describing the symbolism of managers’ behavior and the perils of “executive exemption,” whereby the most senior executives evade security controls and often commit security policy violations (Amoroso, 2011, pp.125-126). The issue is three-fold: executives are the primary targets of attacks, they are frequently less knowledgeable about technology than their workforce, and their staff is reluctant to enforce the policies. Amoroso calls for “major national infrastructures solicitations” to support security staff in their efforts to control executives who outrank them. A formal, scientifically supported BEC framework will help reinforce security at the highest organizational levels.
