How Does an Analyst Select M&S to Support the Entire DoD Acquisition Lifecycle Process?


What Is: Executable Architecture Systems Engineering (EASE)

Background

The goal of EASE is to lower the barrier of entry to the use of M&S. EASE provides a single interface for systems engineers, software developers, information technology professionals and analysts to work together. These individuals define the simulation systems engineering data and execute the appropriate applications in order to support the M&S user’s goals. EASE provides an interface for M&S users to select the capabilities they require and the scenario necessary to stimulate the appropriate warfare circumstances. The selection criteria are used to filter and display the most appropriate executions for the user to choose from. The user can then adjust configuration elements that have been exposed by the developers, select the number of runs they need to execute, schedule runs and hit the “Go” button to execute. The web-based interface provides a mechanism to launch potentially complex M&S in the cloud or on specific computing hardware. The systems engineers, developers and integrators can centrally manage all aspects of EASE and the execution of the proper M&S systems to achieve the M&S users’ requirements. Having a data-driven and easy-to-use interface keeps the systems engineering technical information (i.e., interface specifications) current. In turn, each user can be assured that they are referencing and updating the latest information.
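
To make that workflow concrete, the following minimal sketch shows the kind of information an M&S user supplies before hitting “Go”: required capabilities, a scenario, the configuration elements exposed by the developers, the number of runs and an optional schedule. The Python representation and every field name are illustrative assumptions, not EASE’s actual interface.

```python
from dataclasses import dataclass

# Hypothetical illustration only: these field names are not EASE's actual interface.
@dataclass
class RunRequest:
    """The kind of information an EASE user supplies before hitting 'Go'."""
    capabilities: list[str]         # warfare capabilities the user needs represented
    scenario: str                   # scenario selected to stimulate those capabilities
    exposed_config: dict[str, str]  # configuration elements exposed by the developers
    replications: int = 1           # number of runs to execute
    schedule: str = ""              # optional start time for scheduled runs (e.g., ISO-8601)

request = RunRequest(
    capabilities=["acoustic_detection"],
    scenario="urban_patrol_baseline",
    exposed_config={"background_noise_db": "65"},
    replications=30,
)
print(request)
```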

Needs Derivation

Simply learning which M&S and analytical tools exist within the DoD is challenging enough, let alone actually obtaining them for use. Once users receive these systems, they still need to be trained and/or read lengthy and complicated user manuals to learn how to configure the systems and which execution options to use for a desired effect. This process is painful, time consuming and costly; so much so that users will often opt for a simpler, but less effective, solution. To ensure that the best tools the DoD has to offer are actually used, there is a need to quickly and easily find execution options for specific M&S needs.

After users become educated in the systems they use, that knowledge is generally not documented and remains only in their heads. The complexity and nuances of running highly technical systems often make it too difficult or too time consuming for them to share the information with their peers. Each system is also delivered with its own types of documentation, and few seem to follow existing standards when creating this documentation. There needs to be a method for capturing systems technical information in a common format for Systems Engineers (SEs). This method should connect functions across systems, capture the warfare capabilities of each element within the system and link the M&S solutions to experiment goals without adding cost beyond the activities already being conducted to execute the experiment. In order to maximize the user’s derived knowledge and the time expended, there is a need to link systems engineering information with execution details.

Currently, the warfare functions of each M&S system are described through brochures, slides or user manuals in human-readable text. This is only a precursor to what engineers and analysts need. Specifically, more detailed information needs to be available and captured within a common systems engineering tool. Items such as object model elements, middleware types, versions and execution options need to be linked, and the consequences of choosing each option understood as they relate to the warfare functions represented. For example, configuring a system to have the right resolution for the function under analysis is a configuration option and needs to be linked to the correct function. This requirement leads to the need to determine the necessary technical systems, object models and middleware based on the warfare functions required.
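
As an illustration of the linkage this paragraph calls for, the sketch below ties a warfare function to the system, object model, middleware and configuration options that realize it. The catalog entries, field names and helper function are hypothetical, not an actual EASE or SDD schema.

```python
# Hypothetical catalog linking warfare functions to the systems, object models,
# middleware and configuration options that realize them (not an actual EASE/SDD schema).
CATALOG = [
    {
        "warfare_function": "chemical_agent_dispersion",
        "system": "DispersionSim",
        "object_model": "RPR-FOM 2.0",
        "middleware": {"type": "HLA", "version": "1516-2010"},
        "config_options": {"grid_resolution_m": [10, 50, 100]},
    },
    {
        "warfare_function": "weather_effects",
        "system": "WeatherSim",
        "object_model": "RPR-FOM 2.0",
        "middleware": {"type": "HLA", "version": "1516-2010"},
        "config_options": {"update_rate_hz": [1, 10]},
    },
]

def systems_for(function_name, catalog):
    """Return candidate systems, with their technical details, for a warfare function."""
    return [entry for entry in catalog if entry["warfare_function"] == function_name]

print(systems_for("weather_effects", CATALOG))
```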

Knowing that two simulations represent warfare functions that seemingly complement a larger analytical goal (e.g., a weather simulation and a chemical agent dispersion simulation to model a chemical release) does not necessarily imply that they will work together semantically. Even if two systems work on the same middleware and use the same object model, they still might not be interoperable when it comes to which data elements within the object model each system sends or receives. These important distinctions lead to the need to capture technical interface details to facilitate identification of integration gaps and understanding of the data provided for analysis.
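
A minimal sketch of the kind of gap analysis described here follows: even with a shared object model, one system must actually send what the other expects to receive. The element names and the set-based check are illustrative assumptions, not an actual EASE capability.

```python
# Minimal sketch of the gap analysis described above: even with a shared object model,
# one system must publish what the other subscribes to. Element names are illustrative.
def integration_gaps(publisher_sends: set, subscriber_needs: set) -> set:
    """Object-model elements the subscriber expects but the publisher never sends."""
    return subscriber_needs - publisher_sends

weather_publishes = {"WindField", "Temperature"}
dispersion_subscribes = {"WindField", "Temperature", "PrecipitationRate"}

print(integration_gaps(weather_publishes, dispersion_subscribes))
# -> {'PrecipitationRate'}: a semantic gap to resolve before the two simulations interoperate
```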

The semantics, or the reasons systems communicate, are also very important for determining that the two systems are indeed sharing the appropriate data. The M&S user needs the capability to easily capture these technical details and have better visibility to discover interoperability gaps and how systems can be integrated. Providing a tool that assists users in integrating systems with true interoperability is the objective.

Software development schedules are often delayed. In turn, when multiple applications are designed to share data, the development teams become reliant on one another’s schedules. This has major impacts on overall schedule and cost. Having the ability to quickly generate a surrogate application that replicates the functionality of a missing system allows the other systems to integrate into the distributed environment and test their interfaces, timing and so on. This provides cost avoidance in those cases when a simulation system is unable to integrate. This leads to a requirement to create surrogates when key systems are delayed.

The simulation community needs a rapid application development mechanism to quickly generate the software for connecting distributed simulations. This technology can be generic enough to be applicable across any model use case. Having the ability to generate source code will greatly reduce the cost of developing new models and integrating existing models into distributed environments. The generated code includes the ability to connect to the appropriate middleware, send and receive the right messages and even provides software constructs that simplify a modeler’s learning curve into distributed simulation environments. This leads to the need to quickly, easily and more cost-effectively modify a model to work within a distributed simulation environment.
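
The sketch below illustrates the template-driven code generation idea in this paragraph: a source template is filled with the target middleware and the messages a model must send and receive. The template, class layout and names are assumptions for illustration, not the actual generator.

```python
# Illustrative sketch of template-driven code generation of the kind described above;
# it is not the actual generator. A source template is filled with the middleware and
# the messages a model must send and receive.
from string import Template

FEDERATE_TEMPLATE = Template('''\
class ${name}Federate:
    MIDDLEWARE = "${middleware}"
    PUBLISHES = ${publishes}
    SUBSCRIBES = ${subscribes}

    def connect(self):
        # generated: join the ${middleware} environment and declare publications/subscriptions
        pass
''')

source = FEDERATE_TEMPLATE.substitute(
    name="SensorModel",
    middleware="HLA 1516",
    publishes=["DetectionReport"],
    subscribes=["GroundVehicleState"],
)
print(source)  # generated source a developer would refine with model-specific logic
```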

Managing computers in a laboratory can be time consuming, redundant and tedious. Executing simulation systems across a laboratory can add to that burden. Launching a large distributed simulation environment can often take over an hour, during which the users have to manually script how the systems will be launched or, even worse, walk around the lab and launch each system on each computing device manually. A system to manage the computers and launch applications according to the correct execution details and order is required. This leads to the need to launch complex computing assets easily from a single point.
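
The following sketch shows single-point launch in its simplest form: applications started in a prescribed order, each on its assigned host. The hosts, commands and placeholder launch() function are hypothetical; a real system would launch remotely, for example over SSH or a management API.

```python
# A minimal sketch of single-point launch: applications started in a prescribed order,
# each on its assigned host. Hosts, commands and the launch mechanism are placeholders.
import time

LAUNCH_PLAN = [
    {"order": 1, "host": "sim-host-01", "command": "start_rti.sh"},
    {"order": 2, "host": "sim-host-02", "command": "start_weather_model.sh"},
    {"order": 3, "host": "sim-host-03", "command": "start_dispersion_model.sh"},
]

def launch(host, command):
    # placeholder: a real system would launch remotely (e.g., over SSH or a management API)
    print(f"launching '{command}' on {host}")

for step in sorted(LAUNCH_PLAN, key=lambda s: s["order"]):
    launch(step["host"], step["command"])
    time.sleep(1)  # allow each component to initialize before starting the next one
```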

In laboratories that execute many simulation environments, each one can be slightly different from the previous one. Managing how each system needs to be modified for changing scenarios or even technical constraints like middleware or object model differences requires engineers to spend much of their time configuring and testing systems. This leads to the need to orchestrate the order and cooperation of systems as appropriate to the scenario and technical interoperability details.

Hardware requirements change depending on the applications, the scenarios they need to represent and the exercise architecture, among other things. Having to procure additional hardware can be expensive and unnecessary. Moreover, each computer in the lab has a finite useful life. Once the systems and scenarios grow, the hardware becomes unable to support the execution without upgrades. Having a cloud-based system to dynamically add and allocate processors, memory and network bandwidth helps alleviate the burden of managing hardware with a limited useful life. This leads to the need to flexibly allocate computing resources (memory and processors) to simulation systems based on scenarios, configurations and application-specific details.

Software integration with middleware specifications, such as the High Level Architecture (HLA) [8], can be complicated and error prone. Once integrated and tested, other software developers can reuse the software library for their own purposes. Making the software library generic enough to work across any object model, and adding plug-ins to work across multiple middleware specifications, allows the library to be reused across a wide spectrum of simulation systems. It additionally facilitates interoperability among simulations that were not originally planned to work together. This leads to the need to abstract technical middleware details away from business logic to facilitate reuse and reduce errors.

Requirements written in human-readable text and provided to software developers can often be misinterpreted, especially if they do not include enough detail or convey the semantics behind the requirement. When system developers arrive to integrate their system for an analysis, any misinterpretation of the requirements will be discovered through trial and error. Another frequent problem is that system developers write their own simulation test procedures, so any errors in their understanding will also be present in their tests. These problems include erroneous encoding and decoding of simulation communication messages and middleware specification errors. Instead of discovering problems at the exercise site, while personnel are on travel and using funds for hotel, per diem and other expenses, it would be useful if tests could be generated for the developers that exercise everything possible before they travel to the exercise site. These generated tests should verify an application’s middleware connection as well as the object model elements it needs to receive and send. This leads to the requirement to test systems prior to integration events based on an agreed-upon system design.

The design of an analysis changes frequently, both as analytical goals are modified and between analyses that may leverage elements of the same simulations. Having to manually update the test cases for the simulations involved will lead to configuration management problems and be a time and cost driver. Being able to automatically update test cases from a systems engineering tool that captures the methods and means of the analysis will save time and reduce errors. This creates a requirement to quickly update tests from the design via automation and a data-driven export mechanism.

Components

EASE consists of the following components and associated software:

  • Interview Component
  • System Design Document (SDD)
  • Surrogate Generation Capability
  • Deployment Management System
  • ProtoCore
  • Advanced Testing Capability (ATC)

The Interview component of EASE is the interface for the M&S users, systems engineers, integrators and software developers to access and manage their respective areas of a complex simulation. The user has the ability to search the system for scenarios that are applicable to their specific needs, configure those scenarios and execute the simulation environment on dedicated hardware assets by simply clicking a button within the web browser. They can later return to the Interview interface to access the data artifacts that resulted from the scenario they previously executed. Systems engineers enter into the framework which applications can perform which functions, which in turn informs how scenarios can be created. Integrators create adjustable configuration fields that let M&S users configure complex simulation applications through an easy interface. This allows constraints to be put on the models, simulations and tools to ensure that the systems do not operate outside of their limits. Following the rules laid out by the systems engineers, the developers can upload, configure and approve their software for future execution within EASE.

The SDD is a systems engineering tool used by the systems engineer to capture the design details of a distributed computing environment. The SDD links high-level requirements to subsystem-specific details through Modeling Design Decisions that describe how the simulations will communicate, including sequence diagrams and architectural strategies. The SDD is a database-driven tool that stores all of its information as database fields with links across the database tables. This database-driven approach allows the system to quickly generate systems engineering artifacts using database queries and output templates for subsequent use by the systems engineer. If a change is made to any of the systems engineering data, this artifact generation can be repeated automatically by the systems engineer. This ensures that systems engineering artifacts remain current with little effort, compared to most projects, which need systems engineers to constantly update and configuration-manage Microsoft Word, Excel or PowerPoint documents to ensure currency and consistency.
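
The sketch below illustrates the database-plus-template pattern described above: design data is queried and an artifact is rendered from it, so a design change only requires regeneration rather than hand-editing documents. The schema, query and table template are illustrative assumptions, not the SDD’s actual implementation.

```python
# Sketch of the database-plus-template pattern described above: query design data and
# render an artifact from it. The schema and template are illustrative, not the SDD's.
import sqlite3
from string import Template

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interfaces (producer TEXT, consumer TEXT, event TEXT)")
conn.execute("INSERT INTO interfaces VALUES ('WeatherSim', 'DispersionSim', 'WindField')")

ROW_TEMPLATE = Template("| $producer | $consumer | $event |")

rows = conn.execute("SELECT producer, consumer, event FROM interfaces").fetchall()
artifact = ["| Producer | Consumer | Event |", "|---|---|---|"]
artifact += [ROW_TEMPLATE.substitute(producer=p, consumer=c, event=e) for p, c, e in rows]
print("\n".join(artifact))  # regenerate on every design change instead of hand-editing documents
```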

The Surrogate Generation Capability builds on the SDD’s ability to generate artifacts from its database. An SE can enter simulation business logic into the SDD and export a working software application that will execute within a distributed simulation environment based on the appropriate middleware and object model. This capability eliminates the need for the SEs generating a surrogate to understand the simulation middleware details, write interface code that is often repeated, or write a multi-threaded software application optimized for distributed simulation. The Surrogate Generation Capability includes an interface that is already filled in by the SDD based on which warfare function is to be surrogated. The correct events have already been included, with fields available for the user to manipulate and/or add their own simulation business logic. Once finished, the systems engineer can save their work back into the SDD, export the software application to their local desktop for further development or use, and have the surrogate they created automatically deployed to EASE for use in future executions.
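
As a hedged illustration of what a generated surrogate might look like, the skeleton below pre-fills the subscribed and published events and leaves a single hook for the engineer’s business logic. The class, event names and threshold are hypothetical, not output of the actual capability.

```python
# Hypothetical shape of a generated surrogate: the event plumbing is pre-filled from the
# design data, and the engineer only supplies business logic. Names are illustrative.
class GeneratedSurrogate:
    """Skeleton a generation capability like this might emit for a surrogated function."""

    SUBSCRIBES = ["GroundVehicleState"]   # pre-filled from the design database
    PUBLISHES = ["DetectionReport"]

    def on_ground_vehicle_state(self, event: dict) -> dict:
        # The engineer's business logic goes here; everything else is generated.
        detected = event.get("noise_level_db", 0) > 60
        return {"event": "DetectionReport", "detected": detected}

surrogate = GeneratedSurrogate()
print(surrogate.on_ground_vehicle_state({"noise_level_db": 72}))
```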

The Deployment Management System component of EASE is responsible for the automated orchestration of simulation executions using dedicated hardware assets. In any distributed simulation environment, there is a specific order and configuration of the components required for them to execute properly. This is often known only by a handful of integrators on each project. The Deployment Management System component captures this knowledge and automates it so that anyone can execute complex simulation environments. As part of that orchestration, applications must be configured for the middleware, for the application’s performance data and for the specific scenario to implement, among other areas. Each component is executed in an emulated computing environment, known as a virtual machine, through a virtual machine management interface. This allows EASE to dynamically partition processors and memory to each virtual machine, as appropriate, rather than be tied to the limitations of an existing piece of hardware and its associated operating system. Instead, each application gets the operating system and hardware required to execute properly. Those virtual machine executions can also be scheduled, repeated, started, stopped and monitored by the Deployment Management System component. A video stream is provided to the user to monitor each virtual machine while it runs, which is key to supporting Human-in-the-Loop (HITL) simulations [9]. After a simulation run has completed, the data artifacts are gathered and exposed to the Interview component so that users can retrieve data for their analysis. This implementation allows for easy scaling and management of hardware and software and their connection to the requirements and goals of the simulation execution.
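
The run specification below sketches the kind of information an orchestration component like this might consume: one virtual machine per application, with processors, memory, operating system and boot order allocated per application rather than fixed by physical hardware. All field names and values are assumptions for illustration.

```python
# Illustrative run specification for a deployment/orchestration component of this kind.
# Field names and values are assumptions, not the actual Deployment Management System format.
RUN_SPEC = {
    "scenario": "urban_patrol_baseline",
    "replications": 30,
    "virtual_machines": [
        {"name": "rti",           "vcpus": 2, "memory_gb": 4,  "os": "linux",   "boot_order": 1},
        {"name": "weather_model", "vcpus": 4, "memory_gb": 8,  "os": "linux",   "boot_order": 2},
        {"name": "hitl_station",  "vcpus": 8, "memory_gb": 16, "os": "windows", "boot_order": 3,
         "video_stream": True},   # monitored by the user during Human-in-the-Loop runs
    ],
    "collect_artifacts": ["logs/*.csv", "results/*.db"],
}

# Summarize the compute the cloud or lab hardware must provide for this run.
total_vcpus = sum(vm["vcpus"] for vm in RUN_SPEC["virtual_machines"])
print(f"total vCPUs to allocate: {total_vcpus}")
```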

ProtoCore [10] is a software library developed to allow software developers to create simulations capable of communicating with other simulations in a distributed architecture without having to be experts in distributed simulation. Most distributed simulation middleware architectures have very similar concepts, such as joining, subscribing, publishing and exiting. Distributed simulation environments also share some common simulation business logic, such as dead reckoning, time representation and coordinate conversions. These common concepts and utilities are included within ProtoCore so software developers do not have to write their own implementations. This saves developers time and helps ensure accurate implementation, since the logic has been peer reviewed and used across many different simulations. An additional benefit of ProtoCore is its ability to provide these capabilities across a variety of middleware architectures due to its plug-in architecture. Plug-ins exist for HLA 1.3, HLA 1516, Distributed Interactive Simulation (DIS) [11] and the Test and Training Enabling Architecture (TENA) [12], so a software developer using ProtoCore can write their code once and choose which middleware it will use at run time. This allows software developers who support multiple projects on different middleware architectures to write their software once and have it work across several environments.
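
The sketch below illustrates the plug-in idea in general terms (it is not ProtoCore’s actual API): business logic is written against one abstract interface, and the concrete middleware plug-in is selected at run time. The class and method names are assumptions.

```python
# Minimal sketch of a middleware plug-in architecture (not ProtoCore's actual API):
# business logic targets one abstract interface; the plug-in is chosen at run time.
from abc import ABC, abstractmethod

class MiddlewarePlugin(ABC):
    @abstractmethod
    def join(self, federation: str) -> None: ...
    @abstractmethod
    def publish(self, event: str, data: dict) -> None: ...

class DisPlugin(MiddlewarePlugin):
    def join(self, federation): print(f"DIS: joining exercise {federation}")
    def publish(self, event, data): print(f"DIS PDU -> {event}: {data}")

class Hla1516Plugin(MiddlewarePlugin):
    def join(self, federation): print(f"HLA 1516: joining federation {federation}")
    def publish(self, event, data): print(f"HLA interaction -> {event}: {data}")

PLUGINS = {"dis": DisPlugin, "hla1516": Hla1516Plugin}

def run_model(middleware_name: str):
    plugin = PLUGINS[middleware_name]()   # chosen at run time; model code is unchanged
    plugin.join("acoustic-analysis")
    plugin.publish("DetectionReport", {"detected": True})

run_model("hla1516")
```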

The Advanced Testing Capability (ATC) [13] is a software tool used to test distributed simulation applications under controlled conditions without needing every simulation involved in a scenario. First, ATC is started along with any necessary middleware architecture components. Then, the simulation under test is started and connects to the middleware and ATC. ATC provides the stimuli the application requires for the scenario being tested and verifies the application’s responses as sent over the middleware. This type of testing ensures that the application can properly join the middleware, transmit data according to the middleware architecture’s guidelines and publish and subscribe to the correct events. The ATC tests are presented as sequence diagrams in which a tester can edit details, such as the events’ attributes and the timing of each event. The ATC stores the test cases in an eXtensible Markup Language (XML) format called the Test Case Markup Language (TCML). The TCML file storage allows other tools to read, manage and export test cases. Additionally, the SDD can export TCML files based on system design information captured within its database.
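
Because TCML is XML, other tools can read and manipulate test cases, as noted above. The sketch below parses a small, invented test case; the element and attribute names are assumptions, not the real TCML schema.

```python
# Sketch of reading an XML test case of the kind described above. The element and
# attribute names are illustrative only; the real TCML schema is not shown in this article.
import xml.etree.ElementTree as ET

TCML_EXAMPLE = """
<testCase name="publish_detection_report">
  <step order="1" action="stimulate" event="GroundVehicleState" noise_level_db="72"/>
  <step order="2" action="expect"    event="DetectionReport"    timeout_s="5"/>
</testCase>
"""

root = ET.fromstring(TCML_EXAMPLE)
for step in sorted(root.findall("step"), key=lambda s: int(s.get("order"))):
    # Walk the stimulate/expect steps in order, as a test driver would.
    print(step.get("action"), step.get("event"), dict(step.attrib))
```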

Use Case

Beginning with a hypothetical problem, assume an acoustic sensor has a requirement to detect and discriminate targets, such as manned and unmanned ground vehicles, in urban environments with a specified false alarm rate [14]. During developmental testing, this acoustic sensor appears susceptible to background noise that could appear in some urban environments and, in turn, is not able to detect and discriminate targets in these environments within the required false alarm rate. It is, however, able to discriminate all required targets in non-urban environments, as well as a subset of urban environments that may be relevant to future operations, within the required false alarm rate. The currently fielded acoustic sensor is significantly less reliable in the urban environments of interest. The program manager (PM) wishes to make the argument that this new system should pass Milestone B because of the gains it provides to the force.

The analyst creates an experimental design that compares the current acoustic sensor to the one under development, including operational scenarios in relevant urban environments. While he would like to use available empirical data, he is also interested in using physics-based models that replicate the acoustic phenomenology at hand and show how the sensors will perform as background noise is varied. The analyst logs in to EASE and sees that he already has models for the current system from when it was developed and fielded. Moreover, he has models of the system being developed from Pre-Milestone A. Using EASE, he modifies the scenario he had from Pre-Milestone A to reflect the operations in Milestone B and adds both sensors for comparison. He then modifies parameters within the simulations reflecting the background noise as input. Finally, he schedules multiple replications due to the stochastic nature of the physics-based models being used and hits the “Go” button. EASE then runs the simulations using available resources and provides the analyst with data when complete. Conveniently, this analyst was also able to load into EASE his simulation post-processors, which transform the data into usable information after the runs are complete, further automating the process. Through this analysis, he is able to show a comparison of the developmental system to the current system and make the operational argument that the developmental system has utility. It is then up to the decision makers whether the operational utility outweighs the cost and sustainment footprint of a new system that is not meeting all requirements.

It should be stressed that there is no magic in this hypothetical situation. In our example, M&S professionals developed models that represented the acoustic sensors in question, and systems engineers took the time to integrate them into EASE. Moreover, the analyst knew how he wanted to present the data and built post-processors to facilitate the process. The key here is that as these models were developed, they were put into the EASE framework. In doing so, their constraints and capabilities were known, as well as how to execute them. This allowed our analyst to take advantage of work done previously, possibly in an analysis of another weapon system for another PM, without having to call on the M&S experts or become an M&S expert himself. EASE also allowed the analyst to easily modify parameters, schedule runs and receive data. If the data looked incorrect, for whatever reason, the analyst could easily change the inputs and run again. Normally, this process is done by hand and is error prone, but the rigor of EASE ensures that this is not an issue. Should there be the need for a new model, the experts would then be called upon. Furthermore, should the question change from a comparison of acoustic sensors in environments the PM understood to a more system-of-systems (SoS)-like situation in which the acoustic sensors had to interface with other operational systems, additional models may need to be entered into EASE. If these models were entered into EASE and an SoS question arose, various PMs would have the ability to leverage models from other PMs (presumably with some level of accreditation) to answer analytical questions that span more than one PM. EASE provides that ease-of-use access to the M&S while facilitating reuse.

