Technology Today

2013 Issue 1

Virtual Environments: Creating Immersive Simulations and Trainers

The increasing complexity and scope of operations conducted by today’s warfighter have created a corresponding need for increasingly complex modeling, simulation and analysis methods to represent and assess these missions. For example, U.S. ground forces operating in urban areas may suddenly encounter an ambush that requires them to quickly adapt to and gain greater situational awareness of their immediate surroundings.

Figure 1. Game tokens not required. Virtual Battlespace 2 has provided an off-the-shelf simulation environment for land combat.

Conventional constructive simulations used for mission analysis are limited in the realism with which they can capture mission dynamics because they use simulated people, systems and environments. During combat, real soldiers are inundated with stimuli and information that must be quickly translated into decisions. The use of simulated people in constructive simulations requires that human factors such as situational awareness and decision making be simplified or ignored altogether because of the complexity of human behavior. Because human behavior is such an important part of a simulation’s realism and effectiveness, this is a significant shortcoming of conventional simulations.

The availability and maturity of commercial off-the-shelf (COTS) tools such as Virtual Battlespace 2 (VBS2, Figure 1), Virtual Reality Scene Generator (VRSG) and Unity have brought immersive 3-D visualization of modern warfare to military simulations. The use of videogame engines with simulation is commonly called “serious gaming.”

The use of videogames for simulation experiments at Raytheon dates back to 2000, when Fleet Command was used to model naval traffic. In addition to major advances in visual detail, vendors also provide software such as LVC-Game with VBS2 to simplify integration with Raytheon simulations through interfaces based on standard defense industry distributed simulation protocols such as Distributed Interactive Simulation (DIS) and High Level Architecture (HLA).
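To make the idea concrete, the following minimal Python sketch broadcasts a bare DIS-style PDU header over UDP. The 12-byte header layout follows the IEEE 1278.1 standard, but the port number, identifiers and timestamp handling are simplifying assumptions for illustration; a real Entity State PDU carries many more fields (entity identifiers, world coordinates, orientation and dead-reckoning parameters).

    # Minimal sketch: broadcasting a DIS-style PDU header over UDP.
    # The 12-byte header layout follows IEEE 1278.1; everything else
    # here (port, exercise ID, timestamp handling) is illustrative.
    import socket
    import struct
    import time

    DIS_PORT = 3000          # conventional DIS UDP port (assumption)
    EXERCISE_ID = 1
    PDU_TYPE_ENTITY_STATE = 1
    PROTOCOL_FAMILY_PLATFORM = 1
    PROTOCOL_VERSION = 6     # IEEE 1278.1a-1998

    def make_pdu_header(timestamp: int, body_length: int) -> bytes:
        """Pack the 12-byte DIS PDU header (big-endian, per the standard)."""
        return struct.pack(
            ">BBBBIHH",
            PROTOCOL_VERSION,
            EXERCISE_ID,
            PDU_TYPE_ENTITY_STATE,
            PROTOCOL_FAMILY_PLATFORM,
            timestamp,          # real DIS uses time-of-hour units; epoch
                                # seconds are used here for simplicity
            12 + body_length,   # total PDU length in octets
            0,                  # padding
        )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    header = make_pdu_header(int(time.time()), body_length=0)
    sock.sendto(header, ("255.255.255.255", DIS_PORT))

Products such as LVC-Game hide this packet-level detail behind higher-level interfaces, which is precisely what makes COTS integration attractive.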

Serious gaming has provided a means to close much of this realism gap by inserting real people, using simulated systems, into simulated environments. In industry-standard parlance, these operator-in-the-loop simulations are called “virtual simulations” or “virtual environments” (VEs). Some COTS visual tools originated as popular game engines, while others were purpose-built as simulation tools to represent a sensor’s view of the battlefield. One important distinction that stems from this difference is that game engines such as VBS2 generally represent only passive sensors (e.g., infrared), whereas real-time scene generators such as VRSG can also simulate active sensors (e.g., radar). The modeling priorities of the two genres differ as well: the videogame community emphasizes the visual experience over physics, while the simulation community values physics over visual detail. This is why linking technologies from the two communities is so powerful; their strengths are complementary.

Figure 2. The Virtual Reality Scene Generator (VRSG) enables users to rapidly create functional representations of new sensor technologies such as wide area sensors that simultaneously capture video of multiple areas on the ground. Shown in the figure are multiple parts of a single wide area sensor video frame. Using VEs, the operational utility of new technologies such as this wide area sensor can be assessed and prioritized based on greatest warfighting benefit.

VEs are not a modeling panacea, but they have enjoyed tremendous popularity and use within the U.S. Department of Defense and defense industry because they can capture much of the complex and nonlinear dynamics of today’s missions. The most common uses of VEs include training, mission rehearsal, experimentation, mission analysis and wargaming. For instance, the ground forces in the ambush scenario mentioned at the start of this article could participate in a simulation experiment where they are inserted into vehicles in a convoy in the Middle East and forced to fight through the ambush using current technologies and tactics. The mission can then be repeated using systems with proposed new capabilities such as a wide area sensor whose video simultaneously covers an entire urban area as shown in Figure 2. Qualitative and quantitative assessments can be made of the impact of these new technologies on mission effectiveness. The increased realism provided by VEs translates into deeper insights into whether new concepts or technologies can change battle outcomes in spite of the fog of war.

Some of the benefits of using virtual environments based on commercial, government and Raytheon off-the-shelf tools include:

  • Better representation of the fog of war.
  • Deeper insights into whether concepts or technologies can improve battle outcomes.
  • Easier experimentation with varied tactics, techniques, and procedures (TTPs).
  • Greater focus on modeling military systems instead of tedious reproduction of the mission environment (terrain, buildings, people, vehicles, weather, etc.).
  • More immersive scenarios and environments for improved training.
  • Faster progression of concepts from PowerPoint® to evaluation in virtual missions.

Figure 3. The Army’s Blended Training Model allows soldiers to train as they fight by blending live, virtual, constructive and gaming environments.

The latest generation of warfighters has grown up with realistic video games and smartphones, which have made them comfortable interacting with virtual environments and avatars. The U.S. Army has recognized this ease with technology and has been actively enhancing its training curricula to exploit the warfighter’s ability to learn through this interactive medium. The Army is moving toward a new model of training that mixes traditional live training with virtual, constructive and gaming simulations, creating a seamless, blended environment that allows soldiers to train as they fight (Figure 3). This new model, called blended training, is continuously available, cost effective and flexible, giving soldiers the ability to train more often and in scenarios that are difficult to create in live exercises.

Despite its advantages, the new blended training model has drawbacks, stemming primarily from the gaming simulations. These simulations are powerful tools that let an entire unit rehearse missions collectively, but commercial companies often create them by adapting existing, mass-market video games. As a result, the weapon system models included in these simulations, including models of Raytheon products, are only gaming representations of the real systems. They are controlled by a mouse and keyboard and typically do not behave like the actual system. For instance, the Javelin system included in a popular Army gaming simulation automatically locks on to targets for the player and requires only a mouse click to fire the missile. Using this model would not only misrepresent the capabilities of a Raytheon product but also miss an opportunity to train the player on how to employ the weapon properly.

Raytheon is working to correct this problem through a new set of products called Virtual Combat Systems (VCSs), which accurately represent Raytheon systems within the various gaming simulations (Figure 4). The VCS for each system consists of two parts: a software plug-in for the gaming simulation and a training replica of the weapon hardware that controls the software. Providing the warfighter with both components ensures that the form, fit and functionality of the tactical weapon system are accurately recreated in virtual training. With VCS, a warfighter can gain all of the benefits of blended training while still receiving positive training on the weapon systems.

Figure 4. Raytheon’s Virtual Combat Systems provide warfighters with accurate, virtual representations of their weapon systems.

Raytheon engineers and graphic artists collaborate to construct each Virtual Combat System. First, graphic artists take detailed engineering drawings of the weapon system and use them to create an in-game model of the weapon. This process differs for each gaming engine; VCS efforts to date have been based on the VBS2 commercial gaming engine. Once the weapon has been recreated within VBS2, engineers code the functionality of the weapon system. They ensure that the virtual system and its graphical user interface (GUI) look and behave like the real system: facsimile buttons trigger the appropriate action, munitions fly accurately, optical capability is correctly represented, error messages are displayed, etc.
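At its core, the “facsimile buttons trigger the appropriate action” behavior is a state machine. The hypothetical Python sketch below illustrates the idea; the states, button events and error messages are invented for illustration and are not drawn from any actual Raytheon system or from the VBS2 scripting interface.

    # Hypothetical sketch: a tiny state machine mapping controller button
    # events to weapon-system states. All names here are illustrative.
    from enum import Enum, auto

    class WeaponState(Enum):
        STANDBY = auto()
        SEEKER_ACTIVE = auto()
        LOCKED = auto()
        FIRED = auto()

    # (current state, button event) -> next state; anything else is
    # rejected, which lets the virtual system surface realistic errors.
    TRANSITIONS = {
        (WeaponState.STANDBY, "activate_seeker"): WeaponState.SEEKER_ACTIVE,
        (WeaponState.SEEKER_ACTIVE, "lock"): WeaponState.LOCKED,
        (WeaponState.LOCKED, "trigger"): WeaponState.FIRED,
    }

    class VirtualWeapon:
        def __init__(self) -> None:
            self.state = WeaponState.STANDBY

        def press(self, button: str) -> str:
            next_state = TRANSITIONS.get((self.state, button))
            if next_state is None:
                return f"ERROR: '{button}' not valid in state {self.state.name}"
            self.state = next_state
            return f"OK: now in state {self.state.name}"

    weapon = VirtualWeapon()
    print(weapon.press("trigger"))          # rejected: no lock acquired yet
    print(weapon.press("activate_seeker"))  # STANDBY -> SEEKER_ACTIVE

Rejecting out-of-sequence inputs in this way is what distinguishes positive training from the one-click firing of a mass-market game model.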

The final step is to integrate or construct the replica hardware that controls the software. This is done by remapping the controls of an existing training device or by creating entirely new hardware with motion-sensing technology (such as gyros and accelerometers) similar to what is embedded in most modern smartphones. The movement information is combined with the states of the various buttons and switches on the system and sent to the software, which translates it into the appropriate commands for the gaming simulation. When a soldier moves the hardware or presses a button, the soldier’s virtual avatar matches those actions.
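In outline, that control loop might look like the following Python sketch. The device interface and command strings are hypothetical stand-ins: real replica hardware would report over USB or serial, and the commands would be delivered to the gaming simulation’s plug-in rather than printed to the console.

    # Hypothetical sketch of the replica-hardware control loop: combine
    # IMU readings (gyro/accelerometer) with button states and translate
    # them into commands for the gaming simulation. Names are invented.
    import time
    from dataclasses import dataclass

    @dataclass
    class HardwareSample:
        yaw_rate: float        # deg/s from the gyro
        pitch_rate: float
        trigger_pressed: bool

    def read_hardware() -> HardwareSample:
        """Stub standing in for the real device driver (USB/serial in practice)."""
        return HardwareSample(yaw_rate=2.5, pitch_rate=-0.5, trigger_pressed=False)

    def to_sim_commands(sample: HardwareSample, dt: float) -> list[str]:
        """Integrate rates over the polling interval and emit avatar commands."""
        commands = [
            f"AIM_DELTA yaw={sample.yaw_rate * dt:.3f} pitch={sample.pitch_rate * dt:.3f}",
        ]
        if sample.trigger_pressed:
            commands.append("FIRE")
        return commands

    POLL_HZ = 60
    for _ in range(3):                       # a few iterations for demonstration
        sample = read_hardware()
        for cmd in to_sim_commands(sample, dt=1.0 / POLL_HZ):
            print(cmd)                       # in practice, sent to the game plug-in
        time.sleep(1.0 / POLL_HZ)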

An example VCS for Javelin is shown in Figure 5. The user looks through the viewport on the hardware controller (lower right inset) and sees the weapon system’s GUI (lower left inset). All movement and button presses are translated to the avatar in VBS2, and the GUI updates to the new state. The power of using VCSs for training is enormous. One day soldiers can be learning the basic skills of the weapon system on a virtual representation of a familiar firing range; the next day they can be rehearsing a virtual mission in Afghanistan with their unit. VCS adds the requisite fidelity to the gaming representation of Raytheon products, ensuring that the warfighter has the best training tools possible. This not only provides better training for the warfighter but also helps ensure that Raytheon products are used to their full potential.

Figure 5. The Javelin Virtual Combat System (VCS)

Aside from the military training application, VCSs have also proven to be powerful tools during earlier program phases for visualizing and test-driving new technologies and capabilities with customers. This ability has proven invaluable in the growing international market. A VCS not only gives the customer a solid understanding of the weapon system; it also makes it easy to demonstrate that understanding within a scenario drawn from the customer’s own problem space. With a virtual representation of the weapon, demonstration is easier in any language and any scenario.

To date, Virtual Combat Systems have been created for the four programs shown in Figure 6: Javelin, Serpent, the Tube-launched, Optically tracked, Wire-guided (TOW) missile, and Stinger. Raytheon is considering extending the approach to other products and is exploring how this technology can be applied elsewhere, including lightweight mobile applications.

As new hardware and software technologies mature, future virtual environments will continue to expand into many more markets than simulation and training. Although virtual reality (VR) has been touted as transformational, visions of the future such as the holodeck in “Star Trek®” have given many people expectations of VR that go beyond the limits of today’s technology. However, the gap between simulation and reality continues to shrink due to advances in animation software, computing resources and new human interfaces that promise greater operator immersion.

One example of a new interface technology, developed by VirtuSphere, Inc., is akin to a “hamster wheel” for humans. It breaks down one of the big barriers to realism: the lack of physical movement, and the fatigue it induces, when “running around” in a VE. The ability to physically run within a VE can reveal secondary effects of proposed systems, such as the impact of carried weight on mission tempo and operator effectiveness.

Another area of much-needed improvement is virtual reality goggles that allow users to see the virtual environment in any direction they look. Unfortunately, most goggles available today offer a field of view (FOV) similar to looking at the world through a tunnel. However, the technology is improving, and new goggles are beginning to offer horizontal FOVs roughly equal to the full human FOV.

Figure 6. Four Virtual Combat System (VCS) products have been developed so far, and further applications are being considered.

Although Hollywood’s latest blockbuster visuals can raise expectations of simulated reality to unrealistic levels, they also give a compelling glimpse into future VEs. The video games of yesteryear now run easily on an average smartphone. Similarly, the major difference between the computer graphics in today’s movies, which are almost indistinguishable from reality, and future virtual environments is just the date on the calendar.

Patrick V. Lewis and Jon Peoble
