Technology Today
Mission- and safety-critical systems require a very high degree of reliability and availability, typically measured in many nines. Examples of such systems include command and control, fire control, and weapon control systems in the military domain, as well as numerous civilian systems such as air traffic control, power grid controls (SCADA) and power plant controls. Consequences of data corruption or a shutdown of these systems have the potential to cause significant loss of life, commerce or military objectives.

When it comes to accidental hardware component failures and software malfunctions, these systems are designed to be robust and fault tolerant, and able to recover with minimal operator intervention and no interruption in service, while maintaining absolute data integrity. But this is not the case when it comes to malicious attacks, where the approach is still focused on preventing intrusions and hardening the systems to make them as impenetrable as possible.

Mission-critical systems are facing increasingly sophisticated cyberattacks. Our nation needs to develop novel technologies that enable systems to recover and reconstitute in real time, and continue to operate correctly after an attack. For the past five years, Raytheon has been conducting research into intrusion-tolerant and self-healing systems as part of its internal research and development, as well as in partnership with its U.S. government customers.

The Current State
One problem is that the number of software vulnerabilities is enormous and growing constantly. The Common Vulnerabilities and Exposures (CVE) database currently contains more than 36,000 unique vulnerabilities. Even a security-enhanced operating system such as SELinux has 15 identified software flaws (as of July 2009). The threat posed by these vulnerabilities is asymmetric: defenders must close every hole, while attackers need to find only one. However, it is impractical to probe and patch every single defect. Unlike random hardware faults, malicious exploitation cannot be modeled stochastically, because a single undefended but exploitable vulnerability creates a modeling singularity. It is therefore hard to quantify the probability of mission success or failure for a system that relies solely on preventive methods.
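The defender's asymmetry can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative only, and the calculation rests on the (strong) simplifying assumption that each vulnerability is closed independently with the same probability:

```python
# Illustrative only: assumes each vulnerability is independently closed
# with the same probability -- a strong simplification, used here just to
# show how fast the defender's odds collapse.
n_vulns = 36_000      # roughly the CVE count cited in the article
p_closed = 0.9999     # assume each hole is patched with 99.99% certainty

# An attacker needs only one open hole, so the defender "wins" only if
# every single vulnerability is closed.
p_all_closed = p_closed ** n_vulns
print(f"P(no exploitable hole) = {p_all_closed:.3f}")  # about 0.027
```

Even at 99.99% per-vulnerability coverage, the chance that no exploitable hole remains is under 3 percent, which is why purely preventive strategies are so hard to certify.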

In addition to software flaws, systems also suffer from configuration errors. These are even harder to control, as systems are continually upgraded and components are added, deleted or modified. What about the argument that a system is less vulnerable if it does not use commercial off-the-shelf software but has high-assurance, validated software? In fact, even the most highly tested mission-critical software, such as the Space Shuttle flight control software, was found to have about one error per 10,000 source lines of code. Most military command and control systems do not go through such rigorous testing. The conclusion is that the technology does not exist today to design, code, test and deliver defect-free software for a system of realistic complexity, and it is not likely to be available in the near future.

Another argument usually put forward in favor of preventive measures is that military systems are inaccessible to unauthorized users, and that access control mechanisms are sufficient to keep intruders out. This would be the case if physical access or remote login were the only means of getting inside these systems. In reality, any networked information system has many entry points, and boundary controllers are not completely effective in separating malicious activity from normal traffic. For example, it is difficult to identify hidden scripts in legitimate documents. Furthermore, where humans are concerned, one should not underestimate the power of social engineering in bypassing access control mechanisms. As a result, it is prudent to assume that penetrations of multiple defensive layers are not only possible but quite likely, especially if the threat is a goal-oriented, well-resourced and determined adversary.

In fact, that is why intrusion detection sensors are now routinely deployed not only at network gateway points, but also in internal routers and on hosts, servers and, increasingly, end devices. How effective are current intrusion detection sensors? The most common principle is to look for a signature of malicious code by matching bits against known fragments. This has the obvious limitation of being unable to detect novel attacks; even minor variations of known viruses can escape detection. Keeping such sensors up to date in the face of a daily onslaught of new variants is a burdensome task: new attacks must be caught, their code analyzed, and a signature created and pushed out to all target machines as quickly as possible to close the window of vulnerability. This task is even harder than probing and patching vulnerabilities because of the essentially unlimited number of mutations of a virus. A less common principle is to detect anomalous behavior, which assumes that it is possible to define normal behavior. Except for some very simple, deterministic state machines, it is extremely difficult to specify bounds of normal behavior that will never be breached. That is why anomaly detection sensors have unacceptably high false-alarm rates.
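The two detection principles above can be sketched in a few lines. The byte "signatures" and traffic statistics here are invented purely for illustration:

```python
# Hypothetical sketch of the two detection principles discussed above.
# The signature fragments and traffic numbers are fabricated examples.
from statistics import mean, stdev

KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef", b"evil_payload"]  # invented fragments

def signature_match(data: bytes) -> bool:
    """Flag data containing any known malicious byte fragment.
    Misses novel attacks and even minor variants of known ones."""
    return any(sig in data for sig in KNOWN_SIGNATURES)

def anomaly_score(sample: float, baseline: list[float]) -> float:
    """Z-score of a new observation against a 'normal' baseline.
    Choosing the alarm threshold is the hard part: too low and false
    alarms abound, too high and real intrusions slip through."""
    return abs(sample - mean(baseline)) / stdev(baseline)

# A one-byte mutation of a known attack evades the signature check...
assert signature_match(b"header evil_payload trailer")
assert not signature_match(b"header evi1_payload trailer")  # 'l' -> '1'

# ...while anomaly detection flags an unusual request rate, but only
# relative to whatever threshold the operator picks.
normal_rates = [100.0, 98.0, 103.0, 99.0, 101.0]
print(anomaly_score(250.0, normal_rates))  # far above a threshold of, say, 3.0
```

The one-byte mutation slipping past the signature check, and the threshold choice in the z-score, illustrate exactly the two weaknesses the paragraph describes.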

Therefore, preventive layers will be penetrated by a determined adversary, and detection layers may or may not detect such an event. This is a very realistic scenario for today's mission-critical systems.

A Paradigm Change
Almost all research and development on cybersecurity is still aimed at preventing and detecting intrusions. This paradigm must change and U.S. government officials at the highest levels are coming to the same conclusions, as noted in a "New York Times" article about a review of the nation's cybersecurity conducted for the Obama administration by Cybersecurity Advisor Melissa Hathaway:

"As Mr. Obama's team quickly discovered, the Pentagon and the intelligence agencies both concluded in Mr. Bush's last years in office that it would not be enough to simply build higher firewalls and better virus detectors or to restrict access to the federal government's own computers."

"The fortress model simply will not work for cyber," said one senior military officer who has been deeply engaged in the debate for several years. "Someone will always get in."1

The question now is: What do we do when, not if, a system has been penetrated due to a cyberattack?

One course of action is to take an offensive approach and strike back to neutralize the threat, if the attack can be traced back to the perpetrator, whether a non-state actor or a nation-state.2 Developing an offensive capability may also serve as a deterrent — at least for nation-states, if not for terrorist organizations. However, the focus of this article is on the defense of our networked systems.

The defense-in-depth strategy requires augmenting the prevention and detection layers with the next logical mechanisms that allow systems to recover from attacks, repair the damage and reconstitute their full functional capabilities in real time or near-real time for mission-critical systems, and with minimal human involvement. Systems that have such properties have been called intrusion-tolerant systems and self-healing systems.

An intrusion-tolerant system continues to perform all critical functions and provide the user services it was designed for, even in the face of a cyberattack. A self-healing system goes further: it purges itself of malware just as a biological entity neutralizes an infection, ensuring that all compromised components are restored to an infection-free state, and it repairs all damaged databases just as a biological system heals wounds and grows new tissue. This process reconstitutes the full functional capabilities that existed prior to the attack.
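One classic mechanism behind such systems is redundancy with majority voting: replicated components process each request, a vote masks a compromised replica's corrupted output (intrusion tolerance), and the outvoted replica is restored from known-good state (self-healing). The sketch below is hypothetical; the replica and checkpoint machinery are stand-ins, not any particular fielded design:

```python
# Hypothetical sketch: N redundant replicas process each request. A majority
# vote masks a compromised replica's output (intrusion tolerance), and any
# outvoted replica is restored from a clean checkpoint (self-healing).
from collections import Counter

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.compromised = False

    def process(self, request: int) -> int:
        # A compromised replica returns a corrupted result.
        return -1 if self.compromised else request * 2

    def restore_from_checkpoint(self) -> None:
        # Stand-in for reimaging the replica from known-good state.
        self.compromised = False

def vote(replicas: list[Replica], request: int) -> int:
    results = {r: r.process(request) for r in replicas}
    majority, _ = Counter(results.values()).most_common(1)[0]
    for replica, output in results.items():
        if output != majority:              # disagreement signals compromise
            replica.restore_from_checkpoint()  # heal rather than just alert
    return majority

replicas = [Replica("a"), Replica("b"), Replica("c")]
replicas[1].compromised = True
print(vote(replicas, 21))       # 42: correct service despite the intrusion
print(replicas[1].compromised)  # False: the replica was healed
```

With three replicas, a single compromise is masked without any interruption in service, and the system returns to full strength automatically — the defining properties described above.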

Starting in 2003, several DARPA programs explored a number of novel ideas, including redundancy, artificial diversity, randomness and deception, among others. Along with Cornell University, Raytheon participated in a DARPA program to develop technology for self-regenerative systems. In 2008, Raytheon received a DARPA contract to evaluate the effectiveness of new technology for countering cyberthreats from malicious insiders. Details of DARPA's research projects are available online. Some of the fundamental concepts that came out of the DARPA programs are described in the book "Foundations of Intrusion Tolerant Systems," published in 2003 by IEEE Computer Society Press.

Until industry and government are able to design and build defect-free and vulnerability-free components, intrusions will occur, and some of them may not even be detected. For mission- and safety-critical systems, it is paramount to architect them from the ground up so that in the event of a cyberattack, they continue to function correctly, maintain data integrity and continuity of service for critical functions in real time, and reconstitute full functionality over time.

Jay Lala

1 Sanger, D.E., et al., U.S. Plans Attack and Defense in Cyberspace Warfare, "The New York Times," April 28, 2009.
2 Owens, W. A., et al., editors, "Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities," Computer Science and Telecommunications Board, National Research Council, Washington, D.C., May 2009.