ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING AT RAYTHEON

Raytheon’s customers are embracing the intelligent machine era with the knowledge that AI will profoundly affect the character of war.

The 2018 National Defense Strategy lists the military application of AI and ML as a key element of the U.S. military modernization strategy.1 Artificial Intelligence (AI) and Machine Learning (ML) will serve as enabling technologies for next-generation battle networks; human-machine collaboration and combat teaming; human-assisted operations; and network-enabled autonomous weaponry. Raytheon continues to invest strategically in AI and ML technology development, delivering and operationalizing cognified capabilities to achieve technological overmatch for the United States and its allies.

AI and ML technologies are reshaping the commercial world as well as the military. In fact, commercial industry and academia are leading much of the basic AI and ML technology development. In his book The Inevitable, futurist and internet pioneer Kevin Kelly describes the forces and inherent biases that direct the course of technology. Inexorable momentum drives these shifts, making them “inevitable.” Kelly casts these forces not as nouns but as verbs because of the relentless change inherent in modern technology: technology in a continuous state of “becoming,” rapidly evolving in ways outside the realm of human control. Among the most impactful of these forces is “Cognify,” the tendency for technology to get smarter.2 Over time, objects tend to acquire an embedded intelligence that makes them exponentially more effective. Indeed, cognified things are transformative. Like Gutenberg’s printing press and electrification, intelligent machines are transforming economics, government, healthcare and social interaction, as well as warfare.

Technological cognification has accelerated in recent years through a potent combination: ever more cost-effective, accessible and powerful computing hardware; massive volumes of available data; virtually unlimited scalability of software platforms and architectures; and maturing AI and ML algorithms and software. ML is often considered an enabler for the broader notion of AI, which includes man-machine teaming and autonomous operations. Unlike traditional rules-based analytics that are explicitly programmed, ML is programmed through learning, automating the creation of rules through “training” on sample datasets. The result of this training is a “model” that knows enough about the dataset to make an inference, or “prediction,” about statistically similar unseen data (Figure 1). ML is important because it avoids the bottleneck of traditional analytics: the cataloging and implementation of rules. Traditional approaches do not scale well for scenarios in which all the rules or situations are not known ahead of time. For example, consider object recognition in an image. An engineer, or even a team of engineers, could take years to program all the rules required to identify an object (e.g., a cat) in an image.

Figure 1: Machine Learning Development Cycle
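The cycle in Figure 1 can be made concrete with a minimal sketch. The example below, which assumes Python with scikit-learn and uses synthetic data in place of any real dataset, trains a model on labeled samples and then makes predictions on statistically similar unseen data; it illustrates the general pattern, not any particular Raytheon system.

```python
# Minimal train -> model -> predict cycle on synthetic data
# (illustrative only; not tied to any particular fielded pipeline).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Sample dataset": labeled feature vectors standing in for, e.g., image features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" automates rule creation: the model learns its own decision rules.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Prediction" (inference) on statistically similar, previously unseen data.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```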

Despite its proven utility, ML also has some idiosyncrasies. Although the fundamental algorithms can be generalized, performance is typically context- and data-specific, tightly coupled to a single solution space and to the attributes of a particular data domain. For instance, an ML object-detection algorithm for automatic target recognition (ATR) trained to identify tanks in a desert environment will likely not recognize tanks in a dense forest. The implication is that the technology does not yet provide the human-like general intelligence (or “Strong AI”) that one might see depicted in movies.
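A small, contrived experiment illustrates this brittleness. The sketch below (Python with scikit-learn; the synthetic 2-D features standing in for desert and forest imagery are purely illustrative assumptions) trains a classifier in one “environment” and evaluates it in a statistically shifted one, where accuracy collapses toward chance.

```python
# Hedged sketch of domain shift: a classifier trained on one "environment"
# degrades badly on a statistically different one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_domain(offset, n=1000):
    """Two classes of 2-D features; `offset` shifts the whole domain."""
    X0 = rng.normal(loc=[0.0 + offset, 0.0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[3.0 + offset, 3.0], scale=1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X_desert, y_desert = make_domain(offset=0.0)  # training domain
X_forest, y_forest = make_domain(offset=4.0)  # shifted deployment domain

model = LogisticRegression().fit(X_desert, y_desert)
print("In-domain accuracy:     ",
      accuracy_score(y_desert, model.predict(X_desert)))
print("Shifted-domain accuracy:",
      accuracy_score(y_forest, model.predict(X_forest)))
```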

As AI and ML are introduced into systems, especially those that perform missions of high consequence, developers and users will have to deal with a range of challenges, including opacity, perpetual upgrades, and Operational Testing and Evaluation (OT&E) (Figure 2).

Figure 2: Challenges of Machine Learning

Opacity
AI and ML align with a common trend across many of today’s advanced technologies: the danger of overwhelming complexity. Humans now build machines that we cannot fully comprehend, and AI and ML exacerbate the issue many times over.3 A compounding factor in ML systems is opacity, in which the inner mechanisms operate as a black box (“algorithmic mystery”).4 This is particularly the case with deep learning, where many layers of thousands of simulated neurons embody the network’s reasoning for functions like computer vision. Effective solutions for explaining why ML algorithms arrived at a particular answer will be a key contributor to cultivating trust, the primary roadblock to operationally effective human-intelligent machine interaction.5
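One widely used, model-agnostic way to shed light on a black-box model is permutation importance: shuffle each input feature in turn and measure how much performance degrades. The sketch below uses scikit-learn on synthetic data; it is a generic illustration of one explainability technique, not a specific Raytheon explanation method.

```python
# Permutation importance: rank features by how much shuffling each one
# degrades a trained (black-box) model's accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=8,
                           n_informative=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the accuracy drop:
# large drops mark the features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=1)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:+.3f}")
```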

Perpetual Upgrades
All analytics models, particularly ML models, have a lifecycle and eventually grow stale. Evolving mission requirements, dynamic environments and enemy countermeasures mean the effective life of a model can vary from years to days, or in extreme cases perhaps even seconds. AI technology is also becoming ever more widely accessible, further accelerating the cycle of model obsolescence. The average model lifespan will likely grow ever shorter as the AI/ML space matures and becomes ubiquitous across the battlefield. Combatants will begin with one set of algorithms, then, once the battle is underway, introduce new algorithms while evolving and deprecating the old. The cycle of this “AI arms race” will ultimately collapse into second-by-second contests of updating and evolving algorithmic models as each side attempts to counter, nullify and obfuscate the other’s capabilities. The key discriminator in this environment will be automation: automated model development, testing and deployment. Combat-effective AI capabilities of the future will have the ability to continuously update through automated mechanisms to out-sense, out-think and ultimately overmatch the intelligent machines of an adversary.6 Perpetual upgrades are the future of AI, and the most effective practitioners will embrace them.
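A minimal sketch of such an automated update loop appears below. It monitors a deployed model’s accuracy on incoming labeled batches and retrains when accuracy falls below a threshold; the synthetic data stream, drift model and trigger threshold are all illustrative assumptions, not a fielded design.

```python
# Monitor -> detect drift -> retrain -> redeploy, in miniature.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
ACCURACY_FLOOR = 0.80  # assumed retraining trigger threshold

def stream_batch(drift):
    """Synthetic labeled batch; `drift` moves the class means over time."""
    X = np.vstack([rng.normal(drift, 1, (100, 2)),
                   rng.normal(3 + drift, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    return X, y

X0, y0 = stream_batch(drift=0.0)
model = SGDClassifier(random_state=0).fit(X0, y0)  # initial deployment

for step in range(1, 6):
    X, y = stream_batch(drift=step * 1.5)        # environment keeps changing
    acc = accuracy_score(y, model.predict(X))    # monitor the deployed model
    print(f"step {step}: accuracy {acc:.2f}")
    if acc < ACCURACY_FLOOR:                     # drift detected ...
        model = SGDClassifier(random_state=0).fit(X, y)  # ... retrain/redeploy
        print(f"step {step}: model retrained")
```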

Operational Testing and Evaluation (OT&E)
Conventional OT&E methods and techniques are wholly inadequate for AI/ML systems. Beyond the opacity challenge, intelligent machines, especially AI systems of systems, are inherently complex, non-deterministic systems.7 Unanticipated emergent behavior, indeterminate test results, dynamic behavior adaptation and Black Swan events (i.e., rare, unexpected, high-consequence events) all complicate AI OT&E. Future operationally effective capabilities will demand a re-imagined, novel approach to OT&E, fertile ground for innovative solutions.

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING AT RAYTHEON

Raytheon has performed significant research and development (R&D) in machine learning over the last decade, much of it at Raytheon BBN Technologies.8 Spanning theoretical work to field-deployed applications, this activity has addressed multiple problem areas, from natural language processing to network flows and cybersecurity. Examples include Raytheon BBN’s Multimedia Monitoring System (M3S), a deployed application that uses numerous machine learning solutions to give analysts a unified interface with direct access to diversified media, including television, web and social media (Figure 3), and the Strategy Optimizer, which uses machine learning to adaptively reconfigure radio and network stacks to maintain consistent communications on mobile ad hoc networks (MANETs).

Figure 3: Multimedia Monitoring System (M3S) Display of Entity Network Analysis

In addition to Raytheon innovation in AI and ML, a primary driver of innovation in the field is the commercial sector. In the U.S. alone, the number of AI startups has increased by a factor of 14 since 2000, while annual venture capital investment in AI has increased six-fold over the same period.9 The result is a flourishing and rapidly evolving ecosystem of machine intelligence innovation that would be difficult to match within the defense industry alone. Commercial AI/ML technology and academic research should be viewed not as competition but as an important opportunity for partnership, integration and collaboration. A key Raytheon role in the intelligent machine era will be as the integrator of cutting-edge AI/ML capabilities, increasingly sourced and adapted from the commercial market. Inserting AI capabilities into the Department of Defense (DoD) and Intelligence Community (IC) requires proven domain expertise to make those capabilities operationally effective. Raytheon’s role will include not only fundamental research into core technologies but also the integration and operationalization of AI/ML for overall mission effectiveness.

OUR FEATURE ARTICLES

Today, ML is automating what was once a costly and time-consuming analysis of artifacts and data to generate new intelligence. One example is handwritten notes, or “pocket litter.” Because it is potentially more secure and immune to electronic surveillance, handwriting is increasingly becoming a preferred communication method, often placed over existing written or typewritten pages (Figure 4). Dr. Darrell Young’s article, “Detection, Extraction, and Language Classification of Handwriting,” describes an approach that uses a variety of ML classification techniques to achieve good results on simulated handwriting in 16 languages, including Chinese, Cyrillic and Arabic.

Figure 4: Sample handwriting over text and picture

Another automation example is Christine Nezda’s article, “Machine Learning to Determine Patterns of Life,” which describes an approach to modeling the state of an entity, such as an aircraft or maritime vessel, based on historical observations including location, speed, maintenance cycle and other attributes. These Patterns of Life (PoL) are widely applicable across domains and can be used for route prediction, anomaly detection and targeted event characterization. The article also discusses the underlying software framework, a horizontally scalable approach adapted for true big data analytics.
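As a toy illustration of the PoL idea (not the method in the article), one can learn transition frequencies between discretized locations from historical tracks, then use them both to predict likely next locations and to flag transitions that are rare under the learned model. The track data below is entirely hypothetical.

```python
# First-order Markov sketch of a Pattern of Life over discretized locations.
from collections import Counter, defaultdict

# Historical tracks as sequences of location cells (illustrative data).
tracks = [
    ["port", "channel", "open_sea", "fishing_ground", "open_sea", "port"],
    ["port", "channel", "open_sea", "fishing_ground", "open_sea", "port"],
    ["port", "channel", "open_sea", "port"],
]

transitions = defaultdict(Counter)
for track in tracks:
    for a, b in zip(track, track[1:]):
        transitions[a][b] += 1

def transition_prob(a, b):
    """Empirical probability of moving from cell a to cell b."""
    total = sum(transitions[a].values())
    return transitions[a][b] / total if total else 0.0

# Route prediction: most likely next cell given the current one.
print("after 'channel':", transitions["channel"].most_common(1))

# Anomaly detection: a never-observed transition gets probability 0.
print("P(channel -> restricted_zone) =",
      transition_prob("channel", "restricted_zone"))
```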

Raytheon is continually innovating with AI/ML for product improvement and reliability. Dr. Kim Kukurba’s DREAMachine concept analyzes factory data to predict future defects, highlight the leading factors behind them and identify redundant testing. Dr. Kukurba’s article, “Machine Learning in the Factory,” describes both supervised and unsupervised ML approaches for automated defect reduction, a key enabler of operational effectiveness and cost containment. These goals are shared by Michael Salpukas in his article, “Raytheon Predictive Maintenance (RPM),” which discusses methods for predicting hardware failures to increase system availability and reduce the costs associated with current preventive and reactive maintenance approaches. Using discrete machine learning and parameter tuning methods, RPM provides real-time anomaly detection on multiple radar and other system datasets.
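In the same spirit, though not reproducing RPM itself, the following sketch shows generic unsupervised anomaly detection on simulated equipment telemetry using an isolation forest; the feature set, values and contamination rate are illustrative assumptions.

```python
# Generic anomaly detection on simulated telemetry (not the RPM method).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Simulated healthy telemetry: [temperature, vibration, current draw].
healthy = rng.normal(loc=[60.0, 0.2, 5.0], scale=[2.0, 0.05, 0.3],
                     size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=7).fit(healthy)

# New readings: one nominal, one with elevated temperature and vibration.
readings = np.array([[61.0, 0.22, 5.1],
                     [75.0, 0.90, 5.4]])
for reading, flag in zip(readings, detector.predict(readings)):
    status = "ANOMALY - schedule inspection" if flag == -1 else "nominal"
    print(reading, "->", status)
```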

Detecting emergent threat behaviors is the subject of Dr. Shubha Kadambe’s article, “PRoactiVE emergiNg Threat detection (PREVENT).” PREVENT is based on a dynamic stochastic network consisting of sparse super nodes and dense local nodes. It detects emergent group threat behaviors by computing and evaluating an instability metric of the stochastic network, and it provides operators early warning so they can further analyze the identified participants (or actors) involved in the activity. The approach has been tested and evaluated under multiple use cases, including several littoral scenarios and oil rig activities.

A key enabler of many Raytheon capabilities is automatic target recognition. Mark Berlin and Matt Young’s article, “Automatic Target Recognition (ATR) Systems,” provides an overview of a core technology that interprets data far faster than human analysts can. The article discusses the challenges of ATR, related research in academia and commercial industry, and emerging advanced approaches, including deep learning. ATR algorithms are often applied to use cases where limited sample data requires new training techniques, such as Generative Adversarial Networks (GANs), to teach the ML algorithms. GANs are also one of the main topics of “Strategies for Rapid Prototyping Machine Learning” by Steve Israel, Philip Sallee, Franklin Tanner, Jon Goldstein and Shane Zabel. Supervised training on large, labeled datasets has enabled most of the recent advances in AI and ML; a lack of sufficient data, however, has precluded the use of ML in many defense applications. GANs offer a method of training with sparse datasets, allowing ML applications to be used in previously unsuitable customer mission environments.
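The adversarial training idea behind GANs can be sketched in a few dozen lines. The toy example below (PyTorch, assumed here as the framework; a 1-D Gaussian stands in for the scarce real dataset) pits a generator, which learns to mimic the real data, against a discriminator, which learns to tell real from synthetic.

```python
# Minimal GAN on toy 1-D data (illustrative, not an ATR model).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Sparse" real dataset: a few hundred samples from N(4, 1.25).
real_data = 4.0 + 1.25 * torch.randn(256, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(256, 1), torch.zeros(256, 1)

for step in range(2000):
    # Discriminator step: push real toward label 1, generated toward 0.
    fake = G(torch.randn(256, 8))
    loss_d = bce(D(real_data), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    fake = G(torch.randn(256, 8))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f} (target: 4.00, 1.25)")
```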

We are at the beginning of the intelligent machine era, which promises widespread evolution and exponential boosts in defense capabilities. Raytheon will continue to partner with its DoD and IC customers in the further development and integration of operationally effective AI/ML applications to ensure mission success.

– Darryl Nelson

1 Mattis, J. (2018). Summary of the 2018 National Defense Strategy of the United States of America. Retrieved from U.S. Department of Defense: https://www.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf

2 Kelly, K. (2016). The Inevitable: Understanding the 12 Technological Forces that will Shape Our Future. New York, New York: Viking, Penguin Random House.

3 Arbesman, S. (2016). Overcomplicated: Technology at the Limits of Comprehension. New York, New York: Current, Penguin Random House.

4 Knight, W. (2017, April 11). The Dark Secret at the Heart of AI. Retrieved from MIT Technology Review: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai

5 Polonski, V. (2018, January 10). People Don’t Trust AI–Here’s How We Can Change That. Retrieved from Scientific American: https://www.scientificamerican.com/article/people-dont-trust-ai-heres-how-we-can-change-that

6 Stoica, I., Song, D., Popa, R. A., Patterson, D. A., Mahoney, M. W., Katz, R. H., & Abbeel, P. (2017, October 16). A Berkeley View of Systems Challenges for AI. Retrieved from Electrical Engineering and Computer Sciences, University of California at Berkeley: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-159.html

7 Firesmith, D. (2017, January 9). The Challenges of Testing in a Non-Deterministic World. Retrieved from Carnegie Mellon University Software Engineering Institute: https://insights.sei.cmu.edu/sei_blog/2017/01/the-challenges-of-testing-in-a-non-deterministic-world.html

8 See for instance, Ilana Heintz, “Machine Learning Applications,” Technology Today, 2017 Issue 1.

9 Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., Manyika, J., & LeGassick, C. (n.d.). 2017 AI Index Report. Retrieved from Artificial Intelligence Index: http://cdn.aiindex.org/2017-report.pdf