AUTOMATIC TARGET RECOGNITION SYSTEMS

In today’s battlefield, there are a large number and variety of platforms and systems, many having multiple sensors that create a picture of the battlespace from all angles and across the electromagnetic spectrum (Figure 1). But while the U.S. and allied militaries have invested heavily in producing this vast quantity of data, the data alone is useless unless it is interpreted to extract relevant details and actionable items.

Figure 1: Sensors gather and share data across the battlespace

Traditionally, human beings have been employed to interpret this data; for example, a shipboard operator monitoring one or more radar screens, a pilot viewing a head-up display (HUD), or even a soldier looking through a gun sight. And although humans excel at analyzing and interpreting data, our faculties are limited: operators become fatigued, pilots may be unable to respond to every alert and notification, and soldiers can be distracted by activity in their surrounding environment.

In today’s world, data is created much faster than available human resources can effectively use it. Automatic Target Recognition (ATR) is a technology designed to enhance the utility of military systems by interpreting data faster and more accurately than human analysis alone.

Intelligence, Surveillance & Reconnaissance (ISR) platforms produce a constant stream of data. The rate at which platforms and sensors are being deployed in this domain outpaces the rate at which human analysts can be trained and mobilized. Moreover, since each analyst can only review a small subset of data, it is possible to miss the big picture provided by all platforms surveilling an area.

As the capabilities of threats to engage and destroy aircraft increase, maintaining air dominance in the battlespace becomes more challenging. Oftentimes, an aircraft’s survival and mission success depend on mere seconds in a pilot’s decision-making process. While early detection and early warning systems help, as enemy capabilities proliferate so do the technologies to defend against them. As a result, HUDs and cockpit control systems are becoming increasingly complex and data intensive, making it difficult for pilots to execute within the critical short timeframes required.

Today’s weapons must fly faster, farther, and hit more precise aimpoints than ever before. Asymmetric and urban warfare require precision guidance, while Anti-Access/Area Denial capabilities demand weapons be launched from farther away, with less opportunity for pilot guidance. Capabilities beyond fire-and-forget are required, and often missiles must be launched before a target’s location has been clearly identified.

ATR systems specifically address these challenges, helping to meet warfighter needs and maximizing the utility of Raytheon-built sensors, platforms, weapons, and ground-based systems.

THE CHALLENGE IS REAL

Humans can readily interpret an outdoor scene or photograph, and may easily take for granted or underestimate the complexity of the underlying problem. To interpret an image, you must identify and differentiate spatially varying intensity patterns resulting from a highly complex transformation of sensor-specific phenomenology, viewing geometry, surface materials, and environmental conditions. One may think of image interpretation as “natural,” but the quantity and quality of training data required to develop this capability is significant. For example, babies are presented with a nearly continuous stream of labeled training data from their parents, siblings, and even TV shows, all designed to teach letters, animals, vehicles and other images. A young child will see photographs of dogs, drawings of dogs, cartoons of dogs, and likely interact with a dog in the process of learning and understanding “dog.” Even with this amount of training input, it takes years of trial and error for children to become skilled at identifying objects in a complex scene with many distractors; for example, finding a specific vehicle in the parking lot depicted in Figure 2.

Figure 2: Identifying the correct object amidst many very similar objects can be a challenge

Conversely, ATR algorithms are expected to reach the same level of recognition performance with only a small number of samples collected over a limited set of conditions. This is why classical ATR approaches typically behave well only when the operating conditions are very similar to the training data. When objects of interest are embedded in complex environments, such as an urban landscape, or when techniques are applied to disguise signatures or deceive the system, classical algorithms often fall apart quickly. In these situations, templates may no longer match; characteristics extracted from principal component analysis or other decomposition methods may no longer be present or might have been altered; and hand-crafted features, while robust, are limited by the imagination of the designer and frequently cannot distinguish similar objects.
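This failure mode can be made concrete with a small sketch of classical template matching. Normalized cross-correlation (NCC) is a common classical matching measure; the image and template below are random placeholders, and the sizes are illustrative assumptions rather than any fielded algorithm.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide a template across an image and score every position with
    normalized cross-correlation (NCC); scores near 1.0 indicate a
    close match, and scores drop sharply when the signature changes."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                scores[r, c] = (p * t).sum() / denom
    return scores

# Placeholder data: a real system would use calibrated sensor imagery.
scores = normalized_cross_correlation(np.random.rand(64, 64),
                                      np.random.rand(16, 16))
best_row, best_col = np.unravel_index(scores.argmax(), scores.shape)
```

Because the score depends on the template pixel-for-pixel, camouflage, occlusion or an unmodeled viewing angle lowers it directly, which is exactly why template methods degrade outside their training conditions.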

EMERGING ADVANCED APPROACHES

A number of different approaches exist for building ATR systems. One method relies on the creation of a database containing three-dimensional (3D) computer-aided design (CAD) models of targets to be identified by the ATR algorithms. These models are rendered into two dimensions (2D), taking into account sensor-specific phenomenological effects and viewing geometry to produce synthetic 2D signatures that are scanned across an image to find the best match. Although this is a purely physics-based approach and theoretically requires no measured data for training, the 3D models require CAD modeling expertise and are expensive to build. Additionally, some amount of measured data is needed for calibration purposes.
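The model-based pipeline can be sketched as a loop over target hypotheses and viewing geometries. In the sketch below, render_silhouette is a hypothetical stand-in for a sensor-specific, physics-based renderer (which a real system would supply); the model names, angle steps and chip size are illustrative assumptions.

```python
import numpy as np

def render_silhouette(model_id, azimuth_deg, elevation_deg, size=16):
    """Hypothetical placeholder for a physics-based renderer that projects
    a 3D CAD model into a 2D signature for one viewing geometry. Here it
    returns a dummy pattern so the loop runs end to end."""
    seed = hash((model_id, azimuth_deg, elevation_deg)) % (2 ** 32)
    return np.random.default_rng(seed).random((size, size))

def best_match(image_chip, model_ids, az_step=30, el_step=30):
    """Correlate rendered signatures against an image chip and return the
    (model, azimuth, elevation) hypothesis with the highest score."""
    chip = (image_chip - image_chip.mean()) / (image_chip.std() + 1e-9)
    best_score, best_hypothesis = -np.inf, None
    for model in model_ids:
        for az in range(0, 360, az_step):
            for el in range(0, 90, el_step):
                t = render_silhouette(model, az, el, size=chip.shape[0])
                t = (t - t.mean()) / (t.std() + 1e-9)
                score = (chip * t).mean()  # normalized correlation
                if score > best_score:
                    best_score, best_hypothesis = score, (model, az, el)
    return best_hypothesis, best_score

# Illustrative target labels; real systems match against a curated CAD library.
hypothesis, score = best_match(np.random.rand(16, 16), ["tank", "truck"])
```

The physics and calibration live entirely inside the renderer, which is where the CAD modeling expertise and measured calibration data mentioned above come in.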

Recently, deep learning algorithms have been applied to ATR systems. These algorithms are automatically trained, using large amounts of data, to learn differences between target signatures of interest — eliminating the need for human phenomenological or 3D modeling expertise.
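In contrast to the matching loops above, a deep learning ATR learns its own features from labeled signature chips. The sketch below is a minimal, illustrative classifier written in PyTorch, assuming single-channel 32 x 32 chips and five target classes; it is not a fielded design.

```python
import torch
import torch.nn as nn

class TargetClassifier(nn.Module):
    """Minimal convolutional classifier for single-channel target chips."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TargetClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy tensors; the labeled measured
# chips these stand in for are the scarce, expensive ingredient.
chips, labels = torch.randn(8, 1, 32, 32), torch.randint(0, 5, (8,))
loss = loss_fn(model(chips), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that no phenomenology or geometry is encoded anywhere; everything the network knows must come from the training set, which motivates the data problem discussed next.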

There is rarely enough measured training data available for deep learning ATRs to adequately cover all possible imaging conditions of interest. One approach for generating large amounts of synthetic data, sufficiently similar to measured data for deep learning algorithms to be effectively trained, makes use of Generative Adversarial Networks (GANs). GANs are an unsupervised machine learning approach utilizing two competing network models, one discriminative and the other generative.1 A topic of active interest across the entire deep learning community, GANs are being investigated and adapted across Raytheon under ongoing Independent Research and Development (IR&D) efforts.
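The cited paper’s core idea fits in a few lines: a generator maps random noise to synthetic signatures while a discriminator learns to tell them from measured ones, and each is trained against the other. The sketch below is a minimal GAN training step on flattened dummy vectors; the layer sizes, batch size and learning rates are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generator: noise vector -> synthetic signature (flattened, 256 values).
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                  nn.Linear(128, 256), nn.Tanh())
# Discriminator: signature -> real/fake logit.
D = nn.Sequential(nn.Linear(256, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 256)   # stand-in for measured signature chips
noise = torch.randn(32, 64)

# Discriminator step: push measured data toward "real" and
# generated data toward "fake".
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: update G so the discriminator scores its output as real.
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Once trained, the generator can emit large volumes of synthetic signatures spanning conditions that were never measured, which is precisely the training-data gap identified above.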

Pure model-based ATRs and Deep Learning-based approaches are two extremes of the continuum shown in Figure 3. The model-based approach maximizes encoding of a priori target information by humans, whereas a pure deep learning approach requires no a priori information but requires significant measured training data. Between these two approaches, we believe there are hybrid algorithms that pay some cost in providing limited a priori information but with the important benefit of reducing the amount of training data needed.

Figure 3: Emerging Automatic Target Recognition (ATR) approaches incorporate rapid 3D target modeling and Deep Learning
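One plausible point on this continuum, sketched below under stated assumptions rather than as any specific Raytheon algorithm, is to pretrain a classifier on abundant synthetic signatures rendered from 3D models and then fine-tune only its final layer on a small measured set; the network sizes and data tensors here are illustrative placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 128), nn.ReLU(),  # feature layers
    nn.Linear(128, 5),                   # final classification layer
)
loss_fn = nn.CrossEntropyLoss()

def train(model, params, chips, labels, steps=100, lr=1e-3):
    """Run a few optimization steps on whichever parameters are passed in."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        loss = loss_fn(model(chips), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 1: many synthetic chips (dummy tensors stand in for rendered data,
# i.e., the a priori information supplied by the 3D models).
train(model, model.parameters(),
      torch.randn(512, 1, 32, 32), torch.randint(0, 5, (512,)))

# Stage 2: only a handful of measured chips; freeze the feature layers
# and adapt just the final layer.
for p in model[1].parameters():
    p.requires_grad = False
train(model, model[3].parameters(),
      torch.randn(16, 1, 32, 32), torch.randint(0, 5, (16,)), steps=20)
```

The synthetic pretraining carries the a priori target information, so the measured set only has to close the gap between synthetic and real signatures.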

Increasingly, the distinguishing capability for military success is knowledge of the battlespace, and ATR provides real-time knowledge across a broader swath of data than was previously possible with human operators alone. Raytheon’s investment in ATR technology is building a foundation for the future of military combat. We are developing advanced machine learning algorithms capable of being trained with limited data and deployed on computationally limited platforms. Integrated into Raytheon’s product offerings, they will be trusted to provide the right answers even in complex and adversarial environments.

– Mark Berlin
– Matt Young

1 Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y., “Generative Adversarial Nets,” Département d’informatique et de recherche opérationnelle, Université de Montréal, Montréal, QC H3C 3J7.