Imaging Tasks
When someone asks about an imaging system, one of the most common questions is: “How far can it see?” While this might seem like a straightforward question, the answer is more complex than it appears.
To address this, the question must first be broken down into manageable components that can be defined and measured. Typically, these components are categorized as Detection, Recognition, and Identification. Once these tasks are clearly defined, maximum ranges for a given target can be determined. However, the meaning of some of these tasks may not be as obvious as it first appears. Furthermore, how do you quantify how well these tasks can be performed?
Detection
Detection simply indicates the presence of an object, but the object may be too small or too far away to visually identify. According to the classical “Johnson Criteria” (a set of criteria from the 1950s that, while dated in some respects, remains useful), detection typically corresponds to an image in which the target spans only 2-3 pixels. This is essentially a “dot on the screen,” alerting the viewer that something exists, but the image is not detailed enough for further analysis. The maximum range at which the observer can discern these pixels is considered the maximum detection range for that target.
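The “pixels on target” idea above can be sketched with a simple pinhole-camera model: the target's angular size divided by a single pixel's angular subtense (IFOV) gives the number of pixels the target spans. The function and all sensor parameters below are hypothetical example values, not from the source.

```python
def pixels_on_target(target_size_m, range_m, focal_length_m, pixel_pitch_m):
    """Number of pixels a target spans in one dimension (pinhole model)."""
    # Instantaneous field of view of one pixel, in radians
    ifov_rad = pixel_pitch_m / focal_length_m
    # Angular size of the target (small-angle approximation)
    angular_size_rad = target_size_m / range_m
    return angular_size_rad / ifov_rad

# Example: a 1.8 m tall human viewed at 5 km with a 200 mm lens
# and 15 micron pixel pitch (all illustrative numbers)
px = pixels_on_target(1.8, 5000.0, 0.200, 15e-6)
print(round(px, 1))  # -> 4.8, enough for detection but not recognition
```

Real predictions add many effects this sketch ignores (contrast, atmosphere, display, observer), but the geometric pixel count is the starting point the Johnson Criteria build on.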
Recognition
Recognition goes beyond detection and involves distinguishing between different classes of targets that are approximately similar in size. A classic example from ground surveillance is differentiating between a human and a deer. While recognizing significantly different-sized targets might seem like an easy task, the focus here is on recognizing target classes that are similar in size. To accomplish recognition, more image detail is required than for detection—essentially, more pixels on the target. A common standard for recognition is that the target should have at least 6-8 pixels in one dimension. However, this number can vary, and further discussion will explore some of the nuances.
Identification
Identification is often a source of confusion. It does not require confirming the exact identity of a target, nor does it involve processes like facial recognition. Instead, identification means that the image contains enough detail for the observer to distinguish subtle differences within the same class of targets. For example, in ground surveillance, identifying whether a human is carrying a rifle or a shovel involves recognizing small differences within the target class. To accomplish identification, even more image detail is required than for recognition—this translates to more pixels on the target. Typically, identification is defined as requiring around 12-14 pixels in one dimension of the target, though this number can vary. As with recognition, understanding the nuances of these measurements is crucial for making accurate conclusions about which tasks can be performed and how well they can be accomplished at various ranges.
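Taken together, the three pixel thresholds can be inverted to estimate a maximum range for each task: fix the required pixel count and solve for the range at which the target just meets it. The following is a minimal sketch under the same hypothetical pinhole-camera assumptions as before; the sensor parameters and the specific thresholds chosen (2, 6, and 12 pixels) are illustrative, not definitive values.

```python
def max_task_range(target_size_m, required_pixels, focal_length_m, pixel_pitch_m):
    """Longest range at which the target still spans `required_pixels` pixels."""
    ifov_rad = pixel_pitch_m / focal_length_m  # angular subtense of one pixel
    return target_size_m / (required_pixels * ifov_rad)

# Hypothetical sensor: 200 mm lens, 15 micron pitch; 1.8 m target dimension
for task, pixels in [("detection", 2), ("recognition", 6), ("identification", 12)]:
    r = max_task_range(1.8, pixels, 0.200, 15e-6)
    print(f"{task}: {r / 1000:.1f} km")
# -> detection: 12.0 km, recognition: 4.0 km, identification: 2.0 km
```

Note how the ranges fall roughly in proportion to the pixel requirements: demanding six times the detail of detection cuts the identification range to one sixth, before any atmospheric or contrast effects are considered.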
Other Factors to Consider
While the definitions of detection, recognition, and identification provide a starting point, many additional factors influence the ability to perform these tasks effectively. These include target contrast, background, viewing angle, and atmospheric conditions. Understanding the differences between these imaging tasks nonetheless gives the conversation about “how far can it see?” a common footing.