Imaging Tasks

    When someone asks about an imaging system, one of the first questions is usually “How far can it see?” While this seems like a simple question, the answer is more involved than one might think. To start with, the question itself must be broken down into manageable pieces that can be defined and measured; commonly, these are the tasks of Detection, Recognition, and Identification. Once those tasks are defined, maximum ranges (for a given target) can be determined. However, the meaning of some of these tasks is not as obvious as it may seem. Furthermore, how do you quantify how well you can accomplish them?

    To begin, the categories themselves must be defined. The definitions given here are not scientific definitions; they are practical ones, meant to be easily understood. Defining these terms is the purpose of this white paper. Other topics, such as how to quantify performance on these tasks and the many other factors involved, will be covered in later papers.

    Detection

    Detection indicates that there is something out there, but it is too small (too far away) to make out. From the classical “Johnson Criteria” (old-school criteria from the 1950s, but still useful, to a point), this means something like 2-3 pixels; this is essentially a “dot on the screen” that tells the viewer something is out there, but the image of the target is too small to reveal any details. The maximum range at which the observer can discern those pixels is said to be the maximum detection range for the given target.
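
    The pixel counts above follow from simple geometry: under a pinhole (small-angle) approximation, the size of the target’s image on the sensor is the target size times the focal length, divided by the range. The Python sketch below is a minimal illustration of that geometry; the lens, pixel pitch, and target values are hypothetical assumptions chosen for illustration, not figures from this paper, and the model ignores real-world effects such as optics blur, contrast, and atmosphere.

    ```python
    def pixels_on_target(target_size_m, range_m, focal_length_m, pixel_pitch_m):
        """Pixels subtended by a target, pinhole/small-angle approximation.

        Pure geometry: image size on sensor = target size * focal length / range.
        Ignores optics blur, atmospheric effects, and target contrast.
        """
        image_size_m = target_size_m * focal_length_m / range_m
        return image_size_m / pixel_pitch_m

    # Hypothetical example: a 1.8 m tall human viewed at 4 km through a
    # 100 mm lens onto a sensor with 15 micron pixels.
    print(pixels_on_target(1.8, 4000.0, 0.100, 15e-6))  # ~3 pixels: detection
    ```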

    Recognition

    This is usually defined as being able to discern differences between classes of targets that are substantially different, though of approximately similar size. A classic example (from ground surveillance) is being able to tell the difference between a human and a deer. While comparing targets of radically different sizes may allow the viewer to draw a clear distinction between them, it doesn’t really fit the common definition, which is to distinguish target types among similarly sized targets. To recognize the target class, you clearly need more detail in the image than you need for detection, which means more pixels on target. A common standard for the recognition task is at least 6-8 pixels across one dimension of the target, though that number can vary; it, along with other nuances, will be discussed later.
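
    Inverting the geometry from the earlier sketch gives the maximum range at which a target still subtends a required number of pixels, which is how first-order range estimates of this kind are typically made. A minimal sketch, reusing the same hypothetical sensor and target values:

    ```python
    def max_range_m(target_size_m, pixels_required, focal_length_m, pixel_pitch_m):
        """Maximum range at which the target still spans the required pixel
        count (same pinhole approximation as the earlier sketch)."""
        return target_size_m * focal_length_m / (pixels_required * pixel_pitch_m)

    # Recognition at the 6-pixel end of the criterion, for a 1.8 m target,
    # a 100 mm lens, and 15 micron pixels (all hypothetical values):
    print(max_range_m(1.8, 6, 0.100, 15e-6))  # 2000.0 m
    ```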

    Identification

    This definition is sometimes confusing. It does not mean that you must confirm the unique identity of a target, and it does not imply facial recognition or anything of the sort. Rather, it means that the image shows enough detail that an observer can distinguish moderately small differences within the same class of targets. For ground surveillance, the classic example is being able to distinguish between a human carrying a rifle and a human carrying a shovel. Clearly, more detail of the target must be visible to make this determination, which means even more pixels on target than the recognition task requires. Identification is often defined as requiring around 12-14 pixels across one dimension of the target. As with the recognition task, there are nuances to the number of pixels needed and to how that measurement is made; failing to understand (and properly apply) those nuances can lead to very inaccurate conclusions about which imaging task can be performed, and how well, at any given range.
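
    Putting the three criteria side by side makes the trade-off concrete: because range scales inversely with the pixel requirement, each more demanding task cuts the usable range roughly in half. A self-contained sketch, again with hypothetical sensor and target values and one value taken from each pixel range quoted above:

    ```python
    # Hypothetical sensor and target (same assumed values as the earlier sketches).
    focal_length_m = 0.100   # 100 mm lens
    pixel_pitch_m = 15e-6    # 15 micron pixels
    target_size_m = 1.8      # human-sized target

    # Lower end of each pixel criterion discussed in this paper.
    for task, pixels in [("Detection", 3), ("Recognition", 6), ("Identification", 12)]:
        range_m = target_size_m * focal_length_m / (pixels * pixel_pitch_m)
        print(f"{task:>14}: {range_m:5.0f} m")

    # Detection: 4000 m, Recognition: 2000 m, Identification: 1000 m
    ```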

    Of course, many other factors are involved in performing these tasks, including target contrast, background, viewing angle, and atmospheric conditions, but understanding the differences between the imaging tasks provides a starting point for the conversation.