Infrared weak and small target detection method and detection system

A dim and small target detection method and system, applied in the field of image processing, which addresses problems such as poor real-time performance and the limitations of existing approaches.

Pending Publication Date: 2020-10-30
CHANGCHUN INST OF OPTICS FINE MECHANICS & PHYSICS CHINESE ACAD OF SCI
Cites: 6 · Cited by: 4

AI-Extracted Technical Summary

Problems solved by technology

Detection theory based on image sequences estimates the target through correlation across the sequence images. An important premise of this class of algorithms is consistency of the target and the background between consecutive frames, together with some prior information about the target. These premises and this prior information limit the application of this type of algorithm.

Method used

The infrared dim and small target detection method provided by the present invention calculates the Facet directional derivative features of the original image, computes a relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map, fuses the relative range contrast saliency maps over all directions to obtain a saliency image, and then extracts the target from the saliency image. The method has a simple model structure and low computational complexity, can be conveniently approximated with convolution kernels, and offers a high signal-to-clutter ratio gain and strong background suppression; the correct detection probability for dim and small targets exceeds 98%, and the algorithm has good real-time performance.

Abstract

The invention provides an infrared dim and small target detection method and detection system. The detection method comprises the following steps: calculating the Facet directional derivative features of an original image; calculating, within a local region of the Facet directional derivative feature map, a relative range contrast saliency map along the current direction; fusing the relative range contrast saliency maps over all directions to obtain a saliency image; and extracting the target from the saliency image. The method and system have a simple model structure and low computational complexity, can be conveniently approximated with convolution kernels, offer a high signal-to-clutter ratio gain and strong background suppression, achieve a correct detection probability for dim and small targets of 98% or above, and have good real-time performance.

Application Domain

Image enhancement · Image analysis

Technology Topic

Approximate computing · Saliency map (+6)


Examples

  • Experimental programs (2)

Example Embodiment

[0055] Example 1
[0056] Referring to Fig. 1, the infrared dim and small target detection method provided by Embodiment 1 of the present invention includes the following steps:
[0057] Step S110: Calculate the Facet directional derivative features of the original image.
[0058] In some preferred embodiments, the step of calculating the Facet directional derivative features of the original image includes the following steps:
[0059] Based on the Facet model, a bivariate cubic polynomial f(r,c) is fitted to the gray-intensity surface in the 5×5 neighborhood of the original image, where r and c are the row and column coordinates within the 5×5 neighborhood and K_i are the fitting coefficients:
[0060]
[0061] The derivative features in the 0° direction, the 90° direction, and an arbitrary α° direction are computed as:
[0062]
[0063] where K_i is obtained by fast convolution of the original image I with the convolution kernel w_i; the w_i are as follows:
[0064]
[0065] The Facet directional derivative features of the original image are computed from these expressions.
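The feature-extraction step can be sketched as follows. The patent's w_i masks are not reproduced in this excerpt, so this sketch assumes the standard 5×5 least-squares Facet masks for the two first-order coefficients (K2 along rows, K3 along columns) and a cos/sin combination for the α-direction derivative; the names `w_k2` and `w_k3` are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def facet_directional_derivative(image, alpha_deg):
    """First-order Facet directional derivative of a grayscale image.

    ASSUMPTION: the 5x5 least-squares Facet masks for the first-order
    coefficients K2 (row direction) and K3 (column direction) are used,
    since the patent's w_i masks are not shown in this excerpt.
    """
    img = np.asarray(image, dtype=float)
    r = np.arange(-2, 3).reshape(5, 1)        # row offsets -2..2
    c = np.arange(-2, 3).reshape(1, 5)        # column offsets -2..2
    w_k2 = np.repeat(r, 5, axis=1) / 50.0     # sum of r^2 over the 5x5 window = 50
    w_k3 = np.repeat(c, 5, axis=0) / 50.0

    k2 = convolve(img, w_k2, mode='nearest')  # row-direction coefficient
    k3 = convolve(img, w_k3, mode='nearest')  # column-direction coefficient

    a = np.deg2rad(alpha_deg)
    # 0 deg uses K2 alone, 90 deg uses K3 alone, arbitrary alpha mixes both
    return k2 * np.cos(a) + k3 * np.sin(a)
```

On a linear ramp the magnitude of the derivative along the ramp direction is the slope, and it vanishes in the perpendicular direction, which is a quick sanity check for the masks (the sign depends on the convolution-vs-correlation convention).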
[0066] Step S120: Calculate the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map.
[0067] In some preferred embodiments, the step of calculating the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map specifically includes:
[0068] Let Facet_α be the first-order derivative image along the α direction, with Facet_α(r,c) the gray value at the center point p(r,c), and D_R the neighborhood image within distance R of p(r,c);
[0069] Denote the set of gray values along the α direction passing through p(r,c) as Line(D_R,α); let frontLine(D_R,α) be the set of gray values in D_R taken along the α-direction line before the point p(r,c), and backLine(D_R,α) the set of gray values in D_R taken along the α-direction line after the point p(r,c);
[0070] Compute the maximum MaxVar(D_R,α) of the set frontLine(D_R,α), the minimum MinVar(D_R,α) of the set backLine(D_R,α), and the mean MeanVar(D_R,α) of the set Line(D_R,α); the local relative range along the α direction at the center point p(r,c) is then:
[0071]
[0072] The saliency measure is defined as follows:
[0073] C(p,D_R,α) = exp(RR(p,D_R,α));
[0074] The local relative range formula along the α direction is approximated as:
[0075]
[0076] where mean[·] denotes the gray-level mean of a set;
[0077] Let M_(range,α) be the image formed by the values mean[frontLine(D_R,α)] − mean[backLine(D_R,α)]; M_(range,α) can then be obtained by the following convolution:
[0078] M_(range,α) = Facet_α ⊗ k_(range,α),
[0079] where k_(range,α) is 1 on the first half of the α-direction line, −1 on the second half, and 0 elsewhere; the kernels for the other directions follow by analogy.
[0080] For example, for a 7×7 kernel with α = 45°, k_(range,α) is as follows:
[0081]
[0082] M_(mean,α) can be obtained by the following convolution:
[0083] M_(mean,α) = Facet_α ⊗ k_(mean,α),
[0084] where k_(mean,α) is 1 along the α-direction line and 0 elsewhere; the kernels for the other directions follow by analogy.
[0085] For example, for a 7×7 kernel with α = 45°, k_(mean,α) is:
[0086]
[0087] Dividing M_(range,α) by M_(mean,α) pixel by pixel and applying a natural-exponential stretch yields the saliency value of each pixel.
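Step S120's convolutions can be sketched as below. The kernel construction is limited here to the four principal directions (0°, 45°, 90°, 135°), and the `eps` guard against division by zero is an assumption; dividing the kernel sums by the half-line and full-line lengths implements the mean[frontLine] − mean[backLine] difference and line mean described above.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernels(size=7, alpha_deg=45):
    """Build k_(range,alpha) and k_(mean,alpha) as described in the text:
    +1 on the half-line before the centre and -1 after it for k_range,
    all 1s on the line for k_mean, 0 elsewhere.
    Sketch for the principal directions 0/45/90/135 deg only."""
    half = size // 2
    k_range = np.zeros((size, size))
    k_mean = np.zeros((size, size))
    a = np.deg2rad(alpha_deg)
    dr, dc = int(round(np.sin(a))), int(round(np.cos(a)))  # unit step along the line
    for t in range(-half, half + 1):
        r, c = half + t * dr, half + t * dc
        k_mean[r, c] = 1.0
        if t < 0:
            k_range[r, c] = 1.0       # first half of the line
        elif t > 0:
            k_range[r, c] = -1.0      # second half of the line
    return k_range, k_mean

def direction_saliency(facet_alpha, alpha_deg, size=7, eps=1e-6):
    """Relative-range contrast saliency along one direction:
    pixel-wise ratio of the two line convolutions, then exp stretch."""
    k_range, k_mean = line_kernels(size, alpha_deg)
    half = size // 2
    m_range = convolve(facet_alpha, k_range / half, mode='nearest')  # mean[front]-mean[back]
    m_mean = convolve(facet_alpha, k_mean / size, mode='nearest')    # line mean
    return np.exp(m_range / (np.abs(m_mean) + eps))                  # eps: assumed guard
```

On a constant derivative image the front/back means cancel, so the saliency is exp(0) = 1 everywhere, which is a useful sanity check.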
[0088] Step S130: Fuse the relative range contrast saliency maps over all directions to obtain a saliency image.
[0089] The step of fusing the relative range contrast saliency maps over all directions to obtain a saliency image specifically comprises:
[0090] using the following formula to fuse the relative range contrast saliency maps over all directions into a saliency image:
[0091]
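The fusion formula itself is not reproduced in this excerpt; a pixel-wise maximum over the per-direction maps is a common choice for this kind of fusion and is assumed in this sketch.

```python
import numpy as np

def fuse_saliency(direction_maps):
    """Fuse per-direction saliency maps into one saliency image.

    ASSUMPTION: the patent's fusion formula is not shown in this
    excerpt; a pixel-wise maximum over the direction maps is used here.
    """
    return np.maximum.reduce([np.asarray(m, dtype=float) for m in direction_maps])
```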
[0092] Step S140: Extract the target from the saliency image.
[0093] The step of extracting the target from the saliency image specifically includes applying Gaussian smoothing filtering and threshold segmentation to the saliency image.
[0094] Specifically, the threshold segmentation is realized by the following formula:
[0095]
[0096] where κ is the limiting factor, thr is the segmentation threshold, and T is the segmented image; the value 1 marks the pixel region where the small target is located, and 0 marks the background.
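Step S140 can be sketched as follows. Since the segmentation formula is not reproduced in this excerpt, the common adaptive rule thr = mean + κ·std is assumed here, with κ the limiting factor from the text and σ of the Gaussian filter an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_targets(saliency, kappa=4.0, sigma=1.0):
    """Gaussian smoothing followed by threshold segmentation.

    ASSUMPTION: the exact threshold formula is not shown in this
    excerpt; thr = mean + kappa * std is a common adaptive rule.
    """
    smoothed = gaussian_filter(np.asarray(saliency, dtype=float), sigma=sigma)
    thr = smoothed.mean() + kappa * smoothed.std()
    # T = 1 marks candidate small-target pixels, 0 marks the background
    return (smoothed > thr).astype(np.uint8)
```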
[0097] Fig. 2 and Fig. 3 show the actual detection results, provided by Embodiment 1 of the present invention, for an infrared dim and small target against a complex sky background and against the sea surface, respectively. The above infrared dim and small target detection method uses a simple model with low computational complexity that can be conveniently approximated with convolution kernels. Tests on the relevant data sets show that the method achieves a high signal-to-clutter ratio gain, strong background suppression, a high probability of correct detection of dim and small targets, and excellent real-time performance.
[0098] The infrared dim and small target detection method provided by the present invention calculates the Facet directional derivative features of the original image, calculates the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map, fuses the relative range contrast saliency maps over all directions to obtain a saliency image, and then extracts the target from the saliency image. The method has a simple model structure and low computational complexity, can be conveniently approximated with convolution kernels, offers a high signal-to-clutter ratio gain and strong background suppression, achieves a correct detection probability for dim and small targets above 98%, and has good real-time performance.

Example Embodiment

[0099] Example 2
[0100] Referring to Fig. 4, the infrared dim and small target detection system provided by the present invention includes: a feature extraction module 110, which calculates the Facet directional derivative features of the original image; a saliency value acquisition module 120, which calculates the relative range contrast saliency map along the current direction within the Facet directional derivative feature map; an image fusion module 130, which fuses the relative range contrast saliency maps over all directions to obtain a saliency image; and a target extraction module 140, which extracts the target from the saliency image.
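The four-module pipeline can be sketched as a thin wrapper that wires the modules together; the module callables and the set of directions are assumptions for illustration, standing in for the modules described above.

```python
import numpy as np

class DimTargetDetector:
    """Sketch of the four-module system: feature extraction (110),
    per-direction saliency (120), fusion (130), target extraction (140).

    ASSUMPTION: module implementations are injected as callables and
    the direction set defaults to the four principal directions.
    """

    def __init__(self, extract_feature, saliency, fuse, extract_target,
                 directions=(0, 45, 90, 135)):
        self.extract_feature = extract_feature    # module 110
        self.saliency = saliency                  # module 120
        self.fuse = fuse                          # module 130
        self.extract_target = extract_target      # module 140
        self.directions = directions

    def detect(self, image):
        # one Facet derivative image and one saliency map per direction,
        # then fusion and target extraction
        maps = [self.saliency(self.extract_feature(image, a), a)
                for a in self.directions]
        return self.extract_target(self.fuse(maps))
```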
[0101] The working methods of each module are described in detail below.
[0102] In some preferred embodiments, the feature extraction module 110 fits, based on the Facet model, a bivariate cubic polynomial f(r,c) to the gray-intensity surface in the 5×5 neighborhood of the original image, where r and c are the row and column coordinates within the 5×5 neighborhood and K_i are the fitting coefficients:
[0103]
[0104] The derivative features in the 0° direction, the 90° direction, and an arbitrary α° direction are computed as:
[0105]
[0106] where K_i is obtained by fast convolution of the original image I with the convolution kernel w_i; the w_i are as follows:
[0107]
[0108] The Facet directional derivative features of the original image are computed from these expressions.
[0109] The saliency value acquisition module 120 calculates the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map, which specifically includes:
[0110] Let Facet_α be the first-order derivative image along the α direction, with Facet_α(r,c) the gray value at the center point p(r,c), and D_R the neighborhood image within distance R of p(r,c);
[0111] Denote the set of gray values along the α direction passing through p(r,c) as Line(D_R,α); let frontLine(D_R,α) be the set of gray values in D_R taken along the α-direction line before the point p(r,c), and backLine(D_R,α) the set of gray values in D_R taken along the α-direction line after the point p(r,c);
[0112] Compute the maximum MaxVar(D_R,α) of the set frontLine(D_R,α), the minimum MinVar(D_R,α) of the set backLine(D_R,α), and the mean MeanVar(D_R,α) of the set Line(D_R,α); the local relative range along the α direction at the center point p(r,c) is then:
[0113]
[0114] The saliency measure is defined as follows:
[0115] C(p,D_R,α) = exp(RR(p,D_R,α));
[0116] The local relative range formula along the α direction is approximated as:
[0117]
[0118] where mean[·] denotes the gray-level mean of a set;
[0119] Let M_(range,α) be the image formed by the values mean[frontLine(D_R,α)] − mean[backLine(D_R,α)]; M_(range,α) can then be obtained by the following convolution:
[0120] M_(range,α) = Facet_α ⊗ k_(range,α),
[0121] where k_(range,α) is 1 on the first half of the α-direction line, −1 on the second half, and 0 elsewhere; the kernels for the other directions follow by analogy.
[0122] For example, for a 7×7 kernel with α = 45°, k_(range,α) is as follows:
[0123]
[0124] M_(mean,α) can be obtained by the following convolution:
[0125] M_(mean,α) = Facet_α ⊗ k_(mean,α),
[0126] where k_(mean,α) is 1 along the α-direction line and 0 elsewhere; the kernels for the other directions follow by analogy.
[0127] For example, for a 7×7 kernel with α = 45°, k_(mean,α) is:
[0128]
[0129] Dividing M_(range,α) by M_(mean,α) pixel by pixel and applying a natural-exponential stretch yields the saliency value of each pixel.
[0130] The image fusion module 130 fuses the relative range contrast saliency maps over all directions to obtain a saliency image, specifically by using the following formula:
[0131]
[0132] The target extraction module 140 applies Gaussian smoothing filtering and threshold segmentation to the saliency image to extract the targets.
[0133] Specifically, the threshold segmentation is realized by the following formula:
[0134]
[0135] where κ is the limiting factor, thr is the segmentation threshold, and T is the segmented image; the value 1 marks the pixel region where the small target is located, and 0 marks the background.
[0136] Fig. 2 and Fig. 3 show the actual detection results, provided by Embodiment 2 of the present invention, for an infrared dim and small target against a complex sky background and against the sea surface, respectively. The above infrared dim and small target detection approach uses a simple model with low computational complexity that can be conveniently approximated with convolution kernels. Tests on the relevant data sets show that it achieves a high signal-to-clutter ratio gain, strong background suppression, a high probability of correct detection of dim and small targets, and excellent real-time performance.
[0137] The infrared dim and small target detection system provided by the present invention calculates the Facet directional derivative features of the original image, calculates the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map, fuses the relative range contrast saliency maps over all directions to obtain a saliency image, and then extracts the target from the saliency image. The system has a simple model structure and low computational complexity, can be conveniently approximated with convolution kernels, offers a high signal-to-clutter ratio gain and strong background suppression, achieves a correct detection probability for dim and small targets above 98%, and has good real-time performance.
