[0055] Embodiment 1
[0056] Referring to Figure 1, the infrared dim and small target detection method provided by Embodiment 1 of the present invention includes the following steps:
[0057] Step S110: Calculate the facet directional derivative features of the original image.
[0058] In some preferred embodiments, the step of calculating the facet directional derivative feature of the original image includes the following steps:
[0059] Based on the Facet model, a bivariate cubic polynomial f(r,c) is fitted to the gray intensity surface in each 5×5 neighborhood of the original image. The expression is as follows, where r and c are the row and column coordinates within the 5×5 neighborhood and K_i are the fitting coefficients:
[0060] f(r,c) = K_1 + K_2·r + K_3·c + K_4·r² + K_5·r·c + K_6·c² + K_7·r³ + K_8·r²·c + K_9·r·c² + K_10·c³
[0061] The first-order derivative features along the 0° direction, the 90° direction, and an arbitrary direction α are calculated as:
[0062] f'_0 = K_3, f'_90 = K_2, f'_α = K_3·cos α + K_2·sin α (evaluated at the window center (r,c) = (0,0), with α measured from the 0° row direction)
[0063] where each coefficient K_i can be obtained as a fast convolution of the original image I with a fixed kernel w_i; the kernels w_i are as follows:
[0064]
[0065] The facet directional derivative features of the original image are calculated from these expressions.
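To make the kernel-based computation concrete, the following minimal Python sketch derives the fitting kernels w_i numerically by least squares over the 5×5 window and forms a directional derivative map. The basis ordering and angle convention follow the reconstructed formulas above, and all function names are illustrative, not from the patent.

import numpy as np
from scipy.ndimage import correlate

def facet_kernels(size=5):
    """Least-squares fitting kernels w_i for the bivariate cubic Facet
    model: K_i is obtained by correlating the image with w_i."""
    half = size // 2
    rr, cc = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1), indexing="ij")
    r, c = rr.ravel().astype(float), cc.ravel().astype(float)
    # Cubic basis ordered as K_1 .. K_10 in the polynomial above.
    B = np.stack([np.ones_like(r), r, c, r**2, r*c, c**2,
                  r**3, r**2*c, r*c**2, c**3], axis=1)
    W = np.linalg.pinv(B)                      # shape (10, size*size)
    return [w.reshape(size, size) for w in W]

def facet_directional_derivative(image, alpha_deg):
    """First-order Facet directional derivative map along direction alpha."""
    w = facet_kernels()
    k2 = correlate(image.astype(float), w[1])  # coefficient of r
    k3 = correlate(image.astype(float), w[2])  # coefficient of c
    a = np.deg2rad(alpha_deg)
    return k3 * np.cos(a) + k2 * np.sin(a)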
[0066] Step S120: Calculate the relative range contrast saliency map along the current direction within a local region of the Facet directional derivative feature map.
[0067] In some preferred embodiments, the step of calculating the relative range contrast saliency map along the current direction within a local region of the facet directional derivative feature map specifically includes:
[0068] Let Facet_α be the first-order derivative image along the α direction, let Facet_α(r,c) be the gray value at the center point p(r,c), and let D_R be the neighborhood of p(r,c) with radius R;
[0069] Denote the set of gray values taken along the α direction through p(r,c) as Line(D_R,α); let frontLine(D_R,α) be the set of gray values within D_R taken along the α line before the point p(r,c), and let backLine(D_R,α) be the set of gray values within D_R taken along the α line after the point p(r,c);
[0070] Calculate the maximum value MaxVar(D_R,α) of the set frontLine(D_R,α), the minimum value MinVar(D_R,α) of the set backLine(D_R,α), and the mean value MeanVar(D_R,α) of the set Line(D_R,α); the local relative range along the α direction at the center point p(r,c) is then:
[0071] RR(p, D_R, α) = (MaxVar(D_R,α) − MinVar(D_R,α)) / MeanVar(D_R,α)
[0072] The saliency measure is defined as follows:
[0073] C(p, D_R, α) = exp(RR(p, D_R, α));
[0074] The local relative range along the α direction is approximated as:
[0075] RR(p, D_R, α) ≈ (mean[frontLine(D_R,α)] − mean[backLine(D_R,α)]) / mean[Line(D_R,α)]
[0076] where mean[·] denotes the gray mean value of the set;
[0077] Let M_(range,α) be the image formed by the values mean[frontLine(D_R,α)] − mean[backLine(D_R,α)]; then M_(range,α) can be obtained by the following convolution formula:
[0078] M_(range,α) = Facet_α ∗ k_(range,α), where ∗ denotes convolution;
[0079] where k_(range,α) is 1 only on the first half of the line in the α direction, −1 on the second half, and 0 elsewhere; the kernels for the other directions follow by analogy;
[0080] For example, for the 7×7 case with α = 45°, k_(range,α) is as follows:
[0081]
 0  0  0  0  0  0  1
 0  0  0  0  0  1  0
 0  0  0  0  1  0  0
 0  0  0  0  0  0  0
 0  0 -1  0  0  0  0
 0 -1  0  0  0  0  0
-1  0  0  0  0  0  0
[0082] M_(mean,α) can be obtained by the following convolution formula:
[0083] M_(mean,α) = Facet_α ∗ k_(mean,α),
[0084] where k_(mean,α) is 1 only on the line in the α direction and 0 elsewhere; the kernels for the other directions follow by analogy;
[0085] For example, for the 7×7 case with α = 45°, k_(mean,α) is:
[0086]
0 0 0 0 0 0 1
0 0 0 0 0 1 0
0 0 0 0 1 0 0
0 0 0 1 0 0 0
0 0 1 0 0 0 0
0 1 0 0 0 0 0
1 0 0 0 0 0 0
[0087] M_(range,α) is divided by M_(mean,α) pixel by pixel, and natural exponential stretching is applied to obtain the saliency value of the corresponding pixel.
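The following Python sketch illustrates this step under stated assumptions: four principal directions, a "first half" convention taken from the 45° example matrices above, and a small eps added to keep the pixel-wise division stable. None of these details are fixed by the text, and the names are illustrative.

import numpy as np
from scipy.ndimage import correlate

def direction_kernels(size=7, alpha_deg=45):
    """Build k_(range,alpha) and k_(mean,alpha) for one of the four
    principal directions; the sign convention for the two half-lines
    follows the 45-degree example matrices above (an assumption)."""
    half = size // 2
    k_range = np.zeros((size, size))
    k_mean = np.zeros((size, size))
    steps = {0: (0, 1), 45: (1, -1), 90: (1, 0), 135: (1, 1)}
    dr, dc = steps[alpha_deg]
    for t in range(-half, half + 1):
        r, c = half + t * dr, half + t * dc
        k_mean[r, c] = 1.0
        if t < 0:
            k_range[r, c] = 1.0    # first half of the line
        elif t > 0:
            k_range[r, c] = -1.0   # second half of the line
    return k_range, k_mean

def directional_saliency(facet_alpha, size=7, alpha_deg=45, eps=1e-6):
    """Divide M_(range,alpha) by M_(mean,alpha) pixel by pixel, then apply
    exponential stretching: C = exp(M_range / M_mean)."""
    k_range, k_mean = direction_kernels(size, alpha_deg)
    m_range = correlate(facet_alpha, k_range)  # mean[front] - mean[back], up to scale
    m_mean = correlate(facet_alpha, k_mean)    # mean[Line], up to scale
    return np.exp(m_range / (m_mean + eps))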
[0088] Step S130: Fuse the relative range contrast saliency maps of all directions to obtain a saliency image.
[0089] The step of fusing the relative range contrast saliency maps of all directions to obtain a saliency image specifically includes:
[0090] using the following formula to fuse the relative range contrast saliency maps of all directions into a saliency image:
[0091]
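The fusion formula itself is not reproduced in this text. Purely as an illustrative stand-in, the sketch below fuses the per-direction maps by a pixel-wise product; this is an assumption, not the patent's formula.

import numpy as np

def fuse_directions(saliency_maps):
    """Fuse per-direction saliency maps into one saliency image.
    ASSUMPTION: the patent's fusion formula is not reproduced above, so a
    pixel-wise product is used only as a stand-in; it rewards pixels that
    are salient in every direction, as true small targets tend to be."""
    fused = np.ones_like(saliency_maps[0])
    for s in saliency_maps:
        fused = fused * s
    return fused

# Illustrative usage with the sketches above:
# maps = [directional_saliency(facet_directional_derivative(img, a), alpha_deg=a)
#         for a in (0, 45, 90, 135)]
# S = fuse_directions(maps)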
[0092] Step S140: Extract the target from the saliency image.
[0093] The step of extracting the target from the saliency image specifically includes: applying Gaussian smoothing filtering and threshold segmentation to the saliency image to extract the target.
[0094] Specifically, the threshold segmentation is realized by the following formula:
[0095] T(r,c) = 1, if S(r,c) > thr; T(r,c) = 0, otherwise (S is the smoothed saliency image)
[0096] where κ is a limiting factor, thr is the segmentation threshold, and T is the segmented image; a value of 1 marks the pixel region where the small target is located, and 0 marks the background region.
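A minimal sketch of Step S140 follows. Since the threshold formula is not reproduced above, the common adaptive form thr = mean + κ·std is assumed as a stand-in, with κ playing the role of the limiting factor; it is not the patent's exact expression.

import numpy as np
from scipy.ndimage import gaussian_filter

def extract_targets(saliency, kappa=3.0, sigma=1.0):
    """Gaussian smoothing followed by threshold segmentation.
    ASSUMPTION: thr = mean + kappa * std stands in for the patent's
    threshold formula, which is not reproduced in this text."""
    s = gaussian_filter(saliency, sigma=sigma)
    thr = s.mean() + kappa * s.std()
    # T = 1 marks the small-target region, 0 marks the background.
    return (s >= thr).astype(np.uint8)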
[0097] Referring to Figure 2 and Figure 3, these are, respectively, the actual detection results of Embodiment 1 for infrared dim and small targets against a complex sky background and against a sea-surface background. The above infrared dim and small target detection method uses a simple model with low computational complexity, and it can be approximated with convolution kernels, which is very convenient. Testing on relevant data sets shows that this method achieves a high signal-to-clutter ratio gain and strong background suppression ability, a high probability of correct detection of dim and small targets, and excellent real-time performance.
[0098] The infrared dim and small target detection method provided by the present invention calculates the facet directional derivative features of the original image, calculates the relative range contrast saliency map along the current direction within a local region of each Facet directional derivative feature map, fuses the relative range contrast saliency maps of all directions to obtain a saliency image, and then extracts the target from the saliency image. The method has a simple model structure and low computational complexity, can be approximated with convolution kernels so that the calculation is very convenient, provides high signal-to-clutter ratio gain and background suppression ability, achieves a correct detection probability of dim and small targets above 98%, and has good real-time performance.