Radar image-based flyer target identifying and tracking method

A target recognition and radar image technology, applied in the field of image processing, which addresses the absence of systematic, mature methods and practices for bird target recognition and tracking, and achieves a reduced false alarm rate, broad application prospects, and compliance with real-time requirements.

Inactive Publication Date: 2010-04-21
BEIHANG UNIV +1

AI-Extracted Technical Summary

Problems solved by technology

The airport bird detection radar system is an effective technical means of preventing bird strikes, but there is no systematic, mature method or practice for bird target recognition and tracking.


Abstract

The invention relates to a radar image-based flying-bird target identification and tracking method, which comprises the following steps: subtracting a background image from a bird detection radar PPI image to obtain a background-difference radar image; performing clutter suppression on the background-difference radar image; performing target information extraction and multi-target tracking; and realizing target identification and tracking by data fusion. The method fills a gap in the field of radar bird detection in China, provides technical support for the bird strike prevention work of civil aviation, is also suitable for processing other low-altitude, low-speed radar target signals, has broad application prospects, and can be used to guide airport staff in bird-expelling operations.

Application Domain

Radio wave reradiation/reflection




Example Embodiment

[0029] The following describes in detail the radar image-based bird target recognition and tracking method provided by the present invention with reference to the drawings and embodiments.
[0030] The bird target recognition and tracking method provided by the present invention is implemented through the following steps:
[0031] Step 1: Background difference.
[0032] The background difference refers to subtracting a background image from a bird radar PPI image, and the background image is constructed by an average value method or a principal component analysis method.
[0033] The average method is the most commonly used and simplest background construction method. It is suited to scenes where targets dwell briefly and appear infrequently. The background is reconstructed from each frame according to:
[0034] B_k = \frac{1}{N}\,(f_k + f_{k-1} + \cdots + f_{k-N+1}) = B_{k-1} + \frac{1}{N}\,(f_k - f_{k-N}) \qquad (1)
[0036] In the formula, N is the number of images used to reconstruct the background, B_k is the reconstructed background image, B_{k-1} is the background image constructed at the previous frame, and f_k is the k-th frame image. In the present invention, the background information is reconstructed at a fixed interval (every 5-10 minutes).
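A minimal sketch (not from the patent) of the recursive form of equation (1), assuming grayscale PPI frames held as NumPy arrays in a ring buffer; the frame size and value of N are illustrative:

```python
import numpy as np

def update_background(bg_prev, frame_new, frame_oldest, N):
    """Recursive moving-average background of equation (1):
    B_k = B_{k-1} + (f_k - f_{k-N}) / N."""
    return bg_prev + (frame_new.astype(float) - frame_oldest.astype(float)) / N

# Keep the last N frames in a ring buffer; update per frame (or every
# 5-10 minutes, as the text suggests).
N = 8
frames = [np.random.rand(512, 512) for _ in range(N)]   # stand-in PPI frames
bg = np.mean(frames, axis=0)                            # initial background B_k
new_frame = np.random.rand(512, 512)
bg = update_background(bg, new_frame, frames.pop(0), N)
frames.append(new_frame)
diff = new_frame - bg                                   # background-difference image
```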
[0037] Principal component analysis (PCA) is a linear dimensionality reduction method that projects high-dimensional data into a low-dimensional subspace; it is a powerful tool for extracting features from high-dimensional data sets. PCA is an orthogonal transformation of the coordinate system describing the observed data: it seeks a small number of uncorrelated new variables, formed as linear combinations of the original variables, while retaining as much of the information in the input data set as possible. Each frame of the radar image contains the background and moving targets (flying birds), so the background can be regarded as the largest principal component of the image sequence. Accordingly, the principal components of the bird radar PPI image sequence are computed as follows:
[0038] (1) Compute the sample covariance matrix S of the radar image sequence: let x_1, ..., x_n be the n one-dimensional observation vectors obtained by flattening the radar image data, form the observation sample matrix X from them, and compute its covariance matrix S;
[0039] (2) Compute the eigenvalues of the covariance matrix S and their corresponding eigenvectors;
[0040] (3) Sort the eigenvalues in descending order;
[0041] (4) Select the eigenvector corresponding to the largest eigenvalue as the largest principal component and reshape it back into a two-dimensional radar PPI image; this is the background image of the bird detection radar.
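A sketch of steps (1)-(4). Because the full pixel-by-pixel covariance matrix would be intractable, the small n x n Gram matrix is used, and it is left uncentered so that the dominant component captures the static background; the scaling of the reshaped component is an assumption, not fixed by the patent:

```python
import numpy as np

def pca_background(frames):
    """Background as the largest principal component of the image sequence
    (steps (1)-(4)). Uses the small n x n Gram matrix; uncentered, so the
    dominant component captures the static background."""
    h, w = frames[0].shape
    X = np.stack([f.ravel().astype(float) for f in frames])  # n x (h*w)
    S = X @ X.T / len(frames)              # n x n second-moment (Gram) matrix
    vals, vecs = np.linalg.eigh(S)         # eigenvalues in ascending order
    u = vecs[:, -1]                        # eigenvector of the largest eigenvalue
    v = X.T @ u                            # corresponding pixel-space component
    v /= np.linalg.norm(v)
    scale = (X @ v).mean()                 # average projection of the frames on v
    return (scale * v).reshape(h, w)       # reshape back into a 2-D PPI image

frames = [np.random.rand(256, 256) + 5.0 for _ in range(10)]  # stand-in sequence
background = pca_background(frames)
```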
[0042] The background image is subtracted from the bird radar PPI image to obtain the background-difference radar image.
[0043] Step 2: Perform clutter suppression on the radar image after background difference.
[0044] After background difference, the dominant background information has been removed, but the image still contains considerable clutter, especially edge clutter. Constant false alarm rate (CFAR) threshold segmentation and morphological methods are therefore needed for clutter suppression.
[0045] CFAR threshold segmentation automatically derives the detection threshold from changes in the background clutter power so as to keep the false alarm rate constant; it is a radar signal processing method that supplies detection thresholds. CFAR threshold segmentation is implemented by a CFAR detector, whose structure is shown in Figure 2. The detector comprises reference units 1, a detection unit 2, protection units 3, a comparator 4 and a multiplier 5. The reference units 1, detection unit 2 and protection units 3 together occupy N+M+1 cells: the first N/2 and the last N/2 cells are reference units 1, the middle cell is the detection unit 2, and on each side of the detection unit M/2 protection units 3 separate it from the reference units 1. Signals enter the detector cells serially. The CFAR detector forms a relative estimate Z of the background intensity from the N reference unit signals; the estimation method depends on the CFAR variant adopted, and the background intensity may be estimated by averaging, ordered statistics, or trimmed averaging. In the multiplier 5, the estimate Z is multiplied by a threshold weighting coefficient T to obtain the decision threshold TZ. The threshold weighting coefficient T is usually determined by the following formula:
[0046] T = P_{f0}^{-1/N} - 1
[0047] where P_{f0} denotes the false alarm rate. In the comparator 4, the decision threshold TZ is compared with the detection unit signal to make the decision: if the signal strength of the detection unit exceeds TZ, it is declared a target; otherwise it is clutter.
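A minimal one-dimensional cell-averaging CFAR sketch matching the detector of Figure 2; the reference/guard window sizes and the false alarm rate are illustrative assumptions:

```python
import numpy as np

def ca_cfar_1d(signal, n_ref=16, n_guard=4, pfa=1e-3):
    """Cell-averaging CFAR along one scan line. n_ref reference cells in
    total (n_ref/2 per side), n_guard guard cells in total (n_guard/2 per
    side), as in the detector structure of Figure 2."""
    N = n_ref
    T = pfa ** (-1.0 / N) - 1.0           # threshold coefficient, as in the text
    half = n_ref // 2 + n_guard // 2
    det = np.zeros_like(signal, dtype=bool)
    for i in range(half, len(signal) - half):
        lead = signal[i - half : i - n_guard // 2]          # leading reference cells
        lag = signal[i + n_guard // 2 + 1 : i + half + 1]   # lagging reference cells
        Z = (lead.sum() + lag.sum()) / N                    # background estimate Z
        det[i] = signal[i] > T * Z                          # compare with threshold TZ
    return det

line = np.random.exponential(1.0, 512)    # stand-in square-law power samples
line[200] += 40.0                         # an injected target
hits = ca_cfar_1d(line)
```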
[0048] The radar image segmented by the CFAR threshold is a binary image in which targets appear bright; it gives an initial delineation of the target regions. This binary image is further processed morphologically to remove clutter regions containing few pixels.
[0049] Morphological processing is based on two elementary operations, erosion and dilation. These operations and their combinations can be used to analyze and process the shape and structure of an image, including image segmentation, feature extraction, boundary detection, image filtering and restoration.
[0050] 1) Image erosion
[0051] Erosion eliminates the boundary points of a target, shrinking the boundary inward; it removes targets smaller than the structuring element, so by choosing structuring elements of different sizes, targets of different sizes can be eliminated. If two objects are joined by a thin connection, erosion can separate them. The mathematical expression for erosion is
[0052] S = X \ominus B = \{(x, y) \mid B_{xy} \subseteq X\} \qquad (2)
[0053] In the formula, S is the set of binary image pixels after erosion, and B is the structuring element used for erosion; each element of B takes the value 0 or 1, can form a figure of arbitrary shape, and has a defined center point. X is the pixel set of the binarized original image. The formula means that S, obtained by eroding X with B, is the set of positions of B's center point at which B is completely contained in X.
[0054] For the binary radar image segmented by the CFAR threshold, the structuring element B is dragged across the radar image domain X, with a horizontal step of 1 pixel and a vertical step of 1 scan line. At each position, with the center of B translated to a point (x, y) of X, the pixel (x, y) is retained only if every pixel of the structuring element matches the corresponding pixel of the neighborhood centered at (x, y); pixels that fail this condition are deleted from the original image, shrinking the object boundary inward. Erosion thus removes the periphery of a region while retaining its interior.
[0055] 2) Image dilation
[0056] Dilation has the opposite effect of erosion: it expands the boundary points of a binary target, merging into the target all background points in contact with the target region and pushing the boundary outward. If two targets are close together, dilation may connect them. Dilation is useful for filling holes left in targets after image segmentation. The mathematical expression for dilation is
[0057] S = X \oplus B = \{(x, y) \mid B_{xy} \cap X \neq \varnothing\} \qquad (3)
[0058] The formula means that S, obtained by dilating X with B, is the set of positions of B's center point at which the translated B overlaps X in at least one pixel.
[0059] For the binary radar image segmented by the CFAR threshold, the structuring element B is dragged across the radar image domain X, with a horizontal step of 1 pixel and a vertical step of 1 scan line. At each position, if the structuring element B intersects the target region in at least one pixel, the pixel (x, y) is retained, expanding the target boundary outward. Dilation thus enlarges the periphery of a region while retaining its interior.
[0060] In actual radar image processing, dilation and erosion are usually used in combination: an image often undergoes a sequence of dilations and erosions, whose number may be chosen freely, using the same or different structuring elements.
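Plain-NumPy sketches of equations (2) and (3); the 3x3 structuring element and the closing opening operation (erosion then dilation) are illustrative choices:

```python
import numpy as np

def erode(img, se):
    """Binary erosion, equation (2): keep (x, y) only if the structuring
    element, centered there, lies entirely inside the target set X."""
    h, w = img.shape
    sh, sw = se.shape
    py, px = sh // 2, sw // 2
    padded = np.pad(img, ((py, py), (px, px)), constant_values=0)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y : y + sh, x : x + sw]
            out[y, x] = np.all(window[se == 1] == 1)   # B_xy fully inside X
    return out

def dilate(img, se):
    """Binary dilation, equation (3): keep (x, y) if the structuring element
    overlaps the target set X in at least one pixel."""
    h, w = img.shape
    sh, sw = se.shape
    py, px = sh // 2, sw // 2
    padded = np.pad(img, ((py, py), (px, px)), constant_values=0)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y : y + sh, x : x + sw]
            out[y, x] = np.any(window[se == 1] == 1)   # B_xy meets X
    return out

se = np.ones((3, 3), dtype=np.uint8)           # a 3x3 structuring element
binary = (np.random.rand(64, 64) > 0.9).astype(np.uint8)
opened = dilate(erode(binary, se), se)         # erosion then dilation (opening)
```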
[0061] Step 3: Perform target information extraction.
[0062] The binary image obtained after background difference and clutter suppression is then subjected to target information extraction. First, the independent regions in the image must be identified, since several may be present and each must be distinguished. The extracted bird target information includes, but is not limited to, the number of birds, bird size, coordinate position, and flight speed.
[0063] An example of a partial region of a binary image is shown in Figure 3, where A denotes the target region, O denotes the background, and the four-connectivity criterion is adopted for labeling. Since scanning proceeds in a fixed order, for any current point the point above it and the point to its left have already been scanned; thus when a point P in a target region is encountered, its upper and left neighbors have already been labeled. The label of P is determined from these two neighbors, according to the following cases:
[0064] (a) When the upper point and the left point are both background O, assign P a new label;
[0065] (b) When one of the upper and left points is O and the other is labeled, assign P the same label as the labeled neighbor;
[0066] (c) When both the upper and left points are labeled, assign P the same label as the left point.
[0067] According to the above three rules, after the first scan all target regions are labeled, as in Figure 4, with labels 1, 2, 3, 4, .... At this point the same target region may carry several different labels, so a second scan is needed to unify them: all 4-connected pixels belong to the same target and must share one label, so the labels shown in Figure 5 denote the same target region. The same holds for 8-connected labeling, in which the label of any point P is determined from the upper, upper-left, upper-right and left points of the current point P: if all four are background, P receives a new label; if one of them is labeled, P takes that label; and if two or more of the four neighbors are labeled, P takes the label of the left or upper-right point. The second scan unifies the labels within each target region, so that every target receives a single, complete label; a sketch of the two-pass procedure follows.
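A sketch of the two-pass 4-connectivity labeling, using a union-find table to record the label equivalences arising under rule (c); the test image is illustrative:

```python
import numpy as np

def label_4conn(binary):
    """Two-pass 4-connectivity labeling following rules (a)-(c): the first
    pass assigns provisional labels from the upper and left neighbors and
    records equivalences; the second pass unifies labels per region."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                                    # union-find for equivalences

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path compression
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:              # rule (a): new label
                labels[y, x] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif up and left:                      # rule (c): take left label,
                labels[y, x] = left                # record that up == left
                parent[find(up)] = find(left)
            else:                                  # rule (b): copy the one label
                labels[y, x] = up or left
    for y in range(h):                             # second pass: unify labels
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [1, 1, 0, 0]], dtype=np.uint8)
print(label_4conn(img))   # two connected regions, each with a single label
```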
[0068] The target information extraction method takes as input a binary image containing white regions of a certain area; a set of interconnected white pixels is called a white region. When extracting target information, the 8-connectivity criterion is first used to identify the target regions, such as the three disconnected regions A, B and C shown in Figure 6, as follows:
[0069] 1) Scan pixel by pixel from left to right, from top to bottom;
[0070] 2) If the upper-left, upper, upper-right and left pixels of the current point are not targets, increment the label index by 1 and set the corresponding array value to 1;
[0071] 3) Denote pixels by (row, column) coordinates. If a pixel (1, 1) is encountered as target A, examine in turn whether the upper-right point (0, 2), the upper point (0, 1), the upper-left point (0, 0) and the left point (1, 0) are targets; the priority order, descending, is upper-right (0, 2), upper (0, 1), upper-left (0, 0) and left (1, 0).
[0072] 4) If the upper-right point is a target, label the current point with the same value as the upper-right point. For example: for the current point (2, 2), the upper-right point (1, 3) is a target, so the current point (2, 2) is labeled with the same value as (1, 3).
[0073] 5) If the upper-right point is not a target, examine the upper point. For example: for the current point (5, 4), the upper-right point (4, 5) is not a target, but the upper point (4, 4) is, so the current point (5, 4) is labeled with the same value as (4, 4).
[0074] 6) Likewise, if neither the upper-right nor the upper point of the current point is a target, examine the upper-left point in the same way, and if the upper-left point is not a target, examine the left point.
[0075] 7) If, for example, the upper-right point (0, 9), the upper point (0, 8), the upper-left point (0, 7) and the left point (1, 7) of the current point (1, 8) are all non-targets, the current point receives a new label, incremented from the previous one, to distinguish it from existing targets.
[0076] 8) One special adjustment is needed. From Figure 6, (10, 2) is a newly labeled point; for the current point (10, 3), the upper-right point (9, 4) and the left point (10, 2) carry different labels while the upper and upper-left points are not targets. The current point (10, 3) is then labeled with the same value as the upper-right point (9, 4), and the image is rescanned from the beginning so that all pixels carrying the label of (10, 2) are relabeled with the value of (9, 4); the pixel-count array entry of the upper-right label is increased by the number of converted pixels, and the entry of the left label is set to 0. After every pixel of the image has been labeled, the pixel values of each target region equal its label; summing the pixels of each label gives the pixel count n of each target region. The center of the PPI image is defined as the coordinate origin, with the x-axis horizontal to the right and the y-axis vertical upward. The center coordinates (x_0, y_0) are obtained from equation (2) below, where S is the connected region of a single target, and the range coefficient C reflects the actual distance represented by each pixel under the selected range setting.
[0077] x_0 = C \sum_{(x,y) \in S} x / n, \qquad y_0 = C \sum_{(x,y) \in S} y / n \qquad (2)
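Building on the labeling sketch above, a possible extraction of the pixel count n and center coordinates (x_0, y_0) per equation (2); the image-center origin and range coefficient C follow the text, while the function and field names are illustrative:

```python
import numpy as np

def extract_measurements(labels, range_coeff=1.0):
    """Pixel count n and center coordinates (x0, y0) of each labeled region,
    per equation (2): the PPI image center is the origin, x points right,
    y points upward; range_coeff C converts pixels to actual distance."""
    h, w = labels.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    measurements = []
    for lab in np.unique(labels):
        if lab == 0:
            continue                                # skip background
        ys, xs = np.nonzero(labels == lab)
        n = len(xs)                                 # pixel count of the region
        x0 = range_coeff * np.sum(xs - cx) / n      # mean x, origin at center
        y0 = range_coeff * np.sum(cy - ys) / n      # mean y, axis pointing up
        measurements.append({"label": int(lab), "pixels": n, "x0": x0, "y0": y0})
    return measurements

# Usage with the labeling sketch above:
# meas = extract_measurements(label_4conn(binary_image), range_coeff=7.5)
```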
[0078] Step 4: Multi-target tracking. After target information extraction from the bird detection radar PPI image, the radar measurements have been extracted preliminarily, including the center coordinates and pixel count of each region. Multi-target tracking finally determines which tracked radar measurements are bird targets.
[0079] Multi-target tracking can be roughly divided into two main aspects, target state estimation and data association: the former provides the state estimates (predictions) required for tracking, where the main issue is data accuracy; the latter provides the correspondence between measurements and targets, i.e., the association of measurements with tracks and the elimination of clutter, where the main issue is the correctness of the association. The flow of the radar multi-target tracking method is shown in Figure 7. The method is based on the Monte Carlo idea: the association result is expressed as a set of discrete samples, and the target state is estimated with Kalman filtering. At each instant a total of T measurements are obtained, comprising n target measurements and a certain amount of clutter. The specific steps are:
[0080] (a) Multi-target data association between the current measurements and the current target state estimates;
[0081] The target state estimates at the current instant are generated by Kalman state prediction. These estimates are associated with the T measurements obtained in real time at the current instant, covering the initiation, continuation and termination of targets, so as to eliminate clutter interference.
[0082] Kalman state estimation means that the Kalman filter predicts the system state at the next instant from the measurement-updated value at the previous instant, expressed as follows:
[0083] m_k^- = A_{k-1} m_{k-1}
[0084] P_k^- = A_{k-1} P_{k-1} A_{k-1}^T + Q_{k-1} \qquad (3)
[0085] In formula (3), m_k^- and P_k^- are the mean and covariance predicted before the measurement at time k, A_k is the state transition matrix, and Q_k is the process noise covariance matrix.
[0086] The multi-target data association uses a particle-filter data association method to associate the measurements with targets. To some extent, target state estimation is the key to the whole multi-target tracking algorithm: associating a measurement with a track requires the predicted state of each track at the time the measurement arrives, without which correct association is almost impossible, so accurate state prediction is the premise of correct association. In the data association method based on particle filtering (PF), the association result is expressed as a set of discrete samples. Given the importance distribution π(λ_k | λ_{1:k-1}^{(i)}, y_{1:k}), a particle set {w_{k-1}^{(i)}, λ_{k-1}^{(i)}, m_{k-1}^{(i)}, P_{k-1}^{(i)} : i = 1, ..., N} and the measurement y_k, the new particle set {w_k^{(i)}, λ_k^{(i)}, m_k^{(i)}, P_k^{(i)} : i = 1, ..., N} is obtained by the following steps:
[0087] (1) Based on the previously generated latent association variable λ_{k-1}^{(i)}, perform the Kalman prediction of the mean m_{k-1}^{(i)} and covariance P_{k-1}^{(i)} of each particle i = 1, ..., N.
[0088] (2) Draw the new latent association variable λ_k^{(i)} of each particle i = 1, ..., N from the corresponding importance distribution:
[0089] \lambda_k^{(i)} \sim \pi(\lambda_k \mid \lambda_{1:k-1}^{(i)},\, y_{1:k}) \qquad (5)
[0090] (3) Calculate the new (non-normalized) weight:
[0091] w_k^{*(i)} \propto w_{k-1}^{*(i)} \, \frac{p(y_k \mid \lambda_{1:k}^{(i)}, y_{1:k-1}) \, p(\lambda_k^{(i)} \mid \lambda_{k-1}^{(i)})}{\pi(\lambda_k \mid \lambda_{1:k-1}^{(i)}, y_{1:k})} \qquad (6)
[0092] where the likelihood term is the marginal measurement likelihood of the Kalman filter:
[0093] p(y_k \mid \lambda_{1:k}^{(i)}, y_{1:k-1}) = \mathrm{N}\big(y_k \mid H_k(\lambda_k^{(i)})\, m_k^{-(i)},\; H_k(\lambda_k^{(i)})\, P_k^{-(i)}\, H_k^T(\lambda_k^{(i)}) + R_k(\lambda_k^{(i)})\big) \qquad (7)
[0095] The model parameters of the Kalman filter are determined by the latent association variable λ_k^{(i)}.
[0096] (4) Weight normalization
[0097] w_k^{(i)} = w_k^{*(i)} \Big/ \sum_{j=1}^{N} w_k^{*(j)} \qquad (8)
[0098] (5) Perform the Kalman filter update of each particle, conditioned on the latent association variable λ_k^{(i)}.
[0099] (6) Estimate the effective number of particles
[0100] n_{\mathrm{eff}} \approx 1 \Big/ \sum_{i=1}^{N} \big(w_k^{(i)}\big)^2 \qquad (9)
[0101] (7) If the effective number of particles is too low (e.g., n_eff falls below a preset threshold), perform resampling. The resulting particle set approximates the joint posterior distribution:
[0102] p(x_k, \lambda_k \mid y_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)}\, \delta(\lambda_k - \lambda_k^{(i)})\, \mathrm{N}(x_k \mid m_k^{(i)}, P_k^{(i)}) \qquad (10)
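A sketch of the particle bookkeeping in steps (4), (6) and (7): weight normalization, effective particle number, and systematic resampling. The resampling scheme and the threshold ratio are assumptions; the patent leaves both unspecified:

```python
import numpy as np

def normalize_and_resample(weights, particles, threshold_ratio=0.1):
    """Weight normalization (8), effective particle number (9), and
    systematic resampling when n_eff is too low. threshold_ratio is an
    assumed setting, not fixed by the patent."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                     # equation (8)
    n_eff = 1.0 / np.sum(w ** 2)                     # equation (9)
    N = len(w)
    if n_eff < threshold_ratio * N:                  # resample if degenerate
        positions = (np.arange(N) + np.random.rand()) / N
        idx = np.minimum(np.searchsorted(np.cumsum(w), positions), N - 1)
        particles = [particles[i] for i in idx]      # duplicate heavy particles
        w = np.full(N, 1.0 / N)                      # reset to uniform weights
    return w, particles

w, parts = normalize_and_resample(np.random.rand(100),
                                  [{"m": None, "P": None}] * 100)
```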
[0103] (b) Perform the Kalman state update for each associated target to obtain the state update value of each target. The update step estimates the current system state from the measurement at the current instant, expressed as follows:
[0104] v_k = y_k - H_k m_k^-
[0105] S_k = H_k P_k^- H_k^T + R_k
[0106] K_k = P_k^- H_k^T S_k^{-1} \qquad (4)
[0107] m_k = m_k^- + K_k v_k
[0108] P_k = P_k^- - K_k S_k K_k^T
[0109] In formula (4), y_k is the measurement obtained at time k; H_k is the measurement matrix at time k; m_k^- and P_k^- are the mean and covariance predicted before the measurement at time k; m_k and P_k are the mean and covariance estimated after the measurement at time k; v_k is the measurement residual at time k; S_k is the predicted covariance of the measurement at time k; and K_k is the filter gain, which gives the degree to which the estimate at time k should be corrected.
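A sketch of the prediction step (3) and update step (4) with NumPy; the constant-velocity motion model, scan period dt and noise levels are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def kalman_predict(m, P, A, Q):
    """Prediction, equation (3): propagate mean and covariance one step."""
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    return m_pred, P_pred

def kalman_update(m_pred, P_pred, y, H, R):
    """Update, equation (4): correct the prediction with measurement y."""
    v = y - H @ m_pred                     # measurement residual v_k
    S = H @ P_pred @ H.T + R               # residual covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)    # filter gain K_k
    m = m_pred + K @ v                     # updated mean m_k
    P = P_pred - K @ S @ K.T               # updated covariance P_k
    return m, P

# Constant-velocity model in the PPI plane: state [x, y, vx, vy],
# measurement [x, y]; dt could be one antenna revolution.
dt = 2.5
A = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.1 * np.eye(4)
R = 4.0 * np.eye(2)
m, P = np.zeros(4), 10.0 * np.eye(4)
m_pred, P_pred = kalman_predict(m, P, A, Q)
m, P = kalman_update(m_pred, P_pred, np.array([12.0, -3.0]), H, R)
```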
[0110] (c) Perform Kalman smoothing on all Kalman filter results to obtain a smooth trajectory for each target. The filtering results of the Kalman filter are smoothed, with the smoothed mean m_k^s and covariance P_k^s computed by the following formulas:
[0111] m_{k+1}^- = A_k m_k
[0112] P_{k+1}^- = A_k P_k A_k^T + Q_k
[0113] C_k = P_k A_k^T [P_{k+1}^-]^{-1} \qquad (11)
[0114] m_k^s = m_k + C_k [m_{k+1}^s - m_{k+1}^-]
[0115] P_k^s = P_k + C_k [P_{k+1}^s - P_{k+1}^-] C_k^T
[0116] where:
[0117] m_k^s and P_k^s are the smoothed estimates of the state mean and covariance at time k;
[0118] m_k and P_k are the filtered estimates of the state mean and covariance at time k;
[0119] m_{k+1}^- and P_{k+1}^- are the predicted state mean and covariance at time k+1, identical to those in the Kalman filter;
[0120] C_k is the smoothing gain at time k, which gives the degree to which the smoothed estimate is corrected at that instant.
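Equation (11) is the Rauch-Tung-Striebel (RTS) backward recursion; a sketch that runs it over a list of filtered means and covariances:

```python
import numpy as np

def rts_smooth(means, covs, A, Q):
    """Rauch-Tung-Striebel smoothing, equation (11): run backward over the
    filtered means/covariances to obtain the smoothed trajectory."""
    n = len(means)
    ms = [None] * n
    Ps = [None] * n
    ms[-1], Ps[-1] = means[-1], covs[-1]           # last smoothed = last filtered
    for k in range(n - 2, -1, -1):
        m_pred = A @ means[k]                       # m_{k+1}^-
        P_pred = A @ covs[k] @ A.T + Q              # P_{k+1}^-
        C = covs[k] @ A.T @ np.linalg.inv(P_pred)   # smoothing gain C_k
        ms[k] = means[k] + C @ (ms[k + 1] - m_pred)
        Ps[k] = covs[k] + C @ (Ps[k + 1] - P_pred) @ C.T
    return ms, Ps

# Usage with the filter sketch above: collect (m, P) per scan, then
# ms, Ps = rts_smooth(filtered_means, filtered_covs, A, Q)
```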
[0121] Step 5: Data fusion to realize target recognition and tracking. The smooth trajectories of the bird targets obtained by multi-target tracking are fused with a satellite map or coordinate system to generate a fused image containing the bird target trajectories, convenient for airport staff to observe and use.
[0122] The bird target recognition and tracking method provided by the present invention is described in detail below through the complete PPI image processing flow of a bird detection radar.
[0123] Step 1. Background difference.
[0124] An original bird radar PPI image is shown in Figure 8a. Background difference removes the background image from the bird radar image, the background image being generated by the average method or the principal component analysis method. Because the background changes subtly over time, the background image must be updated regularly. The radar image after background difference is shown in Figure 8b.
[0125] Step 2: Clutter suppression. Noise reduction is performed on the background-difference radar image to remove the residual clutter. Clutter suppression comprises constant false alarm threshold segmentation and morphology: the CFAR threshold segmentation adaptively selects the threshold for each radar image, and the thresholded image is further denoised by the morphology module, which removes targets with too few pixels. The radar image after clutter suppression is shown in Figure 8c.
[0126] Step 3: Target information extraction. Radar measurements are extracted from the clutter-suppressed radar image, including the number, size and coordinate position of the targets. The extracted radar measurements are shown in Table 1.
[0127] Table 1 Radar measurement information extraction
[0129] Step 4: Target tracking. The bird targets are tracked using the radar measurements extracted in the target information step; the smooth bird trajectories are output and simultaneously recorded. The multi-target tracking results over the radar image sequence show that measurements 1, 2 and 3 in Table 1 form smooth trajectories, whereas measurements 4 and 5 appear in only one radar image and fail to form trajectories; measurements 1, 2 and 3 are therefore targets, and 4 and 5 are clutter.
[0130] Step 5: Data fusion. The bird target trajectory information obtained by the multi-target tracking method is fused with a satellite map or coordinate system to generate a fused image containing the smooth bird trajectories, convenient for airport staff to observe, as shown in Figure 8d.
