Embodiment 1
 Fig. 1 is a flowchart of a specific embodiment of the bayonet vehicle image recognition method based on image features according to the present invention. Referring to Fig. 1, the specific steps of the bayonet vehicle image recognition method based on image features in this embodiment include:
 Step S100: Establish a template database:
 Step S101: Store photographed template images of vehicles of different styles and colors in the template database, and store the vehicle attribute data and body color data corresponding to each template image in the template database. The vehicle attribute data may include the vehicle brand, vehicle model, and vehicle year. Fig. 2 shows two template images of "Toyota, Corolla, Ninth Generation";
 Step S102: Perform data processing on each template image in the template database to obtain vehicle SIFT feature data and store it in the template database, where the SIFT feature data generally includes the SIFT feature points and the SIFT descriptor of each feature point;
 Step S200: Perform vehicle recognition on the image to be recognized:
 Step S201: Perform color recognition on the image to be recognized;
 Step S202: Select a template image that matches the color of the vehicle body from the template database according to the color recognition result;
 Step S203: Extract features from the image to be recognized using the SIFT operator principle, and compare them with the selected template images to obtain the template image that matches the image to be recognized;
 Step S204: output the vehicle attribute data corresponding to the matched template image.
 This specific embodiment combines vehicle color recognition with recognition based on the SIFT operator principle: according to the color recognition result, the template image data whose body color matches the result is selected from the template database for each vehicle model, and the comparison based on the SIFT operator principle is performed only on this subset. The matching template image is thus found in the template database, and the corresponding vehicle attribute data is output according to the matching template image.
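The flow of steps S201–S204 can be sketched as follows. This is a minimal illustration only: the template database is modeled as a list of dicts, and `recognize_body_color` and `match_sift` are hypothetical placeholder functions standing in for the color recognition and SIFT matching described below, not names from the source.

```python
# Minimal sketch of the recognition pipeline (steps S201-S204).
# The template database is a list of dicts; the color and SIFT routines
# are injected placeholders standing in for the real algorithms.

def recognize_vehicle(image, template_db, recognize_body_color, match_sift):
    """Return the vehicle attribute data of the best-matching template."""
    color = recognize_body_color(image)                           # step S201
    candidates = [t for t in template_db if t["color"] == color]  # step S202
    best = match_sift(image, candidates)                          # step S203
    return best["attributes"] if best else None                   # step S204

# Toy usage with stub functions:
db = [
    {"color": "white", "attributes": ("Toyota", "Corolla", "9th gen")},
    {"color": "black", "attributes": ("Brand", "Model", "Year")},
]
result = recognize_vehicle(
    "image-to-recognize", db,
    recognize_body_color=lambda img: "white",
    match_sift=lambda img, cands: cands[0] if cands else None,
)
print(result)  # ('Toyota', 'Corolla', '9th gen')
```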
 In step S102, in order to eliminate the interference of feature points in the vehicle license plate area on image matching and improve recognition accuracy, in this specific embodiment the license plate area is demarcated in advance for each template image in the template database, and the SIFT features in the license plate area of each template image are removed. The specific steps are as follows:
 Step S1021: Mark the image coordinates of the vehicle license plate in each template image;
 Step S1022: Determine the vehicle SIFT feature points of each template image: remove the SIFT feature point information located in the marked license plate area of the template image, and retain the remaining SIFT feature information as the vehicle SIFT feature data of that template image. That is, if a license plate is present in the template image, mark the image coordinates of the license plate, remove the SIFT feature point information within the marked coordinate range, and calculate the SIFT descriptors of the remaining feature points to finally obtain the vehicle SIFT feature data of each template image. Fig. 3 shows the SIFT feature calculation result of a template image: the starting point of each arrow represents the position of a feature point, the direction of the arrow represents the main direction of the feature point, and the length of the arrow represents the magnitude of its descriptor.
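Steps S1021–S1022 can be sketched as a simple coordinate filter. This is an illustrative assumption about the data layout: keypoints are modeled as (x, y) positions and the marked plate area as an axis-aligned rectangle, which the source does not specify.

```python
def remove_plate_features(keypoints, plate_box):
    """Drop SIFT keypoints that fall inside the marked license-plate area.

    keypoints: iterable of (x, y) image positions (descriptors omitted here);
    plate_box: (x_min, y_min, x_max, y_max) marked plate coordinates.
    """
    x0, y0, x1, y1 = plate_box
    return [(x, y) for (x, y) in keypoints
            if not (x0 <= x <= x1 and y0 <= y <= y1)]

# Toy keypoint positions; the middle one lies inside the plate rectangle.
kps = [(10, 10), (120, 205), (300, 40)]
vehicle_kps = remove_plate_features(kps, (100, 200, 200, 220))
print(vehicle_kps)  # [(10, 10), (300, 40)]
```

The retained positions would then have their SIFT descriptors computed as described in step S1022.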
 In step S201, this specific embodiment customizes the body color categories based on differences in human color perception combined with the influence of lighting, so that the body color is determined according to the customized categories. The body color is judged using both RGB and HSV values, and the colors are judged in a fixed order, which makes the color recognition result more stable. The specific steps are as follows:
 Step S2011: Pre-divide the body color into six categories: green, yellow, red, blue, white and black. Here yellow includes the yellow, orange and brown perceived by the human eye; red includes the red, pink and purple perceived by the human eye; white includes the white, silver and light gray perceived by the human eye; and black includes the black and dark gray perceived by the human eye;
 Step S2012: For the five colors green, yellow, red, blue and white, combine the pairwise differences of r, g and b and set empirical thresholds to delimit ranges for the values of r, g, b, h, s and v, as shown in Table 1 below:
 Table 1 Color judgment
 colour | h value range | s value range | v value range | r, g, b range | other conditions
 green | 70 < h ≤ 170 | … | … | … | …
 See Table 1, where ThreS, ThreV, ThreRGB and ThreWhite are empirical thresholds whose recommended values are 30, 20, 15 and 160 respectively, and diff(r, g, b) denotes the difference between any two of the values r, g and b;
 As shown in Table 1, when the h, s and v values and the r, g, b values of a pixel in the image to be recognized all fall within the ranges given for a color in Table 1 (for example, 70 < h ≤ 170 for green) and the other conditions are satisfied, the pixel is judged to be of that color;
 Step S2013: Count the proportions of pixels of the five colors green, yellow, red, blue and white within the vehicle body range of the image to be recognized, where the vehicle body range excludes the window range, the front grille range and the lamp range;
 Step S2014: Judge the color proportions of the image to be recognized in the order green, yellow, red, blue, white. When the proportion of the current color exceeds the empirical threshold for that color, the vehicle body of the image to be recognized is determined to be the current color. When none of the proportions of the five colors green, yellow, red, blue and white exceeds its empirical threshold, the body color of the image to be recognized is determined to be black, and the color recognition result is thus obtained.
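The counting and ordered-judgment logic of steps S2013–S2014 can be sketched as below. The per-color ratio thresholds here are illustrative assumptions (the source gives empirical thresholds but not their values), and per-pixel labeling via Table 1 is assumed to have been done already.

```python
# Sketch of steps S2013-S2014: count per-color pixel proportions in the
# body region and judge colors in the fixed order green, yellow, red,
# blue, white, falling back to black when no proportion exceeds its
# threshold. The threshold values are assumed, not from the source.

JUDGE_ORDER = ["green", "yellow", "red", "blue", "white"]
RATIO_THRESHOLDS = {c: 0.3 for c in JUDGE_ORDER}  # illustrative values

def judge_body_color(pixel_labels):
    """pixel_labels: per-pixel color label within the body range
    (window, front grille and lamp pixels assumed already excluded)."""
    total = len(pixel_labels)
    for color in JUDGE_ORDER:                      # fixed judgment order
        ratio = sum(1 for p in pixel_labels if p == color) / total
        if ratio > RATIO_THRESHOLDS[color]:
            return color
    return "black"  # no color exceeded its threshold (step S2014)

labels = ["white"] * 70 + ["unclassified"] * 30
print(judge_body_color(labels))  # white
```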
 In this specific embodiment, when performing vehicle recognition on the image to be recognized, the SIFT feature data of the vehicle image is determined according to the SIFT operator principle and, together with the color recognition result, compared against the template database to finally obtain the recognition result. The specific steps of step S203 in this specific embodiment are therefore:
 S2031: Use the SIFT operator principle to determine the SIFT feature data of the image to be recognized; Fig. 4 shows an example of an image to be recognized and its SIFT features.
 S2032: Compare the SIFT feature data of the image to be recognized with the SIFT feature data of each selected template image to obtain matching feature point pairs;
 S2033: Calculate the matching degree of the two images according to the matched feature point pairs, and determine the template image matching the image to be recognized according to the matching degree.
 Here, the degree of matching of the feature point pairs in step S2032 can be expressed by the Euclidean distance, and the specific steps are as follows:
 S20321: preset the threshold ε;
 S20322: For a SIFT feature point P in the image to be recognized, calculate the Euclidean distances between the feature vector descriptor of P and those of all feature points in each selected template image, find the minimum and second-minimum Euclidean distance values d1 and d2, and record the SIFT feature points Q1 and Q2 in the template image corresponding to d1 and d2 respectively;
 S20323: If d2 is 0, set the parameter ratio to 0; otherwise calculate ratio = d1/d2;
 S20324: Compare the parameter ratio with the threshold ε. When ratio is less than ε, it is judged that the SIFT feature point P in the image to be recognized matches the feature point Q1 of the template image successfully; otherwise the matching of P and Q1 is judged unsuccessful;
 S20325: Count and record the matching feature point pairs between each selected template image and the image to be recognized. As shown in Fig. 5, the image to be recognized in Fig. 4 (whose color recognition result is white) is matched against the white template image shown in Fig. 2; the feature points connected by straight lines between the two vehicles in the figure represent matching feature point pairs.
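Steps S20321–S20325 amount to a nearest/second-nearest ratio test on descriptor distances. A minimal sketch, using short tuples in place of the 128-dimensional SIFT descriptors and an assumed ε of 0.8:

```python
import math

# Sketch of steps S20321-S20325: for each descriptor in the image to be
# recognized, find the nearest (d1) and second-nearest (d2) template
# descriptors by Euclidean distance and accept the match when the
# ratio d1/d2 is below the preset threshold epsilon.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(query_desc, template_desc, epsilon=0.8):
    pairs = []
    for i, p in enumerate(query_desc):
        dists = sorted((euclidean(p, q), j) for j, q in enumerate(template_desc))
        (d1, j1), (d2, _) = dists[0], dists[1]
        ratio = 0 if d2 == 0 else d1 / d2          # step S20323
        if ratio < epsilon:                         # step S20324
            pairs.append((i, j1))                   # P matches Q1
    return pairs                                    # step S20325

q = [(0.0, 0.0), (5.0, 5.0)]                # toy "image" descriptors
t = [(0.1, 0.0), (9.0, 9.0), (5.0, 5.1)]    # toy template descriptors
print(match_descriptors(q, t))  # [(0, 0), (1, 2)]
```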
 After the matching in step S20324 is completed, in order to improve the robustness of the algorithm, this specific embodiment may also use the RANSAC algorithm to purify the matching result. The specific steps are as follows:
 The position mapping relationship between each selected template image and the corresponding matching feature point pairs in the image to be recognized is used as the input of the RANSAC algorithm; the homography matrix of the image transformation is estimated by the RANSAC method, the feature point pairs that do not satisfy geometric consistency are eliminated, and the retained feature point pairs are taken as the final matched feature point pairs. Fig. 6 shows the result of purifying the matching result of Fig. 5.
 Since the estimation of the homography matrix in the RANSAC algorithm requires at least 4 pairs of matching feature points, in order to enhance the stability of the algorithm, the data purification operation is performed after matching only when the number of matching feature point pairs is greater than a set threshold μ. The details are as follows:
 After the matching feature point pairs are obtained, it is further judged whether the number of matching feature point pairs corresponding to each selected template image is greater than the threshold μ. If it is greater than μ, the data purification step is executed; otherwise the matching degree of the two images is directly judged to be 0. The threshold μ is recommended to be set to 6.
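The gating rule above can be sketched as follows. The RANSAC homography estimation itself is represented by an injected placeholder `purify` function, since the source describes it only at the level of inputs and outputs:

```python
MU = 6  # recommended threshold from the text

def purify_if_enough(pairs, purify, mu=MU):
    """Run data purification only when enough matches exist.

    `purify` stands in for the RANSAC homography step; when the pair
    count does not exceed mu, the pair list is emptied so that the
    matching degree of the two images becomes 0.
    """
    if len(pairs) > mu:
        return purify(pairs)   # RANSAC keeps geometrically consistent pairs
    return []                   # too few pairs: matching degree judged 0

raw = [(i, i) for i in range(5)]                    # only 5 pairs, 5 <= 6
print(purify_if_enough(raw, purify=lambda p: p))    # []
```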
 In this specific embodiment, considering that in actual matching an image to be recognized may need to be matched against multiple template images, that images to be recognized of different colors may be matched against template images of different gray levels, and that the number of feature points in a template image has a certain effect on the number of successfully matched feature point pairs, this embodiment uses the relative value of the number of matching point pairs as the image matching degree, so as to measure the degree of matching between two images more objectively and further improve recognition accuracy. Therefore, in step S2033, the matching degree of the two images is calculated according to the matched feature point pairs, and the specific steps of determining the template image matching the image to be recognized according to the matching degree are as follows:
 S20331: According to the feature point pairs matching the image to be recognized with each selected template image, calculate the image matching degree IMD = N / N0, where N represents the number of matching feature point pairs and N0 is the total number of SIFT feature points in the image to be recognized;
 S20332: Select the maximum value IMDmax from all IMD values and compare IMDmax with the set threshold λ to judge whether IMDmax is greater than λ. If so, it is judged that the template image corresponding to IMDmax matches the image to be recognized successfully; otherwise, it is determined that no template image in the template database matches the image to be recognized.
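Steps S20331–S20332 can be sketched as below. The value of λ here is an illustrative assumption; the source does not give a recommended value.

```python
# Sketch of steps S20331-S20332: image matching degree IMD = N / N0,
# where N is the number of matched feature point pairs for a template
# and N0 the total number of SIFT feature points in the image to be
# recognized; the template with the largest IMD wins only if its IMD
# exceeds the threshold lambda.

def best_template(match_counts, n0, lam=0.1):
    """match_counts: {template_id: N}; n0: total query feature points.
    The default lambda is an assumed value for illustration."""
    imds = {tid: n / n0 for tid, n in match_counts.items()}  # step S20331
    tid_max = max(imds, key=imds.get)                        # IMDmax
    if imds[tid_max] > lam:                                  # step S20332
        return tid_max, imds[tid_max]
    return None, imds[tid_max]   # no template matches the image

tid, imd = best_template({"tpl_a": 30, "tpl_b": 5}, n0=100)
print(tid, imd)  # tpl_a 0.3
```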