Bayonet vehicle image identification method based on image features

An image-feature and image-recognition technology, applied to bayonet (checkpoint) vehicle image recognition based on image features. It addresses the problem of high computational complexity and achieves high recognition accuracy and fine-grained classification.

Status: Inactive · Publication Date: 2013-06-12
Assignee: SUN YAT SEN UNIV +1
Cites: 5 · Cited by: 58

AI-Extracted Technical Summary

Problems solved by technology

For the former, different vehicle images may correspond to the same distribution features; f...

Method used

[0073] In step S201, this specific embodiment is based on the difference in perception of color by the human eye, and in combination with the factors affected by illumination, the color of the vehicle body is customized, thereby judging the color of the vehicle body accor...

Abstract

The invention relates to the field of traffic image processing, in particular to a bayonet vehicle image identification method based on image features. The method comprises the following steps: a template database is established, storing template images of vehicles photographed in different styles and colors, with vehicle attribute data, vehicle body color data, and vehicle scale-invariant feature transform (SIFT) feature data stored for each template image, the SIFT feature data being obtained through data processing; vehicle recognition is then performed on an image to be recognized: color recognition is carried out first, and template images matching the body color are selected from the template database according to the color recognition result; SIFT features are extracted from the image to be recognized and compared with the selected template images to obtain the template image that matches the image to be recognized; finally, the vehicle attribute data corresponding to the matched template image are output. The method combines vehicle color recognition with recognition based on the SIFT operator, adding color information to the identification process. This overcomes the defect that the SIFT operator discards color information, and improves identification accuracy.

Application Domain

Road vehicles traffic control; Character and pattern recognition

Technology Topic

Imaging feature; Feature data +6

Image

  • Bayonet vehicle image identification method based on image features

Examples

  • Experimental program (2)

Example Embodiment

[0059] Example 1
[0060] Figure 1 is a flowchart of a specific embodiment of the bayonet vehicle image recognition method based on image features of the present invention. Referring to Figure 1, the specific steps of the method in this embodiment include:
[0061] Step S100: Establish a template database:
[0062] Step S101: Store the photographed template images of vehicles of different styles and colors in the template database, and store vehicle attribute data and body color data corresponding to each template image. The vehicle attribute data may include the vehicle brand, vehicle model, and vehicle year; Figure 2 shows two template images of "Toyota, Corolla, Ninth Generation";
[0063] Step S102: Perform data processing on each template image in the template database to obtain vehicle SIFT feature data and store it in the template database; the SIFT feature data generally include the SIFT feature points and the SIFT descriptor of each feature point;
[0064] Step S200: Perform vehicle recognition on the image to be recognized:
[0065] Step S201: Perform color recognition on the image to be recognized;
[0066] Step S202: Select a template image that matches the color of the vehicle body from the template database according to the color recognition result;
[0067] Step S203: Extract features from the image to be recognized using the SIFT operator, and compare them with the selected template images to obtain the template image that matches the image to be recognized;
[0068] Step S204: output the vehicle attribute data corresponding to the matched template image.
[0069] This embodiment combines vehicle color recognition with recognition based on the SIFT operator: according to the color recognition result, the template image data matching the color result are selected for each vehicle model in the template database and compared using the SIFT operator; the matching template image is found in the template database, and the corresponding vehicle attribute data are output.
[0070] In step S102, in order to eliminate the interference of feature points in the license plate area on image matching and improve recognition accuracy, in this embodiment the license plate area is demarcated in advance for each template image in the template database, and the SIFT features inside the license plate area of each template image are removed. The specific steps are as follows:
[0071] Step S1021: Mark the image coordinates of the vehicle license plate in each template image;
[0072] Step S1022: Determine the vehicle SIFT feature points for each template image: if a license plate is present in the template image, mark the image coordinates where the license plate is located, remove the SIFT feature point information located within the marked license plate area, calculate the SIFT descriptors of the remaining feature points, and retain the remaining SIFT feature information as the vehicle SIFT feature data of the template image. Figure 3 shows the SIFT feature calculation result of a template image: the starting point of each arrow represents the position of a feature point, the direction of the arrow represents the main orientation of the feature point, and the length of the arrow represents the magnitude of its descriptor.
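The plate masking in steps S1021–S1022 is essentially a rectangle filter over the detected keypoints. A minimal Python sketch (the keypoint tuple layout and rectangle format are assumptions for illustration, not the patent's data structures):

```python
def remove_plate_features(keypoints, plate_box):
    """Drop SIFT keypoints inside the marked license-plate rectangle.

    keypoints: list of (x, y, descriptor) tuples (assumed layout)
    plate_box: (x_min, y_min, x_max, y_max) image coordinates of the plate
    Returns the keypoints outside the plate area, i.e. the vehicle
    SIFT feature data kept for the template image.
    """
    x_min, y_min, x_max, y_max = plate_box
    return [
        (x, y, desc)
        for (x, y, desc) in keypoints
        if not (x_min <= x <= x_max and y_min <= y <= y_max)
    ]
```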
[0073] In step S201, this embodiment defines custom vehicle body colors based on the differences in human color perception, taking illumination factors into account, and judges the body color according to these custom colors. It judges the body color using both RGB and HSV values, and in a fixed color order, which makes the color recognition result more stable. The specific steps are as follows:
[0074] Step S2011: Pre-divide the body colors into six categories: green, yellow, red, blue, white, and black. Yellow includes the yellow, orange, and brown perceived by the human eye; red includes red, pink, and purple; white includes white, silver, and light gray; black includes black and dark gray;
[0075] Step S2012: For the five colors green, yellow, red, blue, and white, combine the pairwise differences of r, g, and b and set empirical thresholds to define ranges for the values of r, g, b, h, s, and v, as shown in Table 1 below:
[0076] Table 1 Color judgment
[0077] Colour | h value range | s value range | v value range | r, g, b range | Other conditions
green | 70 < h ≤ 170 | | | |
[0078] See Table 1, where ThreS, ThreV, ThreRGB, and ThreWhite are empirical thresholds, with recommended values of 30, 20, 15, and 160 respectively; diff(r, g, b) denotes the difference between any two of the values r, g, and b;
[0079] As shown in Table 1, when a pixel in the image to be recognized has an h value in the range 70 < h ≤ 170 and satisfies the other conditions of the corresponding row, it is judged to be green;
[0080] Step S2013: Count the proportions of the five colors (green, yellow, red, blue, and white) among the pixels within the vehicle body area of the image to be recognized; the body area excludes the window area, the front grille area, and the lamp area;
[0081] Step S2014: Judge the color proportions of the image to be recognized in the order green, yellow, red, blue, white. When the proportion of the current color exceeds that color's empirical threshold, the body of the image to be recognized is judged to be that color. When none of the five proportions exceeds its empirical threshold, the body color is judged to be black. The color recognition result is thus obtained.
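Steps S2013–S2014 can be sketched as a small decision routine. The per-pixel labeling from Table 1 is assumed to have already run; the ratio thresholds below are placeholders, since the patent's empirical values are not reproduced here:

```python
def classify_body_color(pixel_labels, thresholds=None):
    """Decide the body color from per-pixel color labels in the body area.

    pixel_labels: iterable of labels from the Table 1 per-pixel test
                  ('green', 'yellow', 'red', 'blue', 'white', or None)
    thresholds:   per-color ratio thresholds (placeholder values; the
                  patent's empirical thresholds are not given here)
    """
    order = ["green", "yellow", "red", "blue", "white"]
    if thresholds is None:
        thresholds = {c: 0.3 for c in order}  # assumed placeholder ratios
    labels = list(pixel_labels)
    total = len(labels)
    if total == 0:
        return "black"
    for color in order:  # fixed judging order from step S2014
        if labels.count(color) / total > thresholds[color]:
            return color
    return "black"  # no color exceeded its threshold -> judged black
```

Because the colors are tested in a fixed order, a green proportion above its threshold wins even if white's proportion is larger, which is what makes the result stable across illumination changes.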
[0082] In this embodiment, when performing vehicle recognition on the image to be recognized, the SIFT feature data of the vehicle image are determined according to the SIFT operator and, together with the color recognition result, compared against the template database to obtain the recognition result. The specific steps of step S203 in this embodiment are:
[0083] S2031: Use the SIFT operator to determine the SIFT feature data of the image to be recognized; Figure 4 shows an example of an image to be recognized and its SIFT features.
[0084] S2032: Compare the SIFT feature data of the image to be recognized with the SIFT feature data of each selected template image to obtain matching feature point pairs;
[0085] S2033: Calculate the matching degree of the two images according to the matched feature point pairs, and determine the template image matching the image to be recognized according to the matching degree.
[0086] The matching degree of the feature point pairs in step S2032 can be measured by the Euclidean distance; the specific steps are as follows:
[0087] S20321: preset the threshold ε;
[0088] S20322: For each SIFT feature point P in the image to be recognized, calculate the Euclidean distances between the descriptor of P and the descriptors of all feature points in each selected template image; find the smallest and second-smallest distance values d1 and d2, and record the SIFT feature points Q1 and Q2 in the template image corresponding to d1 and d2 respectively;
[0089] S20323: If d2 is 0, set the parameter ratio to 0; otherwise calculate ratio = d1/d2;
[0090] S20324: Compare the parameter ratio with the threshold ε. When ratio is less than ε, the SIFT feature point P in the image to be recognized is judged to match the feature point Q1 of the template image successfully; otherwise the match between P and Q1 is judged unsuccessful;
[0091] S20325: Count and record the matching feature point pairs between each selected template image and the image to be recognized. As shown in Figure 5, the image to be recognized in Figure 4 (whose color recognition result is white) is matched against the white template image in Figure 2; the feature points connected by straight lines between the two vehicles represent matching feature point pairs.
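Steps S20321–S20325 amount to a Lowe-style nearest/second-nearest ratio test over descriptor distances. A sketch with NumPy (the threshold ε = 0.8 is an assumed value; the patent leaves ε as a preset parameter):

```python
import numpy as np

def ratio_test_match(desc_query, desc_template, eps=0.8):
    """Match descriptors by the ratio test of steps S20321-S20325.

    desc_query, desc_template: (n, d) and (m, d) descriptor arrays
    eps: threshold epsilon (0.8 is an assumed value, not from the patent)
    Returns a list of (query_index, template_index) matched pairs.
    """
    pairs = []
    for i, p in enumerate(desc_query):
        # Euclidean distances from P's descriptor to every template descriptor
        dists = np.linalg.norm(desc_template - p, axis=1)
        if len(dists) < 2:
            continue
        order = np.argsort(dists)
        d1, d2 = dists[order[0]], dists[order[1]]  # smallest, second-smallest
        ratio = 0.0 if d2 == 0 else d1 / d2        # step S20323
        if ratio < eps:                            # step S20324
            pairs.append((i, int(order[0])))       # P matches Q1
    return pairs
```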
[0092] After the matching in step S20324 is completed, in order to improve the robustness of the algorithm, this embodiment may also use the RANSAC algorithm to purify the matching result. The specific steps are as follows:
[0093] The position mapping relationship of the matching feature point pairs between each selected template image and the image to be recognized is used as the input of the RANSAC algorithm; the homography matrix of the image transformation is estimated by RANSAC, feature point pairs that do not satisfy geometric consistency are eliminated, and the retained pairs are taken as the final matching feature point pairs. Figure 6 shows the result of applying this data purification to the matching result of Figure 5.
[0094] Since estimating the homography matrix in the RANSAC algorithm requires at least 4 pairs of matching feature points, in order to enhance the stability of the algorithm the data purification operation is performed after matching only when the number of matching feature point pairs is greater than a set threshold μ. The details are as follows:
[0095] After the matching feature point pairs are obtained, it is further judged whether the number of matching pairs corresponding to each selected template image is greater than the threshold μ. If so, the data purification step is executed; otherwise the matching degree of the two images is directly set to 0. The recommended value of μ is 6.
[0096] In this embodiment, considering that in actual matching an image to be recognized may need to be matched against multiple template images, that images of different colors may be matched against template images of different gray levels, and that the number of feature points in a template image affects the number of successfully matched pairs, the relative value of the number of matching point pairs is used as the image matching degree, in order to measure the degree of matching between images more objectively and further improve recognition accuracy. Accordingly, in step S2033, the matching degree of the two images is calculated from the matched feature point pairs, and the template image matching the image to be recognized is determined from the matching degree, as follows:
[0097] S20331: From the feature point pairs matching the image to be recognized with each selected template image, calculate the image matching degree IMD = N/N0, where N is the number of matching feature point pairs and N0 is the total number of SIFT feature points in the image to be recognized;
[0098] S20332: Select the maximum value IMDmax from all IMD values and compare IMDmax with the set threshold λ. If IMDmax is greater than λ, the template image corresponding to IMDmax is judged to match the image to be recognized successfully; otherwise it is determined that no template image in the template database matches the image to be recognized.
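Steps S20331–S20332 reduce to computing IMD = N/N0 per candidate template and a thresholded arg-max. A sketch (the value of λ is an assumption for illustration; the patent does not fix it):

```python
def select_matching_template(n0_query_points, match_counts, lam=0.1):
    """Image matching degree IMD = N / N0 and template selection.

    n0_query_points: N0, total SIFT feature points in the image to be recognized
    match_counts: {template_id: N matched pairs} for color-filtered templates
    lam: threshold lambda (0.1 is an assumed value, not from the patent)
    Returns the best template id, or None when no IMD exceeds lambda.
    """
    if not match_counts or n0_query_points == 0:
        return None
    best_id = max(match_counts, key=match_counts.get)  # template with IMDmax
    imd_max = match_counts[best_id] / n0_query_points
    return best_id if imd_max > lam else None
```

Normalizing by N0 makes the score comparable across template images with different numbers of feature points, which is the point of paragraph [0096].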

Example Embodiment

[0099] Example 2
[0100] Figure 7 is a flowchart of a preferred embodiment of the bayonet vehicle image recognition method based on image features of the present invention. Referring to Figure 7, the specific steps of this preferred embodiment are as follows:
[0101] Establish a template database: pre-store template images of vehicles of different models in the template database, together with the vehicle attribute data, body color, and SIFT feature data of each template image; for each template image, first delimit the area where the license plate is located and delete the SIFT feature information inside the license plate area;
[0102] When vehicle recognition is required for an image to be recognized, first perform color recognition on the image and simultaneously calculate its SIFT features;
[0103] Compare the image to be recognized against the template database: according to the color recognition result, first select the template images of the matching color under the different vehicle models in the template database, and perform SIFT feature matching on them;
[0104] Purify the matching results;
[0105] Perform IMD calculation on the purified matching results, take the template image corresponding to the largest IMD value, and output the vehicle attribute data of that template image.
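The steps of this preferred embodiment can be sketched as a single orchestrating function, with each stage injected as a callable. All names and data layouts here are illustrative, not the patent's API:

```python
def recognize_vehicle(image, templates, color_of, sift_of, match, purify, select):
    """End-to-end flow of the preferred embodiment (Figure 7).

    Each processing stage is passed in as a function, so this skeleton
    stays independent of any particular feature library; templates are
    assumed to be dicts with 'id', 'color', 'features', 'attributes'.
    """
    body_color = color_of(image)   # 1. color recognition on the query image
    query_feats = sift_of(image)   # 2. SIFT features of the query image
    # 3. keep only template images whose stored body color fits the result
    candidates = [t for t in templates if t["color"] == body_color]
    counts = {}
    for t in candidates:
        pairs = match(query_feats, t["features"])  # 4. SIFT feature matching
        counts[t["id"]] = len(purify(pairs))       # 5. purify matching results
    best = select(len(query_feats), counts)        # 6. IMD-based selection
    for t in candidates:
        if t["id"] == best:
            return t["attributes"]                 # 7. output attribute data
    return None
```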
