Vehicle type identification method based on machine vision and deep learning

A deep learning and vehicle type technology, applied to neural learning methods, character and pattern recognition, instruments, etc. It addresses problems such as high error rates, severe occlusion, and overlapping vehicles; it improves accuracy and precision, reduces recognition and detection errors, and achieves a good detection effect.

Pending Publication Date: 2020-08-07
HANGZHOU DIANZI UNIV
Cites: 4 | Cited by: 2

AI-Extracted Technical Summary

Problems solved by technology

[0004] However, in the field of intelligent transportation, because some images are collected by vehicle-mounted mobile platforms, the overlapping and occlusion of vehicles ...

Method used

[0084] According to the yolov3 target detector and classifier, the car type in...

Abstract

The invention discloses a vehicle type identification method based on machine vision and deep learning. At present, most work in the field of vehicle identification uses images collected by high-altitude cameras as data sets; images collected by mobile platforms are rarely used. Traditional image recognition technology cannot meet the requirements of mobile violation evidence collection. In this method, image information of road automobiles is first collected through a vehicle-mounted mobile platform, and preliminary automobile target detection and recognition are performed with the yolov3 deep learning algorithm. Whether the image information is sent to three classifiers for re-prediction is then judged comprehensively from the detection frame and a prediction value threshold. According to the detection results of the three classifiers and the result of the target detection algorithm, it is determined whether the detection box is a false detection to be deleted. Finally, the detection result of the system is updated. The method is suitable for vehicle identification from a vehicle-mounted mobile platform in an unconstrained operating environment and achieves good results in practical application scenarios.


Examples

  • Experimental program(1)

Example Embodiment

[0049] The following describes in detail the machine-vision-based traffic road vehicle type detection system with reference to the accompanying drawings, so as to clearly and completely present the technical solutions in the embodiments of the present invention.
[0050] In this example of the present invention, a vehicle type identification method based on machine vision is proposed, as shown in figure 1. In the specific scheme, it can be divided into three steps: image acquisition and preprocessing, preliminary target detection with yolov3, and classifier re-identification. First, a mobile platform and an industrial camera are used to capture images of traffic road scenes, and preprocessing such as Gaussian filtering is performed on the collected images. The preprocessed image is then sent to the yolov3 target detector to obtain preliminary detection frames and confidences; by comparing the detection frame area and the confidence against thresholds, the method decides whether to send each frame to multiple classifiers for re-prediction, yielding a predicted class and confidence. Finally, the yolov3 detection results and the overall classifier results are combined to jointly output the target detection results.
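As a minimal sketch of this three-step flow, the following Python assumes a `detector` callable returning (box, class_id, confidence) tuples and a `reclassify` callable implementing the re-identification of steps (4) and (5); these names and the threshold values are illustrative, not from the patent.

```python
# Hypothetical end-to-end pipeline: preprocess -> yolov3 -> optional re-check.
import cv2

AREA_TH = 32 * 32   # assumed detection-frame area threshold (pixels)
PRE_TH = 0.6        # assumed yolov3 confidence threshold

def detect_vehicle_types(image, detector, reclassify):
    """Preprocess -> yolov3 detection -> threshold judgment -> re-check."""
    smoothed = cv2.blur(image, (3, 3))                   # step (2): mean filtering
    outputs = []
    for (x, y, w, h), cls, conf in detector(smoothed):   # step (3): yolov3
        if w * h >= AREA_TH and conf < PRE_TH:           # step (4): judgment
            verdict = reclassify(smoothed[y:y + h, x:x + w])  # step (5)
            if verdict is None:      # classifiers reject: delete the box
                continue
            cls, conf = verdict      # otherwise adopt the classifier output
        outputs.append(((x, y, w, h), cls, conf))
    return outputs
```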
[0051] The specific steps of the method proposed in this example for improving the vehicle recognition rate are as follows:
[0052] Step (1): car image acquisition
[0053] A mobile platform and a 5-megapixel, 23.27 fps MV-CA050-10GM/GC industrial camera are used to collect car images f(x, y) at roadsides or intersections in areas where violations occur, and the collected images are saved on the mobile platform terminal.
[0054] Step (2) Image preprocessing
[0055] The collected color images are preprocessed: mean filtering is performed to remove the noise signal from the image. The formula is as follows:
[0056] f(x, y) = (1/M) * Σ_{(u,v)∈s} g(u, v)
[0057] Here, f(x, y) represents the image information after mean filtering, g(x, y) represents the original information of the image, M is the number of pixels in the area, and s is the range of values of x and y in the area.
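As an illustration only, here is a direct NumPy version of this mean filter for a grayscale image (in practice `cv2.blur` computes the same neighbourhood average far faster):

```python
import numpy as np

def mean_filter(g, k=3):
    """Mean filtering per the formula above: each output pixel f(x, y) is
    the average of the M = k*k pixels in the k x k neighbourhood s."""
    pad = k // 2
    padded = np.pad(g.astype(np.float64), pad, mode="edge")
    f = np.empty(g.shape, dtype=np.float64)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            f[x, y] = padded[x:x + k, y:y + k].mean()  # average over s
    return f
```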
[0058] Step (3): Use the yolov3 network framework for target detection on the preprocessed digital image and obtain the preliminary detection frames and predicted values:
[0059] (a) Normalize the size of the preprocessed image: use interpolation to convert each collected image into an image of size 416*416. The processing formula is as follows:
[0060] f1 i,j (x, y) = f(u, v)
[0061] where f1 i,j (x, y) is the pixel information of the converted image and f(u, v) is the pixel information of the original image. When scaling an image by interpolation, the position in the original image corresponding to each converted pixel must be found. The mapping is as follows:
[0062] u=x*(srcwidth/dstwidth)
[0063] v=y*(srcheight/dstheight)
[0064] where (srcwidth, srcheight) represents the size of the original image before conversion, and (dstwidth, dstheight) represents the size of the image after conversion.
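A sketch of this scaling under a nearest-neighbour choice of interpolation (the patent only fixes the coordinate mapping, so the rounding rule here is an assumption):

```python
import numpy as np

def resize_to_416(src, dst_h=416, dst_w=416):
    """Nearest-neighbour scaling using the mapping above:
    u = x * (srcwidth / dstwidth), v = y * (srcheight / dstheight)."""
    src_h, src_w = src.shape[:2]
    # Map each destination column x / row y back to a source column u / row v.
    us = np.minimum((np.arange(dst_w) * (src_w / dst_w)).astype(int), src_w - 1)
    vs = np.minimum((np.arange(dst_h) * (src_h / dst_h)).astype(int), src_h - 1)
    return src[vs[:, None], us[None, :]]
```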
[0065] (b) The size-normalized image is fed into the yolov3 convolutional network for several convolution and pooling operations. The formulas for the convolution and pooling operations are as follows:
[0066] Y = f(x, y) * a 3×3
[0067] Y1 = max h×w (Y)
[0068] where * is the convolution symbol, Y is the convolution output, a 3×3 is the 3×3 convolution kernel, Y1 is the output of the maximum pooling layer, and h, w are the height and width of the pooling box.
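For concreteness, a minimal PyTorch sketch of one such convolution and max-pooling pair (the channel count of 16 and the 2×2 pooling box are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 416, 416)              # a size-normalized input image
a = torch.randn(16, 3, 3, 3)                 # 16 convolution kernels of size 3x3
Y = F.conv2d(x, a, stride=1, padding=1)      # convolution output Y
Y1 = F.max_pool2d(Y, kernel_size=2)          # max pooling over a 2x2 (h x w) box
print(Y.shape, Y1.shape)                     # (1, 16, 416, 416) (1, 16, 208, 208)
```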
[0069] (c) Logistic regression is performed on the feature map after convolution and pooling to obtain the preliminary prediction frame and detection frame. The loss expression is as follows:
[0070] Loss = Loss_lxy + Loss_lwh + Loss_lcls + Loss_lconf
[0071] where Loss_lxy represents the position loss, Loss_lwh represents the size loss, Loss_lcls represents the category loss, and Loss_lconf represents the confidence loss.
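The patent does not spell out the individual terms, so the sketch below assumes a common yolov3-style choice: squared error for the position and size losses and binary cross-entropy for the category and confidence losses.

```python
import torch.nn.functional as F

def yolov3_loss(pred_xy, true_xy, pred_wh, true_wh,
                pred_cls, true_cls, pred_conf, true_conf):
    """Total loss per the expression above (per-term forms are assumptions)."""
    loss_lxy = F.mse_loss(pred_xy, true_xy)                                  # position
    loss_lwh = F.mse_loss(pred_wh, true_wh)                                  # size
    loss_lcls = F.binary_cross_entropy_with_logits(pred_cls, true_cls)       # category
    loss_lconf = F.binary_cross_entropy_with_logits(pred_conf, true_conf)    # confidence
    return loss_lxy + loss_lwh + loss_lcls + loss_lconf
```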
[0072] Step (4): determine whether to enter the classifier
[0073] After the preliminary yolov3 detection results are obtained, whether a detection frame needs to be re-predicted by the classifiers is determined from the area of the detection frame and the threshold values, as shown in figure 2. The judgment formula is as follows:
[0074]
[0075] where Y_i is the judgment result for the i-th detection target in the image. If it is 1, the detection result needs to enter the classifiers for re-detection; if it is 0, the detection result is the final output. yo_area is the predicted frame area, yo_pre is the yolov3 predicted value, area_th is the predicted frame area threshold, and pre_th is the predicted value threshold.
[0076] If Y_i = 1, the i-th vehicle type detection frame in the image is sent to the classifiers for re-identification; otherwise the detection result is output directly.
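A sketch of this judgment; the comparison directions are an assumption, chosen so that boxes large enough to crop reliably but with low yolov3 confidence are the ones re-checked:

```python
def judge(yo_area, yo_pre, area_th, pre_th):
    """Step (4) judgment Y_i: 1 sends the box to the classifiers, 0 keeps
    the yolov3 result as final. Threshold directions are assumptions."""
    return 1 if (yo_area >= area_th and yo_pre < pre_th) else 0
```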
[0077] Step (5): determine the result of the judgment
[0078] Each detection frame that needs re-identification is sent to the three classifiers simultaneously, and the outputs of the three classifiers are compared to obtain the final classifier output. The formula is as follows:
[0079]
[0080] where cls_cls represents the final classification result output by the classifiers and cls_pre represents the classifier confidence. cls_cls1, cls_cls2, and cls_cls3 represent the classification results of the three classifiers, and cls_pre1, cls_pre2, and cls_pre3 represent their respective confidence levels.
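One plausible fusion rule, sketched under the assumption of a majority vote with averaged confidence and a fall-back to the most confident classifier on three-way disagreement:

```python
from collections import Counter

def fuse_classifiers(results):
    """Combine the (cls_cls, cls_pre) pairs of the three classifiers into
    one (cls_cls, cls_pre) output. The fusion rule is an assumption."""
    top, votes = Counter(cls for cls, _ in results).most_common(1)[0]
    if votes >= 2:                       # at least two classifiers agree
        pres = [pre for cls, pre in results if cls == top]
        return top, sum(pres) / len(pres)
    return max(results, key=lambda r: r[1])   # all disagree: most confident wins

# e.g. fuse_classifiers([("car", 0.9), ("truck", 0.7), ("car", 0.8)])
# -> ("car", 0.85)
```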
[0081] Whether to remove the detection frame is determined from the classifier result and the yolov3 prediction result, as shown in figure 3; the system detection results are then refreshed and detection of the next picture proceeds. The judgment formula is as follows:
[0082]
[0083] where Y represents the final detection result, yo_pre the yolov3 detection confidence, yo_area the yolov3 detection frame, yo_cls the yolov3 detection category, cls_pre the classifier classification confidence, and cls_cls the classifier category. Y = 0 means the detection box is deleted.
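A sketch of one plausible form of this final judgment; the exact rule (agreement keeps the yolov3 box, a confident disagreeing classifier overrides the label, and anything else deletes the box) is an assumption, as is the pre_th default:

```python
def final_decision(yo_area, yo_cls, yo_pre, cls_cls, cls_pre, pre_th=0.6):
    """Step (5) final output Y: a kept (frame, category, confidence) triple,
    or 0 to delete the detection frame. The rule itself is an assumption."""
    if yo_cls == cls_cls:                       # detector and classifier agree
        return (yo_area, yo_cls, max(yo_pre, cls_pre))
    if cls_pre >= pre_th:                       # confident disagreement: relabel
        return (yo_area, cls_cls, cls_pre)
    return 0                                    # delete the detection frame
```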
[0084] The yolov3 object detector and the classifiers thus jointly classify the car types in the image, and the output of the whole system is optimized.
