Traffic scene classification method based on multi-scale convolution neural network

A convolutional neural network technology for traffic scenes, applied to the classification of road traffic scenes in urban and suburban areas, achieving higher accuracy, more precise classification, and a clear training effect.

Inactive Publication Date: 2016-09-21
DALIAN UNIV OF TECH
Cites: 4 · Cited by: 75

AI Technical Summary

Problems solved by technology

Deep learning methods can act directly on raw image data, extract hidden features that reflect the intrinsic nature of the data, and offer sufficient model complexity to realize multi-object classification in traffic scenes. However, the model structures of deep learning methods are diverse and still leave considerable room for development and optimization; in existing deep learning methods, the contour clarity and accuracy of the classified images need improvement.




Embodiment Construction

[0033] The present invention will be further described below in conjunction with the accompanying drawings.

[0034] As shown in Figure 1, the specific embodiment of the present invention comprises the following steps:

[0035] A. Extract hidden features based on multi-scale convolutional neural network

[0036] A1. Using the vehicle-mounted RGB-D camera, acquire the RGB-D image of the traffic scene in front of the vehicle, that is, the color image 1 and the depth image 2, and form a four-channel Laplacian pyramid image 4 as the data input of the deep learning algorithm. At the same time, based on minimum-spanning-tree image segmentation, use the classic region-merging method, taking the RGB-D image of the traffic scene as input, to construct a hierarchical original segmentation tree 3. Each node in the original segmentation tree 3 corresponds to an original classification image region, and the root node C10 represents the entire origin...
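The four-channel input described in step A1 can be sketched in plain NumPy. This is an illustrative assumption of how an RGB-D Laplacian pyramid might be built: the kernel size, sigma, and number of levels are not specified by the patent, and the `gaussian_blur` helper is hypothetical, not part of the patent's method.

```python
import numpy as np

def gaussian_blur(img, k=5, sigma=1.0):
    """Separable Gaussian blur on an H x W x C array (reflect padding)."""
    ax = np.arange(k) - k // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    pad = k // 2
    # Blur rows, then columns, reusing the same 1-D kernel.
    out = np.pad(img, ((pad, pad), (0, 0), (0, 0)), mode="reflect")
    out = sum(g[i] * out[i:i + img.shape[0]] for i in range(k))
    out = np.pad(out, ((0, 0), (pad, pad), (0, 0)), mode="reflect")
    out = sum(g[i] * out[:, i:i + img.shape[1]] for i in range(k))
    return out

def laplacian_pyramid(rgbd, levels=3):
    """Laplacian pyramid of a four-channel (RGB + depth) image.

    Each level stores the detail removed by blurring; the last entry
    is the low-pass residual at the coarsest scale.
    """
    pyramid, current = [], rgbd.astype(np.float64)
    for _ in range(levels - 1):
        blurred = gaussian_blur(current)
        pyramid.append(current - blurred)   # band-pass detail at this scale
        current = blurred[::2, ::2]         # downsample by 2
    pyramid.append(current)                 # low-pass residual
    return pyramid

# Stack a color image and a depth map into the four-channel input.
rgb = np.random.rand(64, 64, 3)
depth = np.random.rand(64, 64, 1)
rgbd = np.concatenate([rgb, depth], axis=2)
pyr = laplacian_pyramid(rgbd, levels=3)
print([p.shape for p in pyr])  # [(64, 64, 4), (32, 32, 4), (16, 16, 4)]
```

Feeding the depth map as a fourth channel means the pyramid decomposition treats geometric structure the same way it treats color, which is what lets the later convolutional stages learn depth-aware features.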



Abstract

The invention discloses a multi-target traffic scene classification method, specifically a traffic scene classification method based on a multi-scale convolutional neural network. The method is characterized in that hidden features are extracted by the multi-scale convolutional neural network and an optimal cover segmentation tree is obtained. In realizing traffic scene classification, the multi-scale convolutional neural network effectively extracts, at different scales, high-quality hidden features with invariance properties from the original image; compared with a single-scale convolutional neural network, richer and more effective feature information of the image is obtained. The effective information extracted by the convolutional neural network is combined with the original segmentation tree of the image to form an optimal-purity cover tree, and the cover with optimal purity is carried out, so that a clearer target contour is obtained and the classification accuracy is increased. RGB-D is used as the convolutional neural network input; compared with the conventional RGB input, the training features additionally contain depth information, making the classification of the input image more accurate.
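The multi-scale idea in the abstract, applying one shared set of convolution kernels at every pyramid scale and merging the resulting feature maps, can be sketched as follows. The naive convolution, nearest-neighbor upsampling, and the specific kernel shapes are illustrative assumptions; the patent does not disclose its network architecture at this level of detail.

```python
import numpy as np

def conv2d_same(x, kernels):
    """Naive 'same'-padded 2-D convolution with ReLU.
    x: H x W x Cin, kernels: k x k x Cin x Cout (shared across scales)."""
    k = kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W = x.shape[:2]
    out = np.zeros((H, W, kernels.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0)  # ReLU nonlinearity

def upsample(x, factor):
    """Nearest-neighbor upsampling back to the finest resolution."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def multiscale_features(pyramid, kernels):
    """Run the SAME kernels at every pyramid scale, upsample each
    feature map to the finest scale, and concatenate channel-wise."""
    base = pyramid[0].shape[0]
    maps = []
    for level in pyramid:
        f = conv2d_same(level, kernels)
        maps.append(upsample(f, base // level.shape[0]))
    return np.concatenate(maps, axis=2)

rng = np.random.default_rng(0)
# Toy 3-level four-channel pyramid and one shared bank of 8 kernels.
pyramid = [rng.random((16, 16, 4)), rng.random((8, 8, 4)), rng.random((4, 4, 4))]
kernels = rng.random((3, 3, 4, 8))
feats = multiscale_features(pyramid, kernels)
print(feats.shape)  # (16, 16, 24): 3 scales x 8 feature maps each
```

Because the kernels are shared, each pixel in `feats` sees the image at three effective receptive-field sizes at once, which is the source of the richer feature information the abstract claims over a single-scale network.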

Description

Technical Field

[0001] The invention belongs to the field of vehicle intelligent transportation, and in particular relates to a method for classifying urban and suburban road traffic scenes.

Background Technique

[0002] Vehicle intelligence is one of the three core technologies in the development of today's automobile industry. The classification of road traffic scenes is an important prerequisite and basis for improving the intelligence of intelligent vehicles and advanced driver assistance systems (ADAS).

[0003] Traffic scene classification refers to capturing traffic scene images with vehicle-mounted cameras, using different machine learning methods to simulate the human visual perception process, and classifying and marking the vehicles, pedestrians, roads, and environmental elements in the captured scene. At present, according to the depth of the hierarchical structure of the machine learning model, traffic scene classification methods can be divided into methods based o...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/08
CPC: G06N3/08, G06V20/54, G06F18/24
Inventor 李琳辉连静李红挪刘爽钱波周雅夫孙延秋矫翔
Owner DALIAN UNIV OF TECH