
Fusion network driving environment perception model based on convolution and atrous convolution structures

A fusion-network technology for driving environment perception, applied in the field of advanced automotive driver assistance. It addresses the problems of heavy computation, high cost, and redundant calculation, and achieves low computing and data-labeling costs, improved accuracy, and simple annotation.

Active Publication Date: 2018-12-11
SOUTHEAST UNIV
Cites: 4 · Cited by: 59

AI Technical Summary

Problems solved by technology

[0005] In order to solve the above problems, the present invention provides a fusion network driving environment perception model based on convolution and atrous convolution structures. It addresses the shortcomings of current driving environment perception models: heavy and redundant computation, single-task models that each solve only one problem, semantic segmentation models that place excessive demands on segmentation data sets (pixel-level data labeling is too expensive), and the inability to perform multiple driving environment perception tasks at the same time. To achieve this goal, the present invention provides a fusion network driving environment perception model based on convolution and atrous convolution structures, the concrete steps of which are as follows:

Method used




Embodiment Construction

[0023] The present invention is described in further detail below in conjunction with the accompanying drawings and a specific embodiment:

[0024] The present invention provides a fusion network driving environment perception model based on convolution and atrous convolution structures. It addresses the shortcomings of current driving environment perception models: heavy and redundant computation, single-task models that each solve only one problem, semantic segmentation models that place excessive demands on segmentation data sets (pixel-level data labeling is too expensive), and the inability to perform multiple driving environment perception tasks at the same time.

[0025] A fusion network driving environment perception model based on convolution and atrous convolution of the present invention comprises the following steps:

[0026] 1) Capture images of the current driving environment through a camera installed ...
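The remaining steps of the embodiment are truncated in this extract. Based on the architecture the abstract describes (a shared bottom feature map from a residual-style backbone, feeding an object-detection head and a semantic-segmentation head), a minimal PyTorch sketch might look as follows; the layer sizes, class count, and anchor count are illustrative assumptions, not values from the patent:

```python
# Minimal sketch of a shared-backbone fusion network (hypothetical layer sizes).
import torch
import torch.nn as nn

class FusionPerceptionNet(nn.Module):
    """Shared backbone with an object-detection head and a segmentation head."""
    def __init__(self, num_classes=19, num_anchors=9):
        super().__init__()
        # Shared bottom feature extractor: plain convolutions followed by
        # atrous (dilated) convolutions that enlarge the receptive field
        # without further downsampling.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Detection head: per-location box offsets, objectness, and class scores.
        self.det_head = nn.Conv2d(256, num_anchors * (4 + 1 + num_classes), 1)
        # Segmentation head: per-pixel class logits, upsampled to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(256, num_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.backbone(x)  # shared bottom feature map, computed once
        return self.det_head(feats), self.seg_head(feats)

if __name__ == "__main__":
    net = FusionPerceptionNet()
    det_out, seg_out = net(torch.randn(1, 3, 256, 512))
    print(det_out.shape, seg_out.shape)
```

Sharing the backbone means the expensive convolutional features are computed once per frame and reused by both heads, which is the source of the low computing cost the patent claims.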



Abstract

A fusion network driving environment perception model based on convolution and atrous convolution structures realizes object detection and semantic segmentation simultaneously. A video image of the road environment is captured by a forward-looking camera system mounted on the vehicle. A residual network model extracts the bottom feature map of the image. A fused network is designed comprising two sub-modules, object detection and semantic segmentation, which share the bottom feature map. The object detection module predicts object bounding boxes with confidence levels and categories, while the semantic segmentation module predicts a category for each pixel. An appropriate loss function is selected for each of the two modules, and after alternate training the perception model tends to converge on both tasks. Finally, a joint loss function is used to train the two modules simultaneously to obtain the final perception model. The model completes object detection and semantic segmentation simultaneously with a small amount of computation, and uses the large amount of object detection data to help the semantic segmentation module learn the distribution of images.
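For illustration, the alternate-then-joint training schedule described in this abstract could be sketched as below, using a two-headed model such as the one sketched earlier. The specific losses (a smooth-L1 stand-in for detection, cross-entropy for segmentation), optimizer settings, and data-loader formats are assumptions; the abstract only says an appropriate loss is chosen per module and a joint loss is used at the end:

```python
# Illustrative alternate-then-joint training schedule (losses, optimizer,
# and data-loader formats are assumptions, not specified in this extract).
import torch
import torch.nn as nn

def train(net, det_loader, seg_loader, joint_loader,
          alt_epochs=10, joint_epochs=5):
    det_criterion = nn.SmoothL1Loss()      # stand-in for a detection loss
    seg_criterion = nn.CrossEntropyLoss()  # per-pixel classification loss
    opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

    # Phase 1: alternate between the two heads until both tend to converge.
    for _ in range(alt_epochs):
        for (det_img, det_tgt), (seg_img, seg_tgt) in zip(det_loader, seg_loader):
            det_out, _ = net(det_img)
            loss = det_criterion(det_out, det_tgt)
            opt.zero_grad(); loss.backward(); opt.step()

            _, seg_out = net(seg_img)
            loss = seg_criterion(seg_out, seg_tgt)
            opt.zero_grad(); loss.backward(); opt.step()

    # Phase 2: train both heads simultaneously with a joint loss.
    for _ in range(joint_epochs):
        for img, det_tgt, seg_tgt in joint_loader:
            det_out, seg_out = net(img)
            loss = (det_criterion(det_out, det_tgt)
                    + seg_criterion(seg_out, seg_tgt))
            opt.zero_grad(); loss.backward(); opt.step()
```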

Description

Technical field
[0001] The invention relates to the technical field of advanced automobile driver assistance, and in particular to a fusion network driving environment perception model based on convolution and atrous convolution structures.

Background technique
[0002] Driving environment perception is an important function of ADAS (Advanced Driver Assistance Systems). Existing driving environment perception mainly comprises two major tasks: object detection (obtaining the position and category information in the image of objects of interest, such as pedestrians, vehicles, bicycles, and traffic signs) and semantic segmentation (assigning a category label to each pixel of the image). Driving environment perception can be used to assist driving decision-making and reduce traffic accidents.
[0003] At present, in order to complete object detection and semantic segmentation, statistical learning methods such as support vector machines or co...
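As a concrete illustration of the atrous (dilated) convolution structure named in the technical field: a dilated 3×3 convolution covers a wider receptive field than a plain 3×3 convolution while keeping exactly the same parameter count and output resolution. A minimal PyTorch check (the layer sizes are arbitrary):

```python
# Atrous (dilated) convolution enlarges the receptive field without adding
# parameters or reducing resolution; an illustrative comparison.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 16, 16)
plain  = nn.Conv2d(1, 1, kernel_size=3, padding=1)              # 3x3 receptive field
atrous = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2)  # 5x5 receptive field

print(plain(x).shape, atrous(x).shape)  # same output size: (1, 1, 16, 16)
print(sum(p.numel() for p in plain.parameters()),
      sum(p.numel() for p in atrous.parameters()))  # same parameter count: 10 10
```

This is why atrous convolutions suit semantic segmentation: they enlarge spatial context without the downsampling that would discard the resolution per-pixel prediction needs.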

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/00; G06N 3/04; G06N 3/08
CPC: G06N 3/084; G06V 20/56; G06N 3/045
Inventors: 秦文虎 (Qin Wenhu); 张仕超 (Zhang Shichao)
Owner: SOUTHEAST UNIV