A road environment visual perception method based on an improved Faster R-CNN

A visual perception technology for road environments, applied in the fields of instruments, character and pattern recognition, scene recognition, etc. It addresses problems such as the safety and reliability constraints that limit the promotion and popularization of driverless cars, wrong target predictions, and inaccurate features, and achieves effects such as improved generalization ability and detection accuracy, a reduced missed-detection rate, and enhanced detection capability.

Active Publication Date: 2019-03-08
TIANJIN UNIVERSITY OF TECHNOLOGY
Cites: 7 | Cited by: 16

AI Technical Summary

Problems solved by technology

However, in the face of complex road scenes, the safety and reliability problems of autonomous driving technology have always been the bottleneck restricting the promotion and popularization of driverless cars.
The Faster R-CNN algorithm also has a number of shortcomings.
For example, Faster R-CNN can only be trained on a single GPU; when the number of training samples is huge...
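
The abstract states that the improvement replaces single-GPU training with multi-GPU parallel training, but the excerpt gives no implementation details. A minimal data-parallel sketch in PyTorch might look like the following; nn.DataParallel, the stand-in backbone and the hyperparameters are all illustrative assumptions, not details from the patent.

```python
# Sketch only: the abstract mentions multi-GPU parallel training without details;
# nn.DataParallel, the stand-in backbone and all hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=None)          # stand-in for the full detector
if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU and split each mini-batch among
    # them, lifting the single-GPU restriction of the baseline training setup.
    model = nn.DataParallel(model)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One data-parallel step: the batch is scattered across GPUs, gradients gathered."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```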


Embodiment Construction

[0037] In order to describe the technical content, structural features, objectives and effects of the technical solution in detail, it is described below in conjunction with specific embodiments and the accompanying drawings.

[0038] The present invention proposes a road environment visual perception method based on an improved Faster R-CNN, which comprises the following steps:

[0039] S1. Before the input image enters the network model, it is first scaled to 1600*700 and then fed into the ResNet-101 feature extraction network in the Feature extraction network module, as shown in figure 2. After passing through Conv1, Conv2_x, Conv3_x and Conv4_x of ResNet-101, 91 fully convolutional layers in total, the feature maps of the image are extracted;
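
This feature-extraction stage can be sketched as follows, assuming PyTorch/torchvision as the framework (the patent excerpt does not name one). The backbone is ResNet-101 truncated after the Conv4_x stage (torchvision's layer3), which matches the 1 + 9 + 12 + 69 = 91 convolutional layers counted above, and the image is resized to 1600*700 before entering the network. This is a minimal sketch, not the patent's implementation.

```python
# Sketch only: the patent does not specify a framework; PyTorch/torchvision is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# ResNet-101 truncated after Conv4_x (torchvision's "layer3"), i.e. the Conv1,
# Conv2_x, Conv3_x and Conv4_x stages used for feature extraction in S1.
resnet101 = models.resnet101(weights=None)
feature_extractor = nn.Sequential(
    resnet101.conv1, resnet101.bn1, resnet101.relu, resnet101.maxpool,  # Conv1
    resnet101.layer1,   # Conv2_x
    resnet101.layer2,   # Conv3_x
    resnet101.layer3,   # Conv4_x -> 1024-channel feature maps at stride 16
)

# Scale the input image to 1600*700 before it enters the network, as in S1.
x = torch.rand(1, 3, 1080, 1920)                      # dummy road-scene frame
x = F.interpolate(x, size=(700, 1600), mode="bilinear", align_corners=False)

with torch.no_grad():
    feature_maps = feature_extractor(x)
print(feature_maps.shape)                             # e.g. torch.Size([1, 1024, 44, 100])
```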

[0040] S2. The feature maps output by the Feature extraction network module enter the Region proposal network module, as shown in figure 1. The Region proposal network module uses a 3*3 sliding window to traverse...
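
The excerpt is truncated here, but in the standard Faster R-CNN formulation the 3*3 sliding window is realized as a 3*3 convolution followed by two sibling 1*1 convolutions for objectness classification and box regression. The sketch below assumes PyTorch and the original Faster R-CNN setting of 9 anchors per location, neither of which is stated in the excerpt.

```python
# Sketch only: standard Faster R-CNN RPN head in PyTorch; the 9-anchors-per-location
# setting is assumed from the original Faster R-CNN, not stated in the excerpt above.
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=1024, num_anchors=9):
        super().__init__()
        # The 3*3 sliding window over the feature maps is a 3*3 convolution.
        self.sliding_window = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # Two sibling 1*1 convolutions: objectness scores and box regression offsets.
        self.cls_score = nn.Conv2d(512, num_anchors * 2, kernel_size=1)   # object / background
        self.bbox_pred = nn.Conv2d(512, num_anchors * 4, kernel_size=1)   # (dx, dy, dw, dh)

    def forward(self, feature_maps):
        h = self.relu(self.sliding_window(feature_maps))
        return self.cls_score(h), self.bbox_pred(h)

# Usage with stride-16 feature maps shaped like those from S1 (shape assumed):
rpn = RPNHead(in_channels=1024)
scores, deltas = rpn(torch.randn(1, 1024, 44, 100))
print(scores.shape, deltas.shape)   # (1, 18, 44, 100), (1, 36, 44, 100)
```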


Abstract

The present invention relates to a road environment visual perception method based on an improved Faster R-CNN. Aiming at the high-precision requirements of target detection and recognition in complex road scenes, the invention provides an improved Faster R-CNN algorithm based on multi-GPU training. The algorithm uses a multi-GPU parallel training method to improve training efficiency, and employs the ResNet-101 feature extraction network to improve target detection precision. The Soft-NMS algorithm is employed to reduce the missed-detection rate, and an OHEM is introduced into the ROI Network to reduce the false alarm rate. To improve the target detection effect in rainy, snowy and hazy weather, the model is trained on a combination of the internationally recognized autonomous driving data sets KITTI and Oxford RobotCar. Experimental results show that, compared with Faster R-CNN, the algorithm obviously improves training speed and detection precision, and in particular has good generalization ability and stronger practicability in autonomous driving scenes.
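
As a hedged illustration of the Soft-NMS step mentioned in the abstract: instead of discarding every box that overlaps a higher-scoring detection, Soft-NMS decays its score, which lowers the missed-detection rate for occluded targets. The sketch below uses the linear-decay variant of Bodla et al.; the variant and the thresholds are assumptions, since the excerpt does not specify them.

```python
# Sketch only: linear Soft-NMS (Bodla et al., 2017); the decay variant, the IoU
# threshold and the score threshold are illustrative assumptions, not patent values.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all given as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    """Keep the best box, then decay (rather than discard) the scores of overlaps."""
    boxes, scores = boxes.copy(), scores.copy()
    kept = []
    while len(scores) > 0:
        best = int(np.argmax(scores))
        kept.append((boxes[best], float(scores[best])))
        overlaps = iou(boxes[best], boxes)
        # Linear decay for boxes overlapping the kept box beyond the IoU threshold.
        scores = scores * np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
        mask = scores > score_thresh
        mask[best] = False                     # the kept box itself is removed
        boxes, scores = boxes[mask], scores[mask]
    return kept
```

Compared with classical NMS, partially occluded vehicles or pedestrians whose boxes overlap a stronger detection retain a reduced score rather than being suppressed outright, which is how the missed-detection rate is lowered.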

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a road environment visual perception method based on an improved Faster R-CNN. Through improvements to the Faster R-CNN algorithm, the method significantly increases network model training speed and target detection accuracy, and in particular has good generalization ability and stronger practicability in autonomous driving scenarios.

Background technique

[0002] A milestone for self-driving cars came in 2009, when Google began developing the self-driving car project now known as Waymo. In recent years, as AlphaGo demonstrated the powerful learning ability of deep learning, the application of deep-learning-based environment perception and driving decision-making algorithms in automatic driving has made it possible for unmanned driving to truly replace human driving. However, in the face of complex road scenes, the safety and reliability problems of autonomous dr...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V20/58, G06V20/584, G06F18/214
Inventor: 董恩增, 路尧, 佟吉刚
Owner: TIANJIN UNIVERSITY OF TECHNOLOGY