
A road environment visual perception method based on improved faster R-CNN

A visual perception technology for road environments, applied to instruments, scene recognition, and computation. It addresses problems such as the safety and reliability of autonomous driving technology restricting the promotion and popularization of driverless cars, long training times, and imprecise features, and achieves the effects of improving generalization ability, detection accuracy, and training time.

Active Publication Date: 2021-08-03
TIANJIN UNIVERSITY OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, in complex road scenes, the safety and reliability of autonomous driving technology have long been the bottleneck restricting the promotion and popularization of driverless cars. The Faster R-CNN algorithm also has several shortcomings. For example, Faster R-CNN can only be trained on a single GPU; when the number of training samples is huge or the feature extraction network is deepened, this leads to long training times and insufficient video memory. Because the features extracted by the feature extraction network are not fine enough, targets can be missed. And in complex scenes, or when targets are occluded or deformed, target prediction errors occur.




Embodiment Construction

[0037] In order to describe the technical content, structural features, objectives, and effects of the technical solution in detail, it is explained below in conjunction with specific embodiments and accompanying drawings.

[0038] The present invention proposes a road environment visual perception method based on improved Faster R-CNN, which comprises the following steps:

[0039] S1. Before the input image enters the network model, it is first scaled to 1600*700 and then fed into the ResNet-101 feature extraction network in the Feature extraction network module, as shown in Figure 2. After passing through Conv1, Conv2_x, Conv3_x, and Conv4_x of ResNet-101, a total of 91 fully convolutional layers, the feature maps of the image are extracted;
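As a rough sanity check (not stated in the patent text), a ResNet backbone used this way in Faster R-CNN typically downsamples by a total stride of 16 up to the Conv4_x output; under that assumption, the 1600*700 input would produce roughly a 100*44 feature map. A minimal sketch of that arithmetic:

```python
def feature_map_size(width, height, stride=16):
    """Spatial size of the backbone output, assuming a total downsampling
    stride of 16 (standard for Faster R-CNN with a ResNet Conv4_x head).
    Uses ceiling division, as non-divisible sizes are rounded up."""
    return (-(-width // stride), -(-height // stride))

# Input size from step S1 of the patent:
print(feature_map_size(1600, 700))  # (100, 44)
```

The stride value is an assumption based on the original Faster R-CNN design; the patent excerpt itself does not specify it.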

[0040] S2. The feature maps output by the Feature extraction network module enter the Region proposal network module, as shown in Figure 1. The Region proposal network module uses a 3*3 sliding window to traver...
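The excerpt is truncated here, but in the standard Faster R-CNN formulation the 3*3 sliding window places a fixed set of anchor boxes at every feature-map position, typically 9 anchors per position from 3 scales and 3 aspect ratios. The sketch below illustrates that standard scheme; the scales and ratios are the defaults from the original Faster R-CNN paper, not values taken from this patent:

```python
import numpy as np

def generate_anchors(feat_w, feat_h, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Anchors as (cx, cy, w, h) at every sliding-window position.
    Each anchor keeps area scale**2 while its h/w ratio equals `r`."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Center of this feature-map cell in input-image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(1.0 / r)
                    h = s * np.sqrt(r)
                    anchors.append((cx, cy, w, h))
    return np.array(anchors)

# A toy 4*3 feature map yields 4*3 positions * 9 anchors = 108 anchors.
a = generate_anchors(4, 3)
print(a.shape)  # (108, 4)
```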



Abstract

A road environment visual perception method based on improved Faster R-CNN. To meet the high-precision requirements of target detection and recognition in complex road scenes, the present invention proposes an improved Faster R-CNN algorithm based on multi-GPU training. The algorithm of the present invention uses multi-GPU parallel training to improve training efficiency; adopts the ResNet-101 feature extraction network to improve target detection accuracy; adopts the Soft-NMS algorithm to reduce the missed detection rate; and introduces OHEM into the ROI network to reduce the false alarm rate. To improve the algorithm's detection performance in rainy, snowy, and foggy weather, the model is trained on the internationally recognized autonomous driving datasets KITTI and Oxford RobotCar. Experimental results show that, compared with Faster R-CNN, the algorithm of the present invention significantly improves training speed and detection accuracy, and exhibits good generalization ability and stronger practicability, especially in autonomous driving scenarios.
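The abstract names Soft-NMS as the mechanism for reducing missed detections. The sketch below illustrates the Gaussian variant of Soft-NMS from the original Soft-NMS paper: instead of discarding every box that overlaps the current best detection (as hard NMS does), overlapping boxes have their scores decayed, so heavily occluded true positives can survive. The `sigma` and score threshold values are common defaults, not parameters taken from this patent:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of dropping boxes.
    Returns the kept box indices in descending order of (decayed) score."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        if idxs:
            ov = iou(boxes[best], boxes[idxs])
            # Gaussian decay: the larger the overlap, the stronger the penalty.
            scores[idxs] *= np.exp(-(ov ** 2) / sigma)
            idxs = [i for i in idxs if scores[i] > score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [100, 100, 110, 110]],
                 dtype=float)
scores = np.array([0.9, 0.8, 0.7])
# Hard NMS at a 0.5 threshold would drop box 1; Soft-NMS keeps it,
# merely demoted below the distant box 2.
print(soft_nms(boxes, scores))  # [0, 2, 1]
```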

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a road environment visual perception method based on improved Faster R-CNN. Through improvements to the Faster R-CNN algorithm, this method significantly improves network model training speed and target detection accuracy, and exhibits good generalization ability and stronger practicability, especially in autonomous driving scenarios.

Background Technique

[0002] The self-driving car milestone began in 2009, when Google started developing the self-driving car project known as Waymo. In recent years, as AlphaGo demonstrated the powerful learning ability of deep learning, the application of deep-learning-based environment perception and driving decision-making algorithms in autonomous driving has made it possible for driverless cars to truly replace human driving. However, in the face of complex road scenes, the safety and reliability problems of autonomous dr...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V20/58, G06V20/584, G06F18/214
Inventor: 董恩增 (Dong Enzeng), 路尧 (Lu Yao), 佟吉刚 (Tong Jigang)
Owner TIANJIN UNIVERSITY OF TECHNOLOGY