
Road scene segmentation method based on full convolutional neural network

A full convolutional neural network technology, applied in the fields of unmanned driving, image segmentation, target recognition, and target retrieval. It addresses problems of existing methods such as segmentation accuracy for road signs, vehicles, and pedestrians falling short of ideal results, poor segmentation of complex scenes, and difficulty meeting practical requirements, achieving the effects of solving the road scene segmentation problem and preventing over-segmentation.

Active Publication Date: 2018-08-17
HUBEI UNIV OF TECH
Cites: 5 · Cited by: 18

AI Technical Summary

Problems solved by technology

Environmental perception includes road recognition, vehicle detection, pedestrian detection, road marking recognition, and so on; solving these problems has long been a major challenge.
Pedestrian detection based on binocular stereo vision and the SVM algorithm uses threshold segmentation to determine the coordinate position of the moving target; to cope with the diversity of motion indicators, the road is segmented using five motion indicators such as height, projection-surface direction, and feature-tracking density. These methods require huge computing resources and can hardly meet the practical requirements of unmanned vehicles at this stage.
Since 2014, deep learning has gradually been introduced into road scene segmentation. Some researchers have proposed end-to-end deep learning for intelligent vehicle steering, obtaining good road feature encodings through pre-trained autoencoders. In recent years, because massive parallel computing power accelerates training on large-scale data, the convolutional neural network (CNN) has become a research hotspot and is widely used: high-order features of the scene are exploited to segment road scenes. Although this reduces computational intensity, segmentation results in some complex scenes remain poor. To handle complex scenes, an approach was proposed that exploits the automatic feature-extraction ability of a deep convolutional neural network (DCNN), supplemented by a feature autoencoder to measure feature similarity between source and target scenes.
However, the segmentation accuracy of these algorithms for road signs, vehicles, and pedestrians has not reached ideal results, and over-segmentation often occurs on roads in rainy, snowy, or high-temperature weather.

Method used



Examples


Embodiment Construction

[0034] The technical solution of the present invention will be further described below in conjunction with the accompanying drawings.

[0035] As shown in Figure 1, the original data set is processed into a training set through the improved KSW double-threshold segmentation and genetic algorithm, and different weights are selected within the FCN framework: the FCN-16s model is obtained by selecting the FCN-32s weights, and the FCN-8s model is obtained by selecting the FCN-16s weights. According to the results of multiple experiments, the FCN-16s model performs best, so the present invention selects the FCN-32s weights to build the full convolutional neural network framework.
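
The FCN-32s/FCN-16s/FCN-8s variants mentioned above differ only in which pooling-stage features are fused with the coarse prediction before upsampling; FCN-16s, for example, adds a skip connection from the pool4 stage to the upsampled pool5 scores. The sketch below illustrates that fusion in PyTorch with a deliberately simplified encoder; the layer sizes, class count, and backbone are illustrative assumptions, not the exact network used in the patent.

```python
# Minimal sketch of FCN-16s skip fusion (PyTorch). The backbone, channel sizes
# and class count are illustrative assumptions, not the patent's exact network.
import torch
import torch.nn as nn


class FCN16s(nn.Module):
    def __init__(self, num_classes=12):
        super().__init__()
        # Simplified VGG-style encoder: only the stages up to pool4 and pool5.
        self.to_pool4 = nn.Sequential(          # output stride 16
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.to_pool5 = nn.Sequential(          # output stride 32
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # 1x1 "score" layers project features to per-class maps.
        self.score_pool4 = nn.Conv2d(512, num_classes, 1)
        self.score_pool5 = nn.Conv2d(512, num_classes, 1)
        # Learned 2x upsampling of the coarse (stride-32) prediction.
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        # Final 16x upsampling back to the input resolution.
        self.up16 = nn.ConvTranspose2d(num_classes, num_classes, 32, stride=16, padding=8)

    def forward(self, x):
        pool4 = self.to_pool4(x)                    # 1/16 resolution
        pool5 = self.to_pool5(pool4)                # 1/32 resolution
        fused = self.up2(self.score_pool5(pool5)) + self.score_pool4(pool4)
        return self.up16(fused)                     # per-pixel class scores


if __name__ == "__main__":
    scores = FCN16s(num_classes=12)(torch.randn(1, 3, 256, 256))
    print(scores.shape)  # torch.Size([1, 12, 256, 256])
```

An FCN-8s variant would add one more skip connection from the pool3 stage before the final upsampling, and an FCN-32s variant would upsample the pool5 scores directly without any fusion.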

[0036] Step 1: feed the original road scene images, as initial data, into the improved KSW (maximum entropy) two-dimensional thresholding and genetic algorithm to obtain the training set for deep learning; the test set can use the original RGB images. The specific implementation includes the following steps:
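
For context, KSW (Kapur-Sahoo-Wong) thresholding selects thresholds that maximise the total entropy of the resulting regions; in the two-dimensional form the histogram is built over (gray level, neighbourhood mean) pairs and a genetic algorithm searches for the threshold pair (s, t) instead of scanning all combinations exhaustively. The sketch below is a minimal Python illustration of that idea; the 3x3 neighbourhood mean, the GA settings, and the "improved" details of the patent's variant are assumptions and are not taken from the claims.

```python
# Illustrative sketch of 2-D maximum-entropy (KSW) thresholding with a simple
# genetic algorithm searching for the threshold pair (s, t). GA settings and
# the 3x3 neighbourhood mean are assumptions, not the patent's parameters.
import numpy as np
from scipy.ndimage import uniform_filter


def fitness(hist2d, s, t):
    """Sum of entropies of the 'background' (<=s, <=t) and 'object' (>s, >t)
    blocks of the normalised 2-D histogram; larger is better."""
    eps = 1e-12
    total = 0.0
    for block in (hist2d[: s + 1, : t + 1], hist2d[s + 1:, t + 1:]):
        p = block.sum()
        if p > eps:
            q = block[block > 0] / p
            total += -(q * np.log(q)).sum()
    return total


def ksw_2d_threshold_ga(gray, pop_size=30, generations=60, mutation=0.1, seed=0):
    """Search the (s, t) pair that maximises the 2-D KSW entropy with a GA."""
    rng = np.random.default_rng(seed)
    mean = uniform_filter(gray.astype(float), size=3).astype(np.uint8)
    hist2d, _, _ = np.histogram2d(gray.ravel(), mean.ravel(),
                                  bins=256, range=[[0, 256], [0, 256]])
    hist2d /= hist2d.sum()

    pop = rng.integers(0, 256, size=(pop_size, 2))           # chromosomes = (s, t)
    for _ in range(generations):
        fit = np.array([fitness(hist2d, s, t) for s, t in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]                 # truncation selection
        # Crossover: each child takes s from one parent and t from another.
        children = np.stack([parents[rng.integers(len(parents), size=pop_size), 0],
                             parents[rng.integers(len(parents), size=pop_size), 1]],
                            axis=1)
        # Random-reset mutation on a small fraction of genes.
        mask = rng.random(children.shape) < mutation
        children[mask] = rng.integers(0, 256, size=mask.sum())
        pop = children
        pop[0] = parents[0]                                   # elitism: keep the best
    fit = np.array([fitness(hist2d, s, t) for s, t in pop])
    return tuple(pop[int(np.argmax(fit))])                    # best (s, t)


if __name__ == "__main__":
    img = (np.random.rand(128, 128) * 255).astype(np.uint8)
    s, t = ksw_2d_threshold_ga(img)
    label = ((img > s) & (uniform_filter(img.astype(float), 3) > t)).astype(np.uint8)
    print("thresholds:", s, t, "foreground pixels:", int(label.sum()))
```

In this sketch the binary label map produced at the end would play the role of a candidate training mask, to be checked against the manually verified segmentation before training.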

[0037] Step 1a, first set...



Abstract

The invention relates to a road scene segmentation method based on a full convolutional neural network, and the method comprises the following steps: 1, carrying out median filtering of an original road scene image through a KSW two-dimensional threshold and a genetic algorithm to obtain a training set; 2, constructing a full convolutional neural network framework; 3, taking the training samples obtained in step 1 and artificially segmented images verified by human inspection as the input data of the full convolutional neural network, and obtaining, through training, a deep learning neural network segmentation model with higher robustness and better accuracy; 4, feeding the road scene image test data to be segmented into the trained deep learning neural network segmentation model to obtain the final segmentation result. Experimental results indicate that the method can effectively solve the segmentation problem of road scene images, has higher robustness and segmentation precision than conventional road scene image segmentation methods, and can further be used for road image segmentation in more complex scenes.
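
Steps 3 and 4 amount to a standard supervised training and inference loop once the training set from step 1 has been paired with the manually verified segmentation masks. A minimal sketch of that loop is given below, assuming a PyTorch per-pixel classifier (for instance, the illustrative FCN sketched under "Embodiment Construction" above); the batch size, optimiser, and epoch count are placeholder assumptions rather than values from the patent.

```python
# Hedged sketch of steps 3-4: train a per-pixel classifier on the generated
# training set, then segment the test images. Hyper-parameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def train_and_segment(model, train_images, train_masks, test_images,
                      epochs=20, lr=1e-3, device="cpu"):
    """train_images: float tensor (N, 3, H, W); train_masks: long tensor (N, H, W)
    of per-pixel class indices; test_images: float tensor (M, 3, H, W)."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                          # per-pixel cross-entropy
    loader = DataLoader(TensorDataset(train_images, train_masks),
                        batch_size=4, shuffle=True)
    model.train()
    for _ in range(epochs):                                  # step 3: supervised training
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    model.eval()
    with torch.no_grad():                                    # step 4: segment test images
        return model(test_images.to(device)).argmax(dim=1)   # (M, H, W) label map
```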

Description

Technical field

[0001] The invention belongs to the technical field of image segmentation, relates to road image segmentation applicable to multiple scenes such as cities, suburbs, and expressways, and can be used in fields such as object recognition, object retrieval, and unmanned driving technology.

Background technique

[0002] With the development of society, economic growth, and the continuous improvement of urban traffic, automobiles have become far more widespread. However, traffic accidents caused by improper driving are also increasing day by day. With the continuous advancement of science and technology, unmanned driving technology has emerged as the times require. An unmanned driving system is a complex intelligent control system that integrates multiple modules such as intelligent recognition, path planning, and mechanical control; its ultimate goal is to realize the automatic driving operation of...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11, G06T7/136, G06N3/12, G06N3/08, G06N3/04
CPC: G06N3/088, G06N3/126, G06T7/11, G06T7/136, G06N3/045
Inventor: 王云艳, 罗冷坤, 徐超
Owner: HUBEI UNIV OF TECH