
Urban road scene semantic segmentation method based on deep learning

A semantic segmentation and deep learning technology, applied to neural learning methods, image analysis, and image mosaicking, achieving the effects of requiring a smaller data set, strong practicability and adaptability, and ease of understanding.

Active Publication Date: 2020-08-28
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0003] In order to overcome the deficiencies of the existing technology, and so that intelligent vehicles can better identify their surroundings in complex environments such as urban roads, the present invention proposes a deep-learning-based method for semantic segmentation of urban road scenes. Using a smaller data set prevents the gradient from descending too quickly and ensures that no overfitting occurs during training.



Embodiment Construction

[0037] The method of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0038] Referring to Figures 1 to 4, a method for semantic segmentation of urban road scenes based on deep learning comprises the following steps:

[0039] 1) Image collection at the front end of the vehicle: urban road images are collected at regular intervals, with the time interval set to T, and images with a resolution of h×w are input into the image detection module to obtain valid images. The valid images are then input into the labeling module for annotation; the system uses Labelme 3.11.2, a labeling tool with a public image interface. Through its scene-segmentation labeling function, the vehicles, pedestrians, bicycles, traffic lights and neon lights in the image are outlined and labeled as different categories, and the generated label image reflects the different object categories through different gray levels; the gray-level table and...
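As an illustration of how such gray-level label images can be produced from polygon annotations, the following is a minimal sketch assuming a Labelme-style JSON file containing a `shapes` list of labeled polygons; the class names, the class-to-gray-level mapping (`GRAY_LEVELS`) and the file names are illustrative assumptions, not values given in the patent.

```python
import json
import numpy as np
from PIL import Image, ImageDraw

# Hypothetical mapping from annotated class to output gray level; the patent
# only states that different categories are encoded as different gray levels.
GRAY_LEVELS = {
    "vehicle": 50,
    "pedestrian": 100,
    "bicycle": 150,
    "traffic_light": 200,
    "neon_light": 250,
}

def labelme_json_to_mask(json_path: str, height: int, width: int) -> np.ndarray:
    """Rasterize Labelme-style polygon annotations into a grayscale label mask."""
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)

    mask = Image.new("L", (width, height), 0)   # gray level 0 = background
    draw = ImageDraw.Draw(mask)
    for shape in ann.get("shapes", []):
        gray = GRAY_LEVELS.get(shape["label"])
        if gray is None:
            continue                            # skip classes outside the mapping
        polygon = [tuple(pt) for pt in shape["points"]]
        draw.polygon(polygon, fill=gray)
    return np.array(mask, dtype=np.uint8)

# Example usage (file names and h, w are placeholders):
# mask = labelme_json_to_mask("frame_0001.json", height=h, width=w)
# Image.fromarray(mask).save("frame_0001_label.png")
```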



Abstract

The invention discloses an urban road scene semantic segmentation method based on deep learning. The method comprises the following steps: 1) collecting images at the front end of a vehicle; 2) expanding the input data of the annotated images and the original images: randomly cropping, splicing or adding different types of noise to the images, transforming the images through an image affine matrix, and finally maintaining the original image resolution through padding, cropping and other transformations to obtain a data set; 3) training a network with the expanded images and the annotation images, wherein the residual U-net network comprises a down-sampling part, a bridge part, an up-sampling part and a classification part; and 4) modifying the time interval T of the acquisition module, inputting subsequently obtained images into the trained deep learning model, outputting predicted semantic segmentation images, and returning the different gray levels in the images to the processor. Using a small data set, too-fast gradient descent can be prevented and it can be ensured that no over-fitting problem occurs during training.
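The abstract names a residual U-net with down-sampling, bridge, up-sampling and classification parts but does not give layer-level details. The PyTorch sketch below shows one plausible way to assemble such a network; the channel widths, the number of levels, the residual-block design and the class count (five labeled categories plus background) are assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (assumed block design)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the residual can be added when channel counts differ
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class ResidualUNet(nn.Module):
    """Down-sampling, bridge, up-sampling and classification parts, as listed in the abstract."""
    def __init__(self, in_ch=3, num_classes=6, widths=(64, 128, 256)):
        super().__init__()
        # Down-sampling part: residual blocks followed by 2x2 max pooling
        self.enc = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.enc.append(ResidualBlock(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        # Bridge part between encoder and decoder
        self.bridge = ResidualBlock(widths[-1], widths[-1] * 2)
        # Up-sampling part: transposed convolutions plus encoder skip connections
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        ch = widths[-1] * 2
        for w in reversed(widths):
            self.up.append(nn.ConvTranspose2d(ch, w, 2, stride=2))
            self.dec.append(ResidualBlock(w * 2, w))
            ch = w
        # Classification part: per-pixel class scores
        self.classify = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bridge(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.classify(x)

# Smoke test with input sides divisible by 2**len(widths):
# logits = ResidualUNet()(torch.randn(1, 3, 256, 512))  # -> shape (1, 6, 256, 512)
```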

Description

Technical field

[0001] The invention belongs to the field of intelligent vehicles and relates to a semantic segmentation method for urban road scenes based on deep learning.

Background technique

[0002] In recent years, with the continuous development of urbanization, urban road conditions have become more and more complex. Pedestrians, traffic lights, zebra crossings and different means of transportation all affect the speed and obstacle-avoidance measures of intelligent vehicles. A deep-learning-based semantic segmentation method can identify the environment around the vehicle well and provide appropriate feedback. Semantic segmentation assigns a preset category to each pixel of an image, which not only lets intelligent vehicles understand the surrounding environment in real time while driving, but also reduces the occurrence of traffic accidents. Therefore, research on deep learning for urban road environments has always been a research hotspot in the field of...
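To make the per-pixel classification concrete: given the class scores produced by a segmentation network such as the one sketched above, each pixel is assigned the category with the highest score, and the result can be written out as a gray-level image. This is a minimal sketch; the gray-level encoding mirrors the hypothetical `GRAY_LEVELS` mapping used earlier and is not specified by the patent.

```python
import torch
import numpy as np
from PIL import Image

# Hypothetical gray level per class index (0 = background), mirroring GRAY_LEVELS above.
CLASS_TO_GRAY = np.array([0, 50, 100, 150, 200, 250], dtype=np.uint8)

def logits_to_gray_image(logits: torch.Tensor) -> Image.Image:
    """Convert (1, C, H, W) class scores to a grayscale segmentation image."""
    class_map = logits.argmax(dim=1).squeeze(0).cpu().numpy()   # (H, W) class indices
    return Image.fromarray(CLASS_TO_GRAY[class_map])

# Example usage with the sketched network:
# logits = ResidualUNet()(torch.randn(1, 3, 256, 512))
# logits_to_gray_image(logits).save("prediction.png")
```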


Application Information

IPC(8): G06K9/34, G06K9/62, G06N3/04, G06N3/08, G06T3/00, G06T3/40, G06T7/90
CPC: G06T3/4084, G06T7/90, G06T3/4038, G06N3/08, G06T2200/32, G06T2207/20081, G06T2207/20084, G06V10/267, G06N3/045, G06F18/25, G06F18/241, G06F18/214, G06T3/02, Y02T10/40
Inventors: 宋秀兰, 魏定杰, 孙云坤, 何德峰, 余世明, 卢为党
Owner: ZHEJIANG UNIV OF TECH