
Streetscape semantic annotation method based on convolutional neural network and semantic transfer conjunctive model

A convolutional neural network and semantic transfer technology, applied in the field of street view labeling based on a joint model of convolutional neural network and semantic transfer. It addresses the problems of unbalanced data sets and the inability of existing methods to extract richer and more discriminative target features, achieving the effect of improved labeling results and labeling accuracy.

Active Publication Date: 2016-03-09
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

[0007] To avoid the problems of unbalanced existing data sets and the inability of existing methods to extract richer and more discriminative target features, this invention proposes a street view semantic labeling method based on a joint model of convolutional neural network and semantic transfer.
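The class imbalance mentioned above (street scenes are dominated by road and building pixels, while signs and pedestrians are rare) is typically countered by resampling the training pool. The patent does not give its rebalancing procedure, so the sketch below is only an illustrative per-class sampler, with replacement for rare classes; the function name and data are made up for the example.

```python
import random
from collections import defaultdict

def balance_by_class(samples, labels, per_class, seed=0):
    """Build a class-balanced training subset by drawing an equal number
    of examples per class (with replacement, so rare classes such as
    pedestrians can still fill their quota)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    balanced = []
    for y, pool in sorted(by_class.items()):
        picks = [rng.choice(pool) for _ in range(per_class)]
        balanced.extend((p, y) for p in picks)
    return balanced

# Toy street scene pool dominated by "road" samples:
samples = list(range(10))
labels = ["road"] * 7 + ["sign"] * 2 + ["pedestrian"]
subset = balance_by_class(samples, labels, per_class=4)
# 3 classes x 4 samples each = 12 training examples
```

In practice the "samples" would be superpixel patches, and oversampling rare classes this way trades some redundancy for a model that no longer collapses onto the majority classes.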



Examples


Embodiment Construction

[0030] The present invention will now be further described in conjunction with the embodiments and accompanying drawings:

[0031] The present invention proposes a street view labeling method based on a joint model of convolutional neural network and semantic transfer. Specifically, the algorithm improves the accuracy of street view labeling by extracting richer and more discriminative target features, combined with contextual information from the scene. To optimize time performance, the invention transforms the pixel-by-pixel labeling problem into a superpixel labeling problem. The technical solution comprises two modules: deep feature extraction and soft-limited semantic transfer.

[0032] Feature extraction:

[0033] 1. Superpixel processing. First, the image is over-segmented into a number of superpixels, and prior information about each superpixel's position in the original image is retained.
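The patent does not name the over-segmentation algorithm (SLIC is a common choice for this step); the sketch below uses a simple grid partition as a stand-in, and shows how a normalized centroid can serve as the positional prior kept alongside each superpixel. The function name is illustrative, not from the patent.

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    """Over-segment an h x w image into grid 'superpixels' (a stand-in
    for SLIC-style segmentation) and record each superpixel's normalized
    centroid as the positional prior retained with it."""
    labels = np.zeros((h, w), dtype=int)
    n_cols = (w + cell - 1) // cell
    for y in range(h):
        for x in range(w):
            labels[y, x] = (y // cell) * n_cols + (x // cell)
    centroids = {}
    for sp in np.unique(labels):
        ys, xs = np.nonzero(labels == sp)
        centroids[int(sp)] = (ys.mean() / h, xs.mean() / w)  # (row, col) in [0, 1]
    return labels, centroids

labels, centroids = grid_superpixels(8, 8, cell=4)
# 4 superpixels on an 8x8 image; each centroid sits at the center of its cell
```

Labeling a few hundred superpixels instead of every pixel is what gives the method its claimed time-performance advantage, and the centroid is one concrete form the "positional prior" can take.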

[0034] 2. Deep model training. Perform specific super...
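The description of the training step is truncated above, so the following is only a schematic of the kind of convolutional feature extraction such a deep model performs on a superpixel patch: one hypothetical edge filter, a ReLU, and global max-pooling, written in plain NumPy rather than a deep learning framework.

```python
import numpy as np

def correlate2d(img, kernel):
    """Valid-mode 2-D cross-correlation (a convolution layer without the
    kernel flip), the basic building block of the deep extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_feature(patch):
    """Tiny stand-in for the deep feature extractor: one vertical-edge
    filter, ReLU nonlinearity, then global max-pooling to a scalar."""
    edge = np.array([[-1., 0., 1.]] * 3)           # responds to left-to-right steps
    act = np.maximum(correlate2d(patch, edge), 0)  # ReLU
    return act.max()                               # global max pool

patch = np.zeros((5, 5))
patch[:, 3:] = 1.0        # a step edge in the right half of the patch
feat = extract_feature(patch)
# → 3.0 (three kernel rows each see a unit step)
```

A real model stacks many such filter banks and learns the kernels from the balanced training set; the point here is only the conv / nonlinearity / pooling pattern that turns a raw patch into a compact descriptor.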


Abstract

The invention relates to a streetscape semantic annotation method based on a joint model of a convolutional neural network and semantic transfer. The method comprises a deep feature extraction part and a soft-limited semantic transfer part. A more balanced training set is constructed, and on it a superpixel-classification deep model with prior information is trained. The method can fully mine the prior information of a scene and learn a more discriminative feature representation, so the annotation accuracy of superpixels is greatly improved. A Markov random field model then optimizes the initial result and eliminates unnecessary noise, further improving the annotation result. Final per-pixel annotation accuracy and average class accuracy exceed 77% and 53%, respectively.
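The abstract's MRF cleanup step can be sketched with iterated conditional modes (ICM), a simple inference scheme used here as a stand-in for whatever optimizer the patent actually employs: each superpixel pays a unary cost for its class plus a Potts penalty for disagreeing with its graph neighbors. All names and the toy costs are illustrative.

```python
import numpy as np

def icm_smooth(unary, edges, lam=1.0, iters=5):
    """Iterated conditional modes on a superpixel adjacency graph:
    greedily re-label each node to minimize its unary cost plus a Potts
    penalty lam per neighbor with a different label."""
    n, k = unary.shape
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    labels = unary.argmin(axis=1)          # start from the per-node best class
    for _ in range(iters):
        for i in range(n):
            cost = unary[i].copy()
            for j in nbrs[i]:
                cost += lam * (np.arange(k) != labels[j])
            labels[i] = cost.argmin()
    return labels

# Three superpixels in a row: the middle one weakly prefers class 1, but
# the pairwise term flips it to agree with both neighbors (noise removal).
unary = np.array([[0.0, 1.0],
                  [0.6, 0.4],
                  [0.0, 1.0]])
edges = [(0, 1), (1, 2)]
labels = icm_smooth(unary, edges, lam=0.5)
# → [0, 0, 0]
```

This is exactly the "eliminate unnecessary noise" effect the abstract describes: an isolated, weakly supported label gets overruled by its spatial context.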

Description

Technical field

[0001] The invention belongs to the technical fields of computer vision and graphics processing, and in particular relates to a street view labeling method based on a joint model of convolutional neural network and semantic transfer.

Background technique

[0002] With the continuous development of intelligent driving, unmanned driving technology has become one of the important research directions in this field. Making the computer understand the surrounding environment while the vehicle is moving, and perform targeted driving operations accordingly, is a central research problem for driverless cars. Current unmanned driving systems integrate traditional techniques such as pedestrian detection, road detection, and traffic sign detection to understand the surrounding street scene, but the results have been limited. In recent years, researchers have also proposed the method of using street view annotation...

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/295
Inventors: 袁媛, 王琦, 高君宇
Owner: NORTHWESTERN POLYTECHNICAL UNIV