
Image pixel marking method based on deep convolution neural network

A deep convolutional neural network technology applied in the field of computer vision, addressing problems such as the difficulty of learning, and of improving, label-refinement tasks.

Inactive Publication Date: 2017-06-13
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problems that label-refinement tasks are difficult to learn and difficult to improve, the purpose of the present invention is to provide an image pixel labeling method based on a deep convolutional neural network.


Image

  • Image pixel marking method based on deep convolution neural network


Embodiment Construction

[0023] It should be noted that, provided there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0024] Figure 1 is a system flowchart of the image pixel labeling method based on a deep convolutional neural network of the present invention. The method mainly includes: image input, detection, replacement, refinement, and predicted label estimation.
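The detect–replace–refine flow just listed can be sketched as follows. This is a toy illustration only: the function names and the stand-in rules inside them are hypothetical; in the patent each component is a learned CNN module.

```python
# Toy sketch of the detect -> replace -> refine pipeline. Images and labels
# are flat lists of per-pixel values; each component below is a hypothetical
# stand-in for a learned network.

def detect_errors(image, labels):
    """Return a per-pixel error mask: True where a label looks wrong."""
    # Stand-in rule: flag pixels whose label disagrees with the pixel value.
    return [lab != px for px, lab in zip(image, labels)]

def replace_labels(image, labels, error_mask):
    """Replace labels flagged as erroneous with new predictions."""
    # Stand-in predictor: copy the pixel value as the new label.
    return [px if bad else lab
            for px, lab, bad in zip(image, labels, error_mask)]

def refine_labels(labels):
    """Residual-correction stage: labels plus a predicted residual."""
    residual = [0] * len(labels)  # stand-in: a correct input needs no change
    return [lab + r for lab, r in zip(labels, residual)]

def label_pixels(image, initial_labels):
    mask = detect_errors(image, initial_labels)
    replaced = replace_labels(image, initial_labels, mask)
    return refine_labels(replaced)
```

For example, `label_pixels([1, 2, 2, 3], [1, 0, 2, 3])` flags the second label as wrong, replaces it, and returns `[1, 2, 2, 3]`.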

[0025] Wherein, the image input uses a traffic scene set as the data set, which includes scene images of various types of vehicles driving on the road, at a resolution of 1392×512; vehicle objects include cars, trucks, trams, etc. Let X = {x_1, ..., x_{H×W}} represent an input image of size H×W, where x_i is the i-th pixel of the image, and let Y represent the initial label estimates for the input image.
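As a sketch of this input representation (the 1392×512 resolution comes from the text above; the tiny 2×4 example values are hypothetical):

```python
# An H x W image flattened into pixels x_1 .. x_{H*W}, plus one initial
# label estimate per pixel, matching the notation above.
H, W = 2, 4                       # toy size; the patent's scenes are 1392x512
image = [[10, 10, 20, 20],
         [10, 30, 30, 20]]        # pixel values x_i, row by row
pixels = [x for row in image for x in row]   # X = {x_1, ..., x_{H*W}}
initial_labels = [0] * (H * W)               # Y: initial estimate per pixel
assert len(pixels) == H * W == len(initial_labels)
```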

[0026] Wherein, the detection detects the wrong...



Abstract

The invention discloses an image pixel labeling method based on a deep convolutional neural network. The method mainly comprises the following steps: image input, detection, replacement, refinement, and predicted label estimation. In the process, a traffic scene image and initial label estimates for that image are input into the deep convolutional neural network; a detection component detects errors in the input labels; a replacement component then replaces the erroneous labels with new labels; and finally, all output labels are improved globally by residual correction, yielding new, accurate label estimates. By adopting a neural network model, the method saves substantial memory and time, and achieves more accurate results by accounting for the dependencies that exist in the joint space of input and output variables; objective functions are applied to the final outputs, allowing end-to-end learning of the dense image labeling task.
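The residual-correction step named in the abstract can be sketched in a few lines. The correction values below are hypothetical; in the patent they come from a learned refinement network.

```python
# Residual correction: rather than predicting labels from scratch, the
# refinement stage predicts a correction that is added to the current labels.
def residual_correct(labels, predicted_residual):
    return [lab + r for lab, r in zip(labels, predicted_residual)]

labels = [1.0, 2.0, 2.0, 3.0]        # current label estimates
residual = [0.0, 0.5, 0.0, -0.5]     # hypothetical predicted corrections
corrected = residual_correct(labels, residual)
# corrected == [1.0, 2.5, 2.0, 2.5]
```

When the input labels are already correct, the network only needs to predict a residual of zero, which is the property the background section contrasts with transformation-based methods.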

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to an image pixel labeling method based on a deep convolutional neural network.

Background technique

[0002] With the rapid development of technology, dense image labeling has become one of the most important problems in computer vision, since it encompasses many low-level and high-level vision tasks, including stereo matching, optical flow, surface normal estimation, and semantic segmentation. However, transformation-based methods (which learn to predict entirely new label estimates) often have to learn something harder than necessary: given correct initial labels, they must still learn to act as an identity transformation, reproducing the same output labels. Residual-based methods, on the other hand, find it easy to learn to predict a zero residual given correct initial labels, but find it harder to correct large label errors that devia...
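The contrast drawn in this background section can be written down directly (a schematic, not the patented method): a transformation-based model outputs full labels Y' = F(X, Y), so given correct labels it must reproduce them exactly, while a residual model outputs Y' = Y + R(X, Y), so correct labels only require R ≈ 0.

```python
# Schematic contrast between the two update rules discussed above.
def transformation_update(labels, model):
    # Outputs the full labels; given a correct input, the model must act
    # as the identity transformation.
    return model(labels)

def residual_update(labels, residual_model):
    # Outputs only a correction; given a correct input, predicting all
    # zeros is sufficient.
    return [lab + r for lab, r in zip(labels, residual_model(labels))]

correct = [1, 2, 3]
identity = lambda ys: list(ys)       # what the transformation model must learn
zero = lambda ys: [0] * len(ys)      # what the residual model must learn
assert transformation_update(correct, identity) == correct
assert residual_update(correct, zero) == correct
```

The invention's detect–replace–refine design aims to get the best of both: an explicit detection/replacement stage handles large label errors, while a residual stage handles fine corrections.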

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00; G06T3/40
CPC: G06T3/4046; G06T2207/20228; G06T5/00
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH