
A binocular disparity estimation method based on cascaded geometric context neural network

A neural-network-based binocular disparity technology in the field of computer vision that addresses problems such as target occlusion, low texture, and correspondences that are difficult to find, achieving improved prediction accuracy and disparity estimation accuracy.

Active Publication Date: 2019-03-15
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

[0003] However, disparity estimation remains a hard problem in some complex scenes, such as those with low texture, object occlusion, or repeated texture. In low-texture areas it is very easy to obtain many candidate pixels; in addition, if a target appears in one image but is occluded in the other, that target is very difficult to find.




Embodiment Construction

[0034] A binocular disparity estimation method based on a cascaded geometric context neural network, comprising the following steps:

[0035] Step (1) Image preprocessing. Normalize the left and right images of the binocular image pair, together with the ground-truth reference image, so that the image pixel values lie in [-1, 1];
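The normalization in step (1) can be sketched as follows. The patent only states that pixel values end up in [-1, 1]; the specific mapping used here (scaling 8-bit values by 127.5 and subtracting 1) is an assumption, since the exact formula is not given.

```python
import numpy as np

def normalize_pair(left, right):
    """Scale an 8-bit stereo image pair so pixel values lie in [-1, 1].

    Maps 0 -> -1.0 and 255 -> 1.0. This is one common choice; the
    patent does not specify the exact normalization it uses.
    """
    def norm(img):
        return img.astype(np.float32) / 127.5 - 1.0
    return norm(left), norm(right)
```

The same transform would be applied to the reference image so that all network inputs share one value range.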

[0036] Step (2) Construct the cascaded convolutional neural network CGCNet, which includes the following network layers:

[0037] 2-1. Construct a rough disparity estimation layer. This network layer is mainly composed of the GCNet (Geometry and Context Network).

[0038] 2-2. Construct a disparity refinement layer. This network layer is RefineNet; the rough disparity map generated in step 2-1 is input to this layer, and the output is an accurate disparity map.


[0040] 2-1. The GCNet network mainly combines two-dimensional a...
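The two-stage cascade of steps 2-1 and 2-2 can be illustrated with classical stand-ins: brute-force block matching in place of the learned GCNet stage, and median smoothing in place of the learned RefineNet stage. This is only a sketch of the coarse-then-refine control flow, not the patent's networks; all function names here are ours.

```python
import numpy as np

def coarse_disparity(left, right, max_disp=8, patch=3):
    """Stand-in for the GCNet stage: SAD block matching.

    For each left pixel (x, y) it searches right pixels (x+d, y),
    following the patent's correspondence convention, and keeps the
    lowest-cost disparity d (winner-take-all).
    """
    h, w = left.shape
    pad = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            ref = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                if x + d + pad >= w:
                    break  # candidate window would leave the image
                cand = right[y - pad:y + pad + 1, x + d - pad:x + d + pad + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def refine_disparity(disp):
    """Stand-in for the RefineNet stage: 3x3 median smoothing that
    suppresses isolated outliers (the patent instead learns a refinement)."""
    out = disp.copy()
    h, w = disp.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(disp[y - 1:y + 2, x - 1:x + 2])
    return out

def cascade(left, right):
    """The cascade: a coarse disparity estimate, then a refinement pass."""
    return refine_disparity(coarse_disparity(left, right))
```

On a synthetic pair where the right image is the left image shifted by a constant disparity, the cascade recovers that disparity in the image interior.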



Abstract

The invention provides a binocular disparity estimation method based on a cascaded geometric context neural network. The invention designs a new cascaded convolutional neural network, Cascaded GCNet (CGCNet). The network adopts an improved GCNet that combines 3D convolution with the original 2D convolution operations to obtain a better disparity-map feature expression, which benefits subsequent network training. RefineNet is used to optimize the rough disparity map produced by the GCNet network, and the precision of the disparity map is improved by iterative refinement. In the RefineNet optimization process, hard example mining is used to make the network model focus on learning rare samples, so as to improve the network's disparity estimation accuracy for image pairs of different complexity.
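The hard example mining mentioned in the abstract is commonly realized as a loss that averages only the largest per-pixel errors, so training gradients concentrate on the pixels the network currently gets wrong. The sketch below shows that idea on an L1 disparity loss; the function name and the `keep_frac` mining ratio are our assumptions, as the abstract does not give the exact scheme.

```python
import numpy as np

def hard_example_loss(pred, target, keep_frac=0.3):
    """L1 disparity loss with hard example mining: average only the
    hardest `keep_frac` fraction of per-pixel residuals.

    `keep_frac` is an assumed hyperparameter; the patent abstract only
    states that hard ("difficult") mining is used, not the ratio.
    """
    err = np.abs(np.asarray(pred, dtype=np.float64)
                 - np.asarray(target, dtype=np.float64)).ravel()
    k = max(1, int(len(err) * keep_frac))
    hardest = np.sort(err)[-k:]  # k largest residuals
    return float(hardest.mean())
```

With `keep_frac=1.0` this reduces to the ordinary mean L1 loss; smaller fractions focus the loss on rare, difficult pixels.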

Description

technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a binocular disparity estimation method based on a cascaded geometric context neural network.

Background technique

[0002] Depth maps are an integral part of 3D reconstruction and 3D scene understanding. Given an image pair produced by a rectified binocular camera, depth can be estimated from corresponding pixels in the same row of the two images. For example, for the pixel (x, y) of the left image, assume the corresponding pixel in the right image is (x+d, y); its depth can then be calculated as f*l/d, where f is the focal length of the camera, l is the distance between the two optical centers of the binocular camera, and d is the disparity between the left and right images. Depth is inversely proportional to disparity: once the disparity is calculated, the depth follows directly from the above formula. At present, there is a method of...
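The depth-from-disparity relation in the background section is a one-line computation; the helper below simply encodes the patent's formula depth = f*l/d (the function name is ours).

```python
def depth_from_disparity(d, f, l):
    """Depth of a pixel from its disparity: depth = f * l / d.

    d: disparity between the left and right images (pixels)
    f: camera focal length (pixels)
    l: baseline, the distance between the two optical centers
    Depth is inversely proportional to disparity.
    """
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * l / d
```

For instance, halving the disparity doubles the estimated depth, reflecting the inverse proportionality stated in the text.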


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06T7/55, G06N3/04, G06N3/08
CPC: G06N3/084, G06T7/55, G06T2207/20228, G06T2207/20084, G06T2207/20081, G06N3/045
Inventor: 张运辉, 吴子朝, 王毅刚
Owner: HANGZHOU DIANZI UNIV