
A depth map super-resolution method

A depth-map super-resolution technology, applied in the fields of image processing and stereo vision, which addresses the problems that complex color textures interfere with depth map reconstruction and that color guidance information is under-utilized, and achieves sharper depth edges, suppression of the ringing effect, and improved image resolution.

Inactive Publication Date: 2019-01-25
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0006] Most existing depth-map super-resolution techniques rely on a high-resolution color image of the same scene for fusion assistance, but the fusion strategy is limited to feature concatenation; the complex texture information in the color image interferes with the reconstruction of the depth map, so the guidance information provided by the color map is not fully exploited during depth map reconstruction.


Examples


Embodiment 1

[0036] The embodiment of the present invention proposes a color-information-guided depth map super-resolution method, which uses a two-stream convolutional neural network structure to super-resolve a low-resolution depth map; see Figure 1. The method mainly includes:

[0037] 101: Edge extraction from the depth map and the color map: an edge detection operator is used to extract the edges of the color map and of the initially upsampled depth map;

[0038] 102: Optimization of the edge map: an "AND" operation is performed on the dilated color edge and the depth edge;

[0039] 103: Construction of the super-resolution network, comprising four steps: image-patch extraction, non-linear mapping of feature maps, image reconstruction, and optimized superposition.

[0040] Super-resolution network training: driven by the mean squared error loss, stochastic gradient descent is used as the optimization strategy to update the learnable parameters of the network.
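Steps 101–103 and [0040] describe the network only at a high level; the sketch below is a minimal PyTorch illustration of what such a two-stream structure could look like, assuming one stream takes the upsampled depth map and the other takes the optimized edge map. The class name TwoStreamSRNet, the channel counts, the SRCNN-style 9-1-5 kernel layout, and the reading of "optimized superposition" as a residual addition are illustrative assumptions, not details disclosed in the patent.

```python
# Hypothetical sketch of a two-stream depth super-resolution network.
# Layer names, channel counts and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.optim as optim

class TwoStreamSRNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Depth stream: patch extraction -> non-linear mapping
        self.depth_stream = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(inplace=True),
        )
        # Edge stream: processes the optimized (color AND depth) edge map
        self.edge_stream = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(inplace=True),
        )
        # Reconstruction after fusing the two feature streams
        self.reconstruct = nn.Conv2d(64, 1, kernel_size=5, padding=2)

    def forward(self, depth_up, edge_map):
        f_d = self.depth_stream(depth_up)
        f_e = self.edge_stream(edge_map)
        fused = torch.cat([f_d, f_e], dim=1)   # feature fusion
        residual = self.reconstruct(fused)     # predicted detail
        return depth_up + residual             # "optimized superposition" read as residual addition

# Training driven by mean squared error with stochastic gradient descent, as stated in [0040].
net = TwoStreamSRNet()
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

depth_up = torch.rand(4, 1, 64, 64)   # bicubically upsampled low-res depth (dummy data)
edge_map = torch.rand(4, 1, 64, 64)   # optimized edge map (dummy data)
target   = torch.rand(4, 1, 64, 64)   # ground-truth high-res depth (dummy data)

optimizer.zero_grad()
loss = criterion(net(depth_up, edge_map), target)
loss.backward()
optimizer.step()
```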

[0041] To sum up,...

Embodiment 2

[0043] The scheme of Embodiment 1 is further described below with reference to Figure 1 and Figure 2; see the following description for details:

[0044] 1. Edge extraction of depth map and color map

[0045] In order to make the edges of the super-resolved depth map sharper, the edges of the color map of the same scene must be exploited. However, since the high-resolution color image and the low-resolution depth map differ in resolution, the low-resolution depth map is first upsampled with the bicubic interpolation (Bicubic) algorithm so that it has the same size as the color map, and then the Canny edge detection operator is used to extract the edges of the depth map and the color map:

[0046] E_D = f_c(D_L↑)

[0047] E_C = f_c(Y)

[0048] where D_L↑ denotes the low-resolution depth map D_L upsampled by a specific ratio, Y denotes the color map, f_c denotes the edge extraction operator, and E_C denotes the edge of the high-resolution color image obtained by...
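As a rough illustration of formulas [0046]–[0047] and the edge-map optimization in step 102, the OpenCV sketch below upsamples the low-resolution depth map with bicubic interpolation, applies the Canny operator to both images, dilates the color edge, and intersects it with the depth edge (one reading of "dilated color edge and depth edge"). The Canny thresholds, the 3×3 dilation kernel, and the assumption of 8-bit inputs are placeholder choices for illustration, not values taken from the patent.

```python
# Sketch of edge extraction (E_D, E_C) and edge-map optimization.
# Assumes an 8-bit single-channel depth map and an 8-bit BGR color image.
import cv2
import numpy as np

def optimized_edge_map(depth_lr, color):
    # Bicubic upsampling of the low-resolution depth map to the color-map size
    h, w = color.shape[:2]
    depth_up = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_CUBIC)

    # f_c: Canny edge extraction on the upsampled depth map and the color map
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    e_d = cv2.Canny(depth_up, 50, 150)   # E_D, depth edge
    e_c = cv2.Canny(gray, 50, 150)       # E_C, color edge

    # Dilate the color edge, then combine it with the depth edge via "AND" (step 102)
    kernel = np.ones((3, 3), np.uint8)
    e_c_dilated = cv2.dilate(e_c, kernel, iterations=1)
    return cv2.bitwise_and(e_c_dilated, e_d), depth_up
```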

Embodiment 3

[0084] The feasibility of the scheme of Embodiments 1 and 2 is verified below with reference to Figure 2; see the following description for details:

[0085] Figure 2 shows the quantitative comparison of this method with other methods; the test data are Teddy and Wood2 from the Middlebury stereo dataset.

[0086] Compared with the Bicubic method, the PSNR obtained by this method is improved by up to 26.10%. Since this method feeds the optimized edge map obtained from the color image and the depth image into the convolutional neural network, and the optimized edge map guides the super-resolution reconstruction of the depth image, the PSNR of this method is also improved over that of the SRCNN method.
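For reference, the PSNR values quoted above would typically be computed with the standard formula below; this small helper assumes 8-bit depth maps (peak value 255) and is included only as background, not as part of the patent.

```python
# Standard PSNR between a reconstructed depth map and the ground truth,
# assuming 8-bit images (peak value 255).
import numpy as np

def psnr(reconstructed, ground_truth, peak=255.0):
    mse = np.mean((reconstructed.astype(np.float64) -
                   ground_truth.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```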

[0087] Those skilled in the art can understand that the accompanying drawings are only schematic diagrams of a preferred embodiment, and the serial numbers of the above-mentioned embodiments of the present invention are for description only and do not represent the advantages or disadvantages of the embodiments.



Abstract

The invention discloses a depth map super-resolution method. The method comprises the following steps: the edges of the color map and of the initially upsampled depth map are extracted with an edge detection operator; the edge map is optimized by an "AND" operation on the dilated color edge and the depth edge; the super-resolution network is constructed in four steps: image-patch extraction, non-linear mapping of feature maps, image reconstruction, and optimized superposition; driven by the mean squared error and optimized with stochastic gradient descent, the super-resolution network is trained to update the learnable parameters of the network. The method not only improves the image resolution, but also makes the depth values more accurate and the image clearer, and effectively suppresses the ringing effect, so that the reconstructed depth edges are sharper.

Description

Technical field

[0001] The invention relates to the technical fields of image processing and stereo vision, and in particular to a depth map super-resolution method.

Background technique

[0002] Depth information is an important cue for humans to perceive stereoscopic scenes, and acquiring accurate scene depth information has become a current research hotspot. At present, depth maps captured by mainstream depth cameras are favored by researchers because of their low cost and real-time acquisition. However, the depth maps obtained with existing technology suffer from low resolution, inaccurate depth values, and susceptibility to noise, and their resolution is far from matching that of the color map. A high-quality, high-resolution depth map is crucial for tasks related to stereoscopic perception, such as stereoscopic display and virtual reality. Therefore, depth map super-resolution reconstruction technology has become a hotspot ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40, G06T7/13, G06N3/04
CPC: G06T3/4053, G06T7/13, G06T2207/20192, G06T2207/10024, G06T2207/20084, G06T2207/20081, G06N3/045
Inventor: 雷建军杨博兰李奕倪敏李欣欣刘晓寰
Owner: TIANJIN UNIV