
A scene depth restoration method based on multi-information fusion of deep neural network

A scene depth restoration method based on multi-information fusion with a deep neural network, applicable to image analysis, instrumentation, and graphics/image conversion. It addresses the problems of insufficiently smooth boundaries and low image quality, and achieves a simple procedure, clear depth images, and easy implementation.

Inactive Publication Date: 2020-01-24
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, the boundary obtained by this method is not smooth enough, and the method makes no use of color-image information, so the resulting image quality is not high.



Detailed Description of the Embodiments

[0035] The scene depth restoration method based on deep neural network multi-information fusion of the present invention is described in detail below with reference to the embodiments and drawings.

[0036] A scene depth restoration method based on multi-information fusion with a deep neural network, as shown in Figure 1. The method (taking 4× upsampling as an example) comprises the following steps:

[0037] The first step is to prepare the initial data;

[0038] The initial data include a low-resolution depth map and a high-resolution color map of the same viewing angle; an example pair is shown in Figure 2. For training the network, the dataset uses the official Middlebury data (http: / / vision.middlebury.edu), of which 38 color-depth image pairs are used for training and 6 color-depth image pairs are used for testing. For the training data, 15×15 depth image blocks are cut from the training images with a stride of 10 pixels; the corresponding color image blocks are cut with a stride of 40 pixels...
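The patch-cutting step above can be sketched as a simple sliding-window crop. This is a minimal illustration of the stated parameters (15×15 blocks, stride 10); the function name and signature are illustrative and not from the patent itself.

```python
import numpy as np

def extract_patches(image, patch_size=15, stride=10):
    """Cut square patches from a 2-D image with a fixed stride.

    Mirrors the training-data preparation described in the patent
    (15x15 depth blocks, stride 10); for the color image the same
    routine would be called with a 4x-scaled patch size and stride 40.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# usage: a 100x100 depth map yields a 9x9 grid of 15x15 patches
depth = np.zeros((100, 100), dtype=np.float32)
patches = extract_patches(depth)
print(patches.shape)  # (81, 15, 15)
```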



Abstract

The invention relates to a multi-information fusion scene depth recovery method based on a deep neural network, and belongs to the field of image processing. In the method, a deep convolutional network predicts the boundary of a depth image, and the predicted boundary guides interpolation to obtain a high-quality depth image. A color image assists the boundary prediction, so that boundaries that are not obvious in the low-resolution depth image can be predicted more effectively; the color image also assists the interpolation, so that the resulting depth image conforms to the spatial structure of the actual scene. The method is simple in procedure and easy to implement. Depth information is solved from the depth image region by region according to the predicted boundary, so the calculation is fast, interference between the depth information of different regions is avoided, and the accuracy is high; the obtained high-resolution depth image is clear with sharp boundaries.
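The region-by-region idea in the abstract can be illustrated with a toy sketch: given a low-resolution depth map and a region labeling derived from a predicted boundary, each region is upsampled independently so depth values never blend across the boundary. This is only a piecewise-constant stand-in for the patent's method (which predicts the boundary with a deep CNN guided by the color image); all names are illustrative.

```python
import numpy as np

def region_aware_upsample(depth_lr, labels_lr, scale):
    """Toy boundary-guided upsampling.

    depth_lr:  low-resolution depth map (2-D array)
    labels_lr: per-pixel region labels assumed to come from a
               predicted boundary (the patent obtains it with a CNN)
    Each region is filled independently in the high-resolution output,
    so no interpolation mixes depth values from different regions.
    """
    h, w = depth_lr.shape
    H, W = h * scale, w * scale
    # nearest-neighbor replication of the label map defines HR regions
    ys = np.arange(H) // scale
    xs = np.arange(W) // scale
    labels_hr = labels_lr[ys[:, None], xs[None, :]]
    depth_hr = np.empty((H, W), dtype=depth_lr.dtype)
    for r in np.unique(labels_lr):
        # fill each HR region from its own LR pixels only
        depth_hr[labels_hr == r] = depth_lr[labels_lr == r].mean()
    return depth_hr

# usage: two regions separated by a horizontal boundary
depth_lr = np.array([[1.0, 1.0], [5.0, 5.0]])
labels_lr = np.array([[0, 0], [1, 1]])
depth_hr = region_aware_upsample(depth_lr, labels_lr, scale=2)
```

Because the fill respects region labels, the step between the two regions stays sharp in `depth_hr` instead of being smeared by interpolation.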

Description

Technical field

[0001] The invention belongs to the field of image processing, and relates to using a deep convolutional network to predict depth-image boundaries and using boundary guidance to perform interpolation to obtain a high-quality depth map; in particular, it relates to a scene depth recovery method based on deep neural network multi-information fusion.

Background technique

[0002] Scene depth is very important for natural scene understanding and is widely used in three-dimensional (3D) modeling, visualization, and autonomous driving. However, owing to the complexity of actual scenes and the limitations of image sensors, the accuracy and resolution of the acquired scene depth information are often insufficient for practical applications. For example, the resolution of the depth image collected by Microsoft's second-generation Kinect (Kinect2) is only 512×424, while the resolution of the corresponding color image is 1920×1080. Generally, the actual use of the collected dep...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/50; G06T7/13; G06T3/40
CPC: G06T3/4076; G06T7/13; G06T7/50
Inventor: 叶昕辰, 段祥越, 严倩羽, 李豪杰
Owner DALIAN UNIV OF TECH