Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit)

A depth-map restoration and virtual-real scene compositing technology, applied in the fields of somatosensory interaction, computer vision, and augmented reality, that addresses the hole-repair problem of Kinect depth maps and achieves high-quality, realistic augmented-reality effects.

Inactive Publication Date: 2015-11-25
中国科学院科学传播研究中心 (Science Communication Research Center, Chinese Academy of Sciences)


Problems solved by technology

[0011] The present invention mainly addresses the problem of repairing holes (missing regions) in the Kinect depth map.




Embodiment Construction

[0020] To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.

[0021] 1. Method overview

[0022] As shown in Figure 1, the method of the present invention is divided into three main steps: (1) segment the color image using the QuickShift algorithm, implemented with CUDA for GPU computation; (2) register the Kinect depth map with the color map; (3) repair the missing regions of the depth map using the segmented color image and the registered depth-color correspondence. Specifically: if a segmented region contains some valid depth data, fill its missing pixels with the region's average depth value; if all depth information in the region is missing, fill it with the average depth value of adjacent regions.
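The segment-based filling rule in step (3) can be sketched as follows. This is a minimal NumPy illustration, not the patent's CUDA implementation: `depth` holds zeros at missing pixels, `labels` is the per-pixel segment id produced by the color segmentation, and the adjacency of a fully-missing segment is approximated by a one-pixel dilation of its mask (the patent does not specify how "adjacent area" is determined).

```python
import numpy as np

def fill_depth_holes(depth, labels):
    """Fill missing depth (zeros) using color-segmentation labels.

    Rule from the method: if a segment has any valid depth, fill its
    holes with the segment's mean depth; if the whole segment is
    missing, use the mean depth of adjacent segments instead.
    """
    out = depth.astype(float).copy()
    valid = depth > 0
    seg_ids = np.unique(labels)
    # mean valid depth per segment (None when the segment has no valid pixel)
    seg_mean = {}
    for s in seg_ids:
        m = valid & (labels == s)
        seg_mean[s] = out[m].mean() if m.any() else None
    for s in seg_ids:
        mask = labels == s
        holes = mask & ~valid
        if not holes.any():
            continue
        if seg_mean[s] is not None:
            out[holes] = seg_mean[s]
        else:
            # segment entirely missing: average over adjacent segments,
            # found via a 1-pixel dilation of the segment mask
            dil = np.zeros_like(mask)
            dil[:-1] |= mask[1:]; dil[1:] |= mask[:-1]
            dil[:, :-1] |= mask[:, 1:]; dil[:, 1:] |= mask[:, :-1]
            neigh = np.unique(labels[dil & ~mask])
            vals = [seg_mean[n] for n in neigh if seg_mean[n] is not None]
            out[mask] = np.mean(vals) if vals else 0.0
    return out
```

For example, a segment with valid depths of 10 and one hole gets the hole filled with 10, while a segment with no valid depth bordered by segments of mean depth 10 and 20 is filled with 15.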

[0023] (1) T...


Abstract

The invention discloses a technology for restoring a depth image and compositing virtual and real scenes based on a GPU (Graphics Processing Unit). The method mainly comprises the following steps: (1) collect the depth image and the color image; (2) down-sample the images to ensure real-time restoration speed; (3) segment the color image using the QuickShift algorithm, implemented with CUDA (Compute Unified Device Architecture) for GPU computation; (4) process segmented blocks lacking depth data using the color segmentation result: first register the Kinect depth image with the color image; if a region contains valid depth data, fill its missing pixels with the region's average depth value; if all depth information in the region is missing, fill it with the average depth value of a neighbouring region; (5) up-sample the images back to full resolution. By combining image sampling and CUDA techniques with the QuickShift algorithm and GPU computation, the invention solves the hole-repair problem of Kinect depth images; on this basis, virtual objects are superimposed on real objects so that occlusion between virtual and real objects is handled correctly, enhancing the realism of the interaction.
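Steps (2)-(5) form a down-sample/process/up-sample pipeline around the segmentation and filling stages. The sketch below shows that control flow only; `segment_fn` and `fill_fn` are hypothetical placeholders for the patent's CUDA QuickShift segmentation and segment-average filling, and nearest-neighbour sampling is an assumption, since the patent does not specify the sampling filter.

```python
import numpy as np

def restore_realtime(depth, color, segment_fn, fill_fn, factor=2):
    """Steps (2)-(5): down-sample, segment, fill, up-sample.

    segment_fn(color) -> per-pixel segment labels (placeholder for
    the CUDA QuickShift segmentation); fill_fn(depth, labels) ->
    hole-filled depth (placeholder for the segment-average rule).
    """
    # step (2): down-sample by simple striding to cut the workload
    d_small = depth[::factor, ::factor]
    c_small = color[::factor, ::factor]
    # step (3): color segmentation -> per-pixel segment labels
    labels = segment_fn(c_small)
    # step (4): repair missing depth using the segmentation result
    filled = fill_fn(d_small, labels)
    # step (5): nearest-neighbour up-sampling back to full resolution
    up = np.repeat(np.repeat(filled, factor, axis=0), factor, axis=1)
    return up[:depth.shape[0], :depth.shape[1]]
```

Down-sampling by a factor of 2 quarters the number of pixels the segmentation must process, which is what makes the real-time constraint of step (2) attainable.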

Description

Technical Field

[0001] The invention belongs to the fields of computer vision, somatosensory interaction, and augmented reality. It specifically concerns a Kinect-sensor-based technology that combines human-computer interaction with virtual-real scene compositing through GPU-based depth-map repair, giving participants a better interactive experience.

Background

[0002] Somatosensory interaction technology provides input to the computer through the user's body movements. It extends traditional input devices such as the mouse and keyboard, giving human-computer interaction a wider space, freer manipulation, and more flexible display methods, and offering users a new type of interactive experience. As an emerging field, somatosensory interaction has gradually entered daily life and is becoming increasingly popular, with very broad application prospects in games, medical care, retail, ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00
Inventor: 葛水英 (Ge Shuiying), 王波 (Wang Bo)
Owner: 中国科学院科学传播研究中心 (Science Communication Research Center, Chinese Academy of Sciences)