Depth enhancing method based on texture distribution characteristics

A depth enhancement technology based on texture distribution characteristics, applied in image enhancement, image data processing, instruments, etc. It addresses problems such as boundary jitter, depth value fluctuation between frames, and unsatisfactory filtering in boundary areas, thereby improving the continuity and accuracy of depth images and promoting their application.

Active Publication Date: 2013-11-27
SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV


Problems solved by technology

There are two representative methods. The first preprocesses the depth image with a bilateral filter, then divides it in the spatial domain into non-boundary areas and boundary areas, and computes the missing depth information with different weights for each type of area; this fills in lost depth data and reduces noise in the object-boundary regions of the depth image. However, because the method lacks temporal stability, the depth values of corresponding pixels in adjacent frames fluctuate greatly. The second method repairs missing pixel depths with a texture-weighted average of the corresponding depth regions over multiple frames in the time domain, and then applies joint bilateral filtering to the current frame's depth map in both the temporal and spatial domains. This greatly improves the temporal consistency of the depth image and significantly improves the continuity of depth values on smooth surfaces, but because it ignores the processing of object boundaries, which are strongly affected by noise, the filtering effect in the boundary areas is not ideal and the boundary jitter remains to be suppressed.
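The joint bilateral filtering mentioned above can be illustrated with a minimal sketch: the depth map is smoothed with weights drawn from spatial closeness and from similarity in the texture (guide) image, so depth edges that coincide with texture edges are preserved. This is a generic textbook formulation for illustration only, not the patented method; the parameter names and defaults are assumptions.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` using spatial closeness and `guide` (texture) similarity.

    depth, guide: 2-D float arrays of the same shape.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    pad = radius
    d = np.pad(depth, pad, mode='edge').astype(np.float64)
    g = np.pad(guide, pad, mode='edge').astype(np.float64)
    for y in range(h):
        for x in range(w):
            dp = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gp = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range kernel computed on the guide image, not on depth
            rng = np.exp(-(gp - g[y + pad, x + pad])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * dp) / np.sum(wgt)
    return out
```

Because the range kernel is evaluated on the texture image, a noisy depth pixel surrounded by similar texture is pulled toward its neighbors, while depth discontinuities backed by texture edges stay sharp.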

Method used




Embodiment Construction

[0014] In specific embodiments, the following examples may be employed. It should be noted that the specific methods described below (such as the Sobel operator and the least-squares difference method) are only examples; the scope of the present invention is not limited to these methods.
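The Sobel operator named above as an example technique can be sketched as follows: two 3×3 kernels estimate horizontal and vertical gradients, and a threshold on the gradient magnitude yields a texture-boundary mask. This is a standard NumPy illustration of the operator, not the invention's exact implementation; the threshold value is an arbitrary placeholder.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via the Sobel operator."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)  # horizontal derivative
    ky = kx.T                                      # vertical derivative
    pad = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(win * kx)
            gy[y, x] = np.sum(win * ky)
    return np.hypot(gx, gy)

def boundary_mask(img, threshold=50.0):
    """Binary mask of texture boundaries: True where the gradient is strong."""
    return sobel_magnitude(img) > threshold
```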

[0015] A1: Input N temporally adjacent texture images and the corresponding depth images, collected respectively by the color camera and the depth camera of a low-end depth sensor. Taking Kinect as an example, data is collected at 60 FPS, so when the camera moves at ordinary speeds the correlation between the current frame and the preceding and following frames in the time domain is very high. Subsequent processing therefore has enough image information to ensure the effectiveness of image enhancement from the temporal side. Although the depth image enhancement in the time...
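The high inter-frame correlation asserted above can be checked numerically; a small sketch using the Pearson correlation coefficient over flattened consecutive frames (the function name and the use of `np.corrcoef` are illustrative choices, not part of the patent):

```python
import numpy as np

def temporal_correlation(frames):
    """Pearson correlation between each pair of consecutive frames.

    frames: list of 2-D arrays (e.g. grayscale texture images).
    Returns one coefficient per adjacent pair.
    """
    coeffs = []
    for prev, cur in zip(frames, frames[1:]):
        c = np.corrcoef(prev.ravel(), cur.ravel())[0, 1]
        coeffs.append(float(c))
    return coeffs
```

Coefficients close to 1 indicate that adjacent frames carry largely redundant information, which is the premise for borrowing depth data across frames in the later steps.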



Abstract

A depth enhancing method based on texture distribution characteristics comprises the following steps:

A1: Input texture images of adjacent frames in the time domain and the corresponding depth images collected by a low-end depth transducer, where the number of frames N is larger than or equal to 2.

A2: Extract the boundaries of the texture images of all frames, and divide the depth images into non-boundary areas, which contain no texture boundaries, and boundary areas, which contain texture boundaries.

A3: For the boundary areas of the depth images, selectively modify pixel depths according to the distribution characteristics of the depth values of the pixels on the two sides of the texture boundaries across all adjacent frames in the time domain, and apply filtering noise reduction to a boundary area when such processing is judged to be necessary.

A4: For the non-boundary areas of the depth images, acquire time-domain prediction blocks for the current depth blocks through texture matching across all frames in the time domain, repair the current depth blocks according to the pixel information of the prediction blocks, and apply filtering noise reduction.

By the adoption of this method, the accuracy and the time-domain consistency of the depth images collected by a low-end depth transducer can be improved remarkably.
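The A1–A4 pipeline described in the abstract can be outlined as a skeleton. The per-region operations below (temporal median for boundary pixels, temporal mean for non-boundary pixels, a finite-difference gradient for boundary detection) are simple stand-ins for the patent's distribution-based correction and block-matched temporal prediction, and the threshold is an arbitrary placeholder; only the overall structure follows the abstract.

```python
import numpy as np

def enhance_depth(textures, depths, grad_thresh=50.0):
    """Skeleton of steps A1-A4; stand-in operations, not the patented algorithm.

    textures, depths: lists of N >= 2 aligned 2-D arrays.
    """
    assert len(textures) == len(depths) >= 2           # A1: N adjacent frames
    # A2: boundary mask from texture gradients (Sobel-like finite differences)
    tex = textures[-1].astype(np.float64)
    gy, gx = np.gradient(tex)
    boundary = np.hypot(gx, gy) > grad_thresh
    depth = depths[-1].astype(np.float64).copy()
    stack = np.stack([d.astype(np.float64) for d in depths])
    # A3: boundary areas -- temporal median of co-located pixels as a simple
    # stand-in for the distribution-based selective depth correction.
    depth[boundary] = np.median(stack, axis=0)[boundary]
    # A4: non-boundary areas -- temporal mean as a stand-in for block-matched
    # temporal prediction plus filtering noise reduction.
    depth[~boundary] = np.mean(stack, axis=0)[~boundary]
    return depth
```

The key design point carried over from the abstract is that boundary and non-boundary regions are repaired by different rules, since boundary pixels are far more noise-sensitive than smooth-surface pixels.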

Description

Technical Field

[0001] The invention relates to the fields of computer vision and digital image processing, in particular to a depth enhancement method based on texture distribution features.

Background

[0002] Depth images are widely used in fields such as 3D reconstruction and free-viewpoint coding. Existing depth information collection often relies on complex and expensive sensors, such as structured-light cameras or laser range finders. The depth images collected by these devices are not only disturbed by various kinds of noise; their low resolution also greatly limits the development of other research and applications based on depth images. The low-end depth sensor represented by Kinect is cheap and can quickly acquire the depth and texture information of a scene, so it is widely used in research. However, because the low-end depth sensor mainly obtains the depth image of the scene by emitting structured light and receiving its reflected light, it ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00
Inventors: Jin Xin (金欣), Xu Yatong (许娅彤), Dai Qionghai (戴琼海)
Owner SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV