
Depth shape prior extraction method

A depth-related extraction method applied in image data processing, instrumentation, and computing. It addresses the problems that existing methods neither establish a correspondence between salient objects and their depth distribution nor capture the shape of the depth map, achieving the effects of effectively expressing depth information and suppressing background interference.

Active Publication Date: 2018-04-20
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0005] Prior-art methods do not establish the correspondence between salient objects and their depth distribution; existing methods typically use the depth map only as an additional feature and fail to capture useful information such as the shape of the depth map.



Examples


Embodiment 1

[0031] A depth shape prior extraction method, see Figure 1. The depth shape prior extraction method includes the following steps:

[0032] 101: Select the K superpixel regions with the largest RGB saliency values as root seed points, and establish the relationship between depth characteristics and saliency;

[0033] 102: Based on depth smoothness and consistency constraints, determine the child-node set of each root seed point so as to describe the depth shape attributes;

[0034] 103: Considering both the depth consistency of related superpixel nodes across two consecutive rounds of propagation and the depth consistency between the current-round superpixel and the root seed point, define the final DSP value as the maximum of the two consistencies;

[0035] 104: The final DSP result is obtained by fusing the DSP maps generated from the multiple root seed points.
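Steps 101-104 can be sketched on toy per-superpixel data as follows. This is a hedged illustration only: the seed list, adjacency graph, Gaussian consistency measure, and mean fusion are assumptions standing in for the patent's exact formulation, which is not fully specified in this excerpt.

```python
import numpy as np

def dsp_from_seed(seed, depth, adjacency, sigma=0.1):
    """Propagate a DSP map outward from one root seed point.

    Each newly reached superpixel takes the maximum of (a) its depth
    consistency with the previous-round node and (b) its depth
    consistency with the root seed, scaled by the propagating value.
    """
    n = len(depth)
    dsp = np.zeros(n)
    dsp[seed] = 1.0
    frontier, visited = {seed}, {seed}
    while frontier:
        nxt = set()
        for u in frontier:
            for v in adjacency[u]:
                if v in visited:
                    continue
                # consistency with the previous-round node vs. with the root
                c_prev = np.exp(-(depth[v] - depth[u]) ** 2 / sigma)
                c_root = np.exp(-(depth[v] - depth[seed]) ** 2 / sigma)
                dsp[v] = max(dsp[v], max(c_prev, c_root) * dsp[u])
                nxt.add(v)
        visited |= nxt
        frontier = nxt
    return dsp

# Toy example: 5 superpixels on a chain; the salient object sits at low depth.
depth = np.array([0.2, 0.25, 0.3, 0.8, 0.85])
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
seeds = [0, 1]  # top-K superpixels by RGB saliency (assumed)
maps = [dsp_from_seed(s, depth, adjacency) for s in seeds]
final_dsp = np.mean(maps, axis=0)  # fuse the per-seed DSP maps
print(final_dsp)
```

On this toy input, superpixels sharing the seeds' depth keep high DSP values while the depth-distant background is suppressed, which is the qualitative behavior the method aims for.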

[0036] Wherein, before step 101, the depth shape prior extraction method further includes:

[0037...

Embodiment 2

[0046] The scheme of Embodiment 1 is elaborated below with specific calculation formulas and examples; see the following description for details:

[0047] 201: Image preprocessing;

[0048] Let the RGB color image be denoted I and the corresponding depth map D. First, the color image is segmented with the SLIC (Simple Linear Iterative Clustering) superpixel segmentation algorithm into N superpixel regions, denoted {r_m | m = 1, ..., N}, where r_m is the m-th superpixel region.
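The preprocessing step can be sketched as follows. In practice one would call a real SLIC implementation (e.g. `skimage.segmentation.slic`) on the color image I; here a simple regular-grid labeling stands in for SLIC so the sketch stays self-contained, and the depth map D is a toy array. The per-region mean depth computed at the end is an assumed region-level depth feature, not the patent's exact definition.

```python
import numpy as np

def grid_superpixels(h, w, n_rows, n_cols):
    """Assign each pixel a region label on a regular n_rows x n_cols grid
    (a stand-in for SLIC segmentation of the RGB image I)."""
    rows = np.minimum(np.arange(h) * n_rows // h, n_rows - 1)
    cols = np.minimum(np.arange(w) * n_cols // w, n_cols - 1)
    return rows[:, None] * n_cols + cols[None, :]

h, w = 8, 8
labels = grid_superpixels(h, w, 2, 2)           # N = 4 regions r_0..r_3
D = np.linspace(0.0, 1.0, h * w).reshape(h, w)  # toy aligned depth map
region_depth = np.array([D[labels == m].mean() for m in range(4)])
print(labels.shape, region_depth)
```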

[0049] Then, an existing RGB saliency detection algorithm is chosen as the base algorithm, for example the BSCA algorithm (a cellular-automata-based saliency detection algorithm), and the RGB saliency result of each superpixel region is obtained; the RGB saliency value of superpixel region r_m is denoted S_i(r_m).
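Given per-region saliency values S_i(r_m), selecting the K root seed points of step 101 is a simple top-K pick. The saliency scores below are illustrative stand-ins for a real detector's output (the patent names BSCA as one example base algorithm, which is not reimplemented here):

```python
import numpy as np

def select_root_seeds(saliency, k):
    """Return the indices of the K regions with the largest saliency values."""
    order = np.argsort(saliency)[::-1]  # indices sorted by descending saliency
    return order[:k].tolist()

S = np.array([0.15, 0.92, 0.40, 0.88, 0.05])  # assumed per-region saliency
seeds = select_root_seeds(S, k=2)
print(seeds)  # regions with the two largest saliency values
```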

[0050] It has been observed that depth maps usually have the following characteristics:

[0051] 1) Compared with background regions, salient objects tend to...

Embodiment 3

[0082] The feasibility of the schemes in Embodiments 1 and 2 is verified below through specific experiments; see Figure 2 and the following description for details:

[0083] Figure 2 visualizes the depth shape prior descriptor. The first column shows the original RGB color image, the second column the original depth map, and the third column the DSP visualization result. As Figure 2 shows, the depth shape descriptor proposed by this method effectively captures the shape information of the salient target in the depth map: the target boundary is clear and sharp, the target interior is uniform, and background suppression is strong, demonstrating good depth-map description capability.

[0084] Those skilled in the art can understand that the accompanying drawing is only a schematic diagram of a preferred embodiment, and the serial numbers of the above-mentioned embodiments of the present inv...



Abstract

The invention discloses a depth shape prior extraction method comprising the following steps: selecting the K superpixel regions with the largest RGB saliency values as root seed points, and establishing the relationship between depth characteristics and saliency; determining, based on depth smoothness and consistency constraints, the child-node set of each root seed point to describe depth shape attributes; defining the final DSP value as the maximum of the depth consistency of related superpixel nodes across two consecutive rounds of propagation and the depth consistency between current-round superpixels and the root seed points; and fusing the DSP maps generated by the root seed points to obtain the final DSP result. Through in-depth analysis of depth image data, the shape prior information of the depth map is fully mined, providing effective depth information for RGBD saliency detection.

Description

Technical Field

[0001] The invention relates to the technical fields of image processing and stereo vision, and in particular to a depth shape prior extraction method.

Background

[0002] The human visual perception system can automatically perceive scene information and locate important targets and areas. In fact, when humans perceive a scene, in addition to obtaining appearance information such as color and shape, they also perceive the depth information of the scene, that is, the depth of field. With the development of imaging equipment, the acquisition of scene depth data has become faster and more convenient, laying a data foundation for research on RGBD data. As a supplement to color data, depth data can provide much effective information, such as positional relationships and target shape, thereby improving task performance. At present, extensive research has been carried out on RGBD data, such as RGBD object recognition, RG...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11, G06T7/90, G06T7/50, G06T7/60
CPC: G06T7/60, G06T2207/10024, G06T2207/10028, G06T2207/20221, G06T7/11, G06T7/50, G06T7/90
Inventors: 雷建军, 丛润民, 侯春萍, 李欣欣, 韩梦芯, 罗晓维
Owner: TIANJIN UNIV