
A stereoscopic video saliency detection method based on binocular multi-dimensional perception characteristics

A stereoscopic-video detection technology, applied in stereoscopic systems, televisions, electrical components, etc., which addresses problems such as the high complexity of optical flow algorithms, inaccurate detection, and unstable detection.

Active Publication Date: 2017-10-27
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

[0003] Most traditional saliency detection models perform detection using spatial image features such as color, brightness, orientation, and texture. However, these traditional methods cannot effectively detect the salient regions of stereoscopic video. On the one hand, most traditional detection models do not compute saliency in the time domain, even though motion between adjacent frames is one of the important features affecting human visual attention; the commonly used methods for detecting motion features are the frame difference method, the background modeling method, and the optical flow method.
The frame difference method is relatively simple but has low accuracy; the background modeling method is strongly affected by the background model, which leads to unstable detection; and the optical flow method has high algorithmic complexity. On the other hand, traditional detection models do not account for the influence of depth information on the salient characteristics of stereoscopic video, so their detection is not accurate enough. Depth information reflects the distance of an object from the human eye and is one of the important perceptual characteristics of stereoscopic video.
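For reference, the frame difference method mentioned above can be sketched in a few lines of Python. This is an illustrative OpenCV-based implementation, not code from the patent; the threshold value is an arbitrary assumption.

```python
import cv2

def frame_difference_motion(prev_frame, curr_frame, threshold=25):
    """Crude motion map by frame differencing (simple but low accuracy).

    prev_frame, curr_frame: consecutive BGR frames of equal size.
    Returns a binary mask marking pixels whose intensity changed by more
    than `threshold` between the two frames.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return motion_mask
```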




Embodiment Construction

[0060] As shown in Figure 1, a stereoscopic video saliency detection method based on binocular multi-dimensional perception characteristics comprises two stages: salient feature extraction and salient feature fusion.

[0061] Salient feature extraction computes saliency from the view information of the stereoscopic video in three different dimensions: space, depth, and motion. It comprises three parts: two-dimensional static salient region detection, depth salient region detection, and motion salient region detection, as sketched below. Specifically:
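A minimal Python skeleton of this three-branch structure is given below. The branch implementations are deliberately naive placeholders (local contrast, nearness, and frame differencing) chosen only to make the sketch runnable; they are assumptions for illustration and do not reproduce the patent's actual detection steps.

```python
import numpy as np

def spatial_saliency(frame):
    """Placeholder 2D static saliency: local contrast against the frame mean."""
    gray = frame.mean(axis=2)
    return np.abs(gray - gray.mean())

def depth_saliency(depth_map):
    """Placeholder depth saliency: nearer pixels (smaller depth) score higher."""
    return depth_map.max() - depth_map

def motion_saliency(prev_frame, curr_frame):
    """Placeholder motion saliency: absolute inter-frame luminance difference."""
    return np.abs(curr_frame.mean(axis=2) - prev_frame.mean(axis=2))

def normalize(saliency_map):
    """Scale a map to [0, 1]; flat maps are returned unchanged."""
    rng = saliency_map.max() - saliency_map.min()
    return (saliency_map - saliency_map.min()) / rng if rng > 0 else saliency_map

def detect_stereo_saliency(color_frame, depth_map, prev_color_frame):
    """Three-branch sketch: spatial, depth, and motion maps, then fusion."""
    maps = [
        spatial_saliency(color_frame),
        depth_saliency(depth_map),
        motion_saliency(prev_color_frame, color_frame),
    ]
    # Simple normalized sum as a stand-in for the global nonlinear
    # normalized fusion strategy described in the abstract.
    return normalize(sum(normalize(m) for m in maps))
```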

[0062] Two-dimensional static salient region detection: compute the saliency of the spatial features of a single color image according to a Bayesian model, and detect the two-dimensional static salient regions of the color image. Specifically:

[0063] Estimate the saliency S of an object by calculating the probability of interest at a single pixel z:

[0064]

[0065] In the formula, z represents a pixel in the image, p rep...
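The equation referenced in [0064] did not survive extraction. For orientation, a standard Bayesian pointwise saliency formulation (in the style of the SUN model) is reproduced below; this is an assumed stand-in consistent with the description in [0063], not the patent's own equation.

```latex
% Assumed SUN-style Bayesian saliency at pixel z (not the patent's equation [0064]):
% F is the local visual feature, L the pixel location, and C = 1 the event
% that the pixel belongs to an object of interest.
S_z = p(C = 1 \mid F = f_z, L = l_z)
    = \frac{1}{p(F = f_z)}\; p(F = f_z \mid C = 1)\; p(C = 1 \mid L = l_z)
```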



Abstract

The invention relates to a stereoscopic video saliency detection method based on binocular multi-dimensional perception characteristics. Traditional model methods cannot effectively detect the salient regions of a stereoscopic video. The method comprises the steps of salient feature extraction and salient feature fusion. In salient feature extraction, saliency is calculated separately from the view information of the stereoscopic video in three different dimensions, namely space, depth, and motion, comprising two-dimensional static salient region detection, depth salient region detection, and motion salient region detection. In salient feature fusion, a global nonlinear normalized fusion strategy is used to fuse the acquired salient feature maps of the three dimensions, so as to obtain the stereoscopic video salient region. The method has low computational complexity, produces a high-quality stereoscopic video saliency map, and can be directly applied to engineering fields such as 3D video compression, 3D quality assessment, and object recognition and tracking.
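As an illustration of what a global nonlinear normalized fusion step can look like, the sketch below applies an Itti-style nonlinear normalization operator to each feature map before summing. The operator choice, window size, and function names are assumptions; the patent's own fusion strategy may differ in its details.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def nonlinear_normalize(saliency_map, local_window=31):
    """Itti-style global nonlinear normalization (an assumed stand-in for the
    patent's fusion normalization, not its exact formulation).

    Maps with one dominant peak are promoted; maps with many similar
    peaks are suppressed.
    """
    m = saliency_map.astype(np.float64)
    rng = m.max() - m.min()
    if rng == 0:
        return m
    m = (m - m.min()) / rng                        # rescale to [0, 1]
    local_max = maximum_filter(m, size=local_window)
    peaks = m[(m == local_max) & (m > 0)]          # values at local maxima
    mean_local_max = peaks.mean() if peaks.size else 0.0
    # Weight by (global max - mean of local maxima)^2, promoting unique peaks.
    return m * (m.max() - mean_local_max) ** 2

def fuse(maps):
    """Fuse per-dimension saliency maps by summing their normalized versions."""
    return sum(nonlinear_normalize(m) for m in maps)
```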

Description

Technical field

[0001] The invention belongs to the technical field of video image processing, and in particular relates to a stereoscopic video saliency detection method based on binocular multi-dimensional perception characteristics.

Background technique

[0002] Three-dimensional (Three-Dimension, 3D) video can bring viewers an immersive experience and higher fidelity owing to the parallax between the left and right viewpoint images, and it is a new generation of video service technology currently under development. However, research on human vision has shown that, because of the focusing function of the eyeball, the human eye cannot simultaneously perceive near and distant objects in a 3D video and must focus on a certain region, so the selectivity of human 3D vision is stronger than that of 2D vision and regional saliency is more prominent in 3D video. The 3D video saliency calculation model therefore has important guiding significance for the calculation and reco...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N13/00, H04N13/04, H04N17/00
CPC: H04N13/00, H04N13/366, H04N17/00, H04N2013/0081
Inventors: 周洋, 何永健, 唐杰, 张嵩
Owner: HANGZHOU DIANZI UNIV