
Video saliency detection method

A video saliency detection method applied in the fields of computer vision and image processing. It addresses the problems that existing methods do not fully exploit motion information, that inter-frame consistency optimization does not consider global information constraints, and that the overall consistency of the results therefore needs improvement. The method achieves accurate inter-frame saliency results, strong background suppression, and fast computation.

Active Publication Date: 2019-08-09
TIANJIN UNIV

Problems solved by technology

[0006] Existing video saliency detection techniques are sensitive to noise and other disturbances, resulting in low detection accuracy and poor robustness. Motion information plays a very important role in this task, but existing algorithms do not fully exploit it. In addition, existing algorithms do not consider global information constraints when optimizing inter-frame consistency, so the overall consistency of their results needs improvement.



Examples


Embodiment 1

[0038] An embodiment of the present invention proposes a video saliency detection method (see Figure 1). The method includes the following steps:

[0039] 101: Compute the spatial saliency of each frame in the video sequence with a sparse reconstruction model based on static cues and motion priors;

[0040] 102: Capture temporal correspondences across frames with a progressive sparse propagation model and generate an inter-frame saliency map;

[0041] 103: Fuse the two saliency results in a global optimization model to improve the spatio-temporal smoothness and global consistency of the salient objects across the video.
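The three steps above can be sketched as a minimal numpy pipeline. This is an illustration only, not the patent's exact formulation: least-squares reconstruction stands in for sparse coding, row-stochastic matrix propagation stands in for progressive sparse propagation, and a convex combination stands in for the global optimization model.

```python
import numpy as np

def spatial_saliency(features, bg_dictionary):
    """Step 101 (sketch): saliency as reconstruction error against a
    background dictionary. Regions that the background dictionary
    reconstructs poorly are taken to be salient."""
    # Least-squares reconstruction stands in for the sparse coding step.
    coef, *_ = np.linalg.lstsq(bg_dictionary, features, rcond=None)
    err = np.linalg.norm(features - bg_dictionary @ coef, axis=0)
    return err / (err.max() + 1e-12)  # normalize to [0, 1]

def propagate(prev_saliency, affinity):
    """Step 102 (sketch): propagate saliency from the previous frame
    through an inter-frame affinity matrix (rows summing to 1)."""
    return affinity @ prev_saliency

def fuse(spatial, temporal, alpha=0.5):
    """Step 103 (sketch): a simple convex combination in place of the
    patent's global optimization model."""
    return alpha * spatial + (1.0 - alpha) * temporal
```

Here `features` is a (feature dim, regions) matrix of per-superpixel descriptors and `bg_dictionary` stacks background exemplars as columns; both names are illustrative.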

[0042] In summary, the embodiment of the present invention designs an effective video saliency detection model by fully exploiting the motion information of objects in the video sequence and the inter-frame constraint relationships, and continuously extracts the salient objects in the video sequence.

Embodiment 2

[0044] The scheme of Embodiment 1 is further described below with reference to Figure 1 and specific examples; see the following description for details:

[0045] 201: Single-frame saliency reconstruction;

[0046] For the video saliency detection task, the detected object should be both salient and moving relative to the background region in every video frame. To this end, two sparse reconstruction models based on static and motion priors are constructed to detect salient objects in each video frame. The first relies on a static saliency prior, which uses three color saliency cues to construct a color-based reconstruction dictionary (DC); the second relies on a motion saliency prior, which integrates motion uniqueness cues and motion compactness cues to construct a motion-based dictionary (DM).
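A minimal sketch of the dictionary-based reconstruction idea follows. The patent does not specify its sparse solver, so ISTA (iterative shrinkage-thresholding) is an assumption here; `D` plays the role of either dictionary (DC or DM), built from background exemplars, and a superpixel's saliency is its reconstruction error.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1.
    An assumed solver; any sparse coding routine would do."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        # soft-thresholding enforces sparsity of the code
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def reconstruction_saliency(D, x, lam=0.1):
    """Saliency of one superpixel = error of reconstructing its feature
    vector x from the background dictionary D (DC or DM): background
    regions reconstruct well, salient regions do not."""
    a = sparse_code(D, x, lam)
    return np.linalg.norm(x - D @ a)
```

In the method described here, a color-based score (from DC) and a motion-based score (from DM) would then be fused into the single-frame saliency result.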

[0047] Let a video sequence contain N video frames, and use the SLIC (Simple Linear Iterative Clustering) method to divide each video frame into 500 superpixel regions...
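SLIC amounts to a localized k-means in joint color-position space, seeded on a regular grid. The toy implementation below illustrates the idea only; in practice one would call `skimage.segmentation.slic(image, n_segments=500)` rather than this sketch.

```python
import numpy as np

def slic_like(image, n_segments=500, n_iter=5, compactness=10.0):
    """Toy SLIC-style segmentation: k-means over (color, x, y) with
    grid-seeded centers. Real SLIC also restricts each center's search
    window and works in CIELAB; both are omitted for brevity."""
    h, w, _ = image.shape
    step = max(1, int(np.sqrt(h * w / n_segments)))  # grid spacing

    # seed cluster centers on a regular grid
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    centers_xy = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    centers_col = image[ys.ravel(), xs.ravel()].astype(float)

    yy, xx = np.mgrid[0:h, 0:w]
    pix_xy = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    pix_col = image.reshape(-1, 3).astype(float)

    for _ in range(n_iter):
        # SLIC distance: color distance + compactness-weighted spatial distance
        d_col = ((pix_col[:, None, :] - centers_col[None]) ** 2).sum(-1)
        d_xy = ((pix_xy[:, None, :] - centers_xy[None]) ** 2).sum(-1)
        labels = np.argmin(d_col + (compactness / step) ** 2 * d_xy, axis=1)
        for k in range(len(centers_xy)):
            mask = labels == k
            if mask.any():
                centers_col[k] = pix_col[mask].mean(0)
                centers_xy[k] = pix_xy[mask].mean(0)
    return labels.reshape(h, w)
```

The per-superpixel mean color and motion features used by the reconstruction dictionaries would then be pooled over each label.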

Embodiment 3

[0092] The feasibility of the schemes in Embodiments 1 and 2 is verified below with reference to a specific example; see the following description for details:

[0093] Figure 2 shows the saliency detection results for a video sequence in which a woman is the salient object. The first row shows the RGB images of different video frames, the second row shows the ground-truth maps for video saliency detection, and the third row shows the results obtained by the proposed method. As the results show, the method accurately extracts the salient objects in the video sequence with clear contours, and effectively suppresses both the background and non-moving salient regions (such as the bench).

[0094] Those skilled in the art can understand that the accompanying drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the present invention are for description only and do not indicate relative merit ...


Abstract

The invention discloses a video saliency detection method comprising the following steps: determine a background candidate region by considering background cues, color compactness, and color uniqueness, form a static reconstruction dictionary to reconstruct the superpixels of each video frame, and obtain a static saliency map; determine a background seed-point set by considering motion compactness and motion uniqueness, form a motion reconstruction dictionary to reconstruct the superpixels of each video frame, and obtain a motion saliency map; fuse the static saliency map and the motion saliency map to obtain a single-frame saliency result; obtain an inter-frame saliency map using bidirectional sparse propagation; and construct an energy function composed of a unary data term, a spatio-temporal smoothness term, a spatial mutual-exclusion term, and a global term, and optimize the single-frame and inter-frame saliency results with this energy function. The method accurately extracts the salient target in a video sequence, with strong background suppression and a clear target contour. The designed model is robust and can handle a number of challenging scenes.
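The energy-function optimization can be illustrated with a simplified quadratic energy containing only the unary data term and a pairwise smoothness term (the spatial mutual-exclusion and global terms of the abstract are omitted for brevity). Such a quadratic energy has a closed-form minimizer via a linear solve:

```python
import numpy as np

def optimize_saliency(y, W, lam=1.0):
    """Minimize E(s) = sum_i (s_i - y_i)^2 + lam * sum_ij W_ij (s_i - s_j)^2,
    a simplified (unary + smoothness only) version of the energy in the
    abstract. With graph Laplacian L = D - W, setting the gradient to zero
    gives (I + 2*lam*L) s = y."""
    D = np.diag(W.sum(axis=1))          # degree matrix
    L = D - W                           # graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + 2.0 * lam * L, y)
```

Here `y` is the initial (single-frame or inter-frame) saliency vector and `W` a symmetric affinity matrix over superpixels; both names are illustrative. Larger `lam` pulls connected regions toward a common saliency value, which is the smoothing effect the energy function is designed to provide.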

Description

Technical Field

[0001] The invention relates to the fields of image processing and computer vision, and in particular to a video saliency detection method.

Background Technique

[0002] The human visual system can quickly locate the most attractive content in a large-scale, complex scene. Inspired by this mechanism, researchers hope that computers can simulate the human visual attention mechanism and automatically locate the salient content in a scene, providing effective auxiliary information for subsequent processing; the task of "visual saliency detection" thus emerged. As an interdisciplinary direction spanning computer science, neurology, biology, and psychology, visual saliency detection has been widely applied in many research fields, such as detection, segmentation, cropping, retrieval, compression coding, quality evaluation, and recommendation systems, and has very broad prospects for market development and application...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/207; G06T7/246
CPC: G06T7/207; G06T7/246; G06T2207/10016
Inventors: 雷建军, 丛润民, 张哲, 祝新鑫, 宋宇欣, 贾亚龙
Owner: TIANJIN UNIV