Graph-based unconstrained in-video salient object detection method

An object detection and video technology, applied in the field of image and video processing, that addresses the problems that existing models are not robust to videos with complex motion and cannot accurately and completely extract saliency maps, limitations which hinder the wide application of video saliency models.

Active Publication Date: 2016-01-06
SHANGHAI UNIV
6 Cites · 8 Cited by


Problems solved by technology

[0008] However, the shortcoming of the above methods is that their models are not robust to videos with complex motion, which can cause false detections. In summary, existing video salient object detection methods cannot accurately and completely extract saliency maps from unconstrained video sequences, which limits the wide application of video saliency models.

Method used



Examples


Embodiment Construction

[0040] Embodiments of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0041] The simulation experiments of the present invention were implemented in software on a PC test platform with a 3.4 GHz CPU and 8 GB of memory.

[0042] As shown in Figure 1, the graph-based video saliency detection method of the present invention comprises the following specific steps:

[0043] (1) Input the original video frame sequence and denote its t-th frame as F_t, as shown in Figure 2;

[0044] (2) Using a superpixel region segmentation method, divide the entire video frame F_t into n_t superpixel regions, denoted sp_{t,i} (i = 1, ..., n_t);

[0045] (2-1) For the video frame F_t, denote its width as w and its height as h; the number of regions into which a frame F_t of size w×h is to be divided is set to: n_t = w ...
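Steps (2) and (2-1) above can be sketched as follows. This is a simplified stand-in, not the patent's actual segmentation: a real superpixel method (e.g. SLIC) adapts region boundaries to image content, while this toy version just tiles a fixed grid, deriving the region count n_t from the frame's width w and height h (the exact formula for n_t is truncated in the excerpt, so a nominal region size is assumed here).

```python
import numpy as np

def grid_superpixels(frame, region_size=20):
    """Partition frame F_t into roughly square regions sp_{t,i}.

    Simplified stand-in for superpixel segmentation: tiles the frame
    with a fixed grid of approximately region_size x region_size cells.
    Returns an (h, w) label map assigning each pixel to a region.
    """
    h, w = frame.shape[:2]
    rows = max(1, h // region_size)
    cols = max(1, w // region_size)
    labels = np.zeros((h, w), dtype=np.int32)
    for r in range(rows):
        for c in range(cols):
            labels[r * h // rows:(r + 1) * h // rows,
                   c * w // cols:(c + 1) * w // cols] = r * cols + c
    return labels

frame = np.zeros((120, 160, 3), dtype=np.uint8)   # a dummy video frame F_t
labels = grid_superpixels(frame, region_size=20)
n_t = labels.max() + 1                            # regions sp_{t,i}, i = 1..n_t
```

In practice one would substitute a content-adaptive superpixel algorithm; the label map interface (one region index per pixel) stays the same either way.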



Abstract

The present invention discloses a graph-based unconstrained in-video salient object detection method. The method comprises the following steps: (1) inputting an original video frame sequence, whose t-th frame is denoted F_t as shown in the specification; (2) dividing each entire video frame into superpixel regions; (3) obtaining a motion vector field of the pixel points, as shown in the specification, by using a dense optical flow algorithm, and separately extracting per-superpixel-region and global motion histograms; and (4) constructing an undirected weighted graph as shown in the specification, calculating the shortest path from each superpixel region to the virtual background nodes as shown in the specification, and accumulating the weights of the edges between the nodes along that path as the motion saliency value of the superpixel region, thereby generating the motion saliency map of the current frame; according to the binarized saliency map, the motion histogram of the background is re-estimated and the saliency values of the superpixel regions are updated. In the method provided by the present invention, saliency detection is performed on a graph by iteratively estimating the motion of the background, so that salient objects in the video can be detected more accurately and completely.
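Steps (3) and (4) of the abstract can be sketched as a small pipeline: quantize a dense flow field into per-region motion histograms, build an undirected weighted graph over the regions plus one virtual background node, and take each region's shortest-path cost to the background as its motion saliency. This is a hedged illustration: the edge weight used here (a chi-square-like histogram distance) and the single background node are assumptions for the sketch, not necessarily the patent's exact formulation.

```python
import heapq
import numpy as np

def motion_histograms(flow, labels, n_bins=8):
    """Step (3) sketch: quantize per-pixel flow directions into n_bins
    and build a normalized motion histogram for each superpixel region.
    flow is an (h, w, 2) array of motion vectors; labels is the (h, w)
    region label map."""
    angles = np.arctan2(flow[..., 1], flow[..., 0])              # [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    n_regions = labels.max() + 1
    hists = np.zeros((n_regions, n_bins))
    for i in range(n_regions):
        h = np.bincount(bins[labels == i], minlength=n_bins).astype(float)
        hists[i] = h / h.sum() if h.sum() > 0 else h
    return hists

def saliency_from_graph(hists, edges, bg_hist):
    """Step (4) sketch: regions 0..n-1 plus one virtual background node;
    edge weights are a chi-square-like distance between motion histograms
    (an assumed choice). Returns each region's accumulated shortest-path
    cost to the background node as its motion saliency value."""
    n = len(hists)
    bg = n                                   # index of virtual background node
    def dist(h1, h2):
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:                       # spatial adjacency of regions
        w = dist(hists[u], hists[v])
        adj[u].append((v, w)); adj[v].append((u, w))
    for u in range(n):                       # every region links to background
        w = dist(hists[u], bg_hist)
        adj[u].append((bg, w)); adj[bg].append((u, w))
    d = [float("inf")] * (n + 1)             # Dijkstra from the background node
    d[bg] = 0.0
    heap = [(0.0, bg)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d[u]:
            continue
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(heap, (du + w, v))
    return d[:n]                             # accumulated edge weights

# Toy example: left half of the frame moves left (background motion),
# right half moves right (a hypothetical salient object).
flow = np.zeros((4, 8, 2))
flow[:, :4, 0] = -1.0
flow[:, 4:, 0] = 1.0
labels = np.zeros((4, 8), dtype=int)
labels[:, 4:] = 1
hists = motion_histograms(flow, labels)
saliency = saliency_from_graph(hists, edges=[(0, 1)], bg_hist=hists[0])
```

A region whose motion histogram matches the background reaches the virtual background node cheaply (low saliency); a region moving against the background accumulates more edge weight along any path, giving it a high motion saliency value. The iterative re-estimation of the background histogram described in the abstract would repeat this computation with an updated bg_hist.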

Description

Technical Field

[0001] The invention relates to the technical field of image and video processing, and in particular to a graph-based method for detecting salient objects in unconstrained video.

Background

[0002] The human visual system can quickly and accurately locate the region of interest to the human eye in a complex environment and respond accordingly. In computer vision research, accurately extracting key regions from digital images and videos by simulating the visual attention mechanism of the human eye is an important part of the study of visual saliency models. According to research in psychology and human vision, in most cases the human eye, when observing an image, does not distribute attention evenly over the entire image but focuses on a particular object in it. The purpose of saliency detection is to extract the most attention-drawing salient parts of an image or video and to use a grayscale map (i.e., a saliency map) to represent the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/00, G06K 9/46
CPC: G06V 20/46, G06V 10/50
Inventors: 刘志, 李君浩, 叶林伟
Owner: SHANGHAI UNIV