
Video object cooperative segmentation method based on track directed graph

A video object collaborative segmentation technology, applied in the field of image processing

Active Publication Date: 2017-03-22
SHANGHAI UNIV

AI Technical Summary

Problems solved by technology

[0010] The purpose of the present invention is to propose a method for collaborative segmentation of video objects based on trajectory directed graphs that addresses the defects of the prior art. The method extracts the multiple classes of objects appearing in a video group more accurately and automatically, without requiring a manually set similarity threshold for common objects across videos.

Method used




Embodiment Construction

[0079] Embodiments of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0080] The simulation experiments of the present invention were programmed on a PC test platform with a 3.4 GHz CPU and 16 GB of memory.

[0081] As shown in Figure 1, the specific steps of the trajectory-directed-graph-based video object collaborative segmentation method of the present invention are as follows:

[0082] (1) Input each video sequence Vm (m = 1, ..., M) of the original video group; the t-th frame of video Vm is denoted Fm,t (t = 1, ..., Nm). Figure 2 shows the first frame of each of the two videos;

[0083] (2) Use a dense optical flow algorithm to obtain the motion vector field of the pixels of each video frame Fm,t; generate an initial saliency map ISm,t for each video frame Fm,t; and, using a candidate object generation method, generate for each video frame Fm,t q cand...
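As an illustration of step (2), the following sketch turns a per-pixel motion estimate into a normalized initial saliency map. The patent calls for a dense optical flow algorithm; to keep the sketch self-contained, the absolute temporal frame difference serves here as a crude stand-in for the motion-magnitude field, and the min-max normalization is an assumption:

```python
import numpy as np

def initial_saliency(prev_frame, curr_frame):
    """Sketch of step (2): derive an initial saliency map ISm,t from
    per-pixel motion.  The patent specifies a dense optical flow field;
    the absolute temporal difference used here is a simplified stand-in
    for the motion magnitude, min-max normalized to [0, 1]."""
    motion = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    rng = motion.max() - motion.min()
    if rng == 0:
        return np.zeros_like(motion)  # static frame pair: no motion saliency
    return (motion - motion.min()) / rng

# Synthetic example: a bright square shifts 5 pixels to the right, so
# saliency peaks at its leading and trailing edges.
prev = np.zeros((64, 64)); prev[20:40, 20:40] = 1.0
curr = np.zeros((64, 64)); curr[20:40, 25:45] = 1.0
sal = initial_saliency(prev, curr)
```

In the full method, this motion-based map only seeds the pipeline; it is later refined by the clique-based main-object selection and updated in step (6).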



Abstract

The invention relates to a video object collaborative segmentation method based on a trajectory directed graph. The method comprises the following steps: (1) input each frame sequence Fm,t (t = 1, ..., Nm) of a video set; (2) generate a motion vector field, an initial saliency map, and candidate objects for each video frame Fm,t; (3) track each candidate object forward and backward, and perform non-maximum suppression and trajectory splitting to form a trajectory set; (4) construct a directed weighted graph G=(V,E), in which trajectories are nodes and directed edges are established between nodes according to matching scores; (5) convert the directed weighted graph G=(V,E) into an undirected weighted graph G=(V,E'), extract the maximal cliques with a maximal clique extraction algorithm, compute the weighted clique score of each maximal clique, take the trajectory region corresponding to the clique with the highest weighted clique score as the main object region, generate an object saliency map with a manifold ranking algorithm, and obtain the final segmentation result with GrabCut; (6) according to the obtained object segmentation result, update the initial saliency map, recompute the maximal clique scores, and obtain the objects of the other classes.
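The graph symmetrization and maximal-clique selection of steps (4)-(5) can be sketched as follows. The rule for merging reciprocal edge weights (averaging) and the clique scoring formula (mean internal edge weight) are illustrative assumptions; the abstract does not spell out these formulas:

```python
from collections import defaultdict
from itertools import combinations

def to_undirected(directed):
    """Symmetrize G=(V,E) into G=(V,E'): keep edge {u, v} only when
    directed edges run both ways, averaging the two matching scores.
    (The averaging rule is an assumption for illustration.)"""
    und = {}
    for (u, v), w in directed.items():
        if (v, u) in directed and u < v:
            und[(u, v)] = (w + directed[(v, u)]) / 2.0
    return und

def maximal_cliques(edges, nodes):
    """Enumerate all maximal cliques with the classic Bron-Kerbosch
    recursion (one standard maximal clique extraction algorithm)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(r)        # r cannot be extended: maximal clique
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(nodes), set())
    return out

def clique_score(clique, und):
    """Weighted clique score, taken here as the mean weight of the
    clique's internal edges (an illustrative choice)."""
    ws = [und[e] for e in combinations(sorted(clique), 2) if e in und]
    return sum(ws) / len(ws) if ws else 0.0

# Trajectories a, b, c match each other in both directions; d matches a
# only one way, so its edge drops out after symmetrization.
directed = {('a', 'b'): 0.9, ('b', 'a'): 0.8, ('a', 'c'): 0.7,
            ('c', 'a'): 0.7, ('b', 'c'): 0.6, ('c', 'b'): 0.8,
            ('a', 'd'): 0.2}
und = to_undirected(directed)
cliques = maximal_cliques(list(und), ['a', 'b', 'c', 'd'])
best = max(cliques, key=lambda c: clique_score(c, und))  # -> {'a', 'b', 'c'}
```

The trajectory region corresponding to the best-scoring clique would then seed the manifold ranking and GrabCut stages of step (5).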

Description

Technical Field

[0001] The invention relates to the technical field of image and video processing, and in particular to a method for collaborative segmentation of video objects based on a trajectory directed graph.

Background Technique

[0002] With the vigorous development of the Internet and multimedia technology, the means by which people obtain images are becoming ever more convenient and flexible. In fields such as surveillance video, social networks, and news reporting, the volume of video data is steadily increasing, and video data carries richer information. Amid this data explosion, the demand for intelligent processing of video data is also growing. Video object segmentation is a challenging field that aims to extract the main object regions that people attend to in a video. Compared with image segmentation, video object segmentation exploits the motion information of objects in the video and can therefore extract meaningful information more effectively. ...

Claims


Application Information

IPC(8): G06T7/136, G06T7/194
Inventor 刘志谢宇峰叶林伟李恭杨刘秀文
Owner SHANGHAI UNIV