
Unsupervised video target segmentation method based on local and global memory mechanism

An unsupervised target segmentation technology, applied in the fields of feature expression and unsupervised video target segmentation, which addresses the problems of lacking global guidance information and relatively poor performance.

Pending Publication Date: 2021-08-17
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

Although the methods in this category can achieve good results, they lack global guidance information, and their performance degrades when faced with large appearance changes.




Embodiment Construction

[0032] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in combination with specific examples and with reference to the accompanying drawings. However, the described implementation examples are only intended to facilitate understanding of the present invention and do not limit it in any way. Figure 1 is a flow chart of the method of the present invention. As shown in Figure 1, the method includes the following steps:

[0033] training phase

[0034] Step 1: Construct the dataset

[0035] The dataset used in the implementation of the method of the present invention comes from the public video object segmentation benchmark DAVIS-2016, which consists of high-quality video sequences for 50 categories, with a total of 3455 densely masked video frames. Of these, 30 categories are used for training and 20 ca...
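The train/validation split described above can be sketched as follows. The directory layout, variable names, and the sorted-order split are illustrative assumptions only; the real DAVIS-2016 benchmark ships predefined train/val sequence lists.

```python
from pathlib import Path

# Hypothetical local checkout of DAVIS-2016; path is an assumption, not from the patent.
DAVIS_ROOT = Path("DAVIS")

def build_split(sequence_names, n_train=30):
    """Split the 50 sequence names into 30 training and 20 validation sequences.

    A deterministic sorted split is used here purely for illustration; the
    benchmark itself defines which sequences belong to each split.
    """
    names = sorted(sequence_names)
    return names[:n_train], names[n_train:]

# Stand-in sequence names; with a real checkout these would be read from disk.
all_seqs = [f"seq{i:02d}" for i in range(50)]
train_seqs, val_seqs = build_split(all_seqs)
print(len(train_seqs), len(val_seqs))  # 30 20
```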



Abstract

The invention discloses an unsupervised video target segmentation method based on local and global memory mechanisms, belonging to the technical fields of feature learning and video target segmentation. The method comprises the following steps: first, extracting embedding features from a pair of frames of the same video; selecting global memory candidate frames in the video and extracting global memory candidate features, where each global memory candidate feature corresponds to a node of a graph convolutional network that produces an enhanced global memory feature expression; extracting mutual attention information between the pair of frames through a local memory module, with each frame alternately playing the target and search roles in the attention mechanism for mutual attention enhancement; and finally, obtaining a predicted target mask through a decoder, computing a cross-entropy loss, and updating the whole model to obtain the final segmentation network. The method considers local and global memory mechanisms simultaneously and obtains reliable short-term and long-term inter-frame correlation information, thereby realizing unsupervised video target segmentation.
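The local memory step — a pair of frames alternately acting as target and search in a mutual attention exchange — can be sketched as below. This is a minimal illustration on flattened feature arrays; all function and variable names are assumptions, and the actual method operates on CNN embedding features within a full segmentation network rather than on random arrays.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(feat_a, feat_b):
    """Symmetric cross-attention between two frames' features.

    feat_a, feat_b: (N, C) arrays of flattened spatial features.
    Frame A is enhanced by attending over frame B (A as 'search',
    B as 'target'), and vice versa, so each frame plays both roles.
    """
    affinity = feat_a @ feat_b.T                       # (N, N) pairwise similarity
    a_enhanced = softmax(affinity, axis=1) @ feat_b    # A attends to B
    b_enhanced = softmax(affinity.T, axis=1) @ feat_a  # B attends to A
    return a_enhanced, b_enhanced

# Toy example: 4 spatial positions, 8-dimensional features per frame.
rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 8))
fb = rng.standard_normal((4, 8))
ea, eb = mutual_attention(fa, fb)
print(ea.shape, eb.shape)  # (4, 8) (4, 8)
```

Each enhanced feature is a convex combination of the other frame's features, which is what lets short-term appearance cues propagate between neighboring frames.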

Description

Technical field

[0001] The invention relates to the fields of deep learning and weakly supervised video object segmentation, and in particular to a feature expression method for unsupervised video object segmentation that can obtain more accurate segmentation results on video object segmentation datasets.

Background technique

[0002] With the development of visual big data technology, video has gradually become an important information transmission medium, carrying both spatial and temporal information. How to obtain valuable scene target information from this spatio-temporal carrier has become a top priority in the development of computer vision. While existing video target analysis tasks bring convenience to society and attract attention, they also pose certain challenges. For example, how to segment foreground targets using only limited categories, without specifying the targets to be segmented online, so as t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/02, G06N3/08
CPC: G06N3/02, G06N3/084, G06V20/49, G06F18/214, Y02T10/40
Inventors: Duan Lijuan, En Qing, Wang Wenjian, Zhang Wenbo
Owner BEIJING UNIV OF TECH