
Weakly supervised temporal action localization method and system based on bimodal collaboration

A weakly supervised temporal action localization method, applied in the field of computer vision, addressing problems such as the lack of threshold modeling

Active Publication Date: 2020-11-10
XI AN JIAOTONG UNIV


Problems solved by technology

[0004] Current weakly supervised temporal action localization methods have two drawbacks: first, lacking temporal annotations, weakly supervised methods are prone to generating a large number of false-positive action proposals; second, existing methods divide the activation sequence with a fixed threshold to generate action proposals, without modeling this threshold during training.
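The fixed-threshold scheme criticized above can be illustrated with a minimal sketch: a 1-D class activation sequence is binarized at a hand-tuned threshold, and consecutive above-threshold snippets are grouped into proposals. The function name and threshold value are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def threshold_proposals(activations, thresh=0.5):
    """Group consecutive above-threshold snippets into action proposals.

    `activations` is a 1-D per-snippet class activation sequence; the
    fixed `thresh` is exactly the hand-tuned hyperparameter the patent
    criticizes, since it is never modeled during training.
    """
    mask = activations > thresh
    proposals = []
    start = None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i                      # a proposal opens here
        elif not m and start is not None:
            proposals.append((start, i))   # [start, end) in snippet indices
            start = None
    if start is not None:                  # proposal running to the end
        proposals.append((start, len(mask)))
    return proposals

acts = np.array([0.1, 0.7, 0.9, 0.2, 0.6, 0.8, 0.1])
print(threshold_proposals(acts))  # [(1, 3), (4, 6)]
```

Because the quality of the resulting proposals depends entirely on `thresh`, and `thresh` receives no gradient during training, small changes to it can merge or split proposals arbitrarily; this is the drawback the bimodal-collaboration method is designed to avoid.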




Embodiment Construction

[0071] To make the purpose, technical effects, and technical solutions of the embodiments of the present invention clearer, the technical solutions in the embodiments are described completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention. Based on the disclosed embodiments, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0072] Referring to figure 1, a weakly supervised temporal action localization method based on bimodal collaboration in an embodiment of the present invention comprises the following steps:

[0073] Step 1: feature extraction of video clips in the untrimmed video, including: first, dividing the untrimmed video into multiple non-overla...
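Although the paragraph is truncated, the step it describes (splitting an untrimmed video into non-overlapping snippets and encoding each one) can be sketched as follows. The snippet length, the shapes, and the stubbed `encoder` callable are all assumptions for illustration; the patent's actual backbone and snippet size may differ.

```python
import numpy as np

SNIPPET_LEN = 16  # frames per snippet (assumed; the patent's value may differ)

def extract_snippet_features(frames, encoder):
    """Split an untrimmed video into non-overlapping snippets and encode each.

    `frames`: array of shape (T, H, W, C); `encoder` is any pretrained
    snippet-level feature extractor (e.g. a two-stream backbone), stubbed
    here as a callable returning one feature vector per snippet.
    """
    n = len(frames) // SNIPPET_LEN  # drop the incomplete tail snippet
    feats = [encoder(frames[i * SNIPPET_LEN:(i + 1) * SNIPPET_LEN])
             for i in range(n)]
    return np.stack(feats)  # shape: (n_snippets, feat_dim)

# toy usage: a mean-pooling "encoder" over 40 random frames
frames = np.random.rand(40, 8, 8, 3)
feats = extract_snippet_features(frames, lambda s: s.mean(axis=(0, 1, 2)))
print(feats.shape)  # (2, 3)
```

In a real two-stream setting the same splitting would be applied twice, once to RGB frames and once to optical flow, yielding the two modalities the method's sub-networks operate on.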



Abstract

The invention discloses a weakly supervised temporal action localization method and system based on bimodal collaboration. The method comprises the following steps: extracting features from video clips of an untrimmed video; classifying actions in the untrimmed video with a two-stream base network to obtain a clip-level attention weight sequence and an action classification result; deriving pseudo temporal labels from the attention weight sequence; training two single-modal sub-networks with the pseudo temporal labels as temporal supervision, iterating until convergence; and localizing temporal actions in the untrimmed video with the two iteratively trained single-modal sub-networks. The method is trained only with video-level category labels, yet can obtain the start time, end time, and category of every action instance in a video.
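The pseudo-temporal-label step in the abstract can be sketched as converting the clip-level attention weights into snippet-level supervision. The two-threshold rule below (confident foreground, confident background, ignore the rest) and its values are illustrative assumptions, not the patent's exact labeling rule.

```python
import numpy as np

def pseudo_temporal_labels(attn, hi=0.7, lo=0.3):
    """Turn an attention-weight sequence into snippet-level pseudo labels.

    Snippets with attention >= `hi` become positive (1), those <= `lo`
    become negative (0), and ambiguous snippets are marked -1 so they can
    be ignored when training the single-modal sub-networks.
    """
    labels = np.full(len(attn), -1, dtype=int)  # -1 = ignored snippet
    labels[attn >= hi] = 1                      # confident action
    labels[attn <= lo] = 0                      # confident background
    return labels

attn = np.array([0.9, 0.5, 0.2, 0.8])
print(pseudo_temporal_labels(attn).tolist())  # [1, -1, 0, 1]
```

The resulting labels would then serve as the "temporal supervision" the abstract mentions: each sub-network is trained on the other modality's pseudo labels, and the labels are regenerated after each round until the networks converge.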

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a weakly supervised temporal action localization method and system based on bimodal collaboration.

Background technique

[0002] With the development of the Internet, video plays an increasingly important role in people's lives. Temporal action localization is an important technique in the field of video understanding, which aims to locate the start and end times of the main actions in untrimmed videos and to classify those actions correctly.

[0003] At present, most existing temporal action localization methods require precise temporal annotation for training, that is, the category of each action instance together with its start and end times; such precise annotation requires substantial manpower and material resources, and may suffer from deviations between different annotators. In contrast, weakly supervised temporal action localization only r...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/41, G06N3/045, G06F18/2155, G06F18/24
Inventors: Wang Le, Zhai Yuanhao, Zheng Nanning
Owner XI AN JIAOTONG UNIV