Weakly supervised video temporal action localization method and system based on deep learning

A deep-learning-based localization technology in the field of computer vision that addresses problems of existing methods: action semantic consistency is ignored and not explicitly modeled, and the training process is hard to control.

Active Publication Date: 2020-04-28
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

However, the training process of such methods is difficult to control: the semantic consistency of actions within a video is ignored, and action semantic consistency is not explicitly modeled to guide the action localization process.
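
One common way to make action semantic consistency explicit is a similarity loss that pulls the embedded features of attended (foreground) segments toward a shared action prototype. The sketch below is a hedged illustration of that idea; the function name, the attention-weighted prototype, and the cosine form are assumptions for illustration, not the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(embeddings, attention):
    """Pull foreground segment embeddings toward their attention-weighted
    prototype so features of the same action stay semantically consistent.

    embeddings: (T, D) per-segment embedded features
    attention:  (T,)   foreground attention weights in [0, 1]
    """
    weights = attention / (attention.sum() + 1e-8)           # normalize to sum 1
    prototype = (weights.unsqueeze(1) * embeddings).sum(0)   # (D,) action prototype
    cos = F.cosine_similarity(embeddings, prototype.unsqueeze(0), dim=1)  # (T,)
    # Penalize dissimilarity only where the attention says "action".
    return (attention * (1.0 - cos)).mean()
```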




Detailed Description of the Embodiments

[0037] The implementation of the present invention is described below through specific embodiments and in conjunction with the accompanying drawings; those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other, different specific embodiments, and various modifications and changes can be made to the details in this specification for different viewpoints and applications without departing from the spirit of the present invention.

[0038] Figure 1 is a flow chart of the steps of the deep-learning-based weakly supervised video temporal action localization method of the present invention; figure 2 is a schematic diagram of the deep-learning-based weakly supervised video temporal action localization process according to a specific embodiment of the present invention. As shown in figure 1 and figure 2, the pres...



Abstract

The invention discloses a weakly supervised video temporal action localization method and system based on deep learning. The method comprises the following steps: S1, extracting the current frame and the previous frame of a video, computing the optical flow with an optical flow estimation network, and feeding the optical flow, together with frames sampled from the video at equal intervals, into a two-stream action recognition network to extract video features; S2, performing semantic consistency modeling on the video features to obtain embedded features; S3, mapping the embedded features to a class activation sequence through a trainable classification module; S4, updating the video features with an attention module; S5, taking the updated video features as the input of the next cycle, and repeating S2 to S4 until a stopping condition is met; S6, fusing the class activation sequences generated in each cycle and computing the classification loss between the estimated action classes and the ground-truth class labels; S7, fusing the embedded features of each cycle and computing the similarity loss between the action features; S8, obtaining the target loss from the classification loss and the similarity loss, and updating the model parameters of the system.
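
To make the cyclic structure of S2 to S5 and the fused losses of S6 to S8 concrete, the following is a minimal PyTorch sketch. The layer choices, feature sizes, number of cycles, and the top-k temporal pooling used to obtain video-level scores are all assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeLocalizer(nn.Module):
    """Illustrative model of the cyclic S2-S5 structure (sizes are assumed)."""
    def __init__(self, feat_dim=2048, embed_dim=512, num_classes=20, cycles=3):
        super().__init__()
        # S2: semantic consistency modeling -> embedded features
        self.embed = nn.Sequential(
            nn.Conv1d(feat_dim, embed_dim, kernel_size=3, padding=1), nn.ReLU())
        # S3: classification module mapping embeddings to a class activation sequence
        self.classifier = nn.Conv1d(embed_dim, num_classes, kernel_size=1)
        # S4: attention module used to update the video features
        self.attention = nn.Conv1d(embed_dim, 1, kernel_size=1)
        self.cycles = cycles

    def forward(self, feats):  # feats: (B, feat_dim, T) two-stream features from S1
        cas_list, emb_list = [], []
        x = feats
        for _ in range(self.cycles):                  # S5: repeat S2 to S4
            emb = self.embed(x)                       # (B, embed_dim, T)
            cas_list.append(self.classifier(emb))     # (B, num_classes, T)
            emb_list.append(emb)
            att = torch.sigmoid(self.attention(emb))  # (B, 1, T)
            x = feats * att                           # S4: attention-updated features
        # S6/S7: fuse the per-cycle class activation sequences and embeddings
        return torch.stack(cas_list).mean(0), torch.stack(emb_list).mean(0)

def classification_loss(cas, labels, k=8):
    # S6: top-k temporal pooling (an assumed aggregation) turns the fused class
    # activation sequence into video-level scores, compared with multi-hot labels.
    scores = cas.topk(min(k, cas.shape[-1]), dim=-1).values.mean(-1)  # (B, C)
    return F.binary_cross_entropy_with_logits(scores, labels)

# Usage with random stand-in features (2 videos, 100 snippets, 20 classes):
model = IterativeLocalizer()
feats = torch.randn(2, 2048, 100)
labels = torch.zeros(2, 20)
labels[0, 3] = labels[1, 7] = 1.0
cas, emb = model(feats)
loss = classification_loss(cas, labels)  # S8 would add the S7 similarity loss
loss.backward()
```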

Description

Technical Field

[0001] The present invention relates to the field of computer vision based on deep learning, and in particular to a deep-learning-based method and system for weakly supervised video temporal action localization.

Background Art

[0002] Weakly supervised video temporal action localization refers to locating the start time and end time of action instances in a video while relying only on video-level action category annotations. This task has recently attracted increasing attention because of its wide application to other computer vision tasks, such as dense video captioning and spatio-temporal action detection.

[0003] In recent years, temporal action localization technology has made great progress, and deep-learning-based methods, especially convolutional neural networks, play an important role in it. For example, the research work "UntrimmedNets for Weakly Supervised Action Recognition and Detection" (In Proceedings of the IEEE...
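
To make the task definition in [0002] concrete: a localizer typically produces a per-class activation score for each video snippet and recovers start and end times by thresholding that curve. The sketch below shows this standard post-processing step; it illustrates common practice in the field, not the specific procedure claimed by this patent, and the threshold, frame rate, and snippet stride are assumed values.

```python
import numpy as np

def extract_segments(scores, threshold=0.5, fps=25.0, stride=16):
    """Return (start_s, end_s) intervals where a 1-D class activation
    curve stays above `threshold`. `stride` is frames per snippet."""
    above = np.asarray(scores) >= threshold
    edges = np.diff(above.astype(int))         # +1 rising edge, -1 falling edge
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:                               # segment starts at the first snippet
        starts = np.r_[0, starts]
    if above[-1]:                              # segment runs to the last snippet
        ends = np.r_[ends, len(above)]
    return [(s * stride / fps, e * stride / fps) for s, e in zip(starts, ends)]
```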


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62G06N3/04G06N3/08
CPCG06N3/08G06V40/20G06V20/41G06V20/46G06N3/044G06N3/045G06F18/241Y02T10/40
Inventor 李冠彬刘劲林倞
Owner SUN YAT SEN UNIV