
Method for identifying actions in video based on continuous multi-instance learning

A technology relating to action recognition in video, applied in the field of recognition and detection, which addresses the problem that ordinary multi-instance learning is not suitable for video data

Active Publication Date: 2015-12-09
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

However, ordinary multi-instance learning is not suitable for video data, because video data contains information in the time dimension.



Embodiment Construction

[0034] The present invention is further described below in conjunction with the accompanying drawings.

[0035] The invention proposes a method for recognizing actions in videos based on continuous multi-instance learning. The method first collects movie data from video websites as training data, and at the same time collects the corresponding subtitles and scripts from the web; the subtitles are matched against the dialogue in the script so that the two are synchronized, and the action descriptions in the script are used as weak labels for the corresponding video clips. Using these video-level weak labels, each video in the training data is segmented into several video segments. Then, for each label, an action classifier based on continuous multi-instance learning is trained. During testing, the trained action classifiers are first used to compute the probability that each frame of the video input by the user belongs to each action. Then, the final recognition result for each frame is ...
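The patent text above does not publish the exact optimization behind the continuous multi-instance classifier, so the sketch below is only an illustrative stand-in, not the patented formulation: each weakly labelled video segment is treated as a bag of per-frame features, a per-frame logistic model is trained with max-pooling over the bag, and a temporal smoothness penalty between adjacent frames approximates the "continuous" constraint. All names and hyper-parameters (train_continuous_mil, lam, the toy features) are hypothetical.

```python
# Illustrative sketch only: logistic MIL with max-pooling over each weakly labelled
# clip (the "bag") plus a temporal smoothness term between adjacent frames.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_continuous_mil(bags, labels, dim, lr=0.01, lam=0.1, epochs=50):
    """bags: list of (n_frames, dim) feature arrays; labels[i] is 1 if bag i
    is weakly labelled as containing the action, else 0."""
    w, b = np.zeros(dim), 0.0
    for _ in range(epochs):
        for X, y in zip(bags, labels):
            p = sigmoid(X @ w + b)               # per-frame action probabilities
            k = int(np.argmax(p))                # most confident frame stands for the bag
            err = p[k] - y                       # bag-level logistic gradient (max pooling)
            gw, gb = err * X[k], err
            # temporal smoothness: push adjacent frames toward similar scores
            d = p[1:] - p[:-1]
            for t in range(len(d)):
                s = d[t] * (p[t + 1] * (1 - p[t + 1]) * X[t + 1]
                            - p[t] * (1 - p[t]) * X[t])
                gw += lam * s / len(d)
            w -= lr * gw
            b -= lr * gb
    return w, b

# toy usage with random features standing in for real video descriptors
rng = np.random.default_rng(0)
bags = [rng.normal(size=(30, 64)) for _ in range(10)]   # 10 clips, 30 frames, 64-d features
labels = [i % 2 for i in range(10)]                     # alternating weak labels
w, b = train_continuous_mil(bags, labels, dim=64)
frame_probs = sigmoid(bags[0] @ w + b)                  # per-frame probabilities at test time
```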



Abstract

The invention discloses a method for identifying actions in a video based on continuous multi-instance learning. The method comprises the following steps: 1) movie data serving as the training sample set is collected from video websites, the subtitles and scripts collected from the web are preprocessed, and the action descriptions in the scripts are used as video-level weak labels for the training data; 2) each video is cut into several video segments by means of the weak labels, with each segment containing one action, and for each action an action classifier based on continuous multi-instance learning is trained from the video segments; 3) the user inputs a video to be recognized into the trained action classifiers, and the probability that each frame of the video belongs to each action is computed; 4) the action type of each frame is obtained by a video cutting model, and the action types are returned to the user. The method solves the problem that manual labeling is time-consuming and labor-intensive, and at the same time alleviates the ambiguity caused by weak labels and transition frames.
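Step 4 of the abstract refers to a "video cutting model" without detailing it here, so the following sketch shows one plausible, hypothetical realization rather than the patent's actual model: a dynamic program that assigns each frame an action class by trading the per-frame classifier probabilities from step 3 against a fixed cost for switching classes, which naturally yields contiguous action segments.

```python
# Hedged sketch of step 4: Viterbi-style smoothing of per-frame action probabilities.
import numpy as np

def segment_frames(frame_probs, switch_cost=2.0):
    """frame_probs: (n_frames, n_actions) probabilities from the trained classifiers.
    Returns one action index per frame, smoothed so the label changes only when the
    evidence outweighs the switching penalty."""
    n, k = frame_probs.shape
    logp = np.log(frame_probs + 1e-9)
    dp = np.zeros((n, k))                 # best cumulative score ending in class j at frame t
    back = np.zeros((n, k), dtype=int)    # backpointers for recovering the labelling
    dp[0] = logp[0]
    for t in range(1, n):
        prev_best = int(dp[t - 1].argmax())
        for j in range(k):
            stay = dp[t - 1, j]                          # continue the same action
            switch = dp[t - 1, prev_best] - switch_cost  # change action, pay a penalty
            if stay >= switch:
                dp[t, j], back[t, j] = logp[t, j] + stay, j
            else:
                dp[t, j], back[t, j] = logp[t, j] + switch, prev_best
    labels = np.empty(n, dtype=int)
    labels[-1] = int(dp[-1].argmax())
    for t in range(n - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

# toy usage: 3 action classifiers scored over 100 frames
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=100)   # stand-in for step-3 outputs
per_frame_actions = segment_frames(probs)     # step 4: one action label per frame
```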

Description

technical field

[0001] The invention belongs to the field of recognition and detection, and relates to a method for recognizing actions in videos based on continuous multi-instance learning, that is, a method for recognizing and detecting human actions in videos using weakly labeled training data.

Background technique

[0002] In recent years, human action recognition has played an increasingly important role in many computer vision applications, such as video surveillance, content-based video retrieval, tagging, and visual interaction. Solving this practically valuable but challenging task has become a problem on which video websites currently spend substantial money and manpower.

[0003] Typical action recognition systems treat this task as a classification or detection problem. Training an effective classifier or detector from fully labeled data is the commonly used approach at present. These methods use accurate time stamp...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00
CPC: G06V40/20; G06V20/41
Inventors: 宋明黎, 栾乔, 张珂瑶, 宋新慧, 邱画谋
Owner: ZHEJIANG UNIV