
Video human body interaction motion identification method based on optical flow graph depth learning model

A deep learning and action recognition technology, applied to human interaction action recognition in video, designed to achieve high recognition accuracy

Active Publication Date: 2017-02-15
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

The above technical solutions still rely on traditional hand-crafted features, and there remains a gap in patents that apply deep learning models to human interaction action recognition in video.




Embodiment Construction

[0038] Preferred embodiments of the present invention are described below in conjunction with the accompanying drawings, to explain the technical solution of the invention in detail.

[0039] As shown in Figure 1, the present invention discloses a method for recognizing human interaction actions in videos based on an optical flow graph deep learning model. Its steps mainly include:

[0040] Step 1: deframe the test set and training set videos, and compute an optical flow map from each pair of adjacent frames, obtaining optical flow sequence maps for the test set and training set videos (a sketch of this step is given below);
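The excerpt does not name a specific optical flow algorithm; the sketch below assumes OpenCV's Farneback dense flow, and the function name video_to_flow_sequence is hypothetical:

```python
# Sketch of Step 1, assuming OpenCV's Farneback dense optical flow;
# the patent excerpt does not name a specific flow algorithm.
import cv2
import numpy as np

def video_to_flow_sequence(video_path):
    """Deframe a video and compute dense optical flow between adjacent frames."""
    cap = cv2.VideoCapture(video_path)
    flows = []
    ok, prev = cap.read()
    if not ok:
        return flows
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback parameters: pyr_scale, levels, winsize, iterations,
        # poly_n, poly_sigma, flags. flow has shape (H, W, 2): per-pixel
        # (dx, dy) displacement between the two adjacent frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
        prev_gray = gray
    cap.release()
    return flows
```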

[0041] Step 2: preprocess the optical flow sequence, deleting flow maps that carry little information and retaining those that carry more, to obtain the preprocessed test set and training set optical flow sequences (see the sketch below);
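The excerpt does not define how "information content" is measured; a plausible stand-in, purely an assumption, is the mean flow magnitude, compared against a hypothetical threshold:

```python
# Sketch of Step 2: drop flow maps with little motion information.
# Using mean flow magnitude as the information measure, and the
# threshold value, are assumptions; the excerpt does not define
# how "information content" is scored.
import numpy as np

def filter_flow_sequence(flows, threshold=0.5):
    kept = []
    for flow in flows:
        magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion strength
        if magnitude.mean() >= threshold:
            kept.append(flow)
    return kept
```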

[0042] Step 3: use the training set optical flow sequence ...
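The step text is truncated here, but the abstract describes step 3 as training a residual neural network on the flow sequences to obtain spatial domain features. A minimal sketch, assuming a torchvision ResNet-18 with a 2-channel input stem (both assumptions; the excerpt only says "residual neural network"):

```python
# Sketch of Step 3: extract spatial features from flow maps with a
# residual network. ResNet-18 and the 2-channel stem are assumptions;
# the excerpt only specifies that a residual neural network is trained.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FlowFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Optical flow maps have 2 channels (dx, dy), not 3 (RGB).
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # keep the 512-d pooled feature
        self.backbone = backbone

    def forward(self, flow_batch):        # (N, 2, H, W)
        return self.backbone(flow_batch)  # (N, 512) spatial features
```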


Abstract

The invention discloses a method for recognizing human interaction actions in video based on an optical flow graph deep learning model. The method mainly comprises the following steps: step 1, deframe the test set and training set videos and compute an optical flow sequence map from each pair of adjacent frames; step 2, preprocess the optical flow sequence map, deleting optical flow maps with relatively little information; step 3, train a residual neural network with the training set optical flow sequence obtained in step 2, then take the test set and training set optical flow map sequences as input to obtain spatial domain features; step 4, train a long short-term memory model with the training set features, then input the test set features to obtain a probability output for each class; and step 5, obtain the classification result through voting statistics. The method fills a gap in related patents by applying a deep learning model to human action recognition, achieves high recognition accuracy, and is applicable to many scenarios.
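Steps 4 and 5 of the abstract pair a long short-term memory model over the per-frame spatial features with a vote over its per-step class outputs. A minimal sketch; the hidden size, class count, and majority-vote rule are all assumptions not fixed by the excerpt:

```python
# Sketch of Steps 4-5: an LSTM classifies each time step of the spatial
# feature sequence, then a vote over the per-step predictions picks the
# final label. Hidden size, class count, and voting rule are assumptions.
import torch
import torch.nn as nn

class TemporalClassifier(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=256, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, features):          # (N, T, feature_dim)
        outputs, _ = self.lstm(features)  # (N, T, hidden_dim)
        return self.head(outputs)         # per-step class logits (N, T, C)

def vote(logits):
    """Majority vote over the per-time-step predictions."""
    per_step = logits.argmax(dim=-1)   # (N, T) predicted class per step
    return per_step.mode(dim=1).values # (N,) most frequent class per video
```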

Description

Technical field

[0001] The invention relates to a method for recognizing human interaction actions in videos, and in particular to a method for recognizing human interaction actions in videos based on an optical flow graph deep learning model.

Background technique

[0002] As technology continues to evolve, so does the need to understand video content. The widespread use of cameras has produced ever more video data, and manually processing this massive amount of information is impractical, so automated methods are needed to analyze video content. In the field of intelligent surveillance, recognizing human interactions is particularly important: detecting unexpected events such as fights and other abnormal behavior relies on accurately identifying human interaction actions. Accurate human interaction action recognition therefore has important social significance.

[0003] Video content understanding aims to allow computers...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V20/49; G06F18/24
Inventors: 蒋兴浩, 孙锬锋, 赵阳
Owner: SHANGHAI JIAO TONG UNIV