
A method for recognizing human behavior in a video

A video technology applied in character and pattern recognition, image data processing, instruments, etc., addressing the problem that deep-feature representations lack semantic description ability.

Active Publication Date: 2019-03-22
SUN YAT SEN UNIV
20 Cites · 25 Cited by

AI Technical Summary

Problems solved by technology

[0005] To address the lack of semantic information in deep-representation methods for video human behavior recognition, the present invention provides a method for human behavior recognition in video.



Examples


Embodiment 1

[0068] As shown in Figure 1, the method for human behavior recognition in a video comprises the following steps:

[0069] Step S1: Preprocess the video to obtain improved dense trajectories for all frames of the video;
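The patent builds on improved dense trajectories (IDT). The sketch below is an illustrative numpy-only reduction of the core idea, not the patent's implementation: points are densely sampled on a grid, tracked through per-frame optical-flow fields, and near-static tracks are pruned. The flow fields are assumed inputs here; a real pipeline would compute them with a dense optical-flow method such as Farneback's.

```python
import numpy as np

def sample_grid_points(h, w, step=8):
    # Dense sampling on a regular grid, as in (improved) dense trajectories.
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

def track_points(points, flows):
    # Follow each sampled point through successive dense flow fields.
    # flows: list of (h, w, 2) arrays holding per-pixel (dx, dy).
    # Returns trajectories of shape (num_points, len(flows) + 1, 2).
    h, w, _ = flows[0].shape
    cur = points.copy()
    traj = [cur.copy()]
    for flow in flows:
        xi = np.clip(np.round(cur[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(cur[:, 1]).astype(int), 0, h - 1)
        cur = cur + flow[yi, xi]       # advect each point by the local flow
        traj.append(cur.copy())
    return np.stack(traj, axis=1)

def prune_static(trajectories, min_disp=1.0):
    # Drop near-static trajectories, a standard filtering step in IDT.
    disp = np.linalg.norm(np.diff(trajectories, axis=1), axis=2).sum(axis=1)
    return trajectories[disp >= min_disp]
```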

[0070] Step S2: Extract key frames from the video based on temporal and spatial saliency;
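This excerpt does not disclose the patent's exact saliency measures. As an illustrative sketch, a common choice is motion energy (mean inter-frame difference) for temporal saliency and mean gradient magnitude for spatial saliency, selecting the top-k frames on the combined normalized score:

```python
import numpy as np

def temporal_saliency(frames):
    # Motion energy: mean absolute inter-frame difference, padded so the
    # first frame inherits the first difference.
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    return np.concatenate([[diffs[0]], diffs])

def spatial_saliency(frames):
    # Mean gradient magnitude per frame as a simple spatial-saliency proxy.
    gy, gx = np.gradient(frames.astype(float), axis=(1, 2))
    return np.sqrt(gx ** 2 + gy ** 2).mean(axis=(1, 2))

def select_key_frames(frames, k):
    # Combine min-max-normalized temporal and spatial saliency; take top-k.
    def norm(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    score = norm(temporal_saliency(frames)) + norm(spatial_saliency(frames))
    return np.sort(np.argsort(score)[-k:])
```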

[0071] Step S3: Filter the dense trajectories obtained in step S1 using the key frames, retaining the trajectories of key frames and discarding those of non-key frames;
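Step S3 is a simple set-membership filter. The trajectory container layout below (each trajectory paired with its ending frame index) is an assumption for illustration, not the patent's data structure:

```python
def filter_trajectories(trajectories, key_frames):
    # Keep trajectories that end on a key frame, drop the rest (step S3).
    # `trajectories` is assumed to be a list of (end_frame, points) pairs.
    key = set(key_frames)
    return [t for t in trajectories if t[0] in key]
```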

[0072] Step S4: On the basis of the improved dense trajectories, build a video representation based on hierarchical trajectory bundles;

[0073] Step S5: Extract deep-learning features from the key frames and build a deep-feature-based video representation;

[0074] Step S6: Fuse the video representation based on hierarchical trajectory bundles with the video representation based on deep features;
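The exact fusion scheme is not given in this excerpt; the abstract only states that features are normalized and fused at the representation level. A weighted concatenation of L2-normalized vectors is one plausible minimal sketch of that idea (the weight `w` is a hypothetical parameter):

```python
import numpy as np

def l2_normalize(v, eps=1e-10):
    # Scale a representation vector to unit L2 norm.
    return v / (np.linalg.norm(v) + eps)

def fuse_representations(traj_repr, deep_repr, w=0.5):
    # Weighted concatenation of the trajectory-bundle representation and
    # the deep-feature representation after normalizing each one.
    return np.concatenate([w * l2_normalize(np.asarray(traj_repr, float)),
                           (1 - w) * l2_normalize(np.asarray(deep_repr, float))])
```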

[0075] Step S7: S...

Embodiment 2

[0124] The experimental data of this embodiment are as follows:



Abstract

The invention relates to the field of artificial intelligence, and more specifically to a method for recognizing human behavior in a video. The main algorithmic idea is as follows. First, the input video segment is preprocessed: a six-parameter affine model is used to model the camera motion, and motion trajectories are obtained through this model. Second, video key frames are extracted based on temporal and spatial saliency. Then improved dense trajectories are extracted from the key frames, and a two-stream convolutional neural network is chosen as the feature extractor to obtain deep-learning features from the key frames. The extracted features are normalized and fused at the level of the video representation, combining the deep-feature-based representation with the improved-dense-trajectory-based representation. By integrating the hand-crafted improved dense trajectory (IDT) feature with the deep-learning feature, the invention more effectively mines the complementary information of the two features and the behavior patterns in the video, achieving better results in video human behavior recognition.
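The abstract says camera motion is modeled with a six-parameter affine model. A standard way to do this (a sketch under that assumption, not the patent's procedure) is a least-squares fit of the affine parameters from matched background points between consecutive frames, after which camera motion can be subtracted from trajectories:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares fit of the six-parameter affine model dst ≈ src @ A.T + t
    # from matched point coordinates; src and dst have shape (n, 2).
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])             # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return params[:2].T, params[2]                    # 2x2 A, translation t

def residual_motion(src, dst, A, t):
    # Motion left after subtracting the estimated camera motion: roughly
    # zero for background matches, object motion for the foreground.
    return dst - (src @ A.T + t)
```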

Description

Technical field

[0001] The present invention relates to the field of artificial intelligence, and more specifically to a method for human behavior recognition in videos.

Background technique

[0002] Research interest in human behavior recognition has shifted from simple behavior recognition in well-controlled shooting environments to more realistic behavior recognition in unconstrained settings such as story movies, sports broadcast videos, and home videos. Recognition is difficult in these settings mainly because human behavior is easily affected by many factors such as viewpoint changes, camera shake, illumination, background occlusion, and rapid, irregular motion in background clutter. It is therefore particularly important to extract effective features of human motion from such videos. At the same time, different motions may appear similar in videos, which also makes it a main research topic to construct ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06T7/269
CPC: G06T7/269; G06T2207/30241; G06V40/20; G06V20/46
Inventor: 陈嘉谦, 朱艺, 沈金龙, 顾佳良, 吴昱焜, 衣杨
Owner: SUN YAT SEN UNIV