Human Action Recognition Method Based on Low-rank Representation

A human action recognition technology based on low-rank representation, applied in the fields of computer vision and machine learning, which addresses problems of existing methods such as considering only sparsity, ignoring the overall structure of the data, and achieving low recognition rates

Active Publication Date: 2016-08-10
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0012] The improvement methods mentioned above all share the following shortcoming: they consider only sparsity and ignore the overall structure of the data, so the global structural information contained in the local features of a video cannot be captured, and the recognition rate is low.


Examples


Embodiment Construction

[0043] Referring to figure 1, the present invention mainly includes two parts: video representation and video classification. The implementation steps of these two parts are described below:

[0044] 1. Video representation

[0045] Step 1: input all videos, each of which contains only one kind of human behavior, and use the Cuboid detector and descriptor, respectively, to detect and describe the local features of the behavior in each video.

[0046] Behaviors in the videos refer to human actions such as walking, running, jumping, and boxing. All videos are performed by several actors, each actor completing all of the actions in turn, and each video contains the behavior of only one actor;

[0047] The Cuboid detector detects local features of the video as follows: divide the video into local blocks of equal size, and calculate the response function value R at each pixel of a local block:

[0048] R = (I * g * h_ev)^2 + (I * g * h_od)^2,

[0049] Where: I represents the gray...
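As an illustration only, the following Python sketch computes this response over a grayscale video volume of shape (T, H, W). It assumes, following the standard Cuboid detector formulation rather than the patent text, that g is a 2D spatial Gaussian and h_ev / h_od are a quadrature pair of 1D temporal Gabor filters; the parameter values and function names are illustrative.

```python
# Minimal sketch of R = (I*g*h_ev)^2 + (I*g*h_od)^2; assumptions noted above.
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter1d

def cuboid_response(video, sigma=2.0, tau=1.5):
    """Compute the Cuboid response function over a (T, H, W) grayscale video."""
    I = video.astype(np.float64)

    # Spatial smoothing I*g: Gaussian along the two spatial axes only.
    smoothed = gaussian_filter1d(gaussian_filter1d(I, sigma, axis=1), sigma, axis=2)

    # Quadrature pair of 1D temporal Gabor filters (even / odd phase).
    radius = 2 * int(np.ceil(tau))
    t = np.arange(-radius, radius + 1)
    omega = 4.0 / tau
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t ** 2 / tau ** 2)
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t ** 2 / tau ** 2)

    # Temporal convolution and combination of the squared quadrature responses.
    even = convolve1d(smoothed, h_ev, axis=0, mode="nearest")
    odd = convolve1d(smoothed, h_od, axis=0, mode="nearest")
    return even ** 2 + odd ** 2
```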



Abstract

The invention discloses a human action recognition method based on low-rank representation, which mainly solves the problem of the low action recognition rate for videos in the prior art. The recognition process includes the following steps: first, all videos are input, and K-means clustering is applied to the local features detected from all actions to obtain a codebook; second, low-rank representation (LRR) with a coefficient normalization constraint is used to code all the features of each video; third, the coding coefficients of each video are vectorized to obtain its final representation; fourth, all videos with their final representations are divided into two groups, one serving as training samples and the other as testing samples, and the video representations of the training samples form a dictionary; fifth, based on the newly formed dictionary, sparse representation is used to code each testing sample and determine its class label, thereby completing the recognition of the human actions in the testing samples. The method enhances the discriminative power of the video representation, improves the rate of recognizing human actions in videos, and is applicable to intelligent surveillance.
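The pipeline summarized in this abstract can be outlined as in the sketch below. This is illustrative only: the patent's coefficient-normalized LRR coding is replaced by a plain least-squares coding as a stand-in, sparse coding of test samples is done with orthogonal matching pursuit, and all function names and parameters are assumptions, not taken from the patent.

```python
# Illustrative outline of the five steps; local features are assumed to be given
# as one (n_i, dim) NumPy array per video. See the stand-ins noted above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import OrthogonalMatchingPursuit

def represent_videos(feature_sets, n_words=200):
    """Steps 1-3: K-means codebook, per-video coding, vectorization."""
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(feature_sets)).cluster_centers_
    D = codebook.T                                         # codebook as dictionary, shape (dim, n_words)
    reps = []
    for F in feature_sets:                                 # F: (n_i, dim) local features of one video
        Z, *_ = np.linalg.lstsq(D, F.T, rcond=None)        # stand-in for the LRR coding step
        Z /= np.abs(Z).sum(axis=0, keepdims=True) + 1e-12  # coefficient normalization
        reps.append(Z.mean(axis=1))                        # vectorize/pool the coding coefficients
    return np.array(reps)                                  # one row per video

def classify_src(train_reps, train_labels, test_rep, n_nonzero=10):
    """Steps 4-5: training representations form the dictionary; the test video is
    sparsely coded over it and assigned to the class with the smallest residual."""
    train_labels = np.asarray(train_labels)
    D = train_reps.T                                       # dictionary, one column per training video
    omp = OrthogonalMatchingPursuit(
        n_nonzero_coefs=min(n_nonzero, D.shape[1]), fit_intercept=False)
    x = omp.fit(D, test_rep).coef_                         # sparse coefficients over the dictionary
    classes = np.unique(train_labels)
    residuals = [np.linalg.norm(test_rep - D[:, train_labels == c] @ x[train_labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]              # class label with minimum residual
```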

Description

Technical Field

[0001] The invention belongs to the fields of machine learning and computer vision, relates to the recognition of human behavior in videos, and can be used for post-processing of target detection and tracking in videos.

Background Technique

[0002] Human behavior recognition consists of extracting relevant visual information from video sequences, expressing it in an appropriate way, and finally interpreting the information to realize the learning and recognition of human behavior. Studying human behavior patterns will bring new modes of interaction into people's lives.

[0003] In recent years, the bag-of-features (BoF) model has been successfully applied in the fields of image classification and action recognition. In action recognition, it describes a video sequence as a statistical histogram over a set of visual keywords. The construction of the visual keyword statistical histogram is divided into the following steps:

[0004] The first ...
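For concreteness, the visual-keyword histogram described here can be built as in the short sketch below, following the standard bag-of-features formulation; the names and the normalization choice are illustrative, not the patent's.

```python
# Standard BoF histogram: assign each local feature to its nearest visual keyword
# and describe the video by the normalized histogram of assignments.
import numpy as np

def bof_histogram(features, codebook):
    """features: (n, dim) local descriptors of one video; codebook: (k, dim) visual keywords."""
    # Squared Euclidean distance from every local feature to every visual keyword.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)                        # nearest keyword per feature
    hist = np.bincount(assignments, minlength=len(codebook)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)                     # normalized keyword histogram
```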


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06K9/00, G06K9/66
Inventor: 张向荣, 焦李成, 杨浩, 杨阳, 侯彪, 王爽, 马文萍, 马晶晶
Owner: XIDIAN UNIV