
A Human Behavior Recognition Method Based on Sparse Low Rank

A recognition method based on sparse low-rank representation, applicable to character and pattern recognition, instrumentation, and computing. It addresses problems such as information loss, high computational complexity, and occlusion of human behavior.

Active Publication Date: 2020-04-17
厚普清洁能源(集团)股份有限公司

AI Technical Summary

Problems solved by technology

[0005] 1. When a video frame contains a large amount of human motion information, the primary motion features expand rapidly, so the clustering operation consumes considerable time and memory. Determining the number of cluster centers is also a difficult problem, especially on large data sets.
[0006] 2. In complex background environments, human behavior is heavily occluded, and frequent changes in light intensity hinder a correct description of human behavior.
This method splits the complete motion image into equal-length vectors and ignores the spatio-temporal characteristics of the motion information, so it is not effective for similar actions such as "jumping" and "running".
[0009] 2. Low-rank behavior representation. This method first extracts the behavior features of different actions and uses them to build an over-complete dictionary, assuming that the action to be classified can be expressed linearly over that dictionary and that this linear expression is low-rank: some column vectors of the over-complete dictionary are effective for representing the sample to be classified, while the others are not. This behavior description method has also achieved good results, but it ignores the sparsity of the expression, so the expression contains redundant information.
Because this method vectorizes the features and ignores the spatio-temporal distribution of motion information, there is obvious information loss; it also requires a large number of distance calculations, so its computational complexity is high. In addition, the number of cluster centers has a strong influence on the recognition results.
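The low-rank representation described above can be sketched as solving min ||Z||_* subject to X = DZ over a dictionary D. The following is a minimal ADMM (inexact ALM) sketch of that optimization, not the patent's actual implementation; the dictionary, data, and parameter values are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrr(X, D, n_iter=300, mu=0.1, rho=1.05, mu_max=1e8):
    """Solve min ||Z||_*  s.t.  X = D Z  by ADMM with auxiliary J = Z.
    D: (d, n_atoms) dictionary; X: (d, n_samples) test samples."""
    n, m = D.shape[1], X.shape[1]
    Z = np.zeros((n, m))
    J = np.zeros((n, m))
    Y1 = np.zeros_like(X)          # multiplier for X = D Z
    Y2 = np.zeros((n, m))          # multiplier for Z = J
    DtD = D.T @ D
    I = np.eye(n)
    for _ in range(n_iter):
        # J-step: nuclear-norm proximal step
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z-step: least-squares solve of the quadratic subproblem
        Z = np.linalg.solve(DtD + I, D.T @ X + J + (D.T @ Y1 - Y2) / mu)
        # dual updates and penalty growth
        Y1 += mu * (X - D @ Z)
        Y2 += mu * (Z - J)
        mu = min(rho * mu, mu_max)
    return Z

# Illustrative data: dictionary atoms and samples drawn from one 3-D subspace,
# so the minimal-nuclear-norm representation Z is low-rank.
rng = np.random.default_rng(0)
B = rng.normal(size=(20, 3))        # basis of a 3-dimensional subspace
D = B @ rng.normal(size=(3, 15))    # 15 dictionary atoms
X = B @ rng.normal(size=(3, 8))     # 8 samples from the same subspace
Z = lrr(X, D)
res = np.linalg.norm(X - D @ Z) / np.linalg.norm(X)
print(res)                          # relative constraint residual, should be small
```

Because the constraint is enforced by a growing penalty parameter, the residual ||X − DZ|| shrinks toward zero while the nuclear-norm step keeps Z low-rank.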

Method used



Examples


Embodiment Construction

[0052] Implementation language: Matlab

[0053] Hardware platform: Intel i3-2120 with 4 GB DDR RAM

[0054] The method of the invention was verified with an intuitive and effective algorithm implemented in Matlab.

[0055] The bag-of-words method, the low-rank method, and the method described in this patent were tested on pedestrian activities collected in the school square. The activities comprise seven behaviors: bending, falling, clapping, waving, running, squatting, and walking. The test results are shown in Figure 4. The method described in this patent achieves the best recognition results. The bag-of-words method (Figure 4A) performs significantly worse than both the low-rank representation method and the method described in this patent; the low-rank representation method is essentially the same as the low-rank sparse representation method on the bending and falling actions, but slightly lower than the low-rank representation method in othe...
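Per-behavior recognition rates such as those compared in Figure 4 can be computed from a confusion matrix. The sketch below is illustrative only: the labels and predictions are synthetic stand-ins, not the patent's actual square-pedestrian test data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# The seven behaviors evaluated in the experiment.
actions = ["bending", "falling", "clapping", "waving",
           "running", "squatting", "walking"]

# Hypothetical ground truth and predictions (~95% correct), standing in
# for the real test set, which is not available here.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 7, size=350)
y_pred = np.where(rng.random(350) < 0.95, y_true, (y_true + 1) % 7)

cm = confusion_matrix(y_true, y_pred, labels=list(range(7)))
per_class = cm.diagonal() / cm.sum(axis=1)      # recognition rate per behavior
overall = cm.trace() / cm.sum()                 # overall recognition rate

for name, r in zip(actions, per_class):
    print(f"{name:>9s}: {r:.2%}")
print(f"overall recognition rate: {overall:.2%}")
```

The misrecognition rate reported in the abstract is simply one minus the overall recognition rate.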



Abstract

The invention belongs to the technical field of digital image processing and draws on computer vision, pattern recognition, machine learning, and data mining. The method first uses a histogram of optical flow to extract optical-flow features from pairs of adjacent frames, and extracts gradient-histogram information from single frames, to obtain the motion feature information in the monitored scene; the low-dimensional feature vectors are arranged as [action 1 | action 2 | action 3 | ...]. K-means clustering is then applied, and the resulting cluster centers are used as an over-complete dictionary; the sparse low-rank expression of a test sample over this dictionary is solved to obtain an expression matrix. Finally, the behavior category of the test sample is determined from the maximum value in the expression matrix. Using low-rank and sparse human action representation with cross-validation, the method achieves a recognition rate of 92.3-98.79% and a misrecognition rate of 1.21-7.6%.

Description

Technical field

[0001] The invention belongs to the technical field of digital image processing, and relates to relevant theory in computer vision, pattern recognition, machine learning and data mining.

Background technique

[0002] The analysis and representation of human motion in video is a research hotspot in the field of computer vision. Its main task is to detect, extract and represent human motion information from video. It involves image processing, machine learning, applied physics, mathematics and other disciplines, and has important theoretical and practical application value. Because of the complexity and diversity of human motion, video human action recognition is still difficult to apply in practical environments despite more than ten years of research. As the core of human action recognition, action representation and recognition still have many problems to be solved. [0003] Human action recognition can usually be divided into two steps: behavior ...

Claims


Application Information

IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 40/23, G06F 18/2136, G06F 18/23213
Inventor: 解梅, 程石磊, 王博, 周扬
Owner: 厚普清洁能源(集团)股份有限公司