
Action Recognition Method Based on Deep Nonnegative Matrix Factorization Under Time-Dependent Constraints

An action recognition method based on non-negative matrix factorization under time-dependent constraints, applied in the field of image processing; it addresses the problem that the spatio-temporal characteristics of the video are ignored, and achieves the effect of improving the expressiveness of the extracted features.

Active Publication Date: 2020-05-19
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

However, when the basic non-negative matrix factorization method is applied to video feature extraction, only the spatial features of each individual frame are considered, and the space-time characteristics of the video as a whole are ignored.
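This limitation can be made concrete with a small illustrative sketch (not the patent's method): plain NMF via Lee-Seung multiplicative updates, applied to a matrix whose columns are vectorized frames. Permuting the frames (and the coefficient initialization the same way) yields the same basis and correspondingly permuted coefficients, so the factorization carries no information about temporal order. The sizes and initialization here are arbitrary assumptions for the demonstration.

```python
import numpy as np

def nmf(V, W0, H0, iters=100):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H, W,H >= 0."""
    W, H = W0.copy(), H0.copy()
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # coefficient update
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # basis update
    return W, H

rng = np.random.default_rng(0)
V = rng.random((16, 10))                   # 10 "frames", 16 pixels each
W0, H0 = rng.random((16, 4)), rng.random((4, 10))
perm = rng.permutation(10)                 # shuffle the frames in time

W1, H1 = nmf(V, W0, H0)                    # original temporal order
W2, H2 = nmf(V[:, perm], W0, H0[:, perm])  # shuffled temporal order
# Same basis, permuted coefficients: the temporal order of the frames
# has no effect on what the factorization learns.
same_basis = np.allclose(W1, W2, rtol=1e-4, atol=1e-8)
```

This order-invariance is exactly what the time-dependent constraint described below is intended to break.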



Embodiment Construction

[0026] Referring to Figure 1, the implementation steps of the present invention are as follows:

[0027] Step 1 extracts the motion saliency region V of the original video O.

[0028] (1a) Construct a 5×5 Gaussian filter and apply it to the original video O = {o_1, o_2, ..., o_i, ..., o_Z} to obtain the filtered video B = {b_1, b_2, ..., b_i, ..., b_Z}, where b_i denotes the i-th filtered video frame, i = 1, 2, ..., Z;

[0029] (1b) Compute the motion saliency region v_i of the i-th video frame o_i using the following formula:

[0030] v_i = |mo_i − b_i|,

[0031] where mo_i is the pixel geometric mean of the i-th video frame o_i;

[0032] (1c) Repeat step (1b) for all frames of the video O to obtain the motion saliency region of the entire video, V = {v_1, v_2, ..., v_i, ..., v_Z}.

[0033] The saliency extraction method in this step comes from the article "Frequency-tuned Salient Region Detection" published by ...
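Steps (1a)-(1c) can be sketched as follows, assuming grayscale frames given as 2-D NumPy arrays in [0, 1]. The 5×5 Gaussian kernel is the one named in (1a); the kernel sigma and the edge padding are assumptions not stated in the text.

```python
import numpy as np

def gaussian_kernel_5x5(sigma=1.0):
    """5x5 Gaussian kernel, normalized to sum to 1 (sigma is an assumption)."""
    ax = np.arange(-2, 3, dtype=float)
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def filter_frame(o, k):
    """Convolve a frame with a 5x5 kernel, edge-padding the borders."""
    p = np.pad(o, 2, mode="edge")
    out = np.zeros_like(o, dtype=float)
    for dy in range(5):
        for dx in range(5):
            out += k[dy, dx] * p[dy:dy + o.shape[0], dx:dx + o.shape[1]]
    return out

def motion_saliency(frames, sigma=1.0, eps=1e-12):
    k = gaussian_kernel_5x5(sigma)
    V = []
    for o_i in frames:                           # (1c) repeat for every frame
        b_i = filter_frame(o_i, k)               # (1a) Gaussian-filtered frame
        mo_i = np.exp(np.log(o_i + eps).mean())  # pixel geometric mean of o_i
        V.append(np.abs(mo_i - b_i))             # (1b) v_i = |mo_i - b_i|
    return V
```

On a constant frame the filtered frame equals the frame and the geometric mean equals the pixel value, so the saliency is numerically zero everywhere, as expected for a motionless region.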


Abstract

The invention discloses an action recognition method based on deep non-negative matrix factorization under time-dependent constraints, which mainly solves the problems that features extracted by existing methods are insufficiently expressive and that the recognition rate is low. Its implementation steps are: 1) extract the motion saliency region of the original video, and construct the corresponding set of non-negative matrices segment by segment; 2) add a time-dependent constraint to construct a time-dependent constrained non-negative matrix factorization; 3) using this factorization, construct a deep non-negative matrix factorization framework of depth L under time-dependent constraints, and use the framework to decompose the data in the non-negative matrix set; 4) normalize the coefficient matrices output by each layer and concatenate them as the space-time feature output; 5) build a bag-of-words model on the space-time features, and then recognize and classify them with an SVM classifier. The invention obtains space-time features with high discriminability and expressiveness, and can be applied to scenarios such as video surveillance and motion analysis that require high action recognition accuracy.
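The deep factorization and feature-concatenation steps can be sketched as below, with one important caveat: this sketch omits the time-dependent constraint (the patent's core addition) and uses plain multiplicative-update NMF at every layer. Each layer factorizes the previous layer's coefficient matrix, giving V ≈ W1 W2 ... WL HL, and each layer's coefficients are column-normalized and concatenated as the output features. All sizes, ranks, and iteration counts are illustrative assumptions.

```python
import numpy as np

def nmf_layer(X, r, iters=200, seed=0):
    """Plain multiplicative-update NMF: X ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r)) + 1e-3
    H = rng.random((r, X.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def deep_nmf_features(V, ranks):
    """Depth-L = len(ranks) factorization; returns concatenated coefficients."""
    feats, X = [], V
    for r in ranks:
        W, H = nmf_layer(X, r)
        # normalize each layer's coefficient matrix column-wise (step 4)
        Hn = H / (np.linalg.norm(H, axis=0, keepdims=True) + 1e-9)
        feats.append(Hn)
        X = H                      # the next layer factorizes the coefficients
    return np.vstack(feats)        # concatenated space-time features

V = np.random.default_rng(0).random((32, 20))   # 20 segments, 32-dim each
F = deep_nmf_features(V, ranks=[8, 4])          # depth L = 2
```

In the pipeline described above, a feature matrix like F would then feed the bag-of-words model and SVM classifier of step 5).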

Description

Technical field

[0001] The invention belongs to the technical field of image processing and relates to a human behavior recognition method, which can be used for intelligent video surveillance and human-computer interaction.

Background technique

[0002] Human behavior recognition technology has broad application prospects and considerable economic value; its main application fields include video surveillance, motion analysis, and virtual reality. Researchers have carried out extensive, in-depth research on human behavior recognition and accumulated rich results, but on the whole the field is still at the basic research stage, and many key problems and technical difficulties urgently remain to be solved, such as devising a relatively simple behavior representation method with a high recognition rate and high robustness. Some scholars have found that the ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00
CPC: G06V40/23; G06V20/40
Inventors: 同鸣 (Tong Ming), 汪雷 (Wang Lei), 李海龙 (Li Hailong)
Owner: XIDIAN UNIV