
An Open View Action Recognition Method Based on Linear Discriminant Analysis

A method in the field of action recognition that applies linear discriminant analysis. It addresses the problem that samples of every action category cannot be guaranteed under each viewing angle, and achieves the effects of increasing learning speed and simplifying the process.

Active Publication Date: 2020-04-28
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

In practice there are many viewing angles, and at the same time there is no guarantee that samples of every action category exist under each viewing angle.



Examples


Embodiment 1

[0029] An open-view action recognition method based on linear discriminant analysis, see Figure 1; the action recognition method includes the following steps:

[0030] 101: Use the K-means algorithm on the combined feature matrix for dictionary learning, converting each action sample's feature matrix into a feature vector;
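Step 101 (K-means dictionary learning plus feature-vector encoding) can be sketched roughly as follows. The descriptor dimension, dictionary size, and random toy descriptors are illustrative assumptions, not the patent's actual settings; the encoding shown is a standard bag-of-words histogram, which is one common way to turn a sample's feature matrix into a single vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # initialise centroids from k random descriptors
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def encode(sample_descriptors, centroids):
    # hard-assign each local descriptor to its nearest dictionary atom,
    # then normalise the counts: the histogram is the feature vector
    d = np.linalg.norm(sample_descriptors[:, None] - centroids[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centroids)).astype(float)
    return hist / hist.sum()

# toy data: local descriptors pooled from all training samples (combined matrix)
pooled = rng.normal(size=(200, 16))
dictionary = kmeans(pooled, k=8)
vec = encode(rng.normal(size=(30, 16)), dictionary)   # one sample's feature vector
```

The resulting `vec` has one entry per dictionary atom, so every action sample, whatever its length, is mapped to a fixed-size vector.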

[0031] 102: Use linear discriminant analysis to learn a projection matrix from the feature vectors of the action samples under the training view and the auxiliary view;
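Step 102 can be sketched with classic Fisher LDA: build within-class and between-class scatter matrices from the labelled feature vectors, then take the top generalised eigenvectors as the projection matrix. The toy data and regularisation constant are assumptions for illustration.

```python
import numpy as np

def lda_projection(X, y, dim):
    """Learn a projection matrix W maximising between-class scatter
    relative to within-class scatter (classic Fisher LDA)."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)            # between-class scatter
    # generalised eigenproblem Sb w = lambda Sw w (small ridge for stability)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:dim]]              # d x dim projection matrix

# toy stand-in for feature vectors from the training and auxiliary views
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
W = lda_projection(X, y, dim=1)
```

Pooling samples from both views into `X` is what lets the learned `W` capture cross-view correlation.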

[0032] 103: Using the projection matrix, project the t action-sample representation vectors under the training view and the test view into the same space, obtaining a new feature vector for each action sample;

[0033] 104: Use a linear support vector machine to learn an action classification model from the new feature vectors of the action samples, and finally use the action classification model to perform the action classification test under the test view.
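Steps 103 and 104 together can be sketched as below. The projection matrix, class layout, and toy vectors are illustrative assumptions, and the hinge-loss subgradient trainer is a minimal self-contained stand-in for the linear SVM the patent names, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_linear_svm(Z, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM trained by subgradient descent on the
    regularised hinge loss; labels y must be in {-1, +1}."""
    w = np.zeros(Z.shape[1])
    b = 0.0
    for _ in range(epochs):
        for zi, yi in zip(Z, y):
            if yi * (zi @ w + b) < 1:          # margin violated: full update
                w += lr * (yi * zi - lam * w)
                b += lr * yi
            else:                              # only shrink w (regulariser)
                w -= lr * lam * w
    return w, b

# toy stand-ins: two action classes under the training and test views,
# plus a projection matrix W (in the method, learned by LDA in step 102)
d, dim = 8, 3
W = rng.normal(size=(d, dim))                  # hypothetical projection matrix
train_X = np.vstack([rng.normal(0, 1, (25, d)), rng.normal(4, 1, (25, d))])
train_y = np.array([-1] * 25 + [1] * 25)
test_X = np.vstack([rng.normal(0, 1, (10, d)), rng.normal(4, 1, (10, d))])
test_y = np.array([-1] * 10 + [1] * 10)

# step 103: project both views into the same space
train_Z, test_Z = train_X @ W, test_X @ W

# step 104: learn the classifier on projected training vectors, test the rest
w, b = train_linear_svm(train_Z, train_y)
acc = np.mean(np.sign(test_Z @ w + b) == test_y)
```

Because both views pass through the same `W`, the classifier trained on one view can be applied directly to vectors from the other.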

Embodiment 2

[0045] The scheme in Embodiment 1 is elaborated below with specific calculation formulas and examples, as described in detail in the following:

[0046] 201: Collect and record human body motion information, and establish a multi-view action database;

[0047] To ensure that there is no direct correlation between action samples under different viewing angles, each action sample is recorded separately under each viewing angle. Table 1 gives the action list of the established database.

[0048] Table 1 action list




Abstract

The invention discloses an open-view action recognition method based on linear discriminant analysis, comprising the following steps: using the K-means algorithm on the combined feature matrix for dictionary learning, and converting each action sample's feature matrix into a feature vector; using linear discriminant analysis to learn a projection matrix from the feature vectors of the action samples under the training view and the auxiliary view; using the projection matrix to project the t action-sample representation vectors under the training view and the test view into the same space, obtaining a new feature vector for each action sample; and using a linear support vector machine to learn an action classification model from the new feature vectors, finally using the model to perform the action classification test under the test view. By learning the correlation of action samples under different viewing angles, the invention projects the representation information of action samples under both known and unknown viewing angles into the same vector space for action recognition.

Description

Technical Field

[0001] The invention relates to the field of action recognition, in particular to an open-view action recognition method based on linear discriminant analysis.

Background Art

[0002] Research on human action recognition began long ago. It was first studied in the 1970s, when psychologists conducted experiments on human motion perception: bright spots were attached to a person's joints, the person was placed in a dark environment, and the movement was observed. The results showed that, from the moving light-point sequences alone, the human visual system can distinguish the form of movement, such as walking or running, and even the gender of the mover. This experiment shows that recognising human actions does not require extracting all of the motion information; part of the information suffices to represent the movement. But this...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K 9/00; G06K 9/62
CPC: G06V 40/20; G06F 18/2411
Inventor: 苏育挺, 李阳, 刘安安
Owner TIANJIN UNIV