
Human body action recognition method and device based on multi-modal feature fusion

A technology for human action recognition and feature fusion, applied in the field of action recognition, which addresses problems such as unsatisfactory recognition results.

Active Publication Date: 2020-11-06
NORTHWEST UNIV (CN)

AI Technical Summary

Problems solved by technology

[0004] Aiming at the defects and deficiencies of the prior art, the present invention provides a human action recognition method and device based on multi-modal feature fusion. It uses wireless WiFi signals to assist the recognition of video features, and applies a multi-modal feature fusion scheme to fuse the two kinds of features and perform discriminant analysis on them, obtaining the final human action recognition result. This overcomes the defect of existing schemes that rely on video features alone for discrimination and, owing to optical limitations, produce unsatisfactory results.


Examples


Embodiment 1

[0066] This embodiment provides a human action recognition method based on multi-modal feature fusion. The method fuses the CSI features of WiFi signals with video features, maps the two kinds of features into the same common space for discriminant analysis, and finally identifies the category of the human action. As shown in figure 1, the multi-modal data set first passes through a data preprocessing module, which converts the video and WiFi data into numerical features; a multi-modal feature fusion model is then constructed, together with the objective function for solving the mapping matrix; the model is solved for the globally optimal mapping matrix; finally, the input multi-modal samples are mapped into the common space with the mapping matrix and classified with an SVM to obtain the final classification result. The method comprises the following steps:
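
The excerpt does not give the fusion objective itself, so the sketch below stands in a generic projection for the patent's mapping matrix: it learns a common space for the two modalities with CCA (an assumption, not the patent's method), fuses the projected features, and classifies with an SVM, mirroring the flow of figure 1.

```python
# Hypothetical sketch of the pipeline in [0066]. The patent's actual
# objective function for the mapping matrix is not given in this excerpt,
# so CCA is used here purely as a stand-in common-space projection.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def train_pipeline(video_feats, csi_feats, labels, dim=32):
    """Learn a projection for each modality, fuse the projected
    features, and fit an SVM on the fused representation."""
    cca = CCA(n_components=dim)
    z_video, z_csi = cca.fit_transform(video_feats, csi_feats)
    fused = np.hstack([z_video, z_csi])  # common-space fusion
    clf = SVC(kernel="linear").fit(fused, labels)
    return cca, clf

def predict(cca, clf, video_feats, csi_feats):
    """Map new multi-modal samples into the common space and classify."""
    z_video, z_csi = cca.transform(video_feats, csi_feats)
    return clf.predict(np.hstack([z_video, z_csi]))
```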

[0067] Step 1, data set preprocessing: the Vi-Wi15 data set includes video information and the CSI data of the...
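
Step 1 is truncated in this excerpt, and the exact "standard statistical algorithm" for the CSI features is not specified. The sketch below assumes a common choice of per-subcarrier amplitude statistics; the function name and feature set are illustrative only.

```python
# Hypothetical sketch of the CSI half of Step 1. The excerpt does not
# specify the "standard statistical algorithm", so the per-subcarrier
# statistics below are an assumption based on common CSI descriptors.
import numpy as np
from scipy.stats import skew

def csi_statistical_features(csi):
    """csi: array of shape (time_steps, subcarriers) of CSI amplitudes.
    Returns a single flat feature vector for the recording."""
    stats = [csi.mean(axis=0), csi.std(axis=0),
             csi.max(axis=0), csi.min(axis=0),
             skew(csi, axis=0)]
    return np.concatenate(stats)
```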

Embodiment 2

[0141] This embodiment provides a human action recognition device based on multi-modal feature fusion, comprising the following units (a structural sketch is given after the list):

[0142] a data set preprocessing unit, used to extract video features from the Vi-Wi15 data set with a convolutional neural network, and to extract CSI features of the WiFi signals in the Vi-Wi15 data set with a standard statistical algorithm;

[0143] a multi-modal feature fusion model construction unit, used to take the obtained video features and WiFi CSI features as two modalities, establish a multi-modal feature fusion model, and define the objective function for solving the mapping matrix;

[0144] a mapping matrix global optimal solution solving unit, used to solve the objective function and obtain the globally optimal mapping matrix of the multi-modal feature fusion model;

[0145] an action recognition unit, used to obtain the global optimal solution of the mapping matrix and then p...
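
As referenced above, the following is a minimal structural sketch of how the four units of this device could be organized; all class and method names are hypothetical, and the bodies are left open because the corresponding computations are described only partially in this excerpt.

```python
# Hypothetical structural sketch of the device in Embodiment 2. Each unit
# from paragraphs [0142]-[0145] becomes one component; all names are
# illustrative and the bodies are left open, since the excerpt describes
# the computations only partially.
class DatasetPreprocessingUnit:
    def run(self, raw_videos, raw_csi):
        """Extract CNN video features and statistical CSI features."""
        ...

class FusionModelConstructionUnit:
    def build(self, video_feats, csi_feats):
        """Treat the two feature sets as two modalities and define the
        objective function for the mapping matrix."""
        ...

class MappingMatrixSolverUnit:
    def solve(self, model):
        """Solve the objective for the globally optimal mapping matrix."""
        ...

class ActionRecognitionUnit:
    def recognize(self, mapping_matrix, samples):
        """Project samples into the common space and classify them."""
        ...
```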



Abstract

The invention provides a human body action recognition method and device based on multi-modal feature fusion. The method employs WiFi signals, among the most widely commercialized wireless signals, and fuses the CSI features of those signals with video features through a multi-modal feature fusion method; the two different kinds of features are mapped into the same common space, classification is performed there, and the human body action category is finally identified. Experimental results show that adding WiFi signals and applying the multi-modal feature fusion method significantly improves the accuracy of human body action recognition.

Description

Technical field

[0001] The invention belongs to the technical field of action recognition, and in particular relates to a human body action recognition method and device based on multi-modal feature fusion.

Background technique

[0002] Human action recognition algorithms play a vital role in many fields of computer vision. For video-based action recognition, the most popular methods are based on spatiotemporal and optical information analysis. However, these methods do not perform well in natural environments, where data frame quality is poor and ambient light varies.

[0003] Existing multi-modal models are divided into unsupervised and supervised algorithms. Unsupervised multi-modal algorithms cannot obtain a discriminative common space because they lack label information, which leads to poor results. Currently, the commonly used multi-modal algorithms are GMA (Generalized Multi-view Analysis) and MvDA (Multi-view Discriminant Analysis), both of which map multi-modal sam...
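
For background on the algorithms named in [0003]: MvDA learns one projection per view and maximizes the ratio of between-class to within-class scatter of all projected samples in the common space. The general form of its objective is sketched below; whether the patent's own objective takes this form is not shown in this excerpt.

```latex
% General form of the MvDA objective (Kan et al., "Multi-view Discriminant
% Analysis"), shown only as background; the patent's own objective is not
% reproduced in this excerpt.
(\mathbf{w}_1^{*},\dots,\mathbf{w}_v^{*})
  = \arg\max_{\mathbf{w}_1,\dots,\mathbf{w}_v}
    \frac{\operatorname{Tr}\!\left(\mathbf{S}_{B}^{y}\right)}
         {\operatorname{Tr}\!\left(\mathbf{S}_{W}^{y}\right)}
```

Here S_B^y and S_W^y are the between-class and within-class scatter matrices of the samples from all v views after projection into the common space.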


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62
CPCG06V40/20G06V20/46G06V20/41G06F18/253Y02D30/70
Inventor 郭军石梅常晓军汤战勇刘宝英朱省吾黄位贺怡许鹏飞
Owner NORTHWEST UNIV(CN)