Light human body action recognition method based on deep learning

A human action recognition and deep learning technology, applied in the field of graphics and image processing, which addresses problems such as networks that are too deep and the large parameter counts of existing human action recognition models

Inactive Publication Date: 2019-07-05
CHENGDU UNIV OF INFORMATION TECH

AI Technical Summary

Problems solved by technology

[0009] In order to solve the problems of existing deep-learning-based human action recognition models, such as large parameter counts and networks that are too deep and heavy, the present invention provides a lightweight human action recognition method based on deep learning.

Examples

Embodiment Construction

[0073] In order to make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the embodiments and the accompanying drawings. The specific embodiments described here serve only to explain the present invention and should not be construed as a limitation of the present invention.

[0074] As shown in Figure 3, the present invention first uses the proposed lightweight deep learning network (SDNet), which combines a shallow network with a deep network, to extract and represent features for the spatio-temporal two streams. A temporal pyramid pooling (TPP) layer then aggregates the video frame-level features of the temporal stream and the spatial stream into video-level representations, after which the recognition results of the two streams for the input sequence are obtained through a fully connected layer and a softmax layer. Finally, the two-stream results are fused by weighted averaging to obtain the final recognition result.
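The patent text contains no source code; the following is a minimal PyTorch sketch of the TPP aggregation step described above, assuming frame-level features of shape (batch, frames, feat_dim) produced by the SDNet backbone. The module name, the pyramid levels (1, 2, 4), the max-pooling operator, the feature width of 256, and the 101 action classes are illustrative assumptions, not details disclosed in the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalPyramidPooling(nn.Module):
        """Aggregate frame-level features into a fixed-size video-level vector.

        Hypothetical sketch: the patent names a TPP layer but does not disclose
        its pyramid levels or pooling operator; (1, 2, 4) temporal bins with max
        pooling are assumed here purely for illustration.
        """

        def __init__(self, levels=(1, 2, 4)):
            super().__init__()
            self.levels = levels

        def forward(self, x):
            # x: (batch, frames, feat_dim) frame-level features from the backbone
            pooled = []
            for n in self.levels:
                # adaptively max-pool the temporal axis into n bins
                y = F.adaptive_max_pool1d(x.transpose(1, 2), n)  # (batch, feat_dim, n)
                pooled.append(y.flatten(1))                      # (batch, feat_dim * n)
            # video-level representation: (batch, feat_dim * sum(levels))
            return torch.cat(pooled, dim=1)

    # One stream's head: TPP -> fully connected -> softmax, with an assumed
    # feature width of 256 and an assumed 101 action classes.
    feat_dim, num_classes = 256, 101
    tpp = TemporalPyramidPooling()
    head = nn.Sequential(
        nn.Linear(feat_dim * sum(tpp.levels), num_classes),
        nn.Softmax(dim=1),
    )
    frames = torch.randn(8, 25, feat_dim)  # 8 clips, 25 frame-level features each
    stream_scores = head(tpp(frames))      # (8, 101) per-stream class scores

Max pooling at several temporal scales is one common way to retain coarse temporal ordering while producing a fixed-size vector regardless of clip length, which is what aggregating frame-level features into a video-level representation requires.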

Abstract

The invention discloses a lightweight human body action recognition method based on deep learning. The method first constructs a lightweight deep learning network (SDNet) that combines a shallow network with a deep network, the network comprising a shallow multi-scale module and a deep network module, and builds a lightweight deep-learning-based human body action recognition model on this network. In the model, SDNet first performs feature extraction and representation on the spatio-temporal two streams; a temporal pyramid pooling layer then aggregates the video frame-level features of the temporal stream and the spatial stream into video-level representations; the recognition results of the two streams for the input sequence are obtained via a fully connected layer and a softmax layer; finally, the two-stream results are fused by weighted averaging to obtain the final recognition result. By adopting this lightweight deep-learning-based human body action recognition method, the number of model parameters can be greatly reduced while ensuring that recognition accuracy does not decrease.
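To illustrate the final fusion step, here is a short sketch in the same assumed PyTorch setting: each stream yields softmax scores, and the final result is their weighted average. The specific weights (0.4/0.6 in the example call) are hypothetical; the abstract states only that weighted-average fusion is used.

    import torch

    def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.5):
        """Weighted-average fusion of per-stream softmax scores.

        The patent specifies weighted-average fusion but not the weights;
        equal weights are the assumed default here.
        """
        return w_spatial * spatial_scores + (1.0 - w_spatial) * temporal_scores

    # Stand-in softmax scores for the spatial (RGB) and temporal (flow) streams
    spatial = torch.softmax(torch.randn(8, 101), dim=1)
    temporal = torch.softmax(torch.randn(8, 101), dim=1)
    final_scores = fuse_two_stream(spatial, temporal, w_spatial=0.4)
    predictions = final_scores.argmax(dim=1)  # predicted action class per clip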

Description

Technical Field

[0001] The present invention relates to the technical field of graphics and image processing, and in particular to a supervised lightweight human action recognition method based on deep learning.

Background Art

[0002] The main problem to be solved in human action recognition is how to analyze and process video sequences collected by cameras or sensors so that a computer can "understand" the human actions and behaviors in the video. This has important research significance for security monitoring, entertainment, and other applications, and video-based human action recognition is also widely used in human-computer interaction, virtual reality, smart home devices, and other fields. Recognizing human actions, or understanding human behavior, is essential for many AI systems. For example, a video surveillance system contains hundreds of hours of surveillance video; if the surveillance video is traversed manually, not only is the work tedious and lengthy...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04
CPC: G06V40/20; G06V20/46; G06N3/045
Inventors: 魏维, 何冰倩, 魏敏
Owner: CHENGDU UNIV OF INFORMATION TECH