
A Multimodal Dynamic Gesture Recognition Method Based on Lightweight 3D Residual Network and TCN

A dynamic gesture recognition method, applied in character and pattern recognition, biological neural network models, neural learning methods, etc., that addresses the high complexity of existing methods

Active Publication Date: 2022-07-01
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

The combination of a lightweight 3D residual network and a TCN is expected to solve the problem of the generally high complexity of existing methods.




Embodiment Construction

[0044] The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention.

[0045] The technical scheme by which the present invention solves the above technical problems is as follows:

[0046] As shown in Figure 1, the multimodal dynamic gesture recognition method based on a lightweight 3D residual network and TCN provided by this embodiment includes the following steps:

[0047] Step 1: Sample each gesture video in the original dataset according to its frame rate, generating a number of images corresponding to that frame rate, and save the images in chronological order. To ensure that all inputs have the same dimensions, a sliding-window method is used to set the reference number of input frames for each gesture video. It...
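The sampling step above can be sketched as follows. This is an illustrative reconstruction, not the patent's actual code: the function name, the default reference frame count of 32, and the even-stride window placement are all assumptions; the patent only specifies that a sliding window fixes the input frame count.

```python
# Sketch: reduce a variable-length frame sequence to a fixed reference
# number of frames so every gesture clip has the same temporal dimension.
# Names and the ref_frames default are illustrative assumptions.

def sample_frame_indices(num_frames, ref_frames=32):
    """Return `ref_frames` frame indices in chronological order."""
    if num_frames >= ref_frames:
        # Stride a window across the clip: evenly spaced indices
        # so the whole gesture is covered.
        stride = num_frames / ref_frames
        return [int(i * stride) for i in range(ref_frames)]
    # Shorter clips: repeat the last frame to pad to the reference length.
    return list(range(num_frames)) + [num_frames - 1] * (ref_frames - num_frames)

print(sample_frame_indices(64, 8))  # [0, 8, 16, 24, 32, 40, 48, 56]
print(sample_frame_indices(5, 8))   # [0, 1, 2, 3, 4, 4, 4, 4]
```

The selected indices are then used to read the saved images in order, yielding an input tensor of identical shape for every video.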



Abstract

The present invention claims to protect a multimodal dynamic gesture recognition method based on a lightweight 3D residual network and TCN. First, the original videos in the dataset are sampled and saved in chronological order. Then, a lightweight 3D residual network is pre-trained on a large public gesture recognition dataset and the model weight files are saved. Next, RGB-D image sequences are used as input: with the lightweight 3D residual network and the temporal convolutional network as the base model, long- and short-term spatial and temporal features are extracted, and an attention mechanism weights and fuses the multimodal information, with the RGB and depth (Depth) sequences fed separately into the same network structure. Finally, a fully connected layer performs classification, the cross-entropy loss function computes the loss value, and accuracy and F1-score serve as the evaluation metrics of the network model. The invention achieves high classification accuracy while keeping the parameter count low.
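The fusion and loss steps in the abstract can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the patented model: each modality's feature vector would come from the shared 3D-residual/TCN branch, and the attention scores here (feature means) replace whatever learned scoring layer the patent uses. Only the overall pattern — softmax attention weights over modalities, weighted sum, cross-entropy loss — follows the abstract.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scalar scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(rgb_feat, depth_feat):
    # Stand-in attention scores (feature means); a real model learns this.
    scores = [sum(rgb_feat) / len(rgb_feat), sum(depth_feat) / len(depth_feat)]
    w_rgb, w_depth = softmax(scores)
    # Fused feature: attention-weighted sum of the two modality features.
    return [w_rgb * r + w_depth * d for r, d in zip(rgb_feat, depth_feat)]

def cross_entropy(probs, label):
    # Loss named in the abstract: negative log-probability of the true class.
    return -math.log(probs[label])

fused = attention_fuse([0.2, 0.8, 0.5], [0.1, 0.4, 0.3])
print(fused)
```

Each fused component lies between the corresponding RGB and depth values, since the attention weights are a convex combination.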

Description

technical field

[0001] The invention belongs to the technical field of video spatiotemporal feature extraction and classification, and in particular relates to a lightweight heterogeneous structure for dynamic-gesture spatiotemporal feature extraction that both reduces the number of model parameters and preserves model performance.

Background technique

[0002] Gestures are a common form of human communication, and gesture recognition enables human-computer interaction in a natural way. Gesture recognition aims to understand human movements by extracting features from images or videos and then classifying or identifying each sample with a specific label. Traditional gesture recognition is mainly based on manually extracted features. Although such methods can achieve good recognition results, they rely on researchers' experience to design the features, and manually extracted features adapt poorly to dynamic gestures.

[0003] With the development...
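The parameter-reduction claim in [0001] can be made concrete with back-of-the-envelope arithmetic. The comparison below uses a depthwise-separable 3D convolution, one common lightweight substitution for a full 3D convolution; the patent does not specify its exact blocks, so this is an illustrative assumption only.

```python
# Parameter-count arithmetic: standard 3x3x3 3D convolution vs. a
# depthwise-separable variant (depthwise 3D conv + 1x1x1 pointwise conv).
# Bias terms omitted for simplicity.

def conv3d_params(c_in, c_out, k=3):
    return c_in * c_out * k ** 3

def sep_conv3d_params(c_in, c_out, k=3):
    return c_in * k ** 3 + c_in * c_out

full = conv3d_params(64, 64)      # 64 * 64 * 27 = 110592
sep = sep_conv3d_params(64, 64)   # 64 * 27 + 64 * 64 = 5824
print(full, sep, round(full / sep, 1))  # ~19x fewer parameters
```

At 64 channels the separable layer uses roughly 19 times fewer weights, which is the kind of saving that makes a "lightweight" 3D residual network feasible for video input.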

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/10, G06V40/20, G06V10/764, G06V10/82, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/107, G06V40/28, G06N3/047, G06N3/048, G06N3/045, G06F18/241, G06F18/2415
Inventors: 唐贤伦, 闫振甫, 李洁, 彭德光, 彭江平, 郝博慧, 朱楚洪, 李鹏华
Owner CHONGQING UNIV OF POSTS & TELECOMM