
Multi-modal dynamic gesture recognition method based on lightweight 3D residual network and TCN

A dynamic gesture recognition method, applied in character and pattern recognition, biological neural network models, neural learning methods, etc., that addresses the generally high complexity of existing methods.

Active Publication Date: 2021-03-16
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Combining a lightweight 3D residual network with a TCN is expected to solve the problem of the generally high complexity of existing methods.




Embodiment Construction

[0044] The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention.

[0045] The technical scheme by which the present invention solves the above-mentioned technical problems is as follows:

[0046] As shown in Figure 1, the multi-modal dynamic gesture recognition method based on a lightweight 3D residual network and TCN provided in this embodiment comprises the following steps:

[0047] Step 1: According to the frame rate of each gesture video in the original dataset, sample the video to generate a number of pictures corresponding to that frame rate, and sort and save the pictures in chronological order. To ensure that the input data has the same dimension, a sliding-window method is used to set the input reference frame number of each gest...
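A minimal sketch of the sampling and window-sliding described in Step 1, assuming OpenCV for frame decoding; the reference frame count (32) and all function names are illustrative assumptions, not the patent's stated parameters.

    import os

    import cv2


    def sample_video(video_path, out_dir):
        """Decode every frame of a gesture video and save the frames in
        chronological order as zero-padded JPEG files (Step 1)."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        count = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, f"{count:05d}.jpg"), frame)
            count += 1
        cap.release()
        return count


    def sliding_window_indices(num_frames, ref_len=32):
        """Map a variable-length frame sequence onto a fixed reference
        length so every input sample has the same dimension."""
        if num_frames >= ref_len:
            # Enough frames: take a centered window of ref_len frames.
            start = (num_frames - ref_len) // 2
            return list(range(start, start + ref_len))
        # Too few frames: repeat the last frame to pad up to ref_len.
        return list(range(num_frames)) + [num_frames - 1] * (ref_len - num_frames)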



Abstract

The invention discloses a multi-modal dynamic gesture recognition method based on a lightweight 3D residual network and a TCN. First, the original videos in a dataset are sampled, sorted, and stored in chronological order. The lightweight 3D residual network is then pre-trained on a large public gesture recognition dataset, and the model's weight file is stored. Next, the RGB-D image sequence is used as input, the lightweight 3D residual network and the temporal convolutional network serve as base models to extract long- and short-term spatial and temporal features, and an attention mechanism performs weighted fusion of the multi-modal information, with the RGB sequence and the Depth sequence each fed into an identical network structure. Finally, a fully connected layer performs classification, a cross-entropy loss function computes the loss value, and accuracy and F1-score serve as the evaluation indexes of the network model. The invention achieves high classification accuracy while keeping the parameter count low.
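A short PyTorch sketch of the attention-weighted fusion described above; the feature dimension, class count, and module names are assumptions for illustration, not the patent's exact design. Each modality's feature vector, produced by its own identical lightweight-3D-ResNet + TCN branch, receives a learned weight before the fused vector is classified by a fully connected layer.

    import torch
    import torch.nn as nn


    class AttentionFusion(nn.Module):
        """Weighted fusion of RGB and Depth features (illustrative)."""

        def __init__(self, feat_dim=256, num_classes=27):
            super().__init__()
            # One scalar attention score per modality feature vector.
            self.score = nn.Linear(feat_dim, 1)
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, rgb_feat, depth_feat):
            # Both inputs: (batch, feat_dim), one per modality branch.
            feats = torch.stack([rgb_feat, depth_feat], dim=1)  # (B, 2, D)
            weights = torch.softmax(self.score(feats), dim=1)   # (B, 2, 1)
            fused = (weights * feats).sum(dim=1)                # (B, D)
            return self.classifier(fused)


    # Cross-entropy loss on the fused logits, as the abstract states.
    model = AttentionFusion()
    logits = model(torch.randn(4, 256), torch.randn(4, 256))
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 27, (4,)))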

Description

Technical Field

[0001] The invention belongs to the technical field of video spatiotemporal feature extraction and classification, and in particular relates to a lightweight heterogeneous structure for dynamic gesture spatiotemporal feature extraction that reduces the model parameter count while maintaining model performance.

Background Technique

[0002] Gestures are a common form of human communication, and gesture recognition enables natural human-computer interaction. Gesture recognition aims to understand human actions by extracting features from images or videos and then classifying or recognizing each sample as a specific label. Traditional gesture recognition is based mainly on manually extracted features. Although such methods can achieve good recognition results, they rely on the experience of researchers to design the features, and manually extracted features adapt poorly to dynamic gestures.

[0003] With the development of deep learning, end-to-end gestu...
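The visible text does not specify how the residual network is made lightweight; a common device for cutting 3D-convolution parameters is a depthwise-separable factorization, sketched below in PyTorch purely as an illustrative assumption about the block design.

    import torch
    import torch.nn as nn


    class LightResBlock3D(nn.Module):
        """One possible lightweight 3D residual block (assumed design)."""

        def __init__(self, channels):
            super().__init__()
            # Depthwise 3x3x3 conv: one filter per channel (groups=channels),
            # far fewer parameters than a full 3D convolution.
            self.depthwise = nn.Conv3d(channels, channels, kernel_size=3,
                                       padding=1, groups=channels, bias=False)
            # Pointwise 1x1x1 conv mixes channels cheaply.
            self.pointwise = nn.Conv3d(channels, channels, kernel_size=1,
                                       bias=False)
            self.bn = nn.BatchNorm3d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn(self.pointwise(self.depthwise(x))))
            # Residual connection preserves gradient flow, as in standard ResNets.
            return self.relu(x + out)


    # Input layout: (batch, channels, frames, height, width) video features.
    y = LightResBlock3D(64)(torch.randn(2, 64, 16, 28, 28))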


Application Information

IPC(8): G06K 9/00; G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06V 40/107; G06V 40/28; G06N 3/047; G06N 3/048; G06N 3/045; G06F 18/241; G06F 18/2415
Inventors: 唐贤伦, 闫振甫, 李洁, 彭德光, 彭江平, 郝博慧, 朱楚洪, 李鹏华
Owner: CHONGQING UNIV OF POSTS & TELECOMM