
3D convolutional neural network sign language identification method integrated with multi-modal data

A convolutional neural network recognition method, applied in the field of dynamic sign language recognition and somatosensory interaction. It addresses problems such as unsatisfactory classification performance, slow data computation and analysis, and high algorithmic complexity, with the effect of reduced computational complexity, high classification accuracy, and good robustness.

Active Publication Date: 2018-02-09
HUAZHONG NORMAL UNIV

AI Technical Summary

Problems solved by technology

Existing methods use RGB-D images to train the deep network model. Because the amount of data is relatively large, data computation and analysis are slow, the algorithmic complexity is high, and the resulting classification performance is not ideal.




Embodiment Construction

[0040] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.

[0041] The technical idea of the present invention is to use single-channel infrared and contour data to train two neural sub-networks separately. Each network performs 3D convolution operations on the original input data to extract features along the spatial and temporal dimensions, so that the model can efficiently learn the static and dynamic features of sign language from adjacent frames...
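The patent does not publish source code; as a rough illustration of the single-modality 3D-CNN sub-network described above, the following is a minimal sketch assuming a PyTorch implementation. The layer counts, channel widths, kernel sizes, clip length (16 frames), and frame size (64x64) are illustrative assumptions, not the network configuration claimed in the patent.

```python
# Illustrative sketch of one 3D-CNN sub-network trained on a single modality
# (infrared OR contour frames). All hyperparameters are assumptions.
import torch
import torch.nn as nn

class SignSubNet3D(nn.Module):
    """One sub-network that extracts spatio-temporal features from a clip."""
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            # 3D convolution mixes the spatial (H, W) and temporal (frame) dimensions.
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool space only at first
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 2, 2)),   # now pool time as well
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 16 * 16, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 1 channel, 16 frames, 64, 64) single-channel video
        return self.classifier(self.features(clip))

# Example: one forward pass on a random "infrared" clip.
net = SignSubNet3D(num_classes=20)
scores = net(torch.randn(2, 1, 16, 64, 64))   # -> (2, 20) per-class scores
```

In this sketch the same architecture would be instantiated twice, once for the infrared stream and once for the contour stream, and the two sets of scores would then be integrated for the final classification.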



Abstract

The invention discloses a 3D convolutional neural network sign language identification method integrated with multi-modal data. The method comprises the steps of constructing a deep neural network, performing feature extraction on an infrared image and a contour image of a gesture along the spatial and temporal dimensions of a video, and integrating the outputs of the two networks, each based on a different data format, to perform the final classification of the sign language. The method can accurately extract limb movement trajectory information from the two data formats and effectively reduces the computational complexity of the model. It uses a deep learning strategy to integrate the classification results of the two networks, which effectively addresses the misclassification a single classifier suffers when data is missing, so that the model is more robust to the illumination and background noise of different scenes.
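As an illustration of the integration step described above, the sketch below combines the per-class scores of the two sub-networks (infrared and contour) by a weighted average of their softmax outputs. The score-level averaging and the equal weighting are assumptions for illustration only, not the patent's exact integration strategy.

```python
# Hypothetical score-level fusion of the two modality-specific sub-networks.
import torch
import torch.nn.functional as F

def fuse_predictions(infrared_scores: torch.Tensor,
                     contour_scores: torch.Tensor,
                     w_infrared: float = 0.5) -> torch.Tensor:
    """Combine per-class scores from the two modalities into one prediction."""
    p_ir = F.softmax(infrared_scores, dim=1)
    p_ct = F.softmax(contour_scores, dim=1)
    fused = w_infrared * p_ir + (1.0 - w_infrared) * p_ct
    return fused.argmax(dim=1)          # final sign-language class per sample

# Example: fuse the outputs of the two sub-networks for a batch of 2 clips.
labels = fuse_predictions(torch.randn(2, 20), torch.randn(2, 20))
```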

Description

Technical Field

[0001] The invention belongs to the technical field of educational informatization, and more specifically relates to a dynamic sign language recognition method and system based on a 3D convolutional neural network, which can be applied to somatosensory interaction for the special group of deaf-mute people in science and technology museum environments.

Background Technology

[0002] Sign language is the most useful tool for deaf-mute people to communicate with each other and with hearing people. It is also the most important and natural way for deaf-mute people to obtain information services, participate in social life on an equal footing, and share in society's material and cultural achievements. At the same time, dynamic sign language has very high application value in the field of human-computer interaction because of its strong visual effect and its vivid, intuitive character.

[0003] The existing gesture recognition methods follo...


Application Information

IPC (8): G06K9/00, G06K9/32, G06K9/34, G06K9/46, G06K9/62
CPC: G06V40/28, G06V10/25, G06V10/267, G06V10/44, G06F18/24137, G06F18/23213
Inventor: 廖盛斌, 梁智杰, 杨宗凯, 刘三女牙, 左明章, 刘攀, 吴琼, 郭丰
Owner HUAZHONG NORMAL UNIV