Multi-modal fusion sign language recognition system and method based on graph convolution

A recognition system using multi-modal technology, applied in character and pattern recognition, neural learning methods, biological neural network models, etc. It addresses the problems of complex video data and the poor robustness and low accuracy of fused features, and achieves improved coherence and accuracy of translation through features with strong representation ability.

Active Publication Date: 2020-06-09
HEFEI UNIV OF TECH

Problems solved by technology

[0004] Due to the complexity of video data, existing sign language recognition has many shortcomings, especially in the representation and fusion of multi-modal data. When existing methods use data from multiple modal sources, they often ignore the complementary associations between the modalities and fuse them by brute force, so the robustness of the fused features is poor. On the other hand, during feature learning, the temporal and spatial characteristics of the video data stream are under-explored and the time-varying characteristics of sign language features are not fully utilized, resulting in poor coherence and low accuracy of the sign language translation results.


Examples


Embodiment Construction

[0037] The technical solution of the present invention is described in detail below in conjunction with the accompanying drawings.

[0038] In this embodiment, a multi-modal fusion sign language recognition system based on graph convolution, as shown in Figure 1, includes: a feature extraction module, a feature fusion module, a sequence learning module, and an alignment translation module.
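To make the module chain concrete, the following is a minimal numpy sketch of how the four modules might connect. All tensor shapes, the sum-based fusion, the moving-average stand-in for the bidirectional RNN, and the vocabulary size are illustrative assumptions, not the patent's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extraction(frames):
    # Placeholder for the CNN/GNN extractors that produce color,
    # depth, and skeleton features for each of the T frames.
    T = frames.shape[0]
    u_c = rng.standard_normal((T, 512))   # color feature
    u_d = rng.standard_normal((T, 512))   # depth feature
    u_s = rng.standard_normal((T, 256))   # skeleton feature
    return u_c, u_d, u_s

def feature_fusion(u_c, u_d, u_s, dim=512):
    # Dimensionally align each modality with a linear map,
    # then sum (one simple fusion choice, assumed here).
    def align(u):
        W = rng.standard_normal((u.shape[1], dim)) / np.sqrt(u.shape[1])
        return u @ W
    return align(u_c) + align(u_d) + align(u_s)

def sequence_learning(f):
    # Stand-in for a bidirectional RNN: running averages taken in
    # both time directions, concatenated per step.
    denom = np.arange(1, len(f) + 1)[:, None]
    fwd = np.cumsum(f, axis=0) / denom
    bwd = (np.cumsum(f[::-1], axis=0) / denom)[::-1]
    return np.concatenate([fwd, bwd], axis=1)

def alignment_translation(h, vocab_size=100):
    # Stand-in for the alignment/translation step: per-step class scores.
    W = rng.standard_normal((h.shape[1], vocab_size)) / np.sqrt(h.shape[1])
    return (h @ W).argmax(axis=1)

frames = rng.standard_normal((16, 224, 224, 3))  # 16 dummy video frames
labels = alignment_translation(sequence_learning(feature_fusion(*feature_extraction(frames))))
print(labels.shape)  # prints (16,): one predicted token id per frame
```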

[0039] The feature extraction module extracts the color feature u_c, depth feature u_d, and skeleton feature u_s of the video frames from the sign language video database, and dimensionally aligns all extracted features to obtain the multi-modal feature f;
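Since the skeleton feature u_s is where graph convolution enters the pipeline, here is a minimal numpy sketch of one graph-convolution layer over a toy joint graph. The 5-joint chain adjacency, feature sizes, and mean-pooling are assumptions for illustration only.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: symmetrically normalized adjacency,
    then a linear map and ReLU.
    X: (J, C) per-joint features, A: (J, J) adjacency, W: (C, C_out)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Toy 5-joint chain (e.g. shoulder-elbow-wrist-palm-finger); the
# adjacency is a hypothetical example, not the patent's skeleton graph.
J = 5
A = np.zeros((J, J))
for i in range(J - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(0)
X = rng.standard_normal((J, 3))         # 3-D joint coordinates
W = rng.standard_normal((3, 8))
u_s = gcn_layer(X, A, W).mean(axis=0)   # pooled per-frame skeleton feature
print(u_s.shape)                        # prints (8,)
```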

[0040] In this embodiment, the sign language video database contains sign language video data for 100 common sentences; 50 people demonstrate the sign language for each sentence, for a total of 5000 videos.

[0041] In the specific implementation, the ResNet-18 netw...


Abstract

The invention discloses a multi-modal fusion sign language recognition system and method based on graph convolution. The system comprises a feature extraction module, a feature fusion module, a sequence learning module and an alignment translation module. The method comprises the following steps: 1, respectively extracting color, depth and skeleton features of video frames from a sign language video database using a convolutional neural network and a graph neural network; 2, combining the multi-modal features and fusing them through a multi-modal sequence fusion network; 3, constructing a bidirectional recurrent neural network to perform sequence learning on the series of fused fragment-level features; and 4, aligning the feature sequence through a connectionist temporal classification (CTC) model and translating the complete sign language sentence. The invention realizes translation of continuous sign language sentences and improves the accuracy of continuous sign language translation.
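Assuming the alignment model in step 4 is connectionist temporal classification (CTC), the decoding of per-frame predictions into a sentence can be sketched with the standard greedy rule: merge consecutive repeated labels, then drop blanks. This is a generic CTC sketch, not the patent's specific decoder.

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse a per-frame label sequence CTC-style:
    merge consecutive repeats, then drop the blank symbol."""
    out = []
    prev = None
    for t in frame_ids:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out

# Frame-level argmax ids for a short clip (0 = blank).
print(ctc_greedy_decode([0, 3, 3, 0, 0, 5, 5, 5, 0, 2]))  # [3, 5, 2]
```

Note that the blank symbol lets CTC distinguish a repeated word from one word held over several frames: `[1, 1]` decodes to `[1]`, while `[1, 0, 1]` decodes to `[1, 1]`.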

Description

technical field

[0001] The invention belongs to the field of multimedia information processing, relates to technologies such as computer vision, natural language processing, and deep learning, and specifically relates to a multi-modal fusion sign language recognition system and method based on graph convolution.

Background technique

[0002] Able-bodied people can communicate easily using spoken language, while deaf or mute people need to convey their thoughts through sign language. Since most able-bodied people lack a foundation in sign language education, there are obstacles to using sign language for normal social communication. With the development of technology, sign language recognition facilitates the integration of deaf people into society to a certain extent.

[0003] Early sign language recognition research focused on discrete sign language recognition, which is essentially a special video classification problem. With the development of video unders...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/28, G06V40/107, G06N3/045, G06F18/241, Y02D10/00
Inventors: Guo Dan (郭丹), Tang Shengeng (唐申庚), Liu Xianglong (刘祥龙), Hong Richang (洪日昌), Wang Meng (汪萌)
Owner: HEFEI UNIV OF TECH