Scene matching fused Chinese sign language translation model construction method and device

A technology relating to scene matching and sign language translation, applied in the field of constructing Chinese sign language translation models fused with scene matching. It addresses problems such as the unexplored polysemy of sign language, low accuracy, and misjudged scene classification, with the effect of improving translation accuracy, stability, efficiency, and speed.

Active Publication Date: 2021-04-16
株洲手之声信息科技有限公司

AI Technical Summary

Problems solved by technology

[0004] For the recognition of natural scenes, the scene classification task is currently set as a single-label problem: a neural network, driven by massive labeled data, assigns each scene exactly one class. Real scenes, however, usually contain multiple pieces of label information, and these labels may correspond to objects belonging to different scenes, which leads to misjudgment of the scene class. If a scene classification model is introduced directly on top of the translation model, that is, a separate scene classification model is built and its classification result is used directly to guide translation, then any inaccuracy in the scene classification model easily propagates into the translation: a wrongly classified scene yields an inaccurate translation.
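The contrast drawn above, between forcing a single scene label and keeping every label a scene supports, can be sketched as follows. This is a hypothetical illustration, not the patent's classifier: the scene names, scores, and threshold are assumptions for the example.

```python
# Hypothetical sketch: why single-label scene classification can misjudge a
# scene that carries multiple labels. Scores stand in for classifier outputs.

def single_label_scene(scores):
    """Single-label classification: keep only the top-scoring scene."""
    return max(scores, key=scores.get)

def multi_label_scene(scores, threshold=0.5):
    """Multi-label classification: keep every scene whose score clears a threshold."""
    return sorted(label for label, s in scores.items() if s >= threshold)

# A frame filmed in a hospital lobby that also contains a cafeteria counter:
scores = {"hospital": 0.62, "restaurant": 0.58, "classroom": 0.05}

print(single_label_scene(scores))  # only "hospital" survives
print(multi_label_scene(scores))   # both plausible scenes are kept
```

The single-label variant silently discards the second scene even when its score is nearly as high, which is the misjudgment risk the paragraph above describes.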
[0005] In summary, current research on Chinese sign language translation is still at the initial stage of extracting sign language behavior features and mapping them to sign language text; the effect of sign language polysemy across different scenes and contexts has not been explored further. At the same time, current Chinese sign language translation fails to use the sign language scene to re-optimize the translation result, which limits further improvement in accuracy, and directly introducing scene classification suffers from dependence on the classification accuracy of the scene classification model.



Embodiment Construction

[0038] The present invention will be further described below in conjunction with the accompanying drawings and specific preferred embodiments, but the scope of protection of the present invention is not limited thereto.

[0039] As shown in Figure 1, the method for constructing a Chinese sign language translation model fused with scene matching in this embodiment comprises the following steps:

[0040] S1. Model construction: build a sign language word recognition model of the mapping relationship between sign language actions and words, and build a scene matching model of the mapping relationship between vocabularies and sign language actions in different scenes;

[0041] S2. Model training: use the training data set to train the sign language word recognition model and the scene matching model respectively, and obtain the trained sign language word recognition model and the scene matching model;

[0042] S3. Dynamic update: cascade the trained sign language word recognition model and the scene matching model to form a Chinese sign language translation model; acquire a sign language action data set and input it respectively into the sign language word recognition model and the scene matching model of the Chinese sign language translation model to update them, until the two models reach dynamic balance, thereby obtaining the final Chinese sign language translation model.
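The cascade in step S3 can be sketched as follows. This is a minimal toy illustration, not the patent's neural networks: the `recognize`, `scene_match`, and `translate` functions, the score dictionaries, and the example weights are all assumptions made for the sketch. The recognition model proposes candidate words for a sign, and the scene matching model re-weights those candidates by how well each word fits the current scene.

```python
# Toy sketch of cascading a word recognition model with a scene matching
# model (step S3). All names and numbers here are illustrative assumptions.

def recognize(action, word_scores):
    """Sign language word recognition: candidate words with confidence scores."""
    return word_scores.get(action, {})

def scene_match(candidates, scene_weights):
    """Scene matching: re-rank candidates using scene-conditioned weights."""
    return {w: s * scene_weights.get(w, 1.0) for w, s in candidates.items()}

def translate(action, word_scores, scene_weights):
    """Cascade the two models and pick the best-fitting word."""
    reranked = scene_match(recognize(action, word_scores), scene_weights)
    return max(reranked, key=reranked.get)

# Toy data: the same sign is ambiguous between two words (sign language polysemy).
word_scores = {"sign_A": {"doctor": 0.55, "teacher": 0.50}}
# In a hospital scene, "doctor" fits the vocabulary better than "teacher".
hospital_weights = {"doctor": 1.2, "teacher": 0.8}

print(translate("sign_A", word_scores, hospital_weights))  # "doctor"
```

The point of the cascade is visible in the toy data: the recognition scores alone barely separate the two candidate words, while the scene-conditioned re-weighting resolves the polysemy in favor of the word that fits the scene.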


Abstract

The invention discloses a method and device for constructing a Chinese sign language translation model fused with scene matching. The method comprises the steps of: S1, constructing a sign language word recognition model of the mapping relationship between sign language actions and words, and constructing a scene matching model of the mapping relationship between vocabularies and sign language actions in different scenes; S2, respectively training the sign language word recognition model and the scene matching model to obtain the trained models; S3, cascading the trained sign language word recognition model and scene matching model to form a Chinese sign language translation model, acquiring a sign language action data set, and respectively inputting it into the sign language word recognition model and the scene matching model of the Chinese sign language translation model to update the models, until the two models reach dynamic balance, thereby obtaining the final Chinese sign language translation model. The method and device have the advantages of a simple implementation method, high construction efficiency, and high accuracy.

Description

Technical Field

[0001] The invention relates to the technical field of Chinese sign language translation, in particular to a method and device for constructing a Chinese sign language translation model fused with scene matching.

Background

[0002] At present, sign language translation mainly obtains the user's body characteristics through wearable devices or image sensing devices in order to analyze the meaning of the user's sign language. Tests have shown that wearable devices suffer from problems such as high cost, poor portability, and unpredictable interference with the user's behavior while expressing sign language, so at present images or videos are usually collected with image sensing devices.

[0003] With the development of deep learning, neural networks can be used to mine deeper and more abstract features in images and to establish a more relevant mapping from features to sign language text. For example, use AlexNet to extract features from the input image, use the ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
Inventors: 陈斌, 牟中强
Owner: 株洲手之声信息科技有限公司