
Mutually translating system and method of sign language and speech

A sign language and speech technology in the field of image pattern recognition, addressing the weak ability of prior methods to process time series, and achieving low cost, a high recognition rate, and convenient use

Inactive Publication Date: 2009-09-23
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

Neural network methods offer good classification ability and noise robustness, but because they handle time series poorly, they are currently applied mainly to static sign language recognition.
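The time-series limitation noted above is why dynamic sign language recognition is typically handled with sequence models such as hidden Markov models rather than plain feed-forward networks. As an illustration only (the patent does not disclose the internals of its sign language model), here is a minimal Viterbi decoder over a toy two-state HMM; all states, symbols, and probabilities are invented for the example:

```python
# Minimal Viterbi decoding over a toy 2-state HMM -- an illustration of
# sequence modelling, NOT the patent's actual recognizer.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    prob, best = max((V[-1][s], s) for s in states)
    return prob, path[best]

# Toy model: is the hand "static" or "moving", given motion-energy observations?
states = ("static", "moving")
start = {"static": 0.6, "moving": 0.4}
trans = {"static": {"static": 0.7, "moving": 0.3},
         "moving": {"static": 0.4, "moving": 0.6}}
emit = {"static": {"low": 0.8, "high": 0.2},
        "moving": {"low": 0.3, "high": 0.7}}

prob, seq = viterbi(["low", "high", "high"], states, start, trans, emit)
print(seq)  # -> ['static', 'moving', 'moving']
```

The key property is that the decoder scores whole state *sequences*, so temporal context influences every decision, which is exactly what static per-frame classifiers lack.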



Examples


Detailed Description of the Embodiments

[0031] The present invention is described in further detail below in conjunction with the accompanying drawings:

[0032] Referring to Figures 1 through 6: to meet the requirement of two-way interaction between hearing people and deaf-mute people, the present invention divides the whole system into two subsystems, vision-based sign language recognition and speech translation.

[0033] A sign language and speech inter-translation system is composed of a vision-based sign language recognition subsystem 1 and a speech translation subsystem 2.

[0034] The vision-based sign language recognition subsystem 1 consists of a gesture image acquisition module 101, an image preprocessing module 102, an image feature extraction module 103, a sign language model 104, a continuous dynamic sign language recognition module 105, and a Chinese sounding module 106. The gesture image acquisition module 101 collects gesture video data and inputs it into the image preprocessi...
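The module chain in paragraph [0034] is a linear pipeline: acquisition, preprocessing, 56-dimensional feature extraction, model-based recognition, then Chinese speech output. The sketch below shows only the dataflow; the module numbers come from the text, but every function body is a placeholder I invented, not the patent's implementation:

```python
# Schematic of the vision-based sign language recognition pipeline
# (modules 101-106 from the description). All function bodies are
# placeholder stand-ins for illustration only.

def acquire_gesture_video():                  # module 101: camera capture
    return ["frame0", "frame1", "frame2"]     # stand-in for video frames

def preprocess(frames):                       # module 102: e.g. denoise/segment
    return [f + ":preprocessed" for f in frames]

def extract_features(frames):                 # module 103: one 56-dim vector per frame
    return [[0.0] * 56 for _ in frames]

def recognize(feature_vectors):               # modules 104-105: model + decoder
    return "hello"                            # recognized sign word (placeholder)

def speak_chinese(word):                      # module 106: text-to-speech output
    return f"<speech:{word}>"

frames = acquire_gesture_video()
features = extract_features(preprocess(frames))
word = recognize(features)
print(speak_chinese(word))
```

The fixed 56-dimensional vector per frame is the only interface contract between the front end (101-103) and the recognizer (104-105), which is what lets the two halves be developed independently.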



Abstract

The invention discloses a system for mutual translation between sign language and speech. A gesture image acquisition module 101 collects gesture video data; an image preprocessing module 102 preprocesses the images; an image feature extraction module 103 extracts features from the preprocessed video data and outputs 56-dimensional feature vectors, which are used to construct a sign language model 104. A continuous dynamic sign language recognition module 105 performs recognition against the sign language model 104, and the recognition results are output and translated into Chinese speech by a Chinese sounding module 106. In the other direction, voice signals collected by a voice signal acquisition device are fed into the speech recognition programming interface of Microsoft Speech SDK 5.1 and converted into text for output. Three-dimensional models and three-dimensional animations are built with three-dimensional modeling software, exported to .x format files via a Panda plug-in, and loaded with DirectX 3D to output sign language animation.
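The speech-to-sign direction in the abstract reduces to two steps: recognize speech to text (via the Microsoft Speech SDK 5.1 interface named above), then map each word to a prebuilt .x animation clip for DirectX 3D playback. A toy sketch of only the word-to-clip lookup step; the clip table, filenames, and function are invented for illustration and are not the patent's data:

```python
# Toy word -> animation-clip lookup for the speech-to-sign direction.
# The table and filenames are invented placeholders; the actual system
# loads Panda-exported .x models/animations through DirectX 3D.
SIGN_CLIPS = {
    "hello": "hello.x",
    "thank": "thank.x",
    "you": "you.x",
}

def text_to_clip_sequence(recognized_text):
    """Map recognized words to .x animation clips, skipping unknown words."""
    clips = []
    for word in recognized_text.lower().split():
        clip = SIGN_CLIPS.get(word)
        if clip is not None:
            clips.append(clip)
    return clips

print(text_to_clip_sequence("Hello thank you"))  # -> ['hello.x', 'thank.x', 'you.x']
```

Precomputing one animation file per vocabulary item keeps playback a pure lookup, so the rendering side never needs the recognizer's internals.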

Description

Technical field:

[0001] The invention belongs to the field of image pattern recognition, and in particular relates to the application of a method for mutual conversion between images and voice in image processing and feature extraction.

Background technique:

[0002] Research on sign language and speech inter-translation systems not only helps to improve the living, learning, and working conditions of deaf-mute people and provides them with better services, but can also be applied to computer-assisted sign language teaching, bilingual broadcasting of TV programs, virtual human research, special effects in film production, animation production, medical research, game entertainment, and many other areas.

[0003] From the perspective of sign language input devices, sign language recognition systems are mainly divided into data glove-based recognition systems and vision (image)-based sign language recognition systems.

[0004] The vision-bas...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/20, G06K9/62, G10L15/26, G10L21/06, G10L21/10
Inventors: 冯祖仁, 郭文涛, 郑珂, 张翔, 常洪浩
Owner: XI AN JIAOTONG UNIV