
Lip language recognition-based method for improving speech comprehension degree of patient with severe hearing impairment

A lip-language recognition technology for patients, applied in the field of speech comprehension, which addresses the limited effectiveness of existing approaches for severely hearing-impaired patients and achieves the effects of improving speech comprehension, avoiding semantic loss, and improving language understanding.

Pending Publication Date: 2021-02-05
NANJING INST OF TECH


Problems solved by technology

[0010] Purpose of the invention: aiming at the problem that prior-art speech hearing-aid methods have limited effect for patients with severe hearing impairment, the present invention discloses a method, based on lip-language recognition, for improving the speech comprehension of such patients. A spatial-information feedback module and a temporal-information feedback module are introduced to assist training, so as to capture fine-grained lip features, handle long- and short-term dependencies, suppress redundant word information, and improve the robustness and accuracy of lip-language recognition. The method is ingenious and novel, and has good application prospects.


Detailed Description of the Embodiments

[0053]The present invention will be further explained below in conjunction with the drawings.

[0054] The invention discloses a method for improving the speech comprehension of severely hearing-impaired patients based on lip-language recognition which, as shown in Figure 1, includes the following steps:

[0055] Step (A): use an image acquisition device to collect a sequence of lip-motion images from the real environment as the input features of the deep neural network.
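Step (A) can be sketched as a small preprocessing routine. This is a minimal, dependency-free illustration, not the patent's implementation: the function name, the 29-frame sequence length, and the 88x88 crop size are assumptions chosen for the example, and the nearest-neighbour resize stands in for a proper image-processing library.

```python
import numpy as np

def frames_to_input(frames, seq_len=29, size=(88, 88)):
    """Stack grayscale lip-ROI frames into a fixed-length (T, H, W) tensor.

    `frames` is a list of 2-D uint8 arrays already cropped to the lip region.
    Sequences are padded (by repeating the last frame) or truncated to
    `seq_len`, and pixel values are normalized to [0, 1].
    """
    if not frames:
        raise ValueError("empty frame sequence")
    # Pad by repeating the last frame, or truncate, to a fixed length.
    if len(frames) < seq_len:
        frames = frames + [frames[-1]] * (seq_len - len(frames))
    frames = frames[:seq_len]
    # Nearest-neighbour resize keeps the sketch dependency-free.
    h, w = size
    out = np.empty((seq_len, h, w), dtype=np.float32)
    for t, f in enumerate(frames):
        ys = np.arange(h) * f.shape[0] // h
        xs = np.arange(w) * f.shape[1] // w
        out[t] = f[np.ix_(ys, xs)].astype(np.float32) / 255.0
    return out
```

A ten-frame clip would thus come out as a `(29, 88, 88)` array ready to feed to the network, with the last frame repeated to fill the sequence.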

[0056] Step (B): construct a deep-learning-based visual-modality voice endpoint detection method to confirm the position of the voice segment under low signal-to-noise-ratio conditions. The endpoint detection method uses key points to detect and estimate the motion state of the lips and their relative positions, and builds a model on this basis to decide whether a frame belongs to a voice segment, as follows:
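The idea of deciding voice segments from lip key-point geometry can be illustrated with a simplified heuristic: a lip-opening ratio computed from four key points, thresholded over time. The four-point layout, the threshold, and the minimum run length are all assumptions for the sketch; the patent's actual method builds a learned model on top of the key-point features.

```python
import numpy as np

def mouth_aspect_ratio(keypoints):
    """Vertical lip opening divided by horizontal lip width.

    `keypoints` is a (4, 2) array: [top, bottom, left, right] lip points
    (a simplified stand-in for a full key-point detector's output).
    """
    top, bottom, left, right = keypoints
    vertical = np.linalg.norm(top - bottom)
    horizontal = np.linalg.norm(left - right)
    return vertical / max(horizontal, 1e-6)

def detect_voice_segments(ratios, threshold=0.25, min_len=3):
    """Mark frames whose lip-opening ratio exceeds `threshold` as speech,
    keeping only runs of at least `min_len` consecutive frames."""
    active = [r > threshold for r in ratios]
    segments, start = [], None
    for i, a in enumerate(active + [False]):   # sentinel closes a final run
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))    # [start, end) frame indices
            start = None
    return segments
```

Filtering out runs shorter than `min_len` is what suppresses transient interference: a single frame where the lips happen to open does not become a voice segment.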

[0057] Step (B1): construct a multi-scale neural network model based on depthwise separable convolution as the key-point detection model, the...
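The reason depthwise separable convolution suits a lightweight key-point detector is its parameter count: a standard k x k convolution mixes space and channels in one step, while the separable form factors it into a per-channel spatial convolution plus a 1x1 channel-mixing convolution. A small arithmetic sketch (the 64 -> 128 layer shape is an illustrative assumption, not taken from the patent):

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by a
    1x1 pointwise conv that mixes channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# A typical layer: 3x3 kernel, 64 -> 128 channels.
std = standard_conv_params(64, 128, 3)   # 73728
sep = separable_conv_params(64, 128, 3)  # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))     # roughly an 8x reduction
```

That order-of-magnitude saving is what makes running the detector at multiple scales affordable.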



Abstract

The invention discloses a lip-language-recognition-based method for improving the speech comprehension of patients with severe hearing impairment. The method comprises the steps of: collecting a lip-motion image sequence from a real environment with image-acquisition equipment and using it as the input features of a deep neural network; constructing a deep-learning-based visual-modality voice endpoint detection method to determine the position of a voice segment under low signal-to-noise-ratio conditions; constructing a deep learning model based on a three-dimensional-convolution / residual-network / bidirectional-GRU structure as a baseline model; constructing a lip-language recognition model based on spatio-temporal information features on top of the baseline model; and training the network model with a cross-entropy loss and recognizing the spoken content with the trained lip-language recognition model. By feeding back spatio-temporal information, the method captures fine-grained features and time-domain key frames of the lip images, which improves adaptability to lip features in complex environments, improves lip-language recognition performance, and thereby improves the language comprehension of patients with severe hearing impairment; the method has good application prospects.
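The bidirectional-GRU part of the baseline can be sketched with the standard GRU recurrence run forward and backward over the feature sequence. This is a minimal numpy illustration with randomly initialized weights, not the patent's trained model; the class and function names and the 16/32 dimensions are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Standard GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_dim, hidden_dim, rng):
        s = 1.0 / np.sqrt(hidden_dim)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (hidden_dim, input_dim)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.uniform(-s, s, (hidden_dim, hidden_dim)) for _ in range(3))
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)           # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)           # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_tilde

def bi_gru(seq, fwd, bwd):
    """Run `fwd` over the sequence and `bwd` over its reverse, then
    concatenate the two hidden states at every time step -> (T, 2H)."""
    hf = np.zeros(fwd.hidden_dim)
    hb = np.zeros(bwd.hidden_dim)
    outs_f, outs_b = [], []
    for t in range(len(seq)):
        hf = fwd.step(seq[t], hf)
        outs_f.append(hf)
    for t in reversed(range(len(seq))):
        hb = bwd.step(seq[t], hb)
        outs_b.append(hb)
    outs_b.reverse()  # align backward outputs with forward time order
    return np.stack([np.concatenate([f, b]) for f, b in zip(outs_f, outs_b)])
```

Because each output frame sees both past and future context, the bidirectional layer is what lets the model resolve visually ambiguous lip shapes from what comes after them.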

Description

Technical field

[0001] The invention belongs to the field of speech comprehension, and in particular relates to a method, based on lip-language recognition, for improving the speech comprehension of patients with severe hearing impairment.

Background

[0002] Patients with severe (or worse) hearing impairment can hardly understand the language content expressed by other speakers, or even perceive sound at all. Although hearing aids can partially improve the auditory perception of patients, their practical effect is limited for those with severe hearing impairment. Cochlear implants can improve the speech perception of severely hearing-impaired patients, but they carry certain risks because implantation requires surgery.

[0003] Moreover, in complex real environments, speech signals are often accompanied by various types of noise and transient interference, especially under conditions of low signa...


Application Information

IPC(8): G06T7/207 G06N3/04 G06N3/08
CPC: G06T7/207 G06N3/08 G06T2207/10016 G06T2207/20081 G06T2207/20084 G06N3/045 G06N3/048
Inventors: 唐闺臣, 王沛, 梁瑞宇, 王青云, 李克, 邹采荣, 谢跃, 包永强
Owner NANJING INST OF TECH