
A Transformation Method from Lip Image Features to Speech Coding Parameters

A technology combining speech coding and image features, applied in speech analysis, speech synthesis, and neural learning methods. It addresses the problem of overly complex conversion pipelines and has the effect of making the converter easy to construct and train.

Active Publication Date: 2020-06-26
SHANGHAI UNIVERSITY OF ELECTRIC POWER

AI Technical Summary

Problems solved by technology

Prior methods achieve a higher recognition rate than earlier techniques, but at the cost of a very complicated conversion process.



Examples


Embodiment 1

[0048] The following is a specific implementation; the method and principle of the present invention are not limited to the specific values given therein.

[0049] (1) The predictor can be implemented using an artificial neural network; other machine learning techniques can also be used to construct it. In the process described below, the predictor is an artificial neural network, i.e., "predictor" and "neural network" are used interchangeably.

[0050] In this embodiment, the neural network consists of 3 LSTM layers followed by 2 fully connected (Dense) layers, connected in sequence. A Dropout layer is inserted between every pair of adjacent layers and within the internal feedback path of each LSTM; for clarity of the architecture, these are not shown in the figure. As shown in Figure 3:

[0051] The three LSTM layers each have 80 neurons, and the first two run in "return_sequences" mode. The two Dense layers have 100 and 14 neurons respectively.
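A minimal Keras sketch of this architecture, under stated assumptions: the window length k, the lip-feature dimensionality, and the dropout rate are not fixed by this passage, so the values below are placeholders, and recurrent_dropout is used to approximate the dropout "within the internal feedback layer of LSTM":

```python
# Hypothetical sketch of the embodiment's predictor:
# 3 LSTM layers (80 units each, first two with return_sequences=True)
# followed by Dense(100) and Dense(14). Dropout between layers and in
# the LSTM feedback path is modeled with Dropout/recurrent_dropout.
# k (window length), feat_dim (lip-feature size), and rate are assumed.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

k, feat_dim = 20, 32          # assumed values, not specified in the text
rate = 0.2                    # assumed dropout rate

model = Sequential([
    LSTM(80, return_sequences=True, recurrent_dropout=rate,
         input_shape=(k, feat_dim)),
    Dropout(rate),
    LSTM(80, return_sequences=True, recurrent_dropout=rate),
    Dropout(rate),
    LSTM(80, recurrent_dropout=rate),   # last LSTM returns final state only
    Dropout(rate),
    Dense(100, activation='relu'),
    Dropout(rate),
    Dense(14)                 # 14 speech-coding parameters per frame
])
model.compile(optimizer='adam', loss='mse')
```

The final Dense layer's 14 outputs match the 14-neuron layer of paragraph [0051], one value per speech coding parameter of a frame.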

[005...



Abstract

The invention relates to a method for converting lip image features into speech coding parameters, comprising the following steps: 1) construct a speech coding parameter converter, comprising an input buffer and a trained predictor; receive lip feature vectors sequentially in chronological order and store them in the converter's input buffer; 2) at regular intervals, send the k most recent lip feature vectors in the buffer, as a short-term vector sequence, to the predictor and obtain a prediction result, where the prediction result is the coding parameter vector of one speech frame; 3) the speech coding parameter converter outputs the prediction result. Compared with the prior art, the present invention offers direct conversion with no intermediate text stage and is convenient to construct and train.
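A minimal sketch of the converter loop in steps 1)–3), assuming a fixed window length k and a predictor with a Keras-style predict interface; the class and method names are hypothetical, not from the patent:

```python
# Hypothetical sketch of the converter in the abstract: an input buffer
# receives lip feature vectors; at regular intervals the k most recent
# vectors are sent to the predictor, which emits one speech frame's
# coding parameter vector.
from collections import deque
import numpy as np

class SpeechCodingConverter:
    def __init__(self, predictor, k=20):    # k is an assumed value
        self.predictor = predictor
        self.buffer = deque(maxlen=k)        # step 1: input buffer
        self.k = k

    def receive(self, lip_feature_vector):
        """Step 1: store incoming lip feature vectors in arrival order."""
        self.buffer.append(lip_feature_vector)

    def tick(self):
        """Steps 2-3: at each interval, predict and output one frame."""
        if len(self.buffer) < self.k:
            return None                      # not enough history yet
        window = np.stack(self.buffer)[None, ...]   # shape (1, k, feat_dim)
        return self.predictor.predict(window, verbose=0)[0]
```

Using a fixed-length deque means the buffer always holds exactly the k latest feature vectors, matching the sliding short-term sequence described in step 2.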

Description

Technical field

[0001] The invention relates to the technical fields of computer vision, digital image processing, and microelectronics, in particular to a method for converting lip image features into speech coding parameters.

Background technique

[0002] Lip language recognition generates the corresponding text from lip videos. Related existing technical solutions include:

[0003] (1) CN107122646A, title of invention: a lip-language unlocking method. Its principle is to compare lip features collected in real time against pre-stored lip features to verify identity; only lip features are obtained.

[0004] (2) CN107437019A, title of invention: identity verification method and device based on lip recognition. The principle is similar to (1); the difference lies in the use of 3D images.

[0005] (3) CN106504751A, title of invention: adaptive lip language interaction method and interaction device. The principle is still to recognize the l...


Application Information

Patent Type & Authority Patents(China)
IPC (8): G10L13/08; G10L13/027; G10L25/57; G10L25/30; G06N3/08; G06N3/04; G06K9/62; G06K9/46; G06K9/00
CPC: G06N3/084; G10L13/027; G10L13/08; G10L25/30; G10L25/57; G06V40/20; G06V20/41; G06V20/46; G06V10/44; G06N3/045; G06F18/214
Inventor 贾振堂
Owner SHANGHAI UNIVERSITY OF ELECTRIC POWER