
A Transformation Method from Lip Image Sequence to Speech Coding Parameters

A speech-coding and image-sequence technology, applied in speech analysis, speech synthesis, and computer vision, that avoids the complex conversion processes of prior approaches and is convenient to construct and train.

Active Publication Date: 2020-09-01
SHANGHAI UNIVERSITY OF ELECTRIC POWER

AI Technical Summary

Problems solved by technology

Compared with earlier technology, prior methods achieve a higher recognition rate, but their conversion process is very complicated.



Examples


Embodiment 1

[0048] The following is a specific embodiment; the method and principle of the present invention are not limited to the specific numbers given in it.

[0049] (1) The predictor can be implemented with an artificial neural network, or constructed with other machine learning techniques. In the following, the predictor is a deep artificial neural network; that is, "predictor" and "deep artificial neural network" are used interchangeably.

[0050] As shown in Figure 3, the artificial neural network consists mainly of three convolutional LSTM layers (ConvLSTM2D) and two fully connected layers (Dense), connected in sequence. Each ConvLSTM2D is followed by a pooling layer (MaxPooling2D), and a dropout layer (Dropout) precedes each of the two Dense layers. For clarity of structure, these are not drawn in Figure 3.

[0051] Each of the three convolutional LSTM layers has 80 neurons, and the first two layers use the "return...
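A minimal sketch of the described network in Keras follows. Only the layer sequence (three ConvLSTM2D layers of 80 units, each followed by pooling, then dropout before each of two Dense layers) comes from the text above; the input size, kernel size, dense width, dropout rate, and output dimension are illustrative assumptions, not values from the patent.

```python
# Sketch of the predictor network; hyperparameters marked "assumed" are
# illustrative placeholders, not values given in the patent text.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_predictor(k=5, h=112, w=112, c=1, n_params=20):
    """k lip frames of size h x w x c in, one speech-frame
    coding-parameter vector of length n_params out (sizes assumed)."""
    return models.Sequential([
        layers.Input(shape=(k, h, w, c)),
        # Three ConvLSTM2D layers of 80 units each (per the patent text);
        # the first two return full sequences so the next ConvLSTM2D
        # still receives a time dimension. Kernel size 3x3 is assumed.
        layers.ConvLSTM2D(80, (3, 3), padding="same", return_sequences=True),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),   # pool spatial dims only
        layers.ConvLSTM2D(80, (3, 3), padding="same", return_sequences=True),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),
        layers.ConvLSTM2D(80, (3, 3), padding="same"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        # Dropout before each Dense layer, as described; rate is assumed.
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_params),  # one speech-frame coding-parameter vector
    ])
```

The `return_sequences=True` flag on the first two recurrent layers is what lets three ConvLSTM2D layers stack: without it, the layer would emit only its final state and the next ConvLSTM2D would have no time axis to consume.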



Abstract

The invention relates to a method for converting a lip image sequence into speech coding parameters, comprising the following steps: 1) construct a speech-coding-parameter converter comprising an input buffer and a parameter-configured predictor; 2) receive lip images sequentially and store them in the converter's input buffer; 3) at regular intervals, send the k latest buffered lip images, as a short-term image sequence, to the predictor to obtain a prediction result, which is the coding-parameter vector of one speech frame; 4) the converter outputs the prediction result. Compared with the prior art, the invention converts directly, requires no intermediate text representation, and is convenient to construct and train.
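The four steps above can be sketched in plain Python. The class name, the default buffer size k, and the stand-in predictor are all illustrative assumptions; any trained predictor (such as the network described in the embodiment) could be plugged in.

```python
from collections import deque

class LipToSpeechConverter:
    """Sketch of the converter from the abstract: an input buffer plus a
    predictor. The predictor here is any callable taking a list of the
    k latest lip images and returning one coding-parameter vector."""

    def __init__(self, predictor, k=5):
        self.k = k
        self.buffer = deque(maxlen=k)   # step 1: input cache of lip images
        self.predictor = predictor

    def receive(self, lip_image):
        # Step 2: store each arriving lip image in the input cache;
        # deque(maxlen=k) automatically discards the oldest frame.
        self.buffer.append(lip_image)

    def tick(self):
        # Steps 3-4: at each timer interval, feed the k latest images to
        # the predictor and output one speech-frame parameter vector.
        if len(self.buffer) < self.k:
            return None                 # not enough history buffered yet
        return self.predictor(list(self.buffer))
```

For example, with a dummy predictor `lambda seq: [len(seq)]`, calling `tick()` before k images have arrived yields `None`, and afterwards yields one result per interval, which matches the "k latest images per timer tick" behaviour in the abstract.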

Description

Technical field

[0001] The invention relates to the technical fields of computer vision, digital image processing and microelectronics, and in particular to a method for converting a lip image sequence into speech coding parameters.

Background

[0002] Lip-language recognition generates corresponding text from lip videos. Related existing technical solutions include:

[0003] (1) CN107122646A, title of invention: a method for realizing lip-language unlocking. It compares lip features collected in real time with pre-stored lip features to verify identity, but obtains only lip features.

[0004] (2) CN107437019A, title of invention: identity verification method and device for lip recognition. Its principle is similar to (1), except that 3D images are used.

[0005] (3) CN106504751A, title of invention: adaptive lip-language interaction method and interaction device. The principle is st...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L13/08, G10L13/027, G10L25/30, G10L25/57, G06K9/00
CPC: G10L13/027, G10L13/08, G10L25/30, G10L25/57, G06V40/20
Inventor: 贾振堂
Owner: SHANGHAI UNIVERSITY OF ELECTRIC POWER