
A method for directly generating speech from lip videos

A lip-video technology in the field of generating speech directly from lip video. It addresses the problem that prior lip-recognition methods cannot output speech, and achieves the effects of easy training and improved conversion efficiency.

Active Publication Date: 2021-10-08
SHANGHAI UNIVERSITY OF ELECTRIC POWER


Problems solved by technology

The principle of the prior art is to compare lip features collected in real time against pre-stored lip features to determine identity, but it cannot output speech.



Examples


Embodiment

[0034] As shown in Figure 1, the present invention first collects video containing the lips with a video capture device and extracts the lip-region images, obtaining a lip video V composed of an ordered sequence of images I1, I2, ..., In. A lip feature vector FI is then extracted from each image I, yielding the lip feature sequence FI1, FI2, ..., FIn. This sequence is fed, in order, into the lip-sound converter P, which outputs the speech coding parameter sequence FA1, FA2, ..., FAm. Finally, speech synthesis turns the parameter sequence into the speech frame sequence A1, A2, ..., Am.
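The pipeline in [0034] (video V → lip features FI → converter P → speech parameters FA → speech frames A) can be sketched as follows. This is a minimal illustrative sketch: the feature extractor, the lip-sound converter P, and the synthesizer are toy stand-ins (assumptions), not the patent's actual models.

```python
import numpy as np

def extract_lip_features(frame):
    """Stand-in lip feature extractor: returns a feature vector FI per image I."""
    return frame.mean(axis=(0, 1))  # toy feature: mean pixel value per channel

def lip_sound_converter(feature_seq):
    """Stand-in for converter P: maps lip features to speech coding parameters FA."""
    return [np.tanh(f) for f in feature_seq]  # toy mapping, not a trained model

def synthesize(params):
    """Stand-in synthesizer: turns each parameter vector FA into a speech frame A."""
    return [np.repeat(p, 4) for p in params]  # toy expansion into "samples"

# Lip video V as an ordered sequence of images I1..In (random stand-in data)
video = [np.random.rand(32, 32, 3) for _ in range(10)]
features = [extract_lip_features(img) for img in video]   # FI1..FIn
speech_params = lip_sound_converter(features)             # FA1..FAm
speech_frames = synthesize(speech_params)                 # A1..Am
print(len(speech_frames))  # → 10
```

Note the end-to-end structure: no intermediate text is produced at any stage, which is the distinction the patent draws from lip-language recognition.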

[0035] The specific process of the conversion method in this embodiment is described as follows.

[0036] (1) The first step is to obtain the lip video: use a camera to collect video containing the lips (no audio needs to be collected), and extract the lip are...


Abstract

The present invention relates to a method for directly generating speech from lip video, comprising the following steps: 1) Obtain the lip video: use a camera device to collect video containing the lips and obtain the video of the lip region. 2) Obtain lip feature vectors: for each frame of the lip video, extract a number of feature points around the inner and outer lip edges to describe the lip shape, giving the lip feature vector of the current frame and hence a sequence of lip feature vectors. 3) Lip-sound conversion: the lip feature vectors are fed into the lip-sound converter; at regular intervals, the converter converts the latest k cached lip feature vectors into one speech frame parameter vector. 4) Speech synthesis: perform speech synthesis from the speech frame parameter vectors, restore the audio samples, and output speech. Compared with the prior art, the present invention requires no intermediate text, offers high conversion efficiency, and is convenient to train.
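Step 3) of the abstract describes a sliding cache: the converter repeatedly consumes the latest k lip feature vectors to emit one speech frame parameter vector. A minimal sketch of that buffering scheme, assuming k = 5, one emission per incoming frame once the cache is full, and a toy averaging converter in place of the patent's trained lip-sound converter:

```python
from collections import deque

import numpy as np

K = 5  # window size k (assumption; the abstract does not fix a value)

cache = deque(maxlen=K)  # holds the latest k lip feature vectors

def convert_window(window):
    """Stand-in lip-sound converter: one window of k features -> one FA vector."""
    return np.mean(window, axis=0)  # toy: average the k cached feature vectors

speech_params = []
for t in range(12):  # simulated stream of per-frame lip feature vectors
    cache.append(np.full(3, float(t)))
    if len(cache) == K:  # "at regular intervals": here, once per new frame
        speech_params.append(convert_window(list(cache)))

print(len(speech_params))  # emissions start once the cache first fills → 8
```

A `deque` with `maxlen` discards the oldest vector automatically, so the cache always holds exactly the latest k features, matching the "latest cached k lip feature vectors" wording.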

Description

Technical field

[0001] The invention relates to the fields of computer vision, digital image processing, microelectronics, and assistive technology for disabled persons, and in particular to a method for directly generating speech from lip video.

Background technique

[0002] The present invention is related to the field of lip reading. "Lip language recognition" generates a corresponding text expression from lip video. The most relevant existing technical solutions are the following: [0003] (1) CN107122646A, title of invention: a method for realizing lip-language unlocking. Its principle is to compare lip features collected in real time against pre-stored lip features to determine identity, but it cannot output speech. [0004] (2) CN107437019A, title of invention: identity verification method and device for lip recognition. Its principle is similar to (1), the difference being that 3D images are used to determine the identit...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L13/08; G10L13/027; G10L25/57; G06K9/46; G06K9/00
CPC: G10L13/027; G10L13/08; G10L25/57; G06V40/20; G06V20/41; G06V20/46; G06V10/44
Inventor: 贾振堂
Owner: SHANGHAI UNIVERSITY OF ELECTRIC POWER