Voice lip fitting method and system and storage medium

A voice and lip shape technology, applied in voice analysis, voice recognition, image data processing, etc. It addresses the problems of heavy model computation, cumbersome steps and slow calculation speed, achieving quantifiable fitting, higher computational efficiency and reduced time cost.

Active Publication Date: 2020-03-31
SUN YAT SEN UNIV
Problems solved by technology

[0006] However, when the above two schemes are used in practice, the following problems arise: 1) the technical scheme involves many theoretical assumptions, cumbersome steps, a large amount of model computation, and low accuracy...

Method used


Examples


Embodiment 1

[0053] Figure 1 shows a flow chart of the speech lip shape fitting method based on the multi-scale fusion convolutional neural network in this embodiment.

[0054] The speech lip shape fitting method based on the multi-scale fusion convolutional neural network of the present embodiment comprises the following steps:

[0055] S1: Collect the image data and voice data of the target person's video data set. In this step, the image data and voice data must be collected simultaneously and at the same frame rate, and the image data of the target person's video data set is collected with a three-dimensional structured-light depth camera. In this embodiment, a face tracking program is written with macOS and ARKit and run on an iPhone X device, whose front camera collects the video image data at a frame rate of 60 frames per second.
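A minimal sketch of the synchronization requirement in S1: each video frame captured at 60 fps (the rate stated in the embodiment) is paired with the audio samples recorded over the same time span. The 16 kHz audio sample rate and the window layout below are assumptions for illustration only, not values given in the patent.

```python
# Sketch: pair audio samples with video frames captured at 60 fps.
import numpy as np

FRAME_RATE = 60        # video frames per second (from the embodiment)
SAMPLE_RATE = 16_000   # assumed audio sample rate

def audio_windows(audio: np.ndarray, n_frames: int) -> np.ndarray:
    """Slice a mono audio track into one window per video frame."""
    samples_per_frame = SAMPLE_RATE // FRAME_RATE  # ~266 samples per frame
    windows = [audio[i * samples_per_frame:(i + 1) * samples_per_frame]
               for i in range(n_frames)]
    return np.stack(windows)

# Example: 2 seconds of (synthetic) audio paired with 120 video frames.
audio = np.random.randn(SAMPLE_RATE * 2).astype(np.float32)
print(audio_windows(audio, n_frames=FRAME_RATE * 2).shape)  # (120, 266)
```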

[0056] S2: Extracting a lip shape feature vector of the target person in the image data...
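The description of S2 is truncated in the source. Since S1 uses an ARKit face tracking program, one plausible reading is that the per-frame lip shape feature vector is assembled from mouth-related tracking coefficients; the blendshape names and the dimensionality below are assumptions, not the feature set defined in the patent.

```python
# Hypothetical sketch: build a per-frame lip shape feature vector from
# ARKit-style blendshape coefficients. The selected keys are assumed.
from typing import Dict, List
import numpy as np

LIP_KEYS: List[str] = [
    "jawOpen", "mouthClose", "mouthFunnel", "mouthPucker",
    "mouthSmileLeft", "mouthSmileRight", "mouthStretchLeft", "mouthStretchRight",
]

def lip_feature_vector(blendshapes: Dict[str, float]) -> np.ndarray:
    """Pick the mouth-related coefficients of one video frame."""
    return np.array([blendshapes.get(k, 0.0) for k in LIP_KEYS], dtype=np.float32)

# Example frame (coefficients in [0, 1] as reported by the face tracker).
frame = {"jawOpen": 0.42, "mouthPucker": 0.10, "mouthSmileLeft": 0.05}
print(lip_feature_vector(frame))  # 8-dimensional lip shape feature vector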

Embodiment 2

[0070] This embodiment provides a system for applying the voice lip shape fitting method of Embodiment 1, and its specific scheme is as follows:

[0071] The system includes a data acquisition module, a lip shape feature vector extraction module, a voice feature vector extraction module, a multi-scale fusion convolutional neural network training module and a voice lip shape fitting module (a minimal code sketch of this layout follows the module descriptions below);

[0072] The data acquisition module is used to collect the image data and voice data of the target person's video data set;

[0073] The lip shape feature vector extraction module is used to extract the lip shape feature vector of the target person in the image data;

[0074] The voice feature vector extraction module is used to extract the voice feature vector of the target person in the voice data;

[0075] The multi-scale fusion convolutional neural network training module is used to train the multi-scale fusion convolutional neural network with the speech feature vector as input and the lip shape feature vector as output...
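The sketch below mirrors the five modules described above as placeholder classes. The class and method names are illustrative only; the patent does not specify an API, and the truncated last paragraph leaves the fitting module's exact interface unstated.

```python
# Minimal, assumed module layout for the system of Embodiment 2.
import numpy as np

class DataAcquisitionModule:
    def collect(self, video_path: str, audio_path: str):
        """Collect image data and voice data of the target person's video data set."""
        raise NotImplementedError  # device-specific capture (e.g. depth camera)

class LipFeatureExtractor:
    def extract(self, image_frames) -> np.ndarray:
        """Return one lip shape feature vector per image frame."""
        raise NotImplementedError

class VoiceFeatureExtractor:
    def extract(self, audio) -> np.ndarray:
        """Return one voice feature vector per video frame."""
        raise NotImplementedError

class FusionNetworkTrainer:
    def train(self, voice_feats: np.ndarray, lip_feats: np.ndarray):
        """Train the multi-scale fusion CNN: voice features in, lip features out."""
        raise NotImplementedError

class LipFittingModule:
    def __init__(self, trained_network):
        self.network = trained_network

    def fit(self, voice_feats: np.ndarray) -> np.ndarray:
        """Generate fitted lip shape feature vectors from voice feature vectors."""
        return self.network(voice_feats)
```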

Embodiment 3

[0078] This embodiment provides a storage medium in which a program is stored; when the program runs, the method steps of the voice lip shape fitting method of Embodiment 1 are executed.



Abstract

The invention relates to a voice lip shape fitting method. The method comprises the following steps: collecting image data and voice data of a target person video data set; extracting a lip shape feature vector of the target person in the image data; extracting a voice feature vector of the target person in the voice data; training a multi-scale fusion convolutional neural network with the voice feature vector as input and the lip shape feature vector as output; and inputting a to-be-fitted voice feature vector of the target person into the multi-scale fusion convolutional neural network, which generates and outputs a fitted lip shape feature vector, and fitting the lip shape based on that feature vector.
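A hedged sketch of one possible "multi-scale fusion" network for the mapping the abstract describes: parallel 1-D convolution branches with different kernel sizes run over a short window of speech feature frames, their outputs are fused by concatenation, and a small head maps the fused features to a lip shape feature vector. The kernel sizes, channel counts, window length and feature dimensions are assumptions; the text shown here does not give the exact architecture.

```python
# Assumed multi-scale fusion CNN: voice feature window in, lip features out.
import torch
import torch.nn as nn

class MultiScaleFusionNet(nn.Module):
    def __init__(self, voice_dim=13, lip_dim=8, window=15, channels=32):
        super().__init__()
        # One branch per temporal scale (assumed kernel sizes 3, 5, 7).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(voice_dim, channels, k, padding=k // 2),
                          nn.ReLU())
            for k in (3, 5, 7)
        ])
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * channels * window, 128),
            nn.ReLU(),
            nn.Linear(128, lip_dim),
        )

    def forward(self, voice_feats):
        # voice_feats: (batch, voice_dim, window) — a window of speech frames
        fused = torch.cat([b(voice_feats) for b in self.branches], dim=1)
        return self.head(fused)  # (batch, lip_dim) fitted lip feature vector

# Example: fit lip feature vectors for a batch of 4 speech windows.
net = MultiScaleFusionNet()
print(net(torch.randn(4, 13, 15)).shape)  # torch.Size([4, 8])
```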

Description

Technical field

[0001] The present invention relates to the technical field of voice signals, and more specifically, to a voice lip shape fitting method, system and storage medium.

Background technique

[0002] Voice lip shape fitting, the technology of generating the corresponding lip shape from voice, is one of the basic technologies for applications such as virtual anchors, virtual image robots and animation character mouth shape design. Fitting the lip shape to the voice accurately and smoothly is the difficulty of this technology.

[0003] At present, lip shape fitting based on speech is technically realized through the following two solutions:

[0004] 1) Based on phoneme theory and basic lip shape theory, lip shapes are classified using Bayesian estimation, hidden Markov models, BP neural networks, etc., and a lip shape sequence is then generated by interpolation estimation.

[0005] 2) Lip shape estimation methods using LSTM, RNN and other...

Claims


Application Information

IPC(8): G06T13/20, G06T13/40, G10L15/16, G10L15/25
CPC: G06T13/205, G06T13/40, G10L15/16, G10L15/25
Inventor: 黄以华, 张睿
Owner: SUN YAT SEN UNIV