
Face image animation method and system based on action and voice features

A face-image and voice-feature technology, applied in the field of face image animation methods and systems, that addresses the problems of poor generated-image quality and insufficient output resolution, and achieves diversified driving modes while economizing graphics-card resources.

Pending Publication Date: 2022-05-06
北京中科深智科技有限公司
Cites: 3 | Cited by: 0

AI Technical Summary

Problems solved by technology

This method uses only the zeroth-order information of the mapping function, so the quality of the generated images is not good enough.
The First-Order-Motion-Model proposed later uses the first-derivative information of the motion trajectory, but in order to reduce training cost and increase the amount of data, the original project used only relatively low-resolution training data, so the resolution of the generated results is not good enough.
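For context, a schematic of what zeroth-order versus first-order use of the mapping function means (illustrative notation only, not taken from the patent): a zeroth-order approximation around a keypoint p_k keeps only the keypoint displacement, while a first-order approximation also keeps the local Jacobian of the mapping.

```latex
% Zeroth-order: only the keypoint displacement is retained
\mathcal{T}(z) \approx \mathcal{T}(p_k)

% First-order: the local Jacobian J_k additionally captures rotation and scale near p_k
\mathcal{T}(z) \approx \mathcal{T}(p_k) + J_k\,(z - p_k),
\qquad J_k = \left.\frac{d\mathcal{T}(z)}{dz}\right|_{z = p_k}
```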



Examples


Embodiment 1

[0036] Referring to Figures 1-4, the present invention provides a face image animation method based on action and voice features, comprising an image driving mode and a voice driving mode. In the image driving mode, given a video of one person talking, the motion is transferred completely to another face: the talking video of one face and a still image of another person's face are input, and a dynamic video of the other person, who was originally only a static picture, is obtained. In the voice driving mode, the model is trained for a specific person; when the features of another person are used for prediction, a one-step transformation is applied to convert them into the voice features of the trained person, the voice features are then converted into face features, and a face image animation is obtained.
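As a rough illustration of the two driving modes described above, a minimal sketch follows; the function names and the `model` interface are hypothetical and not the patent's API.

```python
# Minimal sketch of the two driving modes described in Embodiment 1
# (function names and the `model` interface are hypothetical, not the patent's API).

def image_driven_animation(driving_video_frames, target_face_image, model):
    """Transfer the talking motion of one person's video onto another, static face."""
    output_frames = []
    for frame in driving_video_frames:
        # Extract the motion of the driving frame and re-render the target face with it.
        motion = model.extract_motion(frame)
        output_frames.append(model.render(target_face_image, motion))
    return output_frames


def voice_driven_animation(source_voice_features, target_face_image, model):
    """Drive a face from the voice features of a different speaker."""
    # One-step transformation: map the other speaker's features into the
    # voice-feature space of the person the model was trained on.
    adapted = model.adapt_voice_features(source_voice_features)
    # Convert the adapted voice features into face features, then render frame by frame.
    face_feature_sequence = model.voice_to_face_features(adapted)
    return [model.render(target_face_image, f) for f in face_feature_sequence]
```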

[0037] Note that the image driving method of the present invention is fundamentally different from the popular DeepFake face-swapping technology. Face-swapping technology is t...

Embodiment 2

[0051] This embodiment provides a face image animation system based on action and voice features, comprising an image driving module and a voice driving module, wherein:

[0052] The image driving module is used to take as input the talking video of one face and a still image of another person's face, and to output a dynamic video of the other person, who was originally only a static picture;

[0053] The voice driving module is trained for a specific person. When the features of another person are used for prediction, they are converted in one step into the voice features of the trained person, the voice features are then transformed into face features, and a face image animation is obtained.

[0054] The image driving module includes a keypoint detection unit, an action extraction unit and an image generation unit; the keypoint detection unit is used to take as input one frame of the target person's image and the driving video respectively, and to obtain multiple keypoints and their correspon...
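A minimal structural sketch of these three units and how they could be combined (class names and method signatures are assumptions for illustration, not the patent's implementation):

```python
# Structural sketch of the image driving module and its three units
# (class names and signatures are hypothetical, not the patent's implementation).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Keypoints:
    points: List[Tuple[float, float]]  # detected keypoint coordinates


class KeypointDetectionUnit:
    def detect(self, image) -> Keypoints:
        """Detect keypoints in a single frame (target image or driving frame)."""
        raise NotImplementedError


class ActionExtractionUnit:
    def extract(self, target_kp: Keypoints, driving_kp: Keypoints):
        """Compute the motion that maps the target keypoints onto the driving ones."""
        raise NotImplementedError


class ImageGenerationUnit:
    def generate(self, target_image, motion):
        """Warp and refine the target image according to the extracted motion."""
        raise NotImplementedError


class ImageDrivingModule:
    """Combines the three units: detect keypoints, extract motion, generate frames."""

    def __init__(self, kp_unit, action_unit, gen_unit):
        self.kp_unit = kp_unit
        self.action_unit = action_unit
        self.gen_unit = gen_unit

    def drive(self, target_image, driving_frames):
        target_kp = self.kp_unit.detect(target_image)
        output = []
        for frame in driving_frames:
            driving_kp = self.kp_unit.detect(frame)
            motion = self.action_unit.extract(target_kp, driving_kp)
            output.append(self.gen_unit.generate(target_image, motion))
        return output
```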


Abstract

The invention discloses a face image animation method and system based on action and voice features. The face image animation method comprises an image driving mode and a voice driving mode. The image driving mode comprises the following steps: inputting the talking video of one face and a still image of another person's face to obtain a dynamic video of the other person, who was originally only a static picture. The voice driving mode is as follows: training is carried out for a specific person; when the features of another person are used for prediction, a one-step conversion is applied to turn them into the voice features of the trained person, the voice features are then converted into face features, and a face image animation is obtained. According to the invention, the target person can be driven by both video and audio, so the driving modes are diversified and various requirements can be met.

Description

Technical Field

[0001] The invention belongs to the technical field of image animation generation, and more specifically relates to a face image animation method and system based on motion and voice features.

Background Technique

[0002] Image animation is widely used in film and television production, photography, e-commerce and other fields. Specifically, given a character image, this person can be made to "move" through some driving method. There are many ways to realize this process. If the features are obtained from image data, the image features need to be converted into face or action features and applied to the target face; if the features are obtained from voice data, the speech features need to be converted into facial features of the target face, so that the target person's face can be generated from these features.

[0003] In the 3D methods in the image field, the conventional approach is to carry out 3D modeling of the target object, and then input a ...

Claims


Application Information

IPC(8): G06T13/40, G06V20/40, G06V40/16, G06V40/20, G10L15/06
CPC: G06T13/40, G10L15/063
Inventor: 杨磊
Owner: 北京中科深智科技有限公司