Three-dimensional non-realistic expression generation method based on facial motion unit

A facial motion unit (AU) based, non-photorealistic expression technology, applied to neural learning methods, animation production, biological neural network models, etc. It addresses problems such as reliance on expression templates and inaccurate feature-point positioning, achieving the effects of overcoming dependence on geometric position, accurate feature extraction, and enriched spatial information.

Active Publication Date: 2020-04-17
CAPITAL NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

The method of the present invention can extract expression information while generating the expression animation, and avoids the existing methods' dependence on expression templates and their inaccurate feature-point positioning.




Embodiment Construction

[0048] Specific embodiments of the present invention will be described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the present invention.

[0049] The invention proposes a three-dimensional non-realistic expression generation method based on facial motion units. The method includes:

[0050] Step 1: Establish a standard three-dimensional facial neutral model and, based on the neutral model, establish a basic training set of 3D models corresponding to the AUs.
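Step 1 can be pictured as pairing a neutral mesh with one 3D model per AU. The following is a minimal illustrative sketch, not the patent's actual data format: it assumes each AU model shares the neutral mesh's topology so that an AU can be stored as a per-vertex offset from neutral. All names and values are placeholders.

```python
import numpy as np

# Toy mesh: a neutral 3D face stored as an (N, 3) vertex array.
N_VERTICES = 5
neutral = np.zeros((N_VERTICES, 3))  # standard neutral model

# Basic training set: AU id -> 3D model (same topology as the neutral mesh).
au_models = {
    "AU12": neutral + np.array([0.0, 0.1, 0.0]),   # e.g. lip-corner puller
    "AU4":  neutral + np.array([0.0, -0.05, 0.0]), # e.g. brow lowerer
}

# Each AU's contribution expressed as a displacement from the neutral model.
au_offsets = {au: model - neutral for au, model in au_models.items()}

print(au_offsets["AU12"].shape)  # (5, 3)
```

Storing AUs as offsets from a shared neutral model is what later allows several AUs to be combined on one face.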

[0051] Use a GAN network to generate an augmented training set of AUs and corresponding 3D models. Taking the AU models in the established basic 3D-model training set and the target object's model as input, the GAN generates the AU models of the target object; the correspondence between the target object's AU models and the 3D face model is then established, expanding the training set.
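The data flow of this GAN step can be sketched as follows. This is a structural illustration only, not the patent's network: the generator maps a source subject's AU model plus the target subject's neutral model to the target subject's AU model. The layer sizes and random weights are assumptions standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5 * 3  # flattened toy mesh (5 vertices x 3 coordinates)

# Placeholder weights for a tiny two-layer generator.
W1 = rng.normal(scale=0.1, size=(2 * N, 32))
W2 = rng.normal(scale=0.1, size=(32, N))

def generator(source_au_model, target_neutral):
    """Map (source AU model, target neutral model) -> target AU model."""
    x = np.concatenate([source_au_model.ravel(), target_neutral.ravel()])
    h = np.tanh(x @ W1)   # hidden representation
    offset = h @ W2       # predicted per-vertex displacement
    # The generated AU model is the target's neutral plus the offset.
    return target_neutral + offset.reshape(target_neutral.shape)

src_au = np.zeros((5, 3)) + 0.1   # an AU model from the basic training set
tgt_neutral = np.zeros((5, 3))    # target object's neutral model
print(generator(src_au, tgt_neutral).shape)  # (5, 3)
```

In training, such a generator would be paired with a discriminator and an adversarial loss; only the inference-time input/output relationship described in [0051] is shown here.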

[0052] Step 2: Analyze the facial expression of the target object to...
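According to the abstract, Step 2 feeds a two-dimensional face image to a neural network and obtains the AU data of the expression. A hedged sketch of that analysis step, with a tiny placeholder "network" and an illustrative AU list, could look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
AUS = ["AU1", "AU4", "AU6", "AU12"]  # illustrative subset of action units
IMG_SIZE = 8 * 8                     # toy grayscale image

# Placeholder weights standing in for a trained AU-detection network.
W = rng.normal(scale=0.1, size=(IMG_SIZE, len(AUS)))

def detect_aus(image):
    """Return a dict of AU -> intensity in [0, 1] for a flattened image."""
    logits = image.ravel() @ W
    intensities = 1.0 / (1.0 + np.exp(-logits))  # per-AU sigmoid
    return dict(zip(AUS, intensities))

image = rng.random((8, 8))
aus = detect_aus(image)
print(sorted(aus))  # ['AU1', 'AU12', 'AU4', 'AU6']
```

The per-AU intensities produced here are the "AU model data of the expression" that the final fusion step consumes.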



Abstract

The invention provides a three-dimensional non-realistic expression generation method based on facial motion units (AUs). The method includes: first, establishing a standard three-dimensional neutral face model and a basic training set of 3D models corresponding to the facial motion units (AUs); second, generating an augmented training set using a GAN network, taking the AU models in the basic 3D-model training set and the target object's model as the GAN's input, generating the target object's AU models with the GAN, and expanding the training set; then, analyzing the facial expression to be generated for the target object by inputting a two-dimensional face image, analyzing the facial expression with a neural network, and obtaining the AU model data of the expression; and finally, fusing the obtained AU data with the target object's related AU model data to generate a three-dimensional model of the target expression. The invention can realize both the simulation of basic expressions and AU-based expression transfer.
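The final fusion step in the abstract, combining the detected AU data with the target object's AU models into one expression mesh, can be sketched as a linear blendshape combination. The blendshape formulation and all values below are assumptions for illustration, not the patent's stated fusion rule.

```python
import numpy as np

neutral = np.zeros((5, 3))  # target subject's neutral model (toy mesh)

# Target subject's AU models (e.g. produced by the GAN augmentation step).
au_models = {
    "AU6":  neutral + np.array([0.0, 0.02, 0.0]),
    "AU12": neutral + np.array([0.05, 0.1, 0.0]),
}

# AU intensities obtained from analysing the 2D input image.
au_intensity = {"AU6": 0.5, "AU12": 1.0}

# expression = neutral + sum_i w_i * (AU_i model - neutral)
expression = neutral.copy()
for au, w in au_intensity.items():
    expression += w * (au_models[au] - neutral)
```

Each AU contributes its displacement from neutral, scaled by its detected intensity, so several AUs can be superposed on one target face.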

Description

Technical Field

[0001] The invention relates to a three-dimensional non-realistic expression generation method based on facial motion units (AUs), and belongs to the technical field of affective computing.

Background

[0002] In the expression of human emotional information, besides body movements and language, the face is an important carrier of emotional information, and expressing emotion through facial movements is one of the main modes of daily human communication. Human facial expressions carry a wealth of information: Mehrabian proposed that 55% of human emotional information is transmitted through the face, while voice, speech, and other channels account for only 45%. With the development of computer technology, expectations for human-computer interaction keep rising, and affective computing has received widespread attention. It requires computers not only to recognize human emotions,...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/40; G06N3/04; G06N3/08
CPC: G06T13/40; G06N3/08; G06N3/045
Inventor: 谭小慧, 樊亚春, 李昭伟
Owner: CAPITAL NORMAL UNIVERSITY