Lightweight 2D video-based facial expression driving method and system

A facial expression driving technology, applied in neural learning methods, character and pattern recognition, image analysis, etc., which addresses problems such as expensive equipment and high production costs, and achieves low resource occupation, strong practicability, and simple data acquisition.

Active Publication Date: 2022-05-10
北京中科深智科技有限公司

AI Technical Summary

Problems solved by technology

[0010] Marker-based facial expression driving places a number of marker points on the face, obtains the three-dimensional motion trajectory of the face through motion capture equipment, and then applies techniques such as radial basis function interpolation to drive facial expressions. This method requires expensive equipment and incurs high production costs.



Examples


Embodiment 1

[0034] Referring to Figures 1-2, the present invention provides a lightweight 2D video-based facial expression driving method, comprising the following steps:

[0035] S1, acquire data through the camera;

[0036] S2, data preprocessing: preprocess the data acquired by the camera to obtain a cropped face region picture;

[0037] S3, feature extraction: obtain face features and facial key point information from the face region picture cropped in S2;

[0038] S4, expression parameter acquisition: obtain expression parameters from the face features and facial key point information, and drive the facial animation with the expression parameters (an illustrative sketch of the overall pipeline is given below).
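The four steps read as a single per-frame pipeline. The sketch below only illustrates that flow under stated assumptions: the function names, feature dimension, key point count and the 52-dimensional parameter vector are hypothetical placeholders, not values given in the patent.

```python
import numpy as np

# Hypothetical stand-ins for the four steps (S1 is the camera frame itself);
# in a real system each placeholder is replaced by the corresponding module.
def preprocess_frame(frame):                 # S2: detect and crop the face region
    return frame                             # placeholder: the patent uses MTCNN here

def extract_features(face_crop):             # S3: face features + key point information
    return np.zeros(128), np.zeros((68, 2))  # assumed feature / key point shapes

def predict_expression_params(features, keypoints):  # S4: expression parameters
    return np.zeros(52)                      # e.g. blendshape-style weights (assumed count)

def drive_animation(params):                 # S4: feed the parameters to the avatar rig
    pass

def run_pipeline(frame):
    """Illustrative S1-S4 flow for a single camera frame."""
    face_crop = preprocess_frame(frame)
    features, keypoints = extract_features(face_crop)
    params = predict_expression_params(features, keypoints)
    drive_animation(params)
    return params
```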

[0039] Further, the specific method of data preprocessing in S2 is as follows: the picture or video data frame I_frame acquired by the camera is processed; the MTCNN algorithm is used to detect faces in the picture or video and to return relevant information such as the face position and ...
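The patent names the MTCNN algorithm but not a particular implementation. The snippet below is a minimal sketch assuming the facenet-pytorch package as one available MTCNN implementation; the helper name crop_face is hypothetical.

```python
import cv2
from facenet_pytorch import MTCNN  # assumed implementation choice, not specified by the patent

detector = MTCNN(keep_all=False)   # keep only the most confident face

def crop_face(frame_bgr):
    """Detect a face in a camera frame and return the cropped face region picture
    plus MTCNN's five landmarks (eye centres, nose tip, mouth corners)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    boxes, probs, landmarks = detector.detect(rgb, landmarks=True)
    if boxes is None:
        return None, None                      # no face found in this frame
    x1, y1, x2, y2 = boxes[0].astype(int)
    face = rgb[max(y1, 0):y2, max(x1, 0):x2]   # cropped face region picture
    return face, landmarks[0]
```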

Embodiment 2

[0051] This embodiment provides a lightweight 2D video-based facial expression driving system, including a camera, a data preprocessing module, a feature extraction module and an expression parameter acquisition module, wherein:

[0052] The camera is used to obtain picture or video data; an ordinary RGB camera is used.

[0053] The data preprocessing module is used to preprocess the data obtained by the camera to obtain a cropped face region picture;

[0054] The feature extraction module obtains face features and facial key point information from the cropped face region picture;

[0055] The expression parameter acquisition module obtains expression parameters from the face features and facial key point information, and drives the facial animation with the expression parameters (a sketch of how these modules can be wired together is given below).
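One way to read Embodiment 2 is as four callables wired around an ordinary RGB camera. The class below is a hedged sketch of that wiring; the class name, method names and the OpenCV capture loop are assumptions for illustration, not details taken from the patent.

```python
import cv2

class ExpressionDrivingSystem:
    """Hypothetical wiring of the four modules described in Embodiment 2."""

    def __init__(self, preprocessor, feature_extractor, param_model, animator):
        self.preprocessor = preprocessor            # data preprocessing module
        self.feature_extractor = feature_extractor  # feature extraction module
        self.param_model = param_model              # expression parameter acquisition module
        self.animator = animator                    # consumer that drives the facial animation

    def run(self, camera_index=0):
        cap = cv2.VideoCapture(camera_index)        # ordinary RGB camera
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            face = self.preprocessor(frame)         # cropped face region picture
            if face is None:
                continue                            # skip frames with no detected face
            features, keypoints = self.feature_extractor(face)
            params = self.param_model(features, keypoints)
            self.animator(params)                   # drive the facial animation
        cap.release()
```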

[0056] The present invention extracts facial features and key point information from pictures, learns expression-related parameters through a deep neural network, a...
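The excerpt states that expression-related parameters are learned with a deep neural network but does not give the architecture. Below is a minimal PyTorch sketch assuming the face features and key points are concatenated and regressed to blendshape-style weights; the layer sizes and the 52-dimensional output are assumptions, not the patent's design.

```python
import torch
import torch.nn as nn

class ExpressionParamNet(nn.Module):
    """Minimal sketch: regress expression parameters from face features and key points."""

    def __init__(self, feature_dim=128, num_keypoints=68, num_params=52):
        super().__init__()
        in_dim = feature_dim + num_keypoints * 2     # features concatenated with (x, y) key points
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_params),
            nn.Sigmoid(),                            # keep blendshape-style weights in [0, 1]
        )

    def forward(self, features, keypoints):
        x = torch.cat([features, keypoints.flatten(1)], dim=1)
        return self.mlp(x)

# Usage with dummy tensors: one face, 128-d features, 68 key points
net = ExpressionParamNet()
params = net(torch.randn(1, 128), torch.randn(1, 68, 2))   # -> shape (1, 52)
```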



Abstract

The invention discloses a lightweight 2D video-based facial expression driving method and system. The method comprises the following steps: S1, acquiring data through a camera; S2, data preprocessing: preprocessing the data acquired by the camera to obtain a cropped face region picture; S3, feature extraction: obtaining face features and face key point information from the face region picture cropped in S2; and S4, expression parameter acquisition: obtaining expression parameters from the face features and face key point information, and driving the facial animation with the expression parameters. The method has a small calculation amount and low resource occupation, the generated expressions are natural, data acquisition is simple and convenient, and practicability is strong.

Description

Technical Field

[0001] The invention belongs to the technical field of human facial expression driving, and more specifically relates to a lightweight 2D video-based human facial expression driving method and system.

Background Technique

[0002] With the development of computer technology, computer vision-related applications have been integrated into people's daily life, and facial expression driving is widely used in game production, film and television production, human-computer interaction and other fields. In recent years, with the development of the film and television, game, short video and live broadcast fields, facial expression driving technology has become a hot research area. Among them, 2D video-based face reconstruction technology is an important part of driving facial expressions, and it is also a challenging topic in the field of computer vision.

[0003] At present, facial expression driving mainly includes key parameterization methods, muscle mod...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V40/16; G06T7/11; G06N3/08; G06K9/62; G06V10/774
CPC: G06T7/11; G06N3/08; G06T2207/20132; G06T2207/20081; G06T2207/20084; G06T2207/10016; G06F18/214
Inventor 周璇
Owner 北京中科深智科技有限公司