
Video stream based facial expression hallucination method

A facial expression synthesis technology for video streams, applied in image data processing, instruments, and computing. It addresses the problems that existing methods require extensive manual interaction and are difficult to apply in practice, and achieves high reliability and strong expressiveness.

Inactive Publication Date: 2007-02-28
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

The facial expression synthesis techniques listed above are all image-based, and some of them require extensive manual interaction, making them difficult to apply in practice.



Examples


Embodiment 1

[0044] Surprised expression sequence hallucination example:

[0045] 1: The input image is 1920×1080 pixels. Manually mark the positions of the two pupils on the image; their horizontal distance is 190 pixels. Extending 105 pixels to the left and right of the pupils and 100 pixels above and below them yields an eye sub-region of 400×200 pixels. Manually mark the positions of the two mouth corners; their horizontal distance is 140 pixels. Extending 80 pixels to the left and right of the mouth corners, 150 pixels upward, and 50 pixels downward yields a mouth sub-region of 300×200 pixels. The eye sub-region and the mouth sub-region together form the facial expression sub-region of interest of the input image.
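The sub-region construction in step 1 can be sketched as a small bounding-box computation. This is an illustrative sketch, not the patent's code: the landmark coordinates, the (x right, y down) convention, and the function name are assumptions; only the padding offsets and resulting 400×200 / 300×200 sizes come from Embodiment 1.

```python
# Sketch of step 1: derive the expression sub-regions of interest from two
# manually marked landmarks, using the offsets given in Embodiment 1.

def subregion(p_left, p_right, pad_x, pad_up, pad_down):
    """Bounding box spanning two (x, y) landmarks, padded outward.

    pad_x extends left of the left landmark and right of the right one;
    pad_up / pad_down extend above and below the landmark line.
    Returns (x0, y0, x1, y1).
    """
    y = (p_left[1] + p_right[1]) // 2  # landmarks assumed roughly level
    return (p_left[0] - pad_x, y - pad_up, p_right[0] + pad_x, y + pad_down)

# Hypothetical landmark positions in a 1920x1080 frame, chosen so that the
# pupils are 190 px apart and the mouth corners 140 px apart, as in [0045].
left_pupil, right_pupil = (860, 400), (1050, 400)
left_corner, right_corner = (890, 700), (1030, 700)

eye_box = subregion(left_pupil, right_pupil, 105, 100, 100)    # 400x200
mouth_box = subregion(left_corner, right_corner, 80, 150, 50)  # 300x200

assert eye_box[2] - eye_box[0] == 400 and eye_box[3] - eye_box[1] == 200
assert mouth_box[2] - mouth_box[0] == 300 and mouth_box[3] - mouth_box[1] == 200
```

The same computation with the slightly different landmark distances of Embodiments 2 and 3 (188/144 px and 186/138 px, with matching pads) produces the same 400×200 and 300×200 sub-region sizes.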

[0046] 2: Take the sub-region around the eyes and the sub-region around the mouth as I r...

Embodiment 2

[0055] Happy expression sequence hallucination example:

[0056] 1: The input image is 1920×1080 pixels. Manually mark the positions of the two pupils on the image; their horizontal distance is 188 pixels. Extending 106 pixels to the left and right of the pupils and 100 pixels above and below them yields an eye sub-region of 400×200 pixels. Manually mark the positions of the two mouth corners; their horizontal distance is 144 pixels. Extending 78 pixels to the left and right of the mouth corners, 150 pixels upward, and 50 pixels downward yields a mouth sub-region of 300×200 pixels. The eye sub-region and the mouth sub-region together form the facial expression sub-region of interest of the input image.

[0057] 2: Take the eye sub-region and the mouth sub-region respectively as I_in, select the first f...

Embodiment 3

[0066] Angry expression sequence hallucination example:

[0067] 1: The input image is 1920×1080 pixels. Manually mark the positions of the two pupils on the image; their horizontal distance is 186 pixels. Extending 107 pixels to the left and right of the pupils and 100 pixels above and below them yields an eye sub-region of 400×200 pixels. Manually mark the positions of the two mouth corners; their horizontal distance is 138 pixels. Extending 81 pixels to the left and right of the mouth corners, 150 pixels upward, and 50 pixels downward yields a mouth sub-region of 300×200 pixels. The eye sub-region and the mouth sub-region together form the facial expression sub-region of interest of the input image.

[0068] 2: Take the sub-region around the eyes and the sub-region around the mouth as I respe...



Abstract

The invention relates to a video-stream-based facial expression hallucination technique that generates several dynamic expression sequences from a single input neutral face image. The algorithm comprises: (1) manually selecting the facial sub-regions of interest from the input face image; (2) computing the k nearest neighbors in the sample space and the corresponding m-dimensional feature coordinates; (3) training radial basis functions with these coordinates and features; (4) feeding the input image's coordinates into the radial basis functions to obtain the corresponding feature representations, composing the dynamic sequence of the facial sub-regions of interest frame by frame; (5) blending the synthesized dynamic sequences into the input neutral face image to obtain the final expression effect. The invention can quickly generate several dynamic expression sequences from a single image and has wide applications.
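Steps (2)-(4) of the abstract can be sketched with a small NumPy example: embed the input among its k nearest training samples, fit a radial basis function mapping from the m-dimensional coordinates to per-frame appearance features, and evaluate it at the input's coordinates. This is a minimal sketch under stated assumptions: the Gaussian kernel, the least-squares solve, all array names, and the random placeholder data are choices made here, not details fixed by the patent.

```python
# Sketch of the k-NN + RBF regression pipeline from the abstract.
import numpy as np

def fit_rbf(centers, targets, sigma=1.0):
    """Solve for weights W such that phi(centers) @ W ~= targets,
    where phi is a Gaussian radial basis function matrix."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return np.linalg.lstsq(phi, targets, rcond=None)[0]

def eval_rbf(weights, centers, x, sigma=1.0):
    """Evaluate the trained RBF mapping at a single coordinate x."""
    d = np.linalg.norm(centers - x[None, :], axis=-1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2)) @ weights

rng = np.random.default_rng(0)
coords = rng.normal(size=(20, 5))   # m=5 coordinates of 20 training samples
frames = rng.normal(size=(20, 8))   # per-sample feature of one output frame

# (2) find the k nearest neighbours of the input in the sample space
x = rng.normal(size=5)              # coordinates of the input sub-region
k = 6
nn = np.argsort(np.linalg.norm(coords - x, axis=-1))[:k]

# (3) train the RBF on the k neighbours; (4) evaluate at the input coords.
# Repeating this per output frame composes the dynamic sequence frame by frame.
W = fit_rbf(coords[nn], frames[nn])
frame_feature = eval_rbf(W, coords[nn], x)
assert frame_feature.shape == (8,)
```

In a full system, `frames` would hold one target per frame of the training expression sequences, and step (5) would blend the reconstructed sub-region sequence back into the neutral input image.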

Description

technical field

[0001] The invention relates to a method for hallucinating human facial expressions from video streams in the field of digital image processing.

Background technique

[0002] Facial expression hallucination belongs to the family of expression synthesis techniques. Current expression synthesis methods fall into three main categories: analogy-based, retargeting-based, and learning-based expression synthesis. Representative of the analogy-based methods is the shape-driven facial expression synthesis system developed by Qingshan Zhang et al., presented at the Eurographics / SIGGRAPH Symposium on Computer Animation, San Diego, CA (2003) 177-186; the system generates realistic facial expressions by comparing the shape features of face images and blending appropriate image sub-regions. The work of Zicheng Liu et al. can be regarded as an analogy-based expression...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/00, G06T5/00
Inventors: 庄越挺 (Zhuang Yueting), 张剑 (Zhang Jian), 肖俊 (Xiao Jun)
Owner ZHEJIANG UNIV