
Human face mapping method and device

An image-processing technology relating to face images and textures, which solves the prior-art problem that large-scale editing of 3D character avatar models is impossible, thereby improving user satisfaction and extending application functionality.

Active Publication Date: 2017-04-19
YULONG COMPUTER TELECOMM SCI (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

[0003] Embodiments of the present invention provide a face mapping method and device to solve the prior-art problem that large-scale editing of three-dimensional character avatar models in applications on mobile terminals is impossible.



Examples


Embodiment 1

[0035] In order to enable the user to edit the 3D character avatar model in a terminal application to the greatest extent, this embodiment provides a face mapping method, as shown in Figure 3, including:

[0036] S301: Acquire a 3D avatar model to be textured.

[0037] It should be noted that obtaining the 3D avatar model to be textured may include: receiving a model selection instruction, and selecting the corresponding 3D avatar model from a preset model library according to that instruction, where the model library contains a plurality of different types of 3D avatar models. In general, 3D avatar models differ across geographical regions, age groups, and genders, so the different types of 3D avatar models in the model library can be divided according to geographical region, age group, and gender. For example, the model library can include 3D avatar models for China, India, and the United States, as well as 3D avatar models of ...
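The selection step described above can be sketched as a lookup into a preset library keyed by region, age group, and gender. This is a minimal illustration under assumed names; the keys, file paths, and error handling are not specified by the patent.

```python
# Hypothetical sketch of selecting a 3D avatar model from a preset
# model library divided by geographical region, age group, and gender,
# as [0037] describes. All entries and names here are illustrative.
MODEL_LIBRARY = {
    ("China", "adult", "female"): "models/cn_adult_f.obj",
    ("China", "adult", "male"):   "models/cn_adult_m.obj",
    ("India", "adult", "male"):   "models/in_adult_m.obj",
    ("USA",   "child", "female"): "models/us_child_f.obj",
}

def select_model(region, age_group, gender):
    """Return the model path matching a model selection instruction."""
    key = (region, age_group, gender)
    if key not in MODEL_LIBRARY:
        raise KeyError(f"no 3D avatar model for {key}")
    return MODEL_LIBRARY[key]
```

A real implementation would load mesh data rather than return a path, but the dispatch-by-attributes structure is the point here.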

Embodiment 2

[0056] In order to better understand the present invention, this embodiment provides a more specific face mapping method, as shown in Figure 7, including:

[0057] S701: Select the 3D avatar model to be textured from a preset model library according to a model selection instruction, and unfold the UV topology grid of that 3D avatar model.

[0058] The model selection instruction may be issued by the user through the terminal, and the model library includes a plurality of different types of 3D avatar models. Generally, 3D avatar models differ across geographical regions, age groups, and genders, so the different types of 3D avatar models in the model library can be divided according to geographical region, age group, and gender.

[0059] S702: Obtain a two-dimensional face image as a texture material through an image acquisition module of the mobile terminal.

[0060] The image acquisition module in S702 of this embodiment can collect ...
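The mapping between the 2D face image and the unfolded UV grid is not spelled out in the excerpt. One plausible sketch, assuming facial landmarks are detected in pixel coordinates by some external module, is to normalise those pixel positions into the [0, 1] UV space of the texture; the function and convention below are assumptions, not the patent's method.

```python
# Minimal sketch, assuming the mapping relation between the 2D face
# image and the UV topology grid is built by normalising detected
# facial-landmark pixel positions into [0, 1] UV texture coordinates.
# Landmark detection itself is assumed to happen elsewhere.
def landmarks_to_uv(landmarks, image_w, image_h):
    """Convert (x, y) pixel landmarks to (u, v) texture coordinates.

    Convention assumed here: u grows rightwards, v grows upwards,
    both in [0, 1]; image y runs downwards, hence the flip.
    """
    uv = []
    for x, y in landmarks:
        u = x / image_w
        v = 1.0 - y / image_h  # flip: image y is top-down, v is bottom-up
        uv.append((u, v))
    return uv
```

With such per-landmark UV coordinates, the face image can be warped so that each landmark lands on the corresponding grid vertex of the unfolded model.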

Embodiment 3

[0072] In order to optimize the 3D character avatar model in mobile terminal applications so that the user can edit it to the greatest extent, this embodiment provides a face mapping device, as shown in Figure 9. It is applied to a mobile terminal and includes: a model selection module 91, a grid acquisition module 92, a texture material acquisition module 93, a processing module 94, and an execution module 95. The model selection module 91 is used to obtain the three-dimensional avatar model to be textured; the grid acquisition module 92 is used to obtain the UV topology grid into which the three-dimensional avatar model is unfolded; the texture material acquisition module 93 is used to obtain the two-dimensional human face image; the processing module 94 is used to obtain the mapping relationship between the two-dimensional human face image and the UV topology grid; and the execution module 95 is used to paste the two-dimensional human face image onto the three-dimensional avatar model according to ...
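The five-module structure of Embodiment 3 can be sketched as a single class whose methods correspond to modules 91 through 95. The method bodies below are placeholder assumptions; only the module responsibilities come from the text.

```python
# Illustrative skeleton of the device in Embodiment 3. Method names
# follow the five modules described in [0072]; all bodies are
# simplified stand-ins for the real mesh/image operations.
class FaceMappingDevice:
    def __init__(self, model_library):
        self.model_library = model_library

    def select_model(self, instruction):          # model selection module 91
        return self.model_library[instruction]

    def unfold_uv_grid(self, model):              # grid acquisition module 92
        return {"model": model, "uv": []}

    def acquire_face_image(self, camera):         # texture material module 93
        return camera()

    def build_mapping(self, image, uv_grid):      # processing module 94
        return {"image": image, "grid": uv_grid}

    def apply_texture(self, model, mapping):      # execution module 95
        return (model, mapping)
```

Chaining the five calls in order reproduces the method of Embodiments 1 and 2: select, unfold, acquire, map, paste.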



Abstract

Embodiments of the invention provide a human face mapping method and device. The method comprises: obtaining the mapping relation between the UV topological grid unfolded from a to-be-mapped three-dimensional head portrait model and a two-dimensional human face image, where the two-dimensional human face image is obtained through a mobile terminal; and mapping the two-dimensional human face image onto the three-dimensional head portrait model according to the mapping relation. This enables the user to edit the three-dimensional head portrait model within a mobile terminal application to the greatest extent and diversifies the functions of such models. The user can customize the three-dimensional head portrait model, thereby improving the user experience.

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a face mapping method and device.

Background technique

[0002] With the continuous advancement of technology, 3D character models have been widely used in various 3D applications and 3D game applications. For example, the increasingly popular virtual reality and augmented reality are inseparable from 3D scene and 3D character modeling technology. However, the 3D character avatar models in the 3D applications and 3D game applications of a mobile terminal are all preset in the mobile terminal application client, or are imported into it over the network, and these 3D models are usually all created on a computer. Taking the creation of a 3D character portrait model on a computer as an example, a model is first created based on ordinary hand-painted drawings or photographs. Sculpture in gener...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/00
CPC: G06T3/08
Inventors: 朱洪达, 刘亚辉, 裴鸿刚
Owner: YULONG COMPUTER TELECOMM SCI (SHENZHEN) CO LTD