
Method and device for determining lip actions of virtual image

A lip-action technology, applied in the computer field, that addresses the problems of extensive manual fine-tuning and the inability to interact with users in real time, achieving real-time performance.

Active Publication Date: 2020-03-10
BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD


Problems solved by technology

[0003] However, manually adjusting the avatar's mouth movements to match the audio, whether by hand-drawn animation or motion capture equipment, requires extensive manual fine-tuning and cannot achieve real-time interaction with the user.



Detailed Description of the Embodiments

[0031] The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.

[0032] It should be noted that, provided there is no conflict, the embodiments of the present application and the features within them may be combined with one another. The present application will be described in detail below with reference to the accompanying drawings and embodiments.

[0033] As shown in Figure 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and servers 105, 106. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the serve...



Abstract

The embodiments of the invention disclose a method and device for determining the lip actions of a virtual image. The method comprises the following steps: acquiring a target audio; cutting the target audio into a sequence of target audio clips; inputting each target audio clip in the sequence into a lip action coefficient model to obtain a time-ordered sequence of lip action coefficients; and, based on the lip action coefficient sequence, driving the target virtual character to make the lip actions corresponding to the target audio clips. Because the method generates the lip action coefficients directly from the audio and then generates the virtual image's lip actions from those coefficients, it can meet the real-time requirements of user interaction. Moreover, since a lip action coefficient is not an image, it is not tied to a specific virtual image and can therefore satisfy the requirements of different application scenarios.
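The pipeline in the abstract (cut audio into clips, map each clip to lip action coefficients, drive the avatar frame by frame) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `model` and `renderer` callables, the clip length, and the coefficient format are all hypothetical stand-ins.

```python
import numpy as np


def split_audio(samples: np.ndarray, sample_rate: int, clip_ms: int = 40) -> list:
    """Cut the target audio into a sequence of fixed-length clips (step 2)."""
    clip_len = sample_rate * clip_ms // 1000
    return [samples[i:i + clip_len] for i in range(0, len(samples), clip_len)]


def lip_coefficients(clips: list, model) -> list:
    """Feed each clip into a lip action coefficient model, preserving
    time order, to obtain the coefficient sequence (step 3)."""
    return [model(clip) for clip in clips]


def drive_avatar(coeff_seq: list, renderer) -> None:
    """Drive the target avatar with one lip pose per coefficient frame (step 4)."""
    for coeffs in coeff_seq:
        renderer(coeffs)
```

Because each clip is processed independently as it arrives, the coefficients can be computed in a streaming fashion, which is what makes the real-time interaction claimed in the abstract plausible.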

Description

Technical field

[0001] The present application relates to the field of computer technology, in particular to computer network technology, and more specifically to a method and device for determining the lip movements of a virtual image.

Background technique

[0002] Most avatars in the current industry rely on manual adjustment by animators or on complex facial motion capture equipment to establish the correspondence between the avatar's mouth movements and the audio.

[0003] However, manually adjusting the avatar's mouth movements to match the audio, whether by hand-drawn animation or motion capture equipment, requires extensive manual fine-tuning and cannot achieve real-time interaction with the user.

Contents of the invention

[0004] The embodiments of the present application provide a method and an apparatus for determining the lip movements of an avatar.

[0005] In the first aspect, an embodiment of the present application provides a method for determining...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/262, H04N21/81
CPC: H04N5/262, H04N21/81
Inventor: 袁瀚
Owner: BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD