
Method for generating virtual character video based on neural network and related equipment

A virtual-character and neural-network technology, applied in neural learning methods, biological neural network models, speech synthesis, and related fields. It addresses the problem that a virtual character's voice and mouth movements cannot otherwise be kept completely consistent, and achieves the effect of fully consistent mouth movements.

Pending Publication Date: 2020-03-06
PING AN TECH (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

[0004] Based on this, and aiming at the problem that the voice of a virtual character cannot be kept completely consistent with its mouth movements when the character is generated, a neural-network-based method for generating virtual character videos, and related equipment, are provided.

Embodiment Construction

[0052] To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application, not to limit it.

[0053] Those skilled in the art will understand that, unless otherwise stated, the singular forms "a", "an", "said", and "the" used herein may also include the plural forms. It should be further understood that the word "comprising", as used in the specification of the present application, specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0054] Figure 1 is an overall flow chart of a...


Abstract

The invention relates to the technical field of artificial intelligence, and in particular to a method for generating a virtual character video based on a neural network, and related equipment. The method comprises the steps of: obtaining a text to be recognized and importing it into a text-to-speech conversion model to obtain audio; extracting rhythm parameters of the audio and extracting audio feature points; generating a mouth movement track for the virtual character; obtaining a two-dimensional picture of the virtual character and processing it to generate a three-dimensional facial map of the virtual character; importing the mouth motion track into the three-dimensional facial map to generate dynamic face pictures; and acquiring the real-time audio corresponding to each frame of dynamic face picture, then synchronously performing audio-video synthesis coding on the dynamic face pictures and the real-time audio to obtain the virtual character video. With this method, inputting the text alone is sufficient to obtain the desired video display effect, so that the virtual character's voice and mouth movements remain completely consistent.
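The abstract thus describes a six-stage pipeline: text-to-speech conversion, prosody/feature extraction, mouth-trajectory generation, two-dimensional-to-three-dimensional face lifting, face animation, and synchronized audio-video encoding. The Python sketch below is a minimal illustration of that data flow only, under assumed constants (25 fps video, 16 kHz audio); every function is a hypothetical stand-in, since the patent does not disclose the concrete models here, and the placeholder bodies exist only to keep the example runnable.

```python
# Hypothetical sketch of the claimed pipeline; all stages are stand-ins.
import numpy as np

FPS = 25             # assumed video frame rate
SAMPLE_RATE = 16000  # assumed audio sample rate


def text_to_speech(text: str) -> np.ndarray:
    """Step 1 stand-in: convert text to a mono waveform (silent placeholder)."""
    duration_s = max(1, len(text)) * 0.06  # rough speaking-rate assumption
    return np.zeros(int(duration_s * SAMPLE_RATE), dtype=np.float32)


def extract_prosody(audio: np.ndarray) -> np.ndarray:
    """Step 2 stand-in: one rhythm/energy feature per video frame."""
    spf = SAMPLE_RATE // FPS                       # audio samples per frame
    n_frames = len(audio) // spf
    frames = audio[: n_frames * spf].reshape(n_frames, spf)
    return frames.std(axis=1)                      # frame energy as a toy feature


def mouth_trajectory(features: np.ndarray) -> np.ndarray:
    """Step 3 stand-in: map per-frame features to mouth openness in [0, 1]."""
    peak = float(features.max()) or 1.0            # avoid division by zero
    return features / peak


def face_2d_to_3d(image_2d: np.ndarray) -> np.ndarray:
    """Step 4 stand-in: lift a 2-D character picture to a 3-D facial map."""
    return np.repeat(image_2d[..., None], 3, axis=-1)


def animate_face(face_3d: np.ndarray, trajectory: np.ndarray) -> list:
    """Step 5 stand-in: drive the 3-D face with the mouth trajectory, per frame."""
    return [face_3d * openness for openness in trajectory]


def mux(frames: list, audio: np.ndarray) -> dict:
    """Step 6 stand-in: synchronous audio/video synthesis coding."""
    return {"frames": frames, "audio": audio, "fps": FPS}


if __name__ == "__main__":
    audio = text_to_speech("Hello, I am a virtual character.")
    traj = mouth_trajectory(extract_prosody(audio))
    frames = animate_face(face_2d_to_3d(np.ones((64, 64))), traj)
    video = mux(frames, audio)
    print(f"{len(video['frames'])} frames at {video['fps']} fps")
```

Note the design property the sketch preserves: the per-frame mouth parameters are derived from the same audio that is later muxed with the video, so lip motion and sound share one clock, which is the synchronization effect the patent claims.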

Description

Technical Field

[0001] The present application relates to the field of artificial intelligence technology, and in particular to a method and related equipment for generating virtual character videos based on neural networks.

Background

[0002] A virtual character is a character that does not exist in reality: a fictitious figure appearing in creative works such as TV dramas, comics, and games. Virtual characters are usually synthesized by methods such as 3D scanning, with the required character generated by setting facial parameters.

[0003] However, when a virtual character is generated this way, its voice cannot be kept completely consistent with its mouth movements, resulting in poor fidelity and making it impossible to pass the virtual character off as real.

Contents of the Invention

[0004] Based on this, aiming at the ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06K9/00; G06K9/46; G06K9/62; G06N3/08; G10L13/04
CPC: G06T17/00; G06N3/084; G10L13/00; G06V40/171; G06V10/462; G06F18/23
Inventors: 王健宗 (Wang Jianzong); 王义文 (Wang Yiwen)
Owner: PING AN TECH (SHENZHEN) CO LTD