Human motion capture and virtual animation generation method based on deep learning

A human motion capture and deep learning technology, applied in the computer field, that addresses the heavy animation-rendering workload, restricted applicability, and unsuitability for large-scale commercial use of traditional motion capture, thereby improving production efficiency and reducing costs.

Inactive Publication Date: 2019-07-19
XIDIAN UNIV


Problems solved by technology

[0003] However, this technology still has shortcomings. On the one hand, professional motion capture systems are expensive and unsuitable for large-scale commercial use; they impose strict requirements on the lighting and reflection conditions of the performance venue, and the device calibration process is cumbersome. This undoubtedly restricts their application in somatosensory interactive games and puts them beyond the reach of ordinary users. On the other hand, although such systems capture motion in real time, manual post-processing (identification, tracking, and reconstruction of marker points) is required before the data can be applied to an animated character model, and the later animation-rendering workload is heavy.



Examples


Embodiment 1

[0040] Embodiment 1: Referring to Figures 1-8, a human motion capture and virtual animation generation method based on deep learning comprises the following steps:

[0041] A. First, the actor performs the movement postures to be captured, which may take the form of dance, martial arts, and the like;

[0042] B. Collect video data of the actor's motion with ordinary optical sensing equipment (a camera or a mobile phone); a minimal capture sketch is given after step C below;

[0043] C. Pre-train the pose detection network. The pose detection algorithm proceeds as follows: 1. input the image to be detected into the deep convolutional neural network at several scales and compute the response map for each key point; 2. accumulate each key point's response maps across the scales to obtain that key point's overall response map; 3. on each key point's overall response map, find the maximum point, which determines the key point's position; 4. connect the key points to obtain the...
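For step B, the patent only requires ordinary optical sensing equipment. The following is a minimal capture sketch using OpenCV; the library choice, camera index, codec, and output file name are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of step B: record actor motion with an ordinary camera.
# OpenCV, camera index 0, the mp4v codec, and the file name are assumptions.
import cv2

def record_actor_motion(output_path="actor_motion.mp4", camera_index=0, fps=30.0):
    cap = cv2.VideoCapture(camera_index)                # ordinary camera / webcam
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)                             # append frame to the video stream
        cv2.imshow("recording (press q to stop)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    writer.release()
    cv2.destroyAllWindows()
```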
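For step C, the patent describes the inference procedure but publishes no code. Below is a minimal PyTorch sketch of steps 1-3 (per-scale response maps, accumulation across scales, and per-keypoint maxima); `pose_net` is a hypothetical deep convolutional network mapping an image to K keypoint response maps, and the scale set is an assumption.

```python
# Minimal sketch of pose-detection inference (steps 1-3 above).
# `pose_net` is assumed to map a (1, 3, h, w) image to (1, K, h', w')
# response maps, one channel per key point.
import torch
import torch.nn.functional as F

def detect_keypoints(pose_net, image, scales=(0.5, 1.0, 1.5)):
    """image: (3, H, W) float tensor; returns (K, 2) integer (x, y) positions."""
    _, H, W = image.shape
    total = None
    for s in scales:
        scaled = F.interpolate(image[None], scale_factor=s,
                               mode="bilinear", align_corners=False)
        maps = pose_net(scaled)                          # step 1: per-scale response maps
        maps = F.interpolate(maps, size=(H, W),
                             mode="bilinear", align_corners=False)
        total = maps if total is None else total + maps  # step 2: accumulate over scales
    K = total.shape[1]
    flat = total.view(K, -1)                             # step 3: maximum of each overall map
    idx = flat.argmax(dim=1)
    ys = torch.div(idx, W, rounding_mode="floor")
    xs = idx % W
    return torch.stack([xs, ys], dim=1)
```

Connecting the detected key points (step 4) additionally requires a fixed joint topology, e.g. a list of (parent, child) keypoint index pairs for the chosen body model.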

Embodiment 2

[0047] Embodiment 2. Building on Embodiment 1, the pose-conditional generative adversarial network consists of three modules: the pose detection network P from step B, the generation network G, and the discrimination network D. The pose detection network P has the same structure and function as in step B; it mainly extracts the pose of the virtual character in its various action postures to obtain pose graphics. The generation network G is a deep convolutional network whose main function is to automatically create and render the virtual image in a given pose. We use an encoder-decoder architecture with skip connections, i.e., the input of each deconvolution layer is the output of the previous layer added to the output of that layer's mirrored convolution layer, ensuring that the encoding network's information can be recalled during decoding so that the generated image retains the details of...
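The excerpt confirms only that G is an encoder-decoder with additive skip connections between mirrored layers. The sketch below is one minimal PyTorch realization of that idea; the depth, channel widths, and kernel sizes are assumptions, since the patent text does not fix them.

```python
# One possible realization of G: encoder-decoder with additive skips.
# Only the additive-skip structure comes from the patent text; the layer
# counts and channel widths here are assumptions.
import torch.nn as nn

class SkipGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        # Encoder: each conv halves the spatial resolution.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2))
        # Decoder: each deconv doubles the spatial resolution.
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, pose_map):          # pose_map: (N, in_ch, H, W), H and W divisible by 8
        e1 = self.enc1(pose_map)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3) + e2           # skip: add the mirrored encoder output
        d2 = self.dec2(d3) + e1
        return self.dec1(d2)              # rendered character image in [-1, 1]
```

The additive skips let the decoder reuse the encoder's spatial detail, which is what the text means by the encoding network's information being "recalled" during decoding.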



Abstract

The invention discloses a human motion capture and virtual animation generation method based on deep learning. The method comprises the following steps: A, collecting the actor's motions and converting them into a video stream signal for input; B, pre-training a pose detection network; C, extracting the human body pose sequence; D, pre-training the pose-conditional generative adversarial network; and E, inputting the pose sequence into the pose-conditional generative adversarial network and outputting an animation video synchronized with the human motion. The invention effectively reduces animation production costs, improves operability for ordinary users, and also improves animation output efficiency. The method can serve as a media creation tool for special-effect demonstration, real-time Demo generation, and rapid production of animations and films, and can also serve as an interactive filter in short-video applications, an aid to virtual reality somatosensory games, and the like, thereby facilitating large-scale commercialization and popularization of motion capture technology.
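Read end to end, steps A-E amount to a per-frame loop: detect the pose, rasterize it into a pose map, and feed it to the generator. The sketch below ties together the earlier sketches; `render_pose_map` is a hypothetical one-pixel rasterizer, and it assumes the generator was constructed with `in_ch` equal to the number of key points K.

```python
# Hypothetical end-to-end loop over captured frames (steps A-E); relies on
# detect_keypoints and SkipGenerator from the sketches above.
import torch

def render_pose_map(keypoints, size):
    """Illustrative rasterizer: mark each (x, y) key point in its own channel."""
    H, W = size
    pose_map = torch.zeros(keypoints.shape[0], H, W)
    for k, (x, y) in enumerate(keypoints):
        pose_map[k, int(y.clamp(0, H - 1)), int(x.clamp(0, W - 1))] = 1.0
    return pose_map

def motion_to_animation(frames, pose_net, generator):
    """frames: iterable of (3, H, W) tensors; returns generated animation frames."""
    animation = []
    for frame in frames:                                  # steps A/B: captured video
        keypoints = detect_keypoints(pose_net, frame)     # step C: pose extraction
        pose_map = render_pose_map(keypoints, frame.shape[1:])
        with torch.no_grad():
            animation.append(generator(pose_map[None]))   # step E: synthesize frame
    return animation
```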

Description

Technical field

[0001] The invention relates to the field of computer technology, in particular to a deep learning-based human motion capture and virtual animation generation method.

Background technique

[0002] Most current digital film and animation production processes use motion capture (Motion capture) technology. The traditional method records and captures real motion information through sensors or markers worn on the actor's body, and then restores and renders these motions onto the corresponding virtual image body to obtain the corresponding virtual animation effect.

[0003] However, this technology also has corresponding disadvantages: on the one hand, professional motion capture systems are expensive and not suitable for large-scale commercial use; there are strict requirements on the lighting and reflection of the performance venue, and the device calibration process is relatively cumbersome. This undoubtedly restricts its application in somatosensory interactive games...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/40, G06T13/80, G06K9/00
CPC: G06T13/40, G06T13/80, G06V40/20, G06V20/40
Inventors: 林杰, 崔健, 石光明, 刘丹华, 齐飞, 赵光辉, 金星
Owner: XIDIAN UNIV