
3D character action generation systems and methods that mimic character actions in given video

A technology for generating 3D character actions that imitate the actions in a given video, applied in the computer field; it addresses the problem that little directly related research work exists, and achieves the effects of optimized network parameters and improved temporal continuity.

Active Publication Date: 2021-02-02
FUDAN UNIV

AI Technical Summary

Problems solved by technology

[0003] Despite the important research significance of this task, there is still little directly related research work.

Method used




Embodiment Construction

[0041] Step 1. Video frame image preprocessing. For the source video, image frames are first collected by sampling one frame out of every 25 frames. For videos used in the training phase, T consecutive frames are randomly selected from the collected frames each time as the input data of the network; for videos used in the test phase, the middle T frames of the collected sequence are taken as the input data, where T = 16. The resulting video clip is denoted {I_1, I_2, ..., I_T}, where I_t denotes the t-th image frame.
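A minimal Python sketch of the sampling rule in [0041] follows; the helper names decimate and sample_clip and the list-of-frames representation are illustrative assumptions, not part of the patent.

```python
import random

def decimate(all_frames, step=25):
    # keep one frame out of every 25 source frames, per [0041]
    return all_frames[::step]

def sample_clip(frames, T=16, training=True):
    # frames: the decimated frame list; returns a clip of T consecutive frames
    assert len(frames) >= T, "video too short for a T-frame clip"
    if training:
        # training phase: a random run of T consecutive frames
        start = random.randint(0, len(frames) - T)
    else:
        # test phase: the middle T frames of the collected sequence
        start = (len(frames) - T) // 2
    return frames[start:start + T]
```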

[0042] Step 2. Obtain the location area of the actor in the image. Each selected video frame image I_t is input into the existing OpenPose network (reference [9]) to predict the corresponding human skeleton joint points; the position area of the actor is then determined from the coordinates of these joint points, to help the initial human body reconstruction...
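The patent states only that the actor's region is determined from the predicted joint coordinates; the sketch below shows one plausible way to do this, taking a padded bounding box over confident keypoints. The confidence threshold and 10% margin are assumed values, not taken from the patent.

```python
import numpy as np

def actor_bbox(keypoints, img_h, img_w, margin=0.1, conf_thresh=0.1):
    # keypoints: (N, 3) array of (x, y, confidence) from an OpenPose-style
    # detector; low-confidence joints are ignored
    valid = keypoints[keypoints[:, 2] > conf_thresh][:, :2]
    x0, y0 = valid.min(axis=0)
    x1, y1 = valid.max(axis=0)
    # pad the tight box by a relative margin so the whole body fits
    pad_x, pad_y = margin * (x1 - x0), margin * (y1 - y0)
    x0 = max(0, int(x0 - pad_x))
    y0 = max(0, int(y0 - pad_y))
    x1 = min(img_w, int(x1 + pad_x))
    y1 = min(img_h, int(y1 + pad_y))
    return x0, y0, x1, y1
```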



Abstract

The invention belongs to the technical field of computers, and particularly relates to a 3D character action generation system and method for imitating the character actions in a given video. The system comprises an initial human body reconstruction module, a regular-data mesh-cuboid construction module, a mesh2mesh smoothing module, and a human body posture migration module. Given a video containing a human action source, the initial human body reconstruction module recovers the source mesh sequence of the action performer; the regular-data mesh-cuboid construction module packs the initial mesh sequence into a regular mesh-cuboid; the mesh2mesh smoothing module then further smooths the initial mesh sequence through 3D convolution, making the motion of the mesh sequence more coherent; finally, the human body posture migration module transfers the posture from the source mesh to the target mesh frame by frame, so that the action sequence contained in the source video is migrated to the target 3D character. The invention can generate a mesh sequence consistent with the motion of the source video and improves the temporal coherence of the mesh sequence.
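As an illustration of the mesh2mesh smoothing step described above, here is a minimal PyTorch sketch. It assumes the per-frame meshes have already been packed into a regular tensor (the mesh-cuboid) of shape (batch, channels, T, height, width); the channel count, layer widths, and residual design are assumptions made for the sketch, not the network actually claimed in the patent.

```python
import torch
import torch.nn as nn

class Mesh2MeshSmoother(nn.Module):
    """Illustrative 3D-convolutional smoothing over a mesh-cuboid tensor."""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, mesh_cuboid):
        # residual refinement keeps the output close to the input meshes,
        # while the temporal extent of the 3D kernels couples neighbouring
        # frames and so encourages smoother frame-to-frame transitions
        return mesh_cuboid + self.net(mesh_cuboid)

# usage sketch: a batch of 2 cuboids with 3 channels, T=16 frames, 64x64 grid
smoother = Mesh2MeshSmoother(channels=3)
out = smoother(torch.randn(2, 3, 16, 64, 64))
```

Because the 3D kernels span the temporal dimension, each refined frame is influenced by its neighbours, which is what drives the improved temporal coherence the abstract describes.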

Description

Technical Field

[0001] The invention belongs to the technical field of computers, and in particular relates to a system and method for generating 3D character actions.

Background Technology

[0002] Character action generation has important practical significance for many computer vision tasks, including multimedia interaction technology and visual information understanding. Humans are very good at learning and imitating actions from a few examples, and this imitation itself plays a key role in human intelligence. It is therefore hoped that 3D characters can likewise learn to imitate actions from video samples and generate the same action sequences as the source video.

[0003] Despite the important research significance of this task, there is still little directly related research work. Relatively related work mainly includes character animation manipulation and imitation learning.

[0004] Character animation manipulation ([1], [2], [3], [4]) mainly studies how to make...


Application Information

IPC(8): G06T13/40, G06T17/20, G06K9/00, G06N3/08, G06N3/04
CPC: G06T13/40, G06T17/20, G06N3/08, G06V40/20, G06N3/045
Inventor: 姜育刚, 傅宇倩, 付彦伟
Owner: FUDAN UNIV