
Method and apparatus for synthesizing animations in videos in real time

A video and animation technology, applied in animation production, image data processing, instruments, and related fields. It addresses problems such as the inability to effectively satisfy visual communication needs, the short duration of synthesized animations, and the unpredictability of the camera's shooting angle and position, with the effect of meeting the visual communication requirements of video.

Status: Inactive
Publication Date: 2016-02-17
BEIJING SEVEN DIMENSION VISION TECH CO

AI Technical Summary

Problems solved by technology

In live broadcast video data, the shooting angle and position of the camera that collects the video cannot be predicted. If animation is to be synthesized into the live video during the broadcast, the changes in the video picture cannot be predicted either, so to ensure that the synthesis effect is not obtrusive, at best only short-duration 2D animations or pictures can be synthesized into the video.
[0004] It can be seen that current animation synthesis in video, and in particular real-time animation synthesis in live video, is basically unable to effectively meet existing visual communication needs.

Method used



Examples


Embodiment 1

[0057] Figure 1 is a flowchart of a method for synthesizing animation into video in real time provided by an embodiment of the present invention. The method is applied to video collected in real time, and relevant parameters need to be obtained before the method is carried out.

[0058] The fixed area in which the video is collected contains at least one camera, and the video is collected by the main camera among the at least one camera. The server establishes a 3D coordinate system in the fixed area, and collects in real time the position information of the at least one camera in the 3D coordinate system as well as the video acquisition parameters of the main camera. The server then uses a rendering engine to establish a virtual area according to the fixed area and the 3D coordinate system, and the position information of the fixed area in the 3D coordinate system has a proportional relationship with the position information of the virtual area in the 3D coord...
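The proportional relationship between the fixed area and the virtual area, and the reuse of the main camera's acquisition parameters for a virtual camera, can be pictured with a small sketch. The sketch below is illustrative only: every class, field and function name in it is an assumption, and a single uniform scale factor stands in for whatever proportional relationship an implementation actually chooses.

```python
# Illustrative sketch only: the names below are assumptions, not identifiers
# from the patent. It models the proportional mapping between the fixed (real)
# area and the virtual area, and a virtual camera that mirrors the main
# camera's pose and acquisition parameters.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class CameraParams:
    position: Vec3          # position in the fixed area's 3D coordinate system
    orientation: Vec3       # e.g. Euler angles (pan, tilt, roll)
    focal_length_mm: float  # video acquisition parameter of the main camera
    sensor_width_mm: float

def to_virtual(point: Vec3, scale: float) -> Vec3:
    """Map a point from the fixed area into the virtual area using the
    proportional relationship (a single uniform scale factor is assumed)."""
    return tuple(c * scale for c in point)

def sync_virtual_camera(main_cam: CameraParams, scale: float) -> CameraParams:
    """Build a virtual camera whose pose and parameters track the main camera,
    so renders from the virtual area line up with the live video."""
    return CameraParams(
        position=to_virtual(main_cam.position, scale),
        orientation=main_cam.orientation,          # angles are scale-invariant
        focal_length_mm=main_cam.focal_length_mm,  # reuse acquisition parameters
        sensor_width_mm=main_cam.sensor_width_mm,
    )

# Example: a studio mapped one-to-one into the virtual area (scale = 1.0).
main_camera = CameraParams((4.0, 1.6, -6.0), (0.0, 5.0, 0.0), 35.0, 36.0)
virtual_camera = sync_virtual_camera(main_camera, scale=1.0)
```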

Embodiment 2

[0090] In this embodiment of the present invention, in addition to synthesizing facial animation into the video in real time, skeletal animation can also be synthesized, and the body movements of the skeletal animation further enhance the visual communication effect of the synthesized animation.

[0091] On the basis of the embodiment corresponding to Figure 1, Figure 3 is a flowchart of a method for synthesizing animation into video in real time provided by an embodiment of the present invention.

[0092] S301: The server acquires the area position information of the target area in the 3D coordinate system according to the target area determined for synthesizing the animation.

[0093] S302: The server acquires motion data and motion coordinates collected in real time by a motion capture device, where the motion coordinates have a correspondence with the 3D coordinate system. The server also acquires facial data and facial coordinates collected in...
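This excerpt does not spell out the form of the correspondence between the capture devices' coordinates and the server's 3D coordinate system. As a hedged illustration only, the sketch below assumes a rigid transform (rotation plus translation); the function and variable names are hypothetical.

```python
# Hypothetical sketch: map a point reported in the motion capture device's own
# coordinate frame into the server's 3D coordinate system. A rigid transform
# (3x3 rotation R plus translation t) is assumed here; the patent only states
# that a correspondence exists, not its exact form.
import numpy as np

def capture_to_scene(point_capture: np.ndarray,
                     R: np.ndarray,
                     t: np.ndarray) -> np.ndarray:
    """point_capture: shape (3,), R: shape (3, 3), t: shape (3,)."""
    return R @ point_capture + t

# Example: capture frame rotated 90 degrees about the vertical axis, offset 2 m along x.
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
t = np.array([2.0, 0.0, 0.0])
print(capture_to_scene(np.array([1.0, 1.5, 0.0]), R, t))  # -> [ 2.   1.5 -1. ]
```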

Embodiment 3

[0110] In practical applications, the motion capture device may collect the motion data of the third object at a preset collection frequency. For example, if the preset collection frequency is 50 times per second, the motion capture device collects motion data 50 times within one second. Of course, this value is only illustrative, and the present invention is not limited thereto. In addition, the server may preset a refresh rate, referred to as the preset refresh rate for ease of description, and generates the video images of each frame of the target video at this preset refresh rate. For example, if the refresh rate of the server is 40 times per second, the server generates 40 frames of video images within one second. Again, this value is only illustrative, and the present invention is not limited thereto.

[0111] There is a situation in which the refresh rate of the server differs from the collection frequency...
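The excerpt breaks off before describing how this mismatch is handled. As a hedged illustration only, the sketch below resamples motion capture samples (e.g. collected at 50 Hz) onto the server's frame timestamps (e.g. 40 fps) by linear interpolation between the two nearest samples; the function names and the interpolation choice are assumptions, not the patent's stated method.

```python
# Hedged illustration: resample motion-capture data taken at `capture_hz`
# onto frame timestamps generated at `refresh_hz`, using linear interpolation
# between the two capture samples that bracket each frame time. This is one
# plausible way to bridge the rate mismatch, not necessarily the patented one.
import numpy as np

def resample_motion(samples: np.ndarray, capture_hz: float,
                    refresh_hz: float, duration_s: float) -> np.ndarray:
    """samples: shape (N, D) motion data collected at capture_hz.
    Returns shape (M, D) data aligned to the server's frame times."""
    sample_times = np.arange(samples.shape[0]) / capture_hz
    frame_times = np.arange(int(duration_s * refresh_hz)) / refresh_hz
    # Interpolate each motion channel independently at the frame timestamps.
    return np.stack(
        [np.interp(frame_times, sample_times, samples[:, d])
         for d in range(samples.shape[1])],
        axis=1,
    )

# Example: 50 samples/s of a single channel, rendered at 40 frames/s for 1 s.
motion = np.linspace(0.0, 1.0, 50).reshape(-1, 1)  # 50 samples over one second
frames = resample_motion(motion, capture_hz=50.0, refresh_hz=40.0, duration_s=1.0)
print(frames.shape)  # (40, 1)
```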



Abstract

The embodiment of the invention discloses a method and apparatus for synthesizing animations in videos in real time, applied to videos which are acquired in real time. The method includes the following steps: facial data and facial coordinates acquired in real time by face capture equipment are obtained; a face model is generated in a target region of a virtual area according to region position information, the facial data and the facial coordinates; and a virtual camera is kept synchronized with the main camera, so that synthesized video data are obtained by synthesizing the animation data of the face animation into the video data in real time via the 3D coordinate system, and the synthesized video data are output in real time. As a result, when the synthesized video data are played, the face animation appears at the position corresponding to the target region in the displayed synthesized video. According to the method and apparatus of the invention, through the cooperation of the processor of the rendering engine with the processor of the server and the synchronization of the virtual camera with the main camera, face animations can be synthesized into videos in real time, satisfying existing video visual communication requirements.
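As a reading aid only, the following sketch strings the abstract's steps into one per-frame loop. Every object and method name in it is a hypothetical placeholder standing in for the capture, rendering and output components the abstract refers to; only the compositing helper is fully defined, as simple alpha blending.

```python
# Hypothetical per-frame loop illustrating the abstract's steps. The camera,
# capture, renderer and output objects are placeholders, not the patent's API.
import numpy as np

def composite(frame: np.ndarray, animation_rgba: np.ndarray) -> np.ndarray:
    """Overlay the rendered animation (RGBA) onto the video frame (RGB)."""
    alpha = animation_rgba[..., 3:4] / 255.0
    return (frame * (1.0 - alpha) + animation_rgba[..., :3] * alpha).astype(np.uint8)

def synthesize_stream(main_camera, face_capture, renderer, output,
                      target_region_info, scale):
    """One possible arrangement of the abstract's steps into a live loop."""
    while main_camera.is_live():
        frame = main_camera.read_frame()               # live video frame (RGB)
        face_data, face_coords = face_capture.read()   # real-time facial capture
        # Generate the face model in the target region of the virtual area.
        renderer.update_face_model(target_region_info, face_data, face_coords)
        # Keep the virtual camera synchronized with the main camera.
        renderer.sync_virtual_camera(main_camera.params(), scale)
        animation_rgba = renderer.render()             # rendered face animation (RGBA)
        output.write(composite(frame, animation_rgba)) # synthesized video, in real time
```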

Description

Technical Field
[0001] The invention relates to the field of real-time video synthesis, and in particular to a method and device for synthesizing animation into video in real time.
Background
[0002] Video is a common media format. For example, real-time live video data can be obtained from a live TV program collected by a camera. During a live broadcast, in order to improve the broadcast effect or increase artistic expression, animation effects can be synthesized into the live video through data synthesis in the live video data, which is a new form of visual communication, for example emoticon animations that strengthen a character's performance, voice-over text art, and some background animation effects.
[0003] However, the animation synthesis currently applied in TV or network video playback mainly relies on video post-processing, that is, in the non-live case, after animation synthesis is performed in the recorde...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N21/234; H04N21/218; G06T13/40
CPC: H04N21/234; G06T13/40; H04N21/21805; H04N21/23412
Inventor: 殷元江
Owner: BEIJING SEVEN DIMENSION VISION TECH CO