
A video data synthesis method and a device

A video data synthesis technology, applied in the field of video processing, which can solve the problem of environmental sound affecting the sound of the target object and achieves the effect of improving synthesis quality.

Active Publication Date: 2018-12-18
VIVO MOBILE COMM CO LTD

AI Technical Summary

Problems solved by technology

[0005] The present invention provides a method and device for synthesizing video data, aiming to solve the problem that environmental sound affects the sound of the target object during video data synthesis.

Method used

Figure 1 is a flow chart of the video data synthesis method provided in Embodiment 1 of the present invention; Figure 2 is a flow chart of the video data synthesis method provided in Embodiment 2 of the present invention; Figure 3 is a structural block diagram of the video data synthesis device provided in Embodiment 3 of the present invention.


Examples


Embodiment 1

[0028] Referring to Figure 1, which shows a flow chart of the video data synthesis method provided in Embodiment 1 of the present invention, the method may specifically include the following steps:

[0029] Step 101, acquire an original audio signal and an original image signal.

[0030] In the embodiment of the present invention, the original audio signal and the original image signal are acquired. Specifically, the original audio signal may be acquired through a microphone; it may be acquired through a single microphone or through multiple microphones, and this is not specifically limited in the embodiment of the present invention.

[0031] In the embodiment of the present invention, the original image signal may be acquired through a camera. The above-mentioned original audio signal and original image signal may be acquired at the same time or not at the same time; for example, the original audio...
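As a concrete illustration of step 101, the sketch below records an audio signal from one or more microphones and image frames from a camera. This is a minimal sketch, assuming Python with the sounddevice and OpenCV libraries; the sample rate, channel count and recording duration are illustrative values, and the embodiment itself does not prescribe any particular implementation.

```python
# Minimal sketch of step 101: acquire an original audio signal and an
# original image signal. Library choices (sounddevice, OpenCV) and the
# constants below are assumptions made for this example only.
import cv2                  # camera capture
import numpy as np
import sounddevice as sd    # microphone capture

SAMPLE_RATE = 16000         # Hz, assumed
DURATION_S = 5              # record 5 seconds for the example
NUM_CHANNELS = 2            # one microphone or several are both allowed

def acquire_original_signals():
    """Record audio from the microphone(s) and frames from the camera."""
    # Start a non-blocking audio recording (shape: samples x channels).
    audio = sd.rec(int(DURATION_S * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE,
                   channels=NUM_CHANNELS)

    # Grab camera frames for roughly the same duration.
    cap = cv2.VideoCapture(0)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frames = []
    for _ in range(int(DURATION_S * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    sd.wait()                # block until the audio recording completes
    return np.asarray(audio), frames
```

Consistent with the paragraph above, the two signals need not be captured strictly simultaneously; in this sketch the audio recording simply runs in the background while frames are grabbed.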

Embodiment 2

[0059] Referring to Figure 2, which shows a flow chart of the video data synthesis method provided in Embodiment 2 of the present invention, the method may specifically include the following steps:

[0060] Step 201, acquire an original audio signal and an original image signal.

[0061] In the embodiment of the present invention, for step 201, reference may be made to the specific description of step 101 above; this is not specifically limited in the embodiment of the present invention.

[0062] Step 202: Separate the original audio signal into multiple sub-audio signals according to the frequency and signal strength of the original audio signal.

[0063] In the embodiment of the present invention, the original audio signal is separated into multiple sub-audio signals according to the frequency and signal strength of the original audio signal. Specifically, the number of channels in the acquisition process of the original audio signal can be d...
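One plausible reading of step 202, sketched below, splits the original audio into fixed frequency bands with band-pass filters and keeps the bands whose signal strength (measured here as RMS amplitude) exceeds a threshold. The band edges, filter order and threshold are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch of step 202: separate the original (mono, 1-D) audio
# signal into sub-audio signals by frequency band, then keep the bands with
# non-negligible signal strength. Band edges and threshold are assumed.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16000  # Hz, assumed

# Example frequency bands in Hz; purely illustrative.
BANDS = [(80, 300), (300, 1000), (1000, 3400), (3400, 7000)]

def separate_sub_audio(original_audio, rms_threshold=0.01):
    """Return a list of (band, sub_audio) pairs with significant energy."""
    sub_signals = []
    for low, high in BANDS:
        # 4th-order Butterworth band-pass filter for this frequency range.
        sos = butter(4, [low, high], btype="bandpass",
                     fs=SAMPLE_RATE, output="sos")
        band_signal = sosfilt(sos, original_audio)

        # Signal strength of the band, measured as RMS amplitude.
        rms = np.sqrt(np.mean(band_signal ** 2))
        if rms >= rms_threshold:
            sub_signals.append(((low, high), band_signal))
    return sub_signals
```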

Embodiment 3

[0093] Referring to Figure 3, which shows a structural block diagram of the video data synthesis device 300 provided in Embodiment 3 of the present invention, the above-mentioned video data synthesis device 300 may specifically include:

[0094] An original signal acquisition module 301, configured to acquire an original audio signal and an original image signal;

[0095] An audio separation module 302, configured to separate the original audio signal into multiple sub-audio signals;

[0096] A mouth-shape feature information identification module 303, configured to identify the mouth-shape feature information of the target object from the original image signal;

[0097] A target sub-audio signal determination module 305, configured to determine, from the plurality of sub-audio signals, a target sub-audio signal matching the mouth-shape feature information;

[0098] A video data synthesis module 306, configured to synthesize the target sub-audio signal and the original image si...
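The block diagram can be mirrored by the structural sketch below. The module numbering (301 to 306) follows the text above; the correlation between a mouth-opening time series and each sub-audio's frame-aligned energy envelope used for module 305 is an assumed matching strategy, not necessarily the one claimed.

```python
# Structural sketch of the video data synthesis device 300 of Embodiment 3.
# The matching logic in module 305 is an assumption made for illustration.
import numpy as np

class VideoDataSynthesisDevice:
    def acquire_original_signals(self):            # module 301
        """Acquire the original audio and image signals (see Embodiment 1)."""
        raise NotImplementedError

    def separate_audio(self, original_audio):      # module 302
        """Separate the original audio into sub-audio signals (see step 202)."""
        raise NotImplementedError

    def identify_mouth_features(self, frames):     # module 303
        """Return a per-frame mouth-opening measure for the target object.

        A real implementation would rely on face and landmark detection;
        here the output is simply assumed to be a 1-D time series.
        """
        raise NotImplementedError

    def determine_target_sub_audio(self, sub_signals, mouth_series,
                                   fps, sample_rate):  # module 305
        """Pick the sub-audio whose loudness envelope best tracks the mouth motion."""
        samples_per_frame = int(sample_rate / fps)
        best_signal, best_score = None, -np.inf
        for _, sub_audio in sub_signals:
            n_frames = min(len(mouth_series),
                           len(sub_audio) // samples_per_frame)
            # Frame-aligned RMS energy envelope of this sub-audio signal.
            envelope = np.array([
                np.sqrt(np.mean(sub_audio[i * samples_per_frame:
                                          (i + 1) * samples_per_frame] ** 2))
                for i in range(n_frames)])
            score = np.corrcoef(envelope, mouth_series[:n_frames])[0, 1]
            if score > best_score:
                best_signal, best_score = sub_audio, score
        return best_signal

    def synthesize(self, target_sub_audio, frames):  # module 306
        """Combine the target sub-audio and the image frames into video data."""
        raise NotImplementedError
```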



Abstract

The invention provides a video data synthesis method and a device, which relate to the technical field of video processing. The method comprises the following steps: acquiring an original audio signal and an original image signal; separating the original audio signal into a plurality of sub-audio signals; identifying mouth-shape feature information of a target object from the original image signal; determining a target sub-audio signal matching the mouth-shape feature information from the plurality of sub-audio signals; and synthesizing the target sub-audio signal and the original image signal into video data. According to the mouth-shape feature information of the target object in the original image signal, the sound of the target object is accurately determined, and the target sub-audio signal and the original image signal are synthesized into video data. This avoids synthesizing the ambient sound signal, so that only the sound of the target object is recorded in the synthesized video, the influence of the ambient sound on the sound of the target object is avoided, and the synthesis quality of the video data is improved.
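For the final step of the abstract (synthesizing the target sub-audio signal and the original image signal into video data), a minimal muxing sketch could look as follows. OpenCV, SciPy and the ffmpeg command-line tool are assumed here; none of them are named in the patent, and the file names are placeholders.

```python
# Minimal sketch of the final synthesis step: combine the selected target
# sub-audio with the original image frames into one video file. The tools
# used (OpenCV, SciPy, ffmpeg) are assumptions, not part of the patent text.
import subprocess
import cv2
import numpy as np
from scipy.io import wavfile

def synthesize_video(frames, target_sub_audio, fps, sample_rate,
                     out_path="synthesized.mp4"):
    height, width = frames[0].shape[:2]

    # 1) Write the silent video track from the original image frames.
    writer = cv2.VideoWriter("video_only.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()

    # 2) Write the target sub-audio (float in [-1, 1] -> 16-bit PCM) to WAV.
    pcm = np.int16(np.clip(target_sub_audio, -1.0, 1.0) * 32767)
    wavfile.write("target_audio.wav", sample_rate, pcm)

    # 3) Mux the two tracks so only the target object's sound is recorded.
    subprocess.run(["ffmpeg", "-y", "-i", "video_only.mp4",
                    "-i", "target_audio.wav",
                    "-c:v", "copy", "-c:a", "aac", "-shortest", out_path],
                   check=True)
    return out_path
```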

Description

Technical Field

[0001] The present invention relates to the technical field of video processing, and in particular to a method and device for synthesizing video data.

Background Technique

[0002] Video data can record sound and images at the same time, provides users with more information, and has good entertainment value, so it is widely used.

[0003] At present, video data synthesis is usually performed through a simple combination of a camera and a microphone: all sounds collected by the microphone are recorded while the images are being recorded.

[0004] In the process of studying the above-mentioned prior art, the inventor found that it has the following disadvantage: when video data is synthesized in a noisy environment, not only the sound of the target object but also the sound of the environment is synthesized, so that the resulting sound is confusing, and worse, the voice of the target object is drowned in the ambi...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): H04N5/91
CPCH04N5/91
Inventor 张凯
Owner VIVO MOBILE COMM CO LTD