
Method for separating sound source from video

A cross-modal learning technology that addresses problems such as the inability to establish connections between adjacent pixels, the lack of practical meaning of a single isolated pixel, and the inability of robots to understand the actual meaning of sounds.

Active Publication Date: 2020-04-07
TSINGHUA UNIV
7 Cites · 9 Cited by

AI Technical Summary

Problems solved by technology

The sound separation results of these methods are not suitable for applications such as intelligent robots in real scenes. When sound separation is performed at the pixel level, the sound produced by each pixel can be obtained, but no connection between adjacent pixels can be established, and isolated pixels have no practical significance in real-world scenarios. For example, a robot may know the sound signal coming from a certain pixel in the current picture, but not that this pixel is part of an alarm-clock object.
Similarly, when sound separation is performed at the image-segmentation-region level, the robot only knows the sound signal generated in a certain region; it cannot know which objects that region actually contains, so it cannot understand the actual meaning of the separated sound source representation.




Embodiment Construction

[0059] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with examples. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit its protection scope.

[0060] For a better understanding of the present invention, an application example of the method for separating a sound source from video according to the present invention is described in detail below.

[0061] The method proposed by the present invention for separating a sound source from video comprises the following steps:

[0062] (1) Training stage

[0063] (1-1) Obtain training data

[0064] Obtain T video segments covering C different event categories as training data, each video segment serving as one training sample. All video segments have equal duration, and each seg...
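The mix-and-separate training strategy described in the abstract — mixing the audio tracks of two videos of different types and supervising the model to recover each original — can be sketched as follows. This is a minimal NumPy illustration with hypothetical signals and shapes, not the patent's actual networks; the ideal ratio masks shown here are one common choice of supervision target for a mask-predicting separation network.

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude spectrogram via a simple Hann-windowed STFT (illustrative only)."""
    w = np.hanning(win)
    frames = [np.abs(np.fft.rfft(x[s:s + win] * w))
              for s in range(0, len(x) - win + 1, hop)]
    return np.stack(frames, axis=1)  # (freq_bins, time_frames)

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr                       # one second of audio
audio_a = np.sin(2 * np.pi * 440 * t)        # stand-in for video A's audio track
audio_b = rng.standard_normal(t.size) * 0.3  # stand-in for video B's audio track

mixture = audio_a + audio_b                  # mix the two videos' audios

spec_a, spec_b = stft_mag(audio_a), stft_mag(audio_b)
spec_mix = stft_mag(mixture)

# Ideal ratio masks: supervision targets the separation network would be
# trained to regress, one mask per source video.
eps = 1e-8
mask_a = spec_a / (spec_a + spec_b + eps)
mask_b = spec_b / (spec_a + spec_b + eps)

# Applying a predicted mask to the mixture spectrogram yields an estimate
# of that source's magnitude spectrogram.
est_a = mask_a * spec_mix
```

Because the two clips come from different event categories, their spectral content rarely overlaps completely, which is what makes the masks informative training targets.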



Abstract

The invention provides a method for separating a sound source from a video, composed of a training stage and a testing stage. In the training stage, a sound source separation model is constructed from a visual target detection network, a sound feature extraction network, and a sound separation network; two videos of different types are selected from the training data, their audios are mixed, and the sound source separation model is trained so that it can accurately separate the original audios corresponding to the two videos from the mixed audio. In the testing stage, a test video is acquired and input into the trained sound source separation model; the model detects all visual targets in the video, and the sounds corresponding to the visual targets are separated from the original audio. According to the invention, the sound source can be separated at the target-object level: all target objects appearing in the video can be detected and automatically matched with the corresponding separated sounds, the relation between each visual target object and its separated sound is established, and the application prospect is broad.
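The testing-stage pipeline in the abstract — detect visual targets, then separate one sound per target — can be sketched in NumPy. Everything here is a hypothetical stand-in (random features in place of trained sub-networks); the point is only the data flow: one time-frequency mask per detected object, normalized across objects so the separated spectrograms sum back to the mixture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outputs of the sub-networks named in the abstract:
# visual target detection -> one feature vector per detected object;
# sound feature extraction -> the mixture's magnitude spectrogram.
n_objects, feat_dim = 2, 16          # e.g. an alarm clock and a telephone
freq_bins, frames = 129, 61

object_feats = rng.standard_normal((n_objects, feat_dim))
spec_mix = np.abs(rng.standard_normal((freq_bins, frames)))

# Stand-in mask head: project each object's visual feature to one logit
# per time-frequency bin, then softmax across objects so each bin's
# energy is distributed among the detected visual targets.
proj = rng.standard_normal((feat_dim, freq_bins * frames))
logits = (object_feats @ proj).reshape(n_objects, freq_bins, frames)
masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

separated = masks * spec_mix         # one spectrogram per visual target
print(separated.shape)               # (2, 129, 61)
```

Conditioning each mask on a detected object's visual feature is what automatically matches every separated sound to a concrete target object, rather than to an anonymous pixel or region.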

Description

Technical field

[0001] The invention relates to a method for separating sound sources from videos, belonging to the field of cross-modal learning.

Background technique

[0002] In recent years, technologies such as smart wearable devices, smart homes, and smart service robots have developed rapidly. They require real-time processing of video, audio, and other data in real scenes, and further use of the processing results in subsequent behavior. Among these requirements, separating the sound of each audio source from mixed audio containing multiple audio sources is a very important task. For example, when a person gives a voice command to an intelligent service robot, the environment may also contain the sounds of household appliances such as telephone ringtones, alarm clocks, and televisions; the intelligent robot must then separate the human voice from the obtained mixed audio in order to correctly identify the instructions given to it. The task of sound source separa...

Claims


Application Information

IPC(8): G10L25/57, G10L25/30, G10L21/028, H04N21/44, H04N21/439, G06K9/00
CPC: G10L25/57, G10L25/30, G10L21/028, H04N21/44008, H04N21/439, H04N21/4394, G06V20/40, G06V20/46
Inventor: 刘华平, 刘馨竹, 刘晓宇, 郭迪, 孙富春
Owner TSINGHUA UNIV