Method and system for sound source localization and sound source separation based on dual consistent network

A sound source localization and separation network technology, applied to neural learning methods, biological neural network models, speech analysis, etc. It addresses problems such as slow computation, poor performance, and the inability of a single architecture to handle both tasks simultaneously, and achieves the effect of mutually enhanced performance.

Active Publication Date: 2022-02-18
HANGZHOU YIWISE INTELLIGENT TECH CO LTD


Problems solved by technology

[0004] 1) Current vision-guided sound separation models must be queried with a specific image for the sound of the object in it; when the image contains multiple objects, the model cannot tell which object's sound to separate, resulting in poor performance.
[0005] 2) Most current models address only one of the two tasks and cannot process both with a single architecture. When audio must be localized and separated at the same time, directly stacking two models is complicated and computationally slow.




Embodiment

[0114] To further demonstrate the implementation effect of the present invention, it is verified experimentally on the MUSIC dataset, which contains 685 untrimmed videos collected from YouTube (536 solo and 149 duet videos) covering 11 instrument categories: accordion, acoustic guitar, cello, clarinet, erhu, flute, trumpet, tuba, saxophone, violin, and xylophone. This dataset is suitable for both the sound source separation and sound source localization tasks. To verify the effectiveness of the present invention on the sound source localization task, the experiments use Intersection over Union (IoU) and Area Under the Curve (AUC) as evaluation metrics, extending the existing visual localization methods SoP (Hang Zhao, Chuang Gan, Andrew Rouditchenko, Carl Vondrick, Josh H. McDermott, and Antonio Torralba. The sound of pixels. In ECCV, 2018) and DMC (Di Hu, Feiping Nie, and Xuelong Li. Deep multimodal clustering for unsupervised audio...
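The IoU and AUC metrics mentioned above can be sketched as follows. This is a minimal NumPy sketch under common conventions in the sound source localization literature, not the patent's actual evaluation code; the heatmap/mask format and function names are assumptions.

```python
import numpy as np

def localization_iou(heatmap, gt_mask, thresh=0.5):
    """IoU between the thresholded localization heatmap and a binary
    ground-truth mask (both 2D arrays of the same shape -- a hypothetical
    format; the patent does not specify its exact evaluation pipeline)."""
    pred = heatmap >= thresh
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0

def localization_auc(ious, n=21):
    """Area under the success curve: for IoU thresholds t in [0, 1], the
    fraction of samples whose IoU exceeds t, integrated over t (a common
    convention in sound source localization papers)."""
    ts = np.linspace(0.0, 1.0, n)
    success = np.array([(np.asarray(ious) >= t).mean() for t in ts])
    # trapezoidal integration over the threshold axis
    return float(np.sum(0.5 * (success[1:] + success[:-1]) * np.diff(ts)))
```

A higher AUC means localization quality degrades slowly as the IoU requirement tightens.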



Abstract

The invention discloses a method and system for sound source localization and sound source separation based on a dual consistent network, belonging to the field of image-audio multimodality. It mainly comprises the following steps: 1) obtain audio-video datasets, select a pair of videos from different sound domains, extract the corresponding single-source audio and image information, and compute the mixed audio; 2) encode the audio and images separately to obtain audio and image features; 3) feed the mixed-audio and image features to the sound source separation module of the dual consistent network to separate the single-source audio; 4) feed the image and corresponding audio features to the sound source localization module of the dual consistent network to locate the sounding object in the image. Compared with traditional methods for sound source localization and sound source separation, the present invention treats the two as dual tasks, completes them simultaneously with a single architecture, and exploits the characteristics of each task to enhance the other during training, ultimately improving results on both tasks.
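The data flow of the four steps above can be sketched in a few lines. This is a minimal NumPy sketch with hypothetical shapes, standing in for the learned encoders and modules of the dual consistent network; all function names and the simple-addition mixing are assumptions, not the patent's actual implementation.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    # normalize features to unit length so dot products are cosine similarities
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def mix_audio(spec_a, spec_b):
    """Step 1: form the mixture from two single-source spectrograms
    (simple addition -- an assumption; the patent does not give the formula)."""
    return spec_a + spec_b

def localize(img_feat, aud_feat):
    """Step 4 sketch: a localization heatmap as the cosine similarity between
    each spatial visual feature (H, W, C) and the audio embedding (C,)."""
    return np.tensordot(l2norm(img_feat), l2norm(aud_feat), axes=([2], [0]))

def separate(mix_spec, mask):
    """Step 3 sketch: separation by gating the mixture spectrogram with a
    ratio mask in [0, 1] (the mask here is an input, standing in for the
    output of the network's separation module)."""
    return mix_spec * np.clip(mask, 0.0, 1.0)
```

In the patent's framing, the heatmap from `localize` and the mask used in `separate` are produced by the two modules of one shared architecture, which is what allows the dual tasks to reinforce each other during training.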

Description

Technical field
[0001] The invention relates to the image-audio multimodal field, and in particular to a method for sound source localization and sound source separation based on a dual consistent network.
Background technique
[0002] Vision and hearing are important ways for humans to perceive the world. We can recognize and separate the sounds of various objects, and at the same time find the sounding objects in complex scenes. Such powerful perception is the foundation for making subsequent complex decisions. Therefore, enabling machines to separate and localize sound sources is a necessary step toward realizing artificial intelligence.
[0003] Many current studies focus on two separate tasks, namely sound source localization and visually guided sound separation. Although they have achieved certain results, some problems remain unsolved:
[0004] 1) In current vision-guided sound separation models, it is necessary to ...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/40; G10L21/028; G06N3/04; G06N3/08; G06T3/40; G06T9/00
CPC: G10L21/028; G06T3/4038; G06T9/00; G06N3/08; G06N3/045
Inventor: 李昊沅 (Li Haoyuan)
Owner: HANGZHOU YIWISE INTELLIGENT TECH CO LTD