
Voice matching method in multi-person scene

A voice matching method, applied in the field of voice matching in multi-person scenes, achieving the effect of reducing the workload of manual matching

Active Publication Date: 2022-04-08
YUNNAN POWER GRID CO LTD ELECTRIC POWER RES INST

AI Technical Summary

Problems solved by technology

[0004] This application provides a voice matching method for multi-person scenes, to solve the problem of automatically matching a voice to its speaker in a multi-person scene.


Image

(Drawings: Voice matching method in multi-person scene)


Detailed Description of the Embodiments

[0044] In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in this application, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the scope of protection of this application.

[0045] Referring to FIG. 1, a schematic flowchart of the voice matching method for multi-person scenes provided by an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:

[0046] Step S110: Divide the audio to be matched into multiple sound segments.
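The patent text does not fix a segmentation criterion for Step S110. A minimal sketch, assuming a simple frame-energy silence gate over a mono waveform (the function name, frame length, and threshold are illustrative choices, not the patented method):

```python
import numpy as np

def split_on_silence(samples, rate, frame_ms=20, threshold=0.02):
    """Split a mono waveform into sound segments at silent frames.
    Returns a list of (start_sample, end_sample) pairs.
    Hypothetical illustration: the patent leaves the criterion open,
    so a per-frame RMS-energy gate is assumed here."""
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    # per-frame root-mean-square energy
    rms = np.sqrt(np.mean(samples[:n * frame].reshape(n, frame) ** 2, axis=1))
    voiced = rms > threshold
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:          # silence -> sound: open a segment
            start = i * frame
        elif not v and start is not None:  # sound -> silence: close it
            segments.append((start, i * frame))
            start = None
    if start is not None:                 # audio ends while still voiced
        segments.append((start, n * frame))
    return segments

# toy check: silence, tone, silence, tone
rate = 1000
t = np.arange(rate) / rate
sig = np.concatenate([np.zeros(200),
                      0.5 * np.sin(2 * np.pi * 50 * t[:300]),
                      np.zeros(200),
                      0.5 * np.sin(2 * np.pi * 50 * t[:300])])
print(split_on_silence(sig, rate))  # → [(200, 500), (700, 1000)]
```

Each returned pair delimits one sound segment in samples; downstream steps would then run speech recognition on each segment.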


Abstract

An embodiment of the present application provides a voice matching method in a multi-person scene, including: dividing the audio to be matched into multiple sound segments; performing voice recognition on the sound segments to obtain the speech segments within them; obtaining the video segment corresponding to each speech segment; performing face detection on the video segment to obtain all predicted speakers of the speech segment; obtaining, according to the pixel difference between adjacent grayscale frames in the video segment, the hit information of each predicted speaker in those adjacent frames; and counting, according to the hit information, the number of hits of each predicted speaker in the video segment, the predicted speaker with the largest number of hits being the target speaker of the speech segment. The present application realizes automatic binding of a voice to its target speaker, which can greatly reduce the workload of subsequently matching voices to speakers by hand, and helps promote the practical application of audio-visual cognition technology.

Description

Technical Field

[0001] The present application relates to the technical field of voice matching, and in particular to a voice matching method for multi-person scenes.

Background Technique

[0002] With the continuous development of natural language processing technology, the speech recognition function of converting sound into text has been continuously improved. However, in some multi-person conversation scenarios, such as meeting records and interview summaries, the minutes or summary can be complete only if, in addition, the speaker's identity is recognized and each voice is matched to its speaker.

[0003] In related technologies, voiceprint recognition can be used to distinguish different speakers. However, voiceprint recognition needs to collect a segment of each speaker's voice in advance to extract the speaker's voice features as the basis for recognition. It does not meet the conditions for re...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L15/26, G10L15/04, G10L15/24, G10L15/28, G10L21/0308, G06V40/16, G06V10/762
CPC: G10L15/04, G10L15/24, G10L15/28, G10L21/0308, G10L15/26, G06V40/161, G06F18/23
Inventors: 唐立军, 杨家全, 周年荣, 张林山, 李浩涛, 杨洋, 冯勇, 严玉廷, 李孟阳, 罗恩博, 梁俊宇, 袁兴宇, 李响, 何婕, 栾思平
Owner YUNNAN POWER GRID CO LTD ELECTRIC POWER RES INST