
Voice classification method and device, server and storage medium

A classification method and classifier technology, applied in the field of Internet technology, that addresses the problems of existing methods ignoring the deep information of voice content and producing only rough evaluations, and achieves fast and effective classification processing.

Active Publication Date: 2018-12-07
WUHAN DOUYU NETWORK TECH CO LTD

AI Technical Summary

Problems solved by technology

However, general speech classification methods ignore the deep information of the speech content and can only roughly distinguish speech samples whose content differs greatly.



Examples


Embodiment 1

[0026] Figure 1 is a flowchart of a speech classification method provided by Embodiment 1 of the present invention. This embodiment is applicable where speech classification must be performed, among large volumes of speech data, on the basis of the deep information of the speech content. The method may be executed by a speech classification device, which may be implemented in software and/or hardware. As shown in Figure 1, the method of this embodiment specifically includes:

[0027] S110. Obtain the MFCC feature matrix of the target short speech by using the Mel-frequency cepstral coefficient (MFCC) algorithm, and convert the MFCC feature matrix into a target image.

[0028] The Mel frequency scale is based on the auditory characteristics of the human ear and has a nonlinear correspondence with frequency in Hz. Specifically, the human ear has different perception capa...
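Step S110 can be sketched as follows. This is an illustrative, self-contained implementation of a standard MFCC pipeline (framing, windowing, power spectrum, triangular mel filterbank, log, DCT-II), not the patent's actual code; all parameter values (`sr`, `n_fft`, `hop`, `n_mels`, `n_mfcc`) are assumptions chosen for the example.

```python
import numpy as np

def hz_to_mel(f):
    # Nonlinear Hz -> mel mapping mentioned in paragraph [0028]
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_matrix(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Return an (n_frames, n_mfcc) MFCC feature matrix for a short utterance."""
    # 1. Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3. Triangular mel filterbank, equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fbank[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[i, c:r] = (r - np.arange(c, r)) / (r - c)
    # 4. Log mel energies, then DCT-II to decorrelate -> cepstral coefficients.
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone
m = mfcc_matrix(sig)
print(m.shape)  # -> (61, 13): one 13-dim coefficient vector per frame
```

The resulting matrix (frames along one axis, coefficients along the other) is exactly the kind of 2-D structure that the method then treats as an image.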

Embodiment 2

[0074] Figure 2 is a flowchart of a speech classification method provided by Embodiment 2 of the present invention. In this embodiment, building on the above embodiments, converting the MFCC feature matrix into a target image optionally includes: adjusting the row-column ratio of the MFCC feature matrix according to a first preset rule, so that the row-column ratio matches the preset aspect ratio of the target image; converting the adjusted MFCC feature matrix into a grayscale image, wherein each element of the adjusted matrix corresponds to one grayscale value in the grayscale image; and converting the grayscale image into an RGB three-primary-color image, which serves as the target image. Further, before adjusting the row-column ratio of the MFCC feature matrix according to the first preset rule, the method may optionally further include: pe...
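The three conversion steps of Embodiment 2 can be sketched as below. The square target size, nearest-neighbour resampling, and min-max grayscale normalization are all illustrative assumptions; the patent specifies only the aspect-ratio adjustment, the element-to-grayscale mapping, and the grayscale-to-RGB conversion, not how each is implemented.

```python
import numpy as np

def mfcc_to_rgb_image(mfcc, size=64):
    """Sketch of Embodiment 2: MFCC matrix -> grayscale -> RGB target image.

    `size` (a square target image) is a hypothetical choice for this example.
    """
    # 1. Adjust the row-column ratio to the target image's aspect ratio
    #    (here: nearest-neighbour resampling to size x size).
    rows = np.linspace(0, mfcc.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, mfcc.shape[1] - 1, size).round().astype(int)
    resized = mfcc[np.ix_(rows, cols)]
    # 2. Map each matrix element to a grayscale value in [0, 255].
    lo, hi = resized.min(), resized.max()
    gray = ((resized - lo) / (hi - lo + 1e-10) * 255).astype(np.uint8)
    # 3. Replicate the grayscale plane into the R, G and B channels.
    return np.stack([gray, gray, gray], axis=-1)

img = mfcc_to_rgb_image(np.random.randn(61, 13))
print(img.shape, img.dtype)  # -> (64, 64, 3) uint8
```

Producing a fixed-size RGB image is what lets a standard pretrained image network consume the MFCC features in the next step.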

Embodiment 3

[0100] Figure 3 is a schematic structural diagram of a speech classification device in Embodiment 3 of the present invention. As shown in Figure 3, the speech classification device includes:

[0101] A target image conversion module 310, configured to obtain the MFCC feature matrix of the target short speech using the Mel-frequency cepstral coefficient (MFCC) algorithm, and to convert the MFCC feature matrix into a target image;

[0102] A feature determination module 320, configured to extract the target image features of the target image based on a deep learning model;

[0103] A voice category determination module 330, configured to input the target image features into a pre-trained voice classifier and output the category of the target short speech.

[0104] In the voice classification device provided by this embodiment of the present invention, the target image conversion module uses the MFCC algorithm to obtain the MFCC feature matrix of the target short speech,...
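The three modules of Embodiment 3 can be sketched as a pipeline. Every body below is a deliberately trivial stand-in (a reshape instead of MFCC extraction, pooling statistics instead of a CNN, a dummy linear classifier instead of a trained one); only the module structure and data flow reflect the patent.

```python
import numpy as np

# Hypothetical stand-ins for the three modules of Embodiment 3; a real device
# would plug in an MFCC front end, a deep network, and a trained classifier.

def target_image_conversion_module(speech):
    """Module 310: obtain the MFCC matrix and convert it to a target image."""
    # Stub: reshape the waveform into a fixed 2-D "image" for illustration.
    return np.resize(speech, (64, 64))

def feature_determination_module(image):
    """Module 320: extract deep features of the target image.

    Stub: global pooling statistics stand in for a CNN's feature vector.
    """
    return np.array([image.mean(), image.std(), image.max(), image.min()])

def voice_category_determination_module(features, weights, labels):
    """Module 330: feed features to a pre-trained classifier, output a label."""
    scores = weights @ features          # a linear classifier as a placeholder
    return labels[int(np.argmax(scores))]

speech = np.sin(np.linspace(0, 100, 16000))
image = target_image_conversion_module(speech)
features = feature_determination_module(image)
weights = np.eye(4)[:2]                  # dummy 2-class x 4-feature weights
label = voice_category_determination_module(features, weights,
                                            ["music", "speech"])
print(label)
```

Keeping the three stages as separate modules mirrors the claimed device structure: each stage can be replaced (a different front end, a different network) without touching the others.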



Abstract

The embodiment of the invention discloses a voice classification method and device, a server and a storage medium. The voice classification method comprises: acquiring the MFCC feature matrix of a target short voice by using a Mel-frequency cepstral coefficient (MFCC) algorithm, and converting the MFCC feature matrix into a target image; extracting the target image features of the target image based on a deep learning model; and inputting the target image features into a pre-trained voice classifier and outputting the category of the target short voice. The embodiment of the invention solves the problem that existing voice classification methods ignore the deep information of the voice content and can only roughly evaluate voices with a large content difference, and achieves fast and effective classification of voice data.

Description

Technical Field

[0001] The embodiments of the present invention relate to the field of Internet technology applications, and in particular to a voice classification method, device, server and storage medium.

Background

[0002] With the rapid development of the Internet industry and the growth of voice information, how to quickly and accurately classify voice data within massive information while saving computing resources is a current difficulty.

[0003] The existing speech classification method usually calculates the MFCC features of each frame in the speech data, stitches the per-frame MFCC features into an overall feature for the short speech, trains a classifier, performs feature classification, and thereby obtains classification labels. However, this general speech classification method ignores the deep information of the speech content and can only roughly assess speech with a large difference in content. Conten...

Claims


Application Information

IPC (8): G10L 15/08; G10L 15/30; G10L 15/02; G10L 15/04; G10L 25/18; G10L 25/24; G10L 25/45; G06N 3/08
CPC: G06N 3/08; G10L 15/02; G10L 15/04; G10L 15/08; G10L 15/30; G10L 25/18; G10L 25/24; G10L 25/45
Inventors: 吕志高, 张文明, 陈少杰
Owner WUHAN DOUYU NETWORK TECH CO LTD