Environment Adaptive Speech Enhancement Algorithm Based on Attention-Driven Recurrent Convolutional Network

A speech enhancement technology based on a recurrent convolutional network, applied in speech analysis, instruments, etc. It addresses the problem that existing speech enhancement models have difficulty adapting to different noise environments, and achieves the effects of enriching information acquisition, improving robustness, and improving speech enhancement performance.

Active Publication Date: 2021-05-07
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problem that existing speech enhancement models have difficulty adapting to different noise environments, the present invention proposes an environment-adaptive speech enhancement algorithm based on an attention-driven recurrent convolutional network, thereby improving the algorithm's environmental adaptability and robustness in different environments.



Examples


Embodiment Construction

[0042] In order to better understand the technical solution of the present invention, the present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0043] Figure 1 is a framework diagram of the environment-adaptive speech enhancement algorithm based on the attention-driven recurrent convolutional network of the present invention, which mainly includes the following steps:

[0044] Step 1, input data preparation: in order to verify the effect of the present invention, a speech enhancement experiment is carried out on the REVERB Challenge 2014 database. The sampling frequency of all sentences in REVERB Challenge 2014 is 16 kHz.

[0045] Step 2, amplitude feature and environment feature extraction:

[0046] 1) Amplitude feature extraction: apply pre-emphasis, framing, windowing, and the fast Fourier transform to each segment of the speech signal. The number of FFT points is set to 512, the window length to 512 samples, and the window shift to 256...
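The amplitude feature extraction above can be sketched as follows. Only the FFT size (512), window length (512), and window shift (256) are specified in the patent text; the pre-emphasis coefficient (0.97) and the Hann window are common defaults assumed here for illustration.

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=512, win_len=512, hop=256, preemph=0.97):
    """Pre-emphasis, framing, windowing, and FFT, per the parameters in [0046].

    The pre-emphasis coefficient and the Hann window are assumptions;
    the patent specifies only the FFT size, window length, and shift.
    """
    # Pre-emphasis: y[n] = x[n] - a * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - preemph * signal[:-1])
    window = np.hanning(win_len)
    # Frame the signal with a 256-sample shift and apply the window
    n_frames = 1 + (len(emphasized) - win_len) // hop
    frames = np.stack([emphasized[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    # One-sided FFT: 512 points -> 257 frequency bins per frame
    return np.abs(np.fft.rfft(frames, n=n_fft))

# One second of 16 kHz audio yields a (61, 257) magnitude spectrogram
spec = magnitude_spectrogram(np.random.randn(16000))
```

Each row is the magnitude spectrum of one 32 ms frame; stacking rows over time gives the spectrogram fed to the network.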



Abstract

The invention discloses an environment-adaptive speech enhancement algorithm based on an attention-driven recurrent convolutional network, comprising the following steps: step 1, selecting a speech enhancement task database and preparing the input data; step 2, extracting the amplitude information and the environment information of the speech, wherein the environment information is extracted with the weighted prediction error (WPE) method and the amplitude information is mainly the spectrogram extracted by the Fourier transform; step 3, constructing and training the deep model; step 4, speech reconstruction, converting the speech amplitude predicted in step 3 into a speech waveform. By taking the environment information of the speech into account, the present invention improves the environmental adaptability and robustness of the algorithm in different environments; to preserve the real speech signal, the present invention incorporates an attention mechanism to build an attention-driven recurrent convolutional network, which characterizes the temporal context information of speech more accurately and effectively improves speech enhancement performance.
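Step 4's reconstruction can be sketched as below. The patent states only that the predicted amplitude is converted back into a waveform; pairing the enhanced magnitude with the noisy-speech phase and resynthesizing by inverse FFT with overlap-add is the standard magnitude-domain approach assumed here, reusing the 512/256 window settings of the analysis stage.

```python
import numpy as np

def reconstruct(mag, noisy_phase, n_fft=512, hop=256):
    """Inverse STFT with windowed overlap-add.

    Combining the enhanced magnitude with the noisy phase is an assumption;
    the patent only says the predicted amplitude becomes a waveform.
    """
    spec = mag * np.exp(1j * noisy_phase)        # (frames, 257) complex spectrum
    frames = np.fft.irfft(spec, n=n_fft)         # back to time-domain frames
    window = np.hanning(n_fft)
    out = np.zeros((len(frames) - 1) * hop + n_fft)
    norm = np.zeros_like(out)
    for i, frame in enumerate(frames):
        out[i * hop : i * hop + n_fft] += frame * window
        norm[i * hop : i * hop + n_fft] += window ** 2
    # Per-sample normalization by the summed squared window
    return out / np.maximum(norm, 1e-8)

# Round trip on a 440 Hz sine: analysis with the same window, then resynthesis
t = np.arange(16000)
x = np.sin(2 * np.pi * 440 * t / 16000)
win = np.hanning(512)
frames = np.stack([x[i * 256 : i * 256 + 512] * win
                   for i in range(1 + (len(x) - 512) // 256)])
spec = np.fft.rfft(frames, n=512)
y = reconstruct(np.abs(spec), np.angle(spec))    # matches x away from the edges
```

In the enhancement pipeline, `np.abs(spec)` would be replaced by the network's predicted magnitude while the noisy phase is kept unchanged.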

Description

Technical Field

[0001] The invention belongs to the technical field of speech enhancement, and in particular relates to an environment-adaptive speech enhancement algorithm based on an attention-driven recurrent convolutional network.

Background

[0002] With the popularization of smart devices and the rapid development of speech recognition technology, speech processing technology has attracted more and more public attention. In a common near-field environment (the speaker is relatively close to the microphone), speech recognition accuracy has exceeded 95%, and many speech recognition and speech synthesis technologies have been commercialized. However, in the far-field environment (the speaker is far away from the microphone), reverberation and various background noises are often present, and speech recognition performance drops sharply. In the far-field environment, since the speaker does not need to hold a microphone or wear a microphone dev...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L21/0208; G10L21/0216; G10L25/30; G10L25/03
CPC: G10L21/0208; G10L21/0216; G10L25/03; G10L25/30; G10L2021/02082
Inventor: 葛檬, 王龙标, 党建武
Owner: TIANJIN UNIV