
Compensation method and device for frame loss after voiced sound start frame

A compensation method and device for frame loss, applied in the field of speech coding and decoding, addressing the problem that compensation sound quality is not guaranteed when the frame following a voiced sound start frame is lost.

Inactive Publication Date: 2018-01-19
ZTE CORP

AI Technical Summary

Problems solved by technology

In existing schemes, different frame loss compensation methods are selected according to the type of the frames received before the lost frame; however, a frame lost immediately after a voiced sound start frame is usually compensated with a method similar to that used for a frame lost after an ordinary voiced frame, so the compensation sound quality is not guaranteed when the lost frame follows a voiced sound start frame.



Examples


Embodiment 1

[0032] This embodiment describes a method for compensation after loss of the first frame immediately following the voiced sound start frame. As shown in Figure 1, the method includes the following steps:

[0033] Step 101: the voiced sound start frame is correctly received, and it is judged whether the first frame following the voiced sound start frame (hereinafter referred to as the first lost frame) is lost; if it is lost, execute step 102, otherwise the process ends;

[0034] Step 102: select a pitch delay estimation method according to whether the voiced sound start frame satisfies a stability condition, and use it to infer the pitch delay of the first lost frame;

[0035] Specifically: if the voiced sound start frame meets the stability condition, the following pitch delay inference method is used to infer the pitch delay of the first lost frame: the integer part of the pitch delay of the last subframe in the voiced sound start frame (T_{-1}) is used as the pitch delay of each subframe in the first...
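Since paragraph [0035] is truncated here, the C sketch below only illustrates the general shape of step 102 under stated assumptions: a hypothetical stability test on the start frame, reuse of the integer part of T_{-1} (the pitch delay of its last subframe) for every subframe of the first lost frame when the frame is judged stable, and an externally supplied correction term otherwise. The names, subframe count, and thresholds are illustrative, not the patented implementation.

```c
/* Illustrative sketch of step 102 (not the patented implementation). */
#include <math.h>

#define NUM_SUBFRAMES 4   /* assumed number of subframes per frame */

/* Pitch delays of the subframes of the correctly received start frame. */
typedef struct {
    double subframe_pitch[NUM_SUBFRAMES];
} onset_frame_t;

/* Hypothetical stability test: a small spread of the pitch delays inside
 * the start frame is treated as "stable". The threshold is an assumption. */
static int onset_is_stable(const onset_frame_t *f)
{
    double lo = f->subframe_pitch[0], hi = f->subframe_pitch[0];
    for (int i = 1; i < NUM_SUBFRAMES; i++) {
        if (f->subframe_pitch[i] < lo) lo = f->subframe_pitch[i];
        if (f->subframe_pitch[i] > hi) hi = f->subframe_pitch[i];
    }
    return (hi - lo) < 4.0;
}

/* Step 102: fill in the pitch delay of every subframe of the first lost frame. */
void infer_first_lost_frame_pitch(const onset_frame_t *onset,
                                  double correction,  /* first correction amount (assumed input) */
                                  double out_pitch[NUM_SUBFRAMES])
{
    /* T_{-1}: pitch delay of the last subframe of the start frame. */
    double t_minus_1 = onset->subframe_pitch[NUM_SUBFRAMES - 1];

    if (onset_is_stable(onset)) {
        /* Stable start frame: reuse the integer part of T_{-1} for every subframe. */
        double t_int = floor(t_minus_1);
        for (int i = 0; i < NUM_SUBFRAMES; i++)
            out_pitch[i] = t_int;
    } else {
        /* Unstable start frame: apply a correction to T_{-1} before reuse
         * (Embodiment 2 adds a further correction on top of this). */
        double t_corr = t_minus_1 + correction;
        for (int i = 0; i < NUM_SUBFRAMES; i++)
            out_pitch[i] = t_corr;
    }
}
```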

Embodiment 2

[0103] This embodiment describes a method for compensation after loss of the first frame immediately following the voiced sound start frame; the difference from Embodiment 1 is that a second correction process is added.

[0104] Step 201 is the same as step 101 in Embodiment 1;

[0105] Step 202: the main difference from step 102 is that, when the voiced sound start frame does not meet the stability condition, T_{-1} is first corrected with the first correction amount, the corrected T_{-1} is then subjected to a second correction process, and the result of that correction process is used as the final estimate of the pitch delay of each subframe of the first lost frame.

[0106] Specifically, the second correction process is as follows:

[0107] It is judged whether the following two conditions are met; if so, T_{-1} is taken as the median value of the pitch delay. Condition 1: the absolute value of the difference between the corrected T_{-1} (i.e. T_c = T_{-1} + f_s * f_m) and T_{-1}...
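The two conditions in paragraph [0107] are cut off above, so the sketch below only reproduces the quoted corrected value T_c = T_{-1} + f_s * f_m and uses a purely illustrative distance test as a stand-in for the truncated conditions, falling back to T_{-1} when the correction moves the estimate too far.

```c
/* Hedged sketch of the second correction process in step 202; the acceptance
 * conditions are assumptions, only the formula for T_c is quoted in the text. */
#include <math.h>

double second_correction(double t_minus_1, /* pitch delay of last received subframe */
                         double f_s,       /* scaling factor from the first correction */
                         double f_m,       /* correction magnitude from the first correction */
                         double threshold) /* assumed bound on the allowed change */
{
    /* Corrected T_{-1} as quoted in [0107]. */
    double t_c = t_minus_1 + f_s * f_m;

    /* Illustrative stand-in for the two truncated conditions: if the correction
     * moves the pitch delay too far from T_{-1}, keep T_{-1} itself as the value. */
    if (fabs(t_c - t_minus_1) > threshold)
        return t_minus_1;

    return t_c;
}
```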

Embodiment 3

[0138] This embodiment describes a method for compensation after the loss of two or more frames immediately following the voiced sound start frame, where the lost frames include the first lost frame and one or more lost frames immediately after it. As shown in Figure 4, the method includes the following steps:

[0139] Step 301: infer the pitch delay and adaptive codebook gain of the first lost frame using the method of Embodiment 1 or Embodiment 2;

[0140] Step 302: for each lost frame following the first lost frame, use the pitch delay of the lost frame immediately preceding the current lost frame as the pitch delay of the current lost frame;

[0141] Step 303: the adaptive codebook gain of each subframe in the current lost frame is obtained by attenuating and interpolating the estimated adaptive codebook gain of the last subframe of the lost frame immediately preceding the current lost frame; ...
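A minimal sketch of steps 302 and 303 for consecutive lost frames, assuming four subframes per frame, an illustrative attenuation factor, and a simple linear interpolation; the text above does not give the exact attenuation or interpolation rule, so those parts are assumptions.

```c
/* Sketch of steps 302-303: carry over the pitch delay from the previous lost
 * frame and attenuate/interpolate the adaptive codebook gain per subframe. */
#define NUM_SUBFRAMES 4   /* assumed number of subframes per frame */

typedef struct {
    double pitch_delay[NUM_SUBFRAMES];
    double acb_gain[NUM_SUBFRAMES];   /* adaptive codebook gain per subframe */
} frame_params_t;

void compensate_following_lost_frame(const frame_params_t *prev_lost,
                                     frame_params_t *cur_lost)
{
    const double atten = 0.9;  /* assumed per-frame attenuation factor */

    /* Step 302: reuse the previous lost frame's pitch delay. */
    for (int i = 0; i < NUM_SUBFRAMES; i++)
        cur_lost->pitch_delay[i] = prev_lost->pitch_delay[i];

    /* Step 303: attenuate the gain of the last subframe of the previous lost
     * frame and interpolate towards it across the current frame's subframes. */
    double start  = prev_lost->acb_gain[NUM_SUBFRAMES - 1];
    double target = start * atten;
    for (int i = 0; i < NUM_SUBFRAMES; i++) {
        double w = (double)(i + 1) / NUM_SUBFRAMES;   /* 1/4 ... 1 */
        cur_lost->acb_gain[i] = (1.0 - w) * start + w * target;
    }
}
```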



Abstract

A compensation method for frame loss after a voiced initial frame, comprising: if a first frame following a voiced initial frame is lost after the voiced initial frame is correctly received (101), choosing a fundamental tone delay inference method according to a stability condition of the voiced initial frame to infer a fundamental tone delay of the first lost frame (102); inferring an adaptive codebook gain of the first lost frame according to an adaptive codebook gain of one or more subframes received before the first lost frame, or inferring an adaptive codebook gain of the first lost frame according to an energy change of a time domain voice signal of the voiced initial frame (103); and compensating the first lost frame according to the inferred fundamental tone delay and adaptive codebook gain (104).

Description

Technical Field

[0001] The invention relates to the technical field of speech coding and decoding, and in particular to a compensation method and device for frame loss after the initial frame of voiced sound.

Background Technique

[0002] When voice frames are transmitted over a channel such as a wireless environment or an IP network, various complex factors involved in the transmission process may cause frame loss at reception, which seriously degrades the synthesized voice quality at the receiving end. The purpose of frame loss compensation technology is to reduce the voice quality degradation caused by frame loss, so as to improve people's subjective experience.

[0003] CELP (Code Excited Linear Prediction) type speech codecs are widely used in practical communication systems because they can provide good speech quality at medium and low rates. The CELP type speech codec is a prediction-based speech codec; the speech frame of the current codec not only depends ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L19/008, G10L21/003
CPC: G10L19/005, G10L19/09
Inventors: 关旭, 袁浩, 彭科, 黎家力
Owner: ZTE CORP