
Multi-step audio object coding and decoding method based on two-stage filtering

An object coding and two-stage filtering technology, applied in speech analysis, instruments and related fields, which addresses the high code rate of multi-step audio object coding and achieves the effect of reducing the code rate while meeting transmission requirements.

Active Publication Date: 2021-08-27
WUHAN UNIV
Cites: 5 | Cited by: 0

AI Technical Summary

Problems solved by technology

[0012] In order to solve the problem of the high coding rate of multi-step audio objects, the present invention provides a multi-step audio object coding and decoding method based on two-stage filtering, which can perform high-quality audio coding and decoding at medium and low bit rates, ensuring that all audio objects have good decoded sound quality.

Method used



Examples


Embodiment Construction

[0058] To facilitate understanding and implementation by those skilled in the art, the technical solution of the present invention is further described below in conjunction with the accompanying drawings and specific implementation examples. It should be understood that the implementation examples described here are only for illustrating and explaining the present invention and are not intended to limit it.

[0059] The present invention builds on the existing multi-step audio object encoding method and proposes a residual information compression method based on two-stage filtering. Firstly, according to psychoacoustics, the components of the frequency-domain residual matrix that cannot be perceived by the human ear are filtered out as the first stage of filtering; secondly, the importance of each object's residual information is measured by the average frequency-point energy in the residua...
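
As a rough illustration of the two-stage idea described above, the sketch below filters a frequency-domain residual matrix with an assumed per-bin masking threshold and then keeps only the most energetic sub-bands. The names masking_threshold, subband_edges, and n_keep, as well as the sub-band selection rule, are illustrative assumptions, not the patent's exact definitions.

import numpy as np

def two_stage_filter(residual, masking_threshold, subband_edges, n_keep):
    # Stage 1: psychoacoustic filtering -- zero the bins whose magnitude falls
    # below the (assumed) masking threshold, i.e. components the ear cannot hear.
    stage1 = np.where(np.abs(residual) >= masking_threshold, residual, 0.0)

    # Stage 2: score each sub-band by its average per-bin residual energy
    # and retain only the n_keep most energetic sub-bands.
    band_energy = [np.mean(np.abs(stage1[lo:hi]) ** 2)
                   for lo, hi in zip(subband_edges[:-1], subband_edges[1:])]
    keep = np.argsort(band_energy)[::-1][:n_keep]

    stage2 = np.zeros_like(stage1)
    for b in keep:
        lo, hi = subband_edges[b], subband_edges[b + 1]
        stage2[lo:hi] = stage1[lo:hi]
    return stage2

# Usage on a toy 512-bin x 20-frame residual with 8 equal-width sub-bands.
rng = np.random.default_rng(0)
R = rng.standard_normal((512, 20))
threshold = 0.5 * np.ones((512, 1))   # assumed per-bin masking threshold
edges = list(range(0, 513, 64))       # 8 sub-bands of 64 bins each
R_filtered = two_stage_filter(R, threshold, edges, n_keep=3)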



Abstract

The invention discloses a multi-step audio object coding and decoding method based on two-stage filtering. In the coding stage, a plurality of input audio object signals are first subjected to time-frequency transformation; an object cyclic down-mixing order is determined, the side information of each step is extracted, and a final down-mixed signal is output. Redundant components of the residual information that cannot be perceived by the human ear are removed by a first-stage filter, and the residuals in the first n sub-bands are retained by a second-stage filter according to the energy of the residual information in each sub-band. Singular value decomposition is then applied to the twice-filtered residual information, compressing the large residual matrix into small matrices, and the final down-mixed signal, the parameters, and the residual decomposition matrices are combined into a code stream. In the decoding stage, the decomposed residual matrices are first used to reconstruct the original residual, and the objects are then stripped one by one from the down-mixed signal according to the side information. The method uses psychoacoustics and sub-band energy information to filter out secondary components of the residual information, reducing the coding rate of the audio objects.
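
The SVD compression step mentioned in the abstract can be pictured with the minimal numpy sketch below. The matrix sizes and the retained rank r are illustrative assumptions, and the patent's actual quantization and bit allocation of the factor matrices are not shown.

import numpy as np

def compress_residual(residual, r):
    # Truncated SVD: keep only the r largest singular components, so a large
    # residual matrix is stored as three much smaller matrices.
    U, s, Vt = np.linalg.svd(residual, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :]

def reconstruct_residual(U_r, s_r, Vt_r):
    # Decoder side: rebuild a rank-r approximation of the original residual.
    return U_r @ np.diag(s_r) @ Vt_r

# Usage: compress a 1024-bin x 32-frame residual to rank 8 and rebuild it.
rng = np.random.default_rng(1)
R = rng.standard_normal((1024, 32))
U_r, s_r, Vt_r = compress_residual(R, r=8)
R_hat = reconstruct_residual(U_r, s_r, Vt_r)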

Description

Technical field

[0001] The invention belongs to the technical field of digital audio signal processing and specifically relates to a multi-step audio object encoding and decoding method based on two-stage filtering. The method is suitable for transmitting multiple audio object signals under limited code-rate conditions and allows residual information to be transmitted under different code-rate requirements.

Background technique

[0002] Next-generation audio systems differ from previous systems in two respects: immersion and personalization. For immersion, spatial audio technologies such as MPEG Surround [Document 1] and NHK 22.2 [Document 2] can provide three-dimensional audio reproduction. For personalization, the audio system should be compatible with different playback environments and devices according to user needs. In addition, a personalized audio system should support interactive audio services. But traditional spatial audio content is delivered to all users, regardless ...

Claims


Application Information

IPC(8): G10L19/02; G10L19/032; G10L19/26
CPC: G10L19/02; G10L19/032; G10L19/26
Inventor: 胡瑞敏, 胡晨昊, 王晓晨, 吴玉林, 张灵鲲, 柯善发, 刘文可
Owner: WUHAN UNIV