
Audio Segmentation Based on Spatial Metadata

A metadata and audio technology, applied in the field of adaptive audio signal processing, that addresses the problem that existing systems do not use spatial cues when segmenting audio.

Active Publication Date: 2020-09-15
DOLBY LAB LICENSING CORP
Cites: 9 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0013] Current systems also do not use the spatial cues of objects in adaptive audio content when segmenting the audio.



Examples


Embodiment 1

[0224] The embodiments are generally directed to systems and methods for dividing audio into sub-segments over which non-interpolable output matrices can be held constant, while continuous variation is achieved through interpolation of the input primitive matrices, and matrix updates can be used to correct the trajectory. The segmentation is designed so that the matrix specified at a sub-segment boundary can be decomposed into primitive matrices in two different ways: one suitable for interpolating all the way up to the boundary, and the other suitable for interpolating away from the boundary. The process also marks segments that need to be rolled back as non-interpolated.
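The boundary handling in [0224] can be illustrated with a small sketch (Python/NumPy). Everything here is an illustrative assumption, not the patented scheme: the function names are invented, the continuity check is a plain `allclose`, and the interpolation is simple linear blending. The idea shown is that a boundary matrix appears twice, as the end of one sub-segment and the start of the next, and a sub-segment whose starting matrix does not continue the previous trajectory is flagged non-interpolable, so a decoder must restart there rather than interpolate across the boundary.

```python
import numpy as np

def flag_subsegments(segments):
    """segments: list of (times, matrices) pairs, one per sub-segment.
    The matrix at a boundary is specified twice: as the last matrix of
    one sub-segment and the first matrix of the next (possibly from a
    different decomposition).  A sub-segment is interpolable only when
    its first matrix continues the previous sub-segment's trajectory."""
    flagged, prev_end = [], None
    for ts, Ms in segments:
        ok = bool(prev_end is None or np.allclose(Ms[0], prev_end))
        flagged.append({"t": list(ts), "M": list(Ms), "interpolable": ok})
        prev_end = Ms[-1]
    return flagged

def matrix_at(seg, t):
    """Linearly interpolate the matrix inside one sub-segment."""
    ts, Ms = seg["t"], seg["M"]
    i = int(np.searchsorted(ts, t)) - 1
    i = max(0, min(i, len(ts) - 2))     # clamp to a valid interval
    w = (t - ts[i]) / (ts[i + 1] - ts[i])
    return (1 - w) * Ms[i] + w * Ms[i + 1]
```

A sub-segment flagged non-interpolable would then be held constant (or restarted from its new matrix) instead of being interpolated into, mirroring the "rolled back" marking described above.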

[0225] One aspect of this method involves keeping the primitive-matrix channel sequence constant. As mentioned earlier, each primitive matrix is associated with the channel it operates on or modifies. For example, consider the order S o...



Abstract

A method of encoding adaptive audio includes receiving N objects and associated spatial metadata describing the continuous motion of the objects, and dividing the audio into segments based on that spatial metadata. The method encodes adaptive audio comprising objects and channel beds by capturing the continuous motion of the N objects in a sequence of time-varying matrix trajectories, encoding the coefficients of those trajectories, derived from the spatial metadata, for rendering the adaptive audio over the M output channels in a high-definition audio format, and, based on the spatial metadata, splitting the sequence of matrices into a plurality of sub-segments, wherein the sub-segments are configured to facilitate encoding of one or more properties of the adaptive audio.
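As a rough illustration of how spatial metadata could drive the M x N matrix that renders N objects to M output channels, here is a minimal sketch. The toy distance-based panner, the 2-D speaker layout, and the names `pan_gains` and `downmix_matrix` are all invented for illustration; they are not the codec's actual renderer.

```python
import numpy as np

def pan_gains(position, speakers):
    """Toy power-normalized gains from one object position to the
    speakers (a stand-in for a real panning law)."""
    d = np.linalg.norm(speakers - position, axis=1) + 1e-6
    g = 1.0 / d                      # louder in nearer speakers
    return g / np.linalg.norm(g)     # unit-energy column

def downmix_matrix(object_positions, speakers):
    """Build the (M x N) matrix rendering N objects to M channels;
    one column of panning gains per object."""
    return np.stack([pan_gains(p, speakers) for p in object_positions],
                    axis=1)

# illustrative usage: two objects rendered to a 2-speaker (x, y) layout
speakers = np.array([[-1.0, 1.0], [1.0, 1.0]])
objects = [np.array([-1.0, 1.0]), np.array([0.0, 0.0])]
A = downmix_matrix(objects, speakers)   # shape (2, 2)
```

As the metadata updates the object positions over time, rebuilding `A` at successive instants yields exactly the kind of time-varying matrix trajectory the abstract describes.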

Description

[0001] Cross reference to related applications

[0002] This application claims priority to U.S. Provisional Patent Application No. 61/984,634, filed on April 25, 2014, the full text of which is incorporated herein by reference.

Technical field

[0003] The embodiments generally relate to adaptive audio signal processing, and more specifically to segmenting audio using spatial metadata that describes the motion of audio objects, in order to derive a downmix matrix for rendering the objects to discrete speaker channels.

Background

[0004] New professional and consumer audiovisual (AV) systems (such as the Atmos™ system) have been developed to render mixed audio content using a format that includes both audio beds (channels) and audio objects. An audio bed refers to an audio channel to be reproduced at a predefined, fixed speaker position (for example, 5.1 or 7.1 surround), while an audio object refers to audio that exists for a defined duration together with a position describing each object ...
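The bed/object split in [0004] can be summarized in one line of code: bed channels pass straight through to their fixed speakers, while objects are mixed in via a position-derived matrix. This is a sketch under illustrative assumptions; `render_frame` is a made-up name and the shapes are chosen for clarity.

```python
import numpy as np

def render_frame(bed, objects, A):
    """bed: (M, T) bed-channel samples reproduced at fixed speaker
    positions; objects: (N, T) object samples; A: (M, N) downmix
    matrix derived from the objects' spatial metadata.
    Returns the (M, T) speaker feeds."""
    return bed + A @ objects
```

In a real system, `A` would vary over time with the objects' metadata, which is precisely why the encoder must capture its trajectory and segment it as described above.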


Application Information

Patent Timeline: no application data
Patent Type & Authority: Patent (China)
IPC(8): G10L19/008
CPC: G10L19/008; H04S2400/11; G10L19/0017; G10L19/167; G10L19/20
Inventors: V·麦尔考特, M·J·洛, R·M·费杰吉恩
Owner: DOLBY LAB LICENSING CORP