
Audio segmentation based on spatial metadata

A metadata and audio technology applied in the field of adaptive audio signal processing, which addresses the problem that existing systems do not use spatial cues when segmenting audio

Active Publication Date: 2017-02-22
DOLBY LAB LICENSING CORP

AI Technical Summary

Problems solved by technology

[0013] Current systems also do not use the spatial cues of objects in adaptive audio content when segmenting audio.

Method used



Examples


Embodiment 1

[0224] Embodiments are generally directed to systems and methods for segmenting audio into sub-segments over which a non-interpolated output matrix can remain constant, while continuously varying matrixing is achieved through interpolation of the input primitive matrices, and triangular matrix updates can be used to correct the trajectory. Segments are designed such that the matrices specified at the boundaries of these sub-segments can be decomposed into primitive matrices in two different ways: one decomposition suitable for interpolation all the way up to the boundary, and the other suitable for interpolation starting from the boundary. This process also marks segments that need to fall back to non-interpolated matrixing.
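
The following is a minimal sketch, not the patented implementation, of how the coefficients of boundary primitive matrices might be interpolated across one sub-segment and how a sub-segment could be flagged for the non-interpolated fallback; the matrix representation, the channel-sequence check, and the function names are assumptions for illustration.

```python
import numpy as np

def interpolate_primitive_matrices(m_start, m_end, num_samples):
    """Linearly interpolate each coefficient from the matrix specified at the
    sub-segment start (m_start) to the matrix at its end (m_end).
    m_start, m_end: (N, N) arrays; returns an array of shape (num_samples, N, N)."""
    alphas = np.linspace(0.0, 1.0, num_samples)
    return np.stack([(1.0 - a) * m_start + a * m_end for a in alphas])

def needs_fallback(m_start, m_end, chan_seq_start, chan_seq_end):
    """Flag a sub-segment as non-interpolable when its two boundary
    decompositions do not share the same channel sequence or shape."""
    return chan_seq_start != chan_seq_end or m_start.shape != m_end.shape
```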

[0225] One approach involves keeping the primitive matrix channel sequence constant. As mentioned earlier, each primitive matrix is associated with the channel it operates on or modifies. For example, consider primitive matrices in the order S0, S1, S2 (The inve...
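
As an illustration of a fixed channel sequence, the sketch below models each primitive matrix as an identity matrix whose single row (the channel it modifies) holds the mixing coefficients, and applies the sequence S0, S1, S2 in that fixed order; the coefficient values and helper names are invented for the example, not taken from the patent.

```python
import numpy as np

def primitive_matrix(num_channels, channel, row_coeffs):
    """Identity matrix except for row `channel`, which holds the coefficients
    that modify that single channel."""
    m = np.eye(num_channels)
    m[channel, :] = row_coeffs
    return m

num_ch = 3
S0 = primitive_matrix(num_ch, 0, [1.0, 0.5, 0.0])    # operates on channel 0
S1 = primitive_matrix(num_ch, 1, [0.0, 1.0, 0.25])   # operates on channel 1
S2 = primitive_matrix(num_ch, 2, [0.3, 0.0, 1.0])    # operates on channel 2

frame = np.random.randn(num_ch)        # one sample across the 3 channels
out = S2 @ (S1 @ (S0 @ frame))         # apply in the fixed order S0, S1, S2
```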



Abstract

A method of encoding adaptive audio comprises receiving N objects and associated spatial metadata that describes the continuing motion of these objects, and partitioning the audio into segments based on the spatial metadata. The method encodes adaptive audio having objects and channel beds by capturing the continuing motion of a number N of objects in a time-varying matrix trajectory comprising a sequence of matrices, coding coefficients of the time-varying matrix trajectory in spatial metadata to be transmitted via a high-definition audio format for rendering the adaptive audio through a number M of output channels, and segmenting the sequence of matrices into a plurality of sub-segments based on the spatial metadata, wherein the plurality of sub-segments are configured to facilitate coding of one or more characteristics of the adaptive audio.
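
A hedged sketch of the segmentation idea described above: split a time-varying matrix trajectory into sub-segments wherever the spatial metadata shows a discontinuity (here, an object position jump above a threshold). The threshold rule, array shapes, and function names are assumptions for illustration, not the claimed algorithm.

```python
import numpy as np

def segment_boundaries(positions, jump_threshold=0.5):
    """positions: (T, N, 3) per-frame positions of N objects.
    Returns frame indices at which a new sub-segment starts, plus T."""
    deltas = np.linalg.norm(np.diff(positions, axis=0), axis=-1)   # (T-1, N)
    jumps = np.where(deltas.max(axis=1) > jump_threshold)[0] + 1
    return [0] + jumps.tolist() + [positions.shape[0]]

def split_matrix_trajectory(matrices, positions, jump_threshold=0.5):
    """matrices: (T, M, N) sequence of rendering matrices, one per frame.
    Returns a list of sub-segments, each a contiguous slice of the trajectory."""
    bounds = segment_boundaries(positions, jump_threshold)
    return [matrices[s:e] for s, e in zip(bounds[:-1], bounds[1:])]
```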

Description

[0001] Cross References to Related Applications

[0002] This application claims priority to US Provisional Patent Application No. 61/984634, filed April 25, 2014, which is hereby incorporated by reference in its entirety.

Technical Field

[0003] Embodiments relate generally to adaptive audio signal processing, and more particularly to segmenting audio using spatial metadata describing the motion of audio objects in order to derive downmix matrices for rendering the objects to discrete speaker channels.

Background

[0004] New professional and consumer audiovisual (AV) systems (such as the Atmos™ system) have been developed to render mixed audio content using a format that includes both audio beds (channels) and audio objects. Audio beds refer to audio channels to be reproduced at predefined, fixed speaker positions (e.g., 5.1 or 7.1 surround), while audio objects refer to sounds that exist for a defined duration and have metadata describing each object's position, velocity, and siz...
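
To make the notion of a downmix matrix concrete, the sketch below derives an M x N rendering matrix from object position metadata using simple distance-based panning to a fixed speaker layout; the layout, gain law, and names are illustrative assumptions, not the renderer used by the systems described here.

```python
import numpy as np

SPEAKERS = np.array([            # simplified 5-speaker layout in (x, y) room coords
    [-1.0,  1.0], [ 1.0,  1.0],  # front left, front right
    [ 0.0,  1.0],                # center
    [-1.0, -1.0], [ 1.0, -1.0],  # rear left, rear right
])

def downmix_matrix(object_positions):
    """object_positions: (N, 2) array of object (x, y) positions.
    Returns an (M, N) matrix whose columns are per-object speaker gains."""
    cols = []
    for pos in object_positions:
        dists = np.linalg.norm(SPEAKERS - pos, axis=1)
        gains = 1.0 / (dists + 1e-3)                 # closer speakers get more energy
        cols.append(gains / np.linalg.norm(gains))   # normalize each column's power
    return np.stack(cols, axis=1)
```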

Claims


Application Information

IPC(8): G10L19/008
CPC: G10L19/008; H04S2400/11; G10L19/0017; G10L19/167; G10L19/20
Inventors: V·麦尔考特, M·J·洛, R·M·费杰吉恩
Owner: DOLBY LAB LICENSING CORP