232 results for "Spatial parameter" patented technology

Spatial Parameters. Specifies whether to write geometry (planar data) or geography (geodetic data) when writing to tables. This parameter works only in combination with the Spatial Column parameter, which specifies the geometry or geography column to use when writing to tables.
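
The distinction above (geometry for planar data, geography for geodetic data, written into the column named by the Spatial Column parameter) can be illustrated with a small sketch. The Python snippet below assumes a PostGIS backend; the connection string, table name, column name, and the two writer settings modeled as plain variables are all hypothetical.

import psycopg2  # assumes a reachable PostGIS-enabled database

SPATIAL_PARAMETER = "geography"  # "geometry" (planar) or "geography" (geodetic)
SPATIAL_COLUMN = "shape"         # the column named by the Spatial Column parameter

def write_point(cur, table, lon, lat):
    wkt = f"POINT({lon} {lat})"
    if SPATIAL_PARAMETER == "geometry":
        # planar data: write into a PostGIS geometry column
        cur.execute(
            f"INSERT INTO {table} ({SPATIAL_COLUMN}) "
            "VALUES (ST_GeomFromText(%s, 4326))", (wkt,))
    else:
        # geodetic data: write into a PostGIS geography column
        cur.execute(
            f"INSERT INTO {table} ({SPATIAL_COLUMN}) "
            "VALUES (ST_GeogFromText(%s))", (wkt,))

conn = psycopg2.connect("dbname=gis")  # hypothetical connection string
with conn, conn.cursor() as cur:
    write_point(cur, "landmarks", -122.34, 47.61)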

System and method for visualizing connected temporal and spatial information as an integrated visual representation on a user interface

A system and method for configuring the presentation of a plurality of presentation elements in a visual representation on a user interface, the presentation elements having both temporal and spatial parameters. The method comprises the steps of:
  • defining a time bar with a time scale having time indicators as subdivisions of the time scale, and having a first global temporal limit and a second global temporal limit that define the temporal domain of the presentation elements;
  • defining a focus range of the time bar such that the focus range has a first local temporal limit and a second local temporal limit, wherein the first local temporal limit is greater than or equal to the first global temporal limit and the second local temporal limit is less than or equal to the second global temporal limit;
  • defining a focus bar having a focus time scale with focus time indicators as subdivisions of the focus time scale and with the first and second local temporal limits as the extents of the focus time scale, such that the focus time scale is an expansion of the time scale; and
  • displaying a set of presentation elements, selected from the plurality of presentation elements, whose respective temporal parameters fall within the first and second local temporal limits.
Owner:PEN LINK LTD
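
The selection step described in the abstract, displaying only those presentation elements whose temporal parameter falls between the local temporal limits of the focus range, can be sketched as follows. This is a minimal Python illustration, not the patented implementation; all class and field names are my own.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class PresentationElement:
    label: str
    timestamp: datetime            # the element's temporal parameter
    location: Tuple[float, float]  # the element's spatial parameter (illustrative)

@dataclass
class TimeBar:
    global_start: datetime  # first global temporal limit
    global_end: datetime    # second global temporal limit

@dataclass
class FocusRange:
    local_start: datetime   # first local temporal limit
    local_end: datetime     # second local temporal limit

    def check_within(self, bar: TimeBar) -> None:
        # the focus range must lie inside the time bar's global limits
        if not (bar.global_start <= self.local_start <= self.local_end <= bar.global_end):
            raise ValueError("focus range exceeds the global temporal limits")

def select_visible(elements: List[PresentationElement],
                   focus: FocusRange) -> List[PresentationElement]:
    # keep only elements whose temporal parameter falls within the focus range
    return [e for e in elements
            if focus.local_start <= e.timestamp <= focus.local_end]

bar = TimeBar(datetime(2024, 1, 1), datetime(2024, 12, 31))
focus = FocusRange(datetime(2024, 3, 1), datetime(2024, 3, 31))
focus.check_within(bar)
events = [PresentationElement("call", datetime(2024, 3, 10), (41.2, -96.0)),
          PresentationElement("meeting", datetime(2024, 7, 4), (41.3, -96.1))]
print([e.label for e in select_visible(events, focus)])  # ['call']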

Perceptual synthesis of auditory scenes

An auditory scene is synthesized by applying two or more different sets of one or more spatial parameters (e.g., an inter-ear level difference (ILD), inter-ear time difference (ITD), and/or head-related transfer function (HRTF)) to two or more different frequency bands of a combined audio signal, where each different frequency band is treated as if it corresponded to a single audio source in the auditory scene. In one embodiment, the combined audio signal corresponds to the combination of two or more different source signals, where each different frequency band corresponds to a region of the combined audio signal in which one of the source signals dominates the others. In this embodiment, the different sets of spatial parameters are applied to synthesize an auditory scene comprising the different source signals. In another embodiment, the combined audio signal corresponds to the combination of the left and right audio signals of a binaural signal corresponding to an input auditory scene. In this embodiment, the different sets of spatial parameters are applied to reconstruct the input auditory scene. In either case, transmission bandwidth requirements are reduced because only a single audio signal needs to be transmitted to a receiver configured to synthesize or reconstruct the auditory scene.
Owner:AVAGO TECH INT SALES PTE LTD
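
The core operation, applying a different (ILD, ITD) pair to each frequency band of a single combined signal so that each band behaves like a separately placed source, can be sketched in a few lines of numpy. This is a simplified illustration of the idea, not the patented coder: the function name, band edges, parameter values, and the way ILD and ITD are applied per band are my own choices, and HRTF filtering is omitted.

import numpy as np

def synthesize_bands(mono, fs, bands, ilds_db, itds_s):
    """Turn a mono 'combined' signal into a left/right pair by applying a
    per-band level difference (ILD) and time difference (ITD)."""
    spec = np.fft.rfft(mono)
    freqs = np.fft.rfftfreq(len(mono), 1.0 / fs)
    left = np.zeros_like(spec)
    right = np.zeros_like(spec)
    for (lo, hi), ild_db, itd in zip(bands, ilds_db, itds_s):
        band = (freqs >= lo) & (freqs < hi)
        gain = 10.0 ** (ild_db / 20.0)                   # level difference between ears
        delay = np.exp(-2j * np.pi * freqs[band] * itd)  # time difference as a phase shift
        left[band] = spec[band] * gain                   # louder (or quieter) in the left ear
        right[band] = spec[band] * delay                 # delayed (or advanced) in the right ear
    return np.fft.irfft(left, len(mono)), np.fft.irfft(right, len(mono))

# Two tones in different bands stand in for two dominant source signals.
fs = 16000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2500 * t)
L, R = synthesize_bands(mono, fs,
                        bands=[(0, 1000), (1000, 8000)],
                        ilds_db=[6.0, -6.0],
                        itds_s=[0.0003, -0.0003])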