167 results about "3d geometry" patented technology

Segmenting compressed graphics data for parallel decompression and rendering

A graphics system and method are disclosed for reducing redundant transformation and lighting calculations performed on vertices that are shared by more than one geometric primitive. The amount of data transmitted in certain data blocks may be reduced by incorporating a multicast/unicast bit into each data block. This bit may be set to instruct the control unit to use the current 3D geometry data or state information for subsequent vertices, increasing efficiency by allowing subsequent vertices that share the same 3D geometry data to transfer less data. Conversely, if a vertex has wholly independent 3D geometry data, its multicast/unicast bit may be set so that the accompanying data applies only to the current vertex rather than to all subsequent vertices. The reduction in redundant calculations is accomplished by delaying the formation of geometric primitives until after transformation and lighting have been performed on the vertices. Transformation and/or lighting are performed independently on a vertex-by-vertex basis, without reference to which geometric primitives the vertices belong to. After transformation and/or lighting, geometric primitives may be formed using previously generated connectivity information, which may include mesh buffer references, vertex tags, and/or other types of information.
Owner:ORACLE INT CORP
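The multicast/unicast idea above can be illustrated with a minimal sketch. This is not the patent's actual block encoding; the tuple layout, mode constants, and function name are illustrative assumptions. The point is that a vertex carrying no state reuses the last multicast state, while a unicast override applies to one vertex only:

```python
# Hypothetical sketch of the multicast/unicast state-reuse scheme.
# Block layout (mode_bit, position, state_or_None) is an illustrative
# assumption, not the patent's actual wire format.

MULTICAST = 0  # accompanying state persists for subsequent vertices
UNICAST = 1    # accompanying state applies to this vertex only

def decode_vertices(blocks):
    """Resolve the effective transform/lighting state for each vertex.

    Vertices that omit state reuse the current multicast state, so
    shared vertices transfer less data; a unicast block overrides the
    state for its own vertex without disturbing the shared default.
    """
    current_state = None
    out = []
    for mode, position, state in blocks:
        if state is not None and mode == MULTICAST:
            current_state = state                  # becomes the new default
        if state is not None and mode == UNICAST:
            effective = state                      # one-off override
        else:
            effective = current_state              # reuse shared state
        out.append((position, effective))
    return out

blocks = [
    (MULTICAST, (0, 0, 0), "matrix_A"),  # sets the shared state
    (MULTICAST, (1, 0, 0), None),        # reuses matrix_A, less data sent
    (UNICAST,   (2, 0, 0), "matrix_B"),  # independent vertex, one-off state
    (MULTICAST, (3, 0, 0), None),        # back to the shared matrix_A
]
```

Running `decode_vertices(blocks)` pairs each position with its effective state, showing the one-off `matrix_B` override sandwiched between reuses of `matrix_A`.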

Cybernetic 3D music visualizer

A 3D music visualization process employing a novel method of real-time reconfigurable control of 3D geometry and texture, driven by blended control combinations of software oscillators, computer keyboard and mouse, audio spectrum, control recordings, and the MIDI protocol. The method includes a programmable visual attack, decay, sustain, and release (V-ADSR) transfer function applicable to all degrees of freedom of the 3D output parameters, enhancing even binary control inputs with continuous and aesthetic spatio-temporal symmetries of behavior. A "Scene Nodes Graph" for authoring content acts as a hierarchical, object-oriented graphical interpreter for defining 3D models and their textures, as well as flexibly defining how the control source blend(s) are connected, or "Routed," to those objects. An "Auto-Builder" simplifies Scene construction by auto-inserting and auto-routing Scene Objects. The Scene Nodes Graph also provides means for real-time modification of the control-scheme structure itself, and supports direct real-time keyboard/mouse adjustment of all parameters of all input control sources and all output objects. Dynamic control schemes are also supported, such as control sources modifying the Routing and parameters of other control sources. An auto-scene-creator feature allows automatic scene creation by exploiting the full range of the visualizer's set of variables to create a nearly infinite set of scenes. A Realtime-Network-Updater feature allows multiple local and/or remote users to simultaneously co-create scenes in real time and effect the changes in a networked community environment, wherein universal variables are interactively updated in real time, enabling scene co-creation in a global environment. In terms of human subjective perception, the method creates, enhances, and amplifies multiple forms of both passive and interactive synesthesia.
The method utilizes transfer functions that provide multiple forms of applied symmetry in the control feedback process, yielding an increased level of perceived visual harmony and beauty. It enables a substantially increased number of both passive and human-interactive interpenetrating control/feedback processes to be employed simultaneously within the same audio-visual perceptual space, while maintaining distinct recognition of each and reducing the ergonomic effort required to distinguish them even when coexistent. Taken together, these novel features of the invention can be employed (through considered Scene content construction) to realize an increased density of "orthogonal features" in cybernetic multimedia content. This in turn increases the maximum number of human players who can simultaneously participate in shared interactive music visualization content while each still retains relatively clear perception of their own control/feedback parameters.
Owner:VASAN SRINI +2
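The V-ADSR transfer function described above can be sketched as a piecewise-linear envelope. The function name, parameterization, and linear segment shapes below are illustrative assumptions (the patent does not specify the curve form); the sketch shows how a binary gate (note on/off) is turned into a smooth 0..1 value suitable for driving any 3D output parameter:

```python
def v_adsr(t, gate_on, gate_off, attack, decay, sustain, release):
    """Piecewise-linear visual ADSR envelope (illustrative sketch).

    Maps a binary gate (on at `gate_on`, off at `gate_off`) to a
    continuous value in [0, 1], so even on/off control inputs drive
    smooth parameter motion. `sustain` is a level in [0, 1]; `attack`,
    `decay`, and `release` are durations (assumed > 0).
    """
    if t < gate_on:
        return 0.0
    if t < gate_off:                          # gate held
        dt = t - gate_on
        if dt < attack:                       # rise 0 -> 1
            return dt / attack
        dt -= attack
        if dt < decay:                        # fall 1 -> sustain level
            return 1.0 - (1.0 - sustain) * dt / decay
        return sustain
    # gate released: fall linearly from the level at release start to 0
    level = v_adsr(gate_off, gate_on, gate_off + 1e9,
                   attack, decay, sustain, release)
    dt = t - gate_off
    return max(0.0, level * (1.0 - dt / release))
```

For example, with `attack=1`, `decay=1`, `sustain=0.5`, `release=2`, a gate held from t=0 to t=10 rises over the first second, decays to 0.5 by t=2, holds, then ramps back to 0 by t=12; mid-attack (t=0.5) the output is 0.5 and mid-release (t=11) it is 0.25.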

Long bone fracture traction reduction navigation apparatus

The invention discloses a fracture traction reduction guiding device for a long bone. The device comprises a driving mechanical arm, a driven mechanical arm, a photoelectric tracer, an end effector, a trolley, a tracking mark component, and a spatial point acquisition unit. The tracking mark component is installed at the fracture position of the long bone, and the spatial point acquisition unit is held by the doctor. Prop A of the driving mechanical arm and prop B of the driven mechanical arm are each installed on the upper panel of the trolley shell. The photoelectric tracer is installed on the end cover of rocker A of the driving mechanical arm, and the end effector on the end cover of rocker B of the driven mechanical arm. A computer system is housed in the wagon box of the trolley. The driving mechanical arm has the same structure as the driven mechanical arm. The device can assist the doctor in completing the 3D-geometry reduction of a wounded limb; the mechanical arm fixes the distal end of the wounded limb, improving the accuracy of reduction and reducing X-ray radiation exposure. Because the tracking mark component is mounted on the fractured bone to be operated on, the position of the characteristic points it marks is easier for the photoelectric tracer to track.
Owner:BEIHANG UNIV

Image Coding And Decoding Method And Apparatus For Efficient Encoding And Decoding Of 3D Light Field Content

The invention is an image coding method for video compression, especially for efficient encoding and decoding of true 3D content without extreme bandwidth requirements; it is compatible with the current standards, serves as an extension, and provides a scalable format. The method comprises the steps of obtaining geometry-related information about the 3D geometry of the 3D scene and generating a common relative motion vector set on the basis of that geometry-related information, the common relative motion vector set corresponding to the real 3D geometry. This motion vector generating step (37) replaces the conventional motion estimation and motion vector calculation applied in the standard (MPEG-4/H.264 AVC, MVC, etc.) procedures. Inter-frame coding is carried out by creating predictive frames, starting from an intra frame, being one of the 2D view images, on the basis of the intra frame and the common relative motion vector set. On the decoder side, a large number of views are reconstructed based on dense but real 3D geometry information. The invention also relates to image coding and decoding apparatuses carrying out the encoding and decoding methods, as well as to computer-readable media storing computer-executable instructions for the inventive methods. (FIG. 8)
Owner:BALOGH TIBOR
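The idea of deriving motion vectors from known 3D geometry, rather than searching for them by block matching, can be sketched for the simplest case of two horizontally spaced pinhole views. The camera setup, function name, and parameters below are illustrative assumptions, not the patent's actual projection model:

```python
# Hypothetical sketch: a motion (disparity) vector derived directly from
# 3D geometry for two pinhole cameras separated horizontally by
# `baseline`, both looking down +z with focal length `f` in pixels.
# The setup and names are illustrative assumptions.

def geometry_motion_vector(point3d, f, baseline):
    """Displacement of a 3D point's projection between view A and view B.

    Because depth z is known, the vector is computed in closed form
    instead of being estimated by block matching: it reduces to
    (-f * baseline / z, 0) for this horizontal two-view geometry.
    """
    x, y, z = point3d
    ax, ay = f * x / z, f * y / z                  # projection in view A
    bx, by = f * (x - baseline) / z, f * y / z     # projection in view B
    return (bx - ax, by - ay)
```

With `f=100` and `baseline=0.1`, a point at depth 2 yields the vector (-5.0, 0.0), while a nearer point at depth 1 yields (-10.0, 0.0): closer geometry moves farther between views, which is exactly the depth-dependent structure a common relative motion vector set can capture once the real 3D geometry is known.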