169 results about "Viewing frustum" patented technology

In 3D computer graphics, the view frustum (also called the viewing frustum) is the region of space in the modeled world that may appear on the screen; it is the field of view of the notional camera. The view frustum is typically obtained by taking a frustum (that is, a truncation with two parallel planes) of the pyramid of vision, which is the adaptation of the (idealized) cone of vision that a camera or eye would have to the rectangular viewports typically used in computer graphics. Some authors use "pyramid of vision" as a synonym for the view frustum itself, i.e. they already consider it truncated.
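As a concrete illustration of the definition above, the following minimal C++ sketch builds the six view-space planes of a symmetric perspective frustum from a field of view, aspect ratio and near/far distances, and tests whether a point lies inside. The camera sits at the origin looking down -Z, as in OpenGL view space; Plane, makeFrustum and contains are illustrative names, not any particular library's API.

// Minimal sketch: six view-space planes of a symmetric perspective frustum,
// plus a point-inside test. Camera at the origin, looking down -Z.
#include <array>
#include <cmath>
#include <iostream>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // inside half-space: dot(n, p) + d >= 0

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

std::array<Plane, 6> makeFrustum(float fovY, float aspect, float zNear, float zFar) {
    const float tanY = std::tan(fovY * 0.5f);                  // half-height per unit depth
    const float tanX = tanY * aspect;                          // half-width per unit depth
    const float ny   = 1.0f / std::sqrt(1.0f + tanY * tanY);   // normalisation factors
    const float nx   = 1.0f / std::sqrt(1.0f + tanX * tanX);
    return {{
        { { 0.0f,  0.0f, -1.0f      }, -zNear },   // near:   -z >= zNear
        { { 0.0f,  0.0f,  1.0f      },  zFar  },   // far:     z >= -zFar
        { {  nx,   0.0f, -nx * tanX },  0.0f  },   // left:    x >= -tanX * (-z)
        { { -nx,   0.0f, -nx * tanX },  0.0f  },   // right:   x <=  tanX * (-z)
        { { 0.0f,   ny,  -ny * tanY },  0.0f  },   // bottom:  y >= -tanY * (-z)
        { { 0.0f,  -ny,  -ny * tanY },  0.0f  },   // top:     y <=  tanY * (-z)
    }};
}

bool contains(const std::array<Plane, 6>& frustum, const Vec3& p) {
    for (const Plane& pl : frustum)
        if (dot(pl.n, p) + pl.d < 0.0f) return false;          // outside this plane
    return true;
}

int main() {
    auto f = makeFrustum(1.0f /* ~57 deg */, 16.0f / 9.0f, 0.1f, 100.0f);
    std::cout << contains(f, { 0.0f, 0.0f, -5.0f }) << "\n";   // 1: in front of the camera
    std::cout << contains(f, { 0.0f, 0.0f,  5.0f }) << "\n";   // 0: behind the camera
}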

Large-scale virtual crowd real-time rendering method

The invention relates to a large-scale virtual crowd real-time rendering method, which comprises the following steps: 1, importing a conventional mesh model and extracting the geometric information and animation information of the model; 2, performing octree space subdivision on the model, wherein an approximate description of the part of the model related to the geometric size of each node is stored in that node; 3, performing point sampling on the surface of the part of the model contained in each node; 4, processing and modeling the sample points, including calculating sample point information by interpolation, selecting sample point animation information, oversampling and removing redundancy; 5, establishing model sampling data for three LOD (Levels of Detail) layers according to specified parameters; 6, performing GPU (Graphics Processing Unit) accelerated view frustum culling on the virtual crowd in a large-scale scene during real-time rendering; 7, applying a GPU-accelerated LOD strategy to the culling result, including selecting and ordering character LODs; and 8, sequentially performing instanced rendering, based on GPU skin-skeleton animation, of the characters at each LOD level. By adopting the method, fast real-time rendering of a large-scale virtual crowd can be realized.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
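Step 6 of the method above (GPU-accelerated view frustum culling of the crowd) boils down to a per-instance visibility test. The sketch below is a CPU reference of that test, assuming each crowd member is reduced to a world-space bounding sphere and the six frustum planes are supplied as (nx, ny, nz, d) vectors; sphereInFrustum and cullCrowd are illustrative names, not taken from the patent, and on the GPU the same test would typically run one thread per instance followed by stream compaction.

// Minimal sketch of per-instance sphere-vs-frustum culling (CPU reference).
#include <array>
#include <vector>

struct Vec4   { float x, y, z, w; };   // plane: xyz = unit normal, w = d
struct Sphere { float x, y, z, r; };   // world-space bounding sphere

bool sphereInFrustum(const std::array<Vec4, 6>& planes, const Sphere& s) {
    for (const Vec4& p : planes) {
        float dist = p.x * s.x + p.y * s.y + p.z * s.z + p.w;
        if (dist < -s.r) return false;            // completely outside this plane
    }
    return true;                                  // inside or intersecting the frustum
}

// Emit the indices of visible crowd members; on the GPU this loop becomes
// one compute thread per instance plus a compaction pass.
std::vector<int> cullCrowd(const std::array<Vec4, 6>& planes,
                           const std::vector<Sphere>& crowd) {
    std::vector<int> visible;
    for (int i = 0; i < (int)crowd.size(); ++i)
        if (sphereInFrustum(planes, crowd[i])) visible.push_back(i);
    return visible;
}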

Method for converting virtual 3D (Three-Dimensional) scene into 3D view

Inactive | CN102509334A | Improve experience | No vertical parallax | 3D-image rendering | Parallax | Viewpoints
The invention discloses a method for converting a virtual 3D (Three-Dimensional) scene into a 3D view. The method can convert any virtual 3D scene in OpenGL into a three-dimensional view through the following steps: rotating and translating the world coordinate system, with the observation point taken as the origin of the new coordinate system and the line connecting the central position of the viewpoints with the observation point serving as the positive Z axis; determining the rotation angle and translation distance from the central position of the viewpoints and the coordinates of the observation point; determining the shear mapping angle of each viewpoint from the central position of the viewpoints and the coordinates of that viewpoint, generating the corresponding shear mapping matrix, right-multiplying the model view matrix of each viewpoint by it, and projecting to obtain the image data of each viewpoint; and adjusting the coordinates of the 3D scene, the horizontal resolution of the view, the size of the view frustum and the positions of the viewpoints according to constraints on parallax and the 3D viewing experience, so as to improve the 3D effect of the view. Shear transformation and parameter adjustment are inserted into the OpenGL processing flow, which solves the problems of a weak 3D effect and the presence of vertical parallax and achieves an optimal 3D effect.
Owner:BEIJING JETSEN TECH
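The shear-mapping step described above can be pictured as follows: a minimal sketch, assuming column-major 4x4 matrices in the usual OpenGL layout and a shear factor of eyeOffsetX / viewDistance (i.e. the tangent of a shear angle). The exact factor and the sign convention in the patent may differ; Mat4, makeShearXZ and modelViewForViewpoint are illustrative names.

// Minimal sketch: build a shear matrix (x' = x + s * z) for one viewpoint and
// right-multiply it onto that viewpoint's model-view matrix.
#include <array>

using Mat4 = std::array<float, 16>;              // column-major, OpenGL layout

Mat4 identity() {
    Mat4 m{};                                    // zero-initialised
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// Shear of x along z; element (row 0, column 2) lives at index 8 in
// column-major order. Sign depends on the handedness/offset convention chosen.
Mat4 makeShearXZ(float eyeOffsetX, float viewDistance) {
    Mat4 m = identity();
    m[8] = eyeOffsetX / viewDistance;
    return m;
}

// c = a * b, all column-major.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return c;
}

// Per-viewpoint model-view matrix: right-multiply the shared model-view by the
// viewpoint's shear, then hand the result to the projection stage.
Mat4 modelViewForViewpoint(const Mat4& modelView, float eyeOffsetX, float viewDistance) {
    return mul(modelView, makeShearXZ(eyeOffsetX, viewDistance));
}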

Large-scale oblique photography model organization and scheduling method

The invention discloses an organization and scheduling method for large-scale oblique photography models. The method comprises the steps of: hierarchically dividing the oblique photography model data, determining the simplified model data and model material files of each hierarchy in combination with the simplification degree of that hierarchy, and generating model files of different hierarchies; partitioning the model file of each hierarchy to generate partitioned model data of different hierarchies, recording the tiles with OCT coding, and generating a JSON index file; generating a quadrangular view frustum region according to the selected view angle and viewpoint, and calling the block model data inside the view frustum region in combination with the index file; and matching the current view frustum region as the viewpoint and view angle change, and updating the model data of the corresponding view frustum region. In this method, a data index and a tree-shaped data structure are established after the data is layered and blocked, which facilitates quick positioning and effective management of the data; scheduling strategies such as pre-access and three-level caching improve model data loading and rendering efficiency and expand the data loading and visualization capacity of the three-dimensional GIS.
Owner:中国科学院电子学研究所苏州研究院 (Suzhou Research Institute, Institute of Electronics, Chinese Academy of Sciences)
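To make the scheduling idea concrete, here is a minimal sketch that approximates the quadrangular view frustum region with a 2D viewing sector (viewpoint, heading, half field of view, maximum distance) and selects the index tiles whose centres fall inside it. The Tile fields and function names are hypothetical, and the patent's OCT-coded JSON index is not reproduced.

// Minimal sketch: pick the tiles to load when the viewpoint or view angle changes.
#include <cmath>
#include <string>
#include <vector>

struct Tile { std::string id; float cx, cy; std::string path; };   // one index entry

// True if the tile centre lies within maxDist of the viewpoint and within
// halfFov (radians) of the view heading: a conservative stand-in for the
// quadrangular view frustum region described above.
bool tileInView(const Tile& t, float viewX, float viewY,
                float headingRad, float halfFov, float maxDist) {
    const float kPi = 3.14159265358979f;
    float dx = t.cx - viewX, dy = t.cy - viewY;
    if (std::sqrt(dx * dx + dy * dy) > maxDist) return false;
    float ang = std::atan2(dy, dx) - headingRad;
    while (ang >  kPi) ang -= 2.0f * kPi;        // wrap to [-pi, pi]
    while (ang < -kPi) ang += 2.0f * kPi;
    return std::fabs(ang) <= halfFov;
}

// Returns the tiles whose block model data should be loaded (or pulled from
// the cache) for the current view.
std::vector<Tile> scheduleTiles(const std::vector<Tile>& index, float viewX, float viewY,
                                float headingRad, float halfFov, float maxDist) {
    std::vector<Tile> toLoad;
    for (const Tile& t : index)
        if (tileInView(t, viewX, viewY, headingRad, halfFov, maxDist))
            toLoad.push_back(t);
    return toLoad;
}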

Simple calculation method of target laser scattering characteristics under local irradiation

Active | CN106770045A | Calculation of Scattering Cross Section | Easy and fast scattering cross section | Scattering properties measurements | Target surface | Scattering cross-section
The invention discloses a simple calculation method for the laser scattering characteristics of a target under local irradiation, and belongs to the technical field of target detection, recognition and stealth. By modifying the parameters of the view frustum, simple calculation of the laser radar scattering cross-section of a target under local irradiation can be realized. The method comprises the following steps: first, obtaining the bidirectional reflectance distribution function of the target surface material; establishing a geometric model of the complicated target; reading the target geometric model file and setting the parameters of the view frustum according to the size and position of the irradiation laser spot; calling OpenGL functions to carry out target rendering and hidden-surface removal so as to display the target under local irradiation in real time; and finally obtaining each parameter of the target laser radar scattering cross-section calculation formula to complete the simulation of the target laser radar scattering cross-section under local irradiation. Compared with conventional algorithms, the adopted algorithm is easy to operate and flexible; the size and position of the irradiation laser spot can be easily modified; and the simulation of the target laser radar scattering cross-section can be realized under full or local laser irradiation.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
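The step of setting view-frustum parameters from the irradiation spot can be sketched as follows, assuming a circular spot of radius r centred at (cx, cy) in the plane at distance spotDist in front of the sensor; the bounds are scaled back to the near plane in the order glFrustum(left, right, bottom, top, near, far) expects. The circular-spot model and the function name frustumFromSpot are assumptions, not the patent's formulas.

// Minimal sketch: derive asymmetric frustum bounds that cover only the
// irradiated spot, ready to feed to glFrustum or an equivalent projection.
struct FrustumParams { double left, right, bottom, top, zNear, zFar; };

FrustumParams frustumFromSpot(double cx, double cy, double r,
                              double spotDist, double zNear, double zFar) {
    double s = zNear / spotDist;                 // perspective scale to the near plane
    return { (cx - r) * s, (cx + r) * s,         // left, right
             (cy - r) * s, (cy + r) * s,         // bottom, top
             zNear, zFar };
}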