
Real-time rendering method for very large geometry based on continuous simplification

A real-time geometry rendering technology, applied in the field of visual scene processing, which addresses problems such as reduced data accuracy.

Active Publication Date: 2020-10-23
江苏原力数字科技股份有限公司

AI Technical Summary

Problems solved by technology

The biggest problem with this method is that it reduces the accuracy of the data, so it can only be used when the sampling requirement is not high, that is, when the camera is far away from the object.



Examples


Embodiment Construction

[0024] As shown in Figure 1 and Figure 2, the real-time rendering method for super-large geometry based on continuous simplification includes a preprocessing stage and a real-time rendering stage;

[0025] Specifically, the preprocessing stage creates a continuously simplified model for the geometry and converts it into another representation to form continuous LOD data; creating the simplified model includes the following steps:

[0026] S11: Find and select the most redundant edge in the model, that is, the edge whose removal has the least impact on the model's information;

[0027] S12: Remove the edge selected in S11 using the method shown in Figure 1, and fuse the two vertices connected by this edge into one;

[0028] S13: Repeat steps S11 and S12 until the model can no longer be simplified, as shown in Figure 2;

[0029] S14: Record the edge-fusion operations performed in steps S12 and S13; together with the simplified model obtained in step S13, they form the continuous LOD data.

[0...
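To make the preprocessing steps S11-S14 above concrete, the following is a minimal sketch of how such a greedy edge-collapse loop could be organized. It is an illustration under stated assumptions, not the patent's specified implementation: the squared-edge-length cost, the dictionary-based collapse records, and the brute-force edge recomputation are all simplifications (a production pipeline would typically use a quadric-style error metric and incremental connectivity updates).

```python
import heapq

def edge_cost(verts, a, b):
    """Illustrative 'redundancy' measure for S11: squared edge length.
    The shortest remaining edge is treated as the one whose removal loses
    the least information; a production system would typically use a
    quadric error metric instead."""
    return sum((pa - pb) ** 2 for pa, pb in zip(verts[a], verts[b]))

def build_continuous_lod(verts, tris):
    """Greedy edge-collapse simplification (S11-S13) that records every
    edge fusion (S14).  verts: list of (x, y, z) tuples; tris: list of
    (i, j, k) vertex-index triples.  Returns the coarsest triangle list
    and the ordered collapse records, which together form the continuous
    LOD data."""
    tris = [tuple(t) for t in tris]
    records = []

    def current_edges():
        es = set()
        for i, j, k in tris:
            es |= {tuple(sorted(p)) for p in ((i, j), (j, k), (k, i))}
        return es

    heap = [(edge_cost(verts, a, b), a, b) for a, b in current_edges()]
    heapq.heapify(heap)

    while heap:
        _, a, b = heapq.heappop(heap)              # S11: cheapest remaining edge
        if (a, b) not in current_edges():          # stale entry from an earlier collapse
            continue
        # S12: fuse vertex b into vertex a; triangles using both vertices
        # degenerate and are dropped, the rest simply re-point b to a.
        removed_tris = [t for t in tris if a in t and b in t]
        records.append({"kept": a, "removed": b,
                        "removed_pos": verts[b], "removed_tris": removed_tris})
        tris = [tuple(a if v == b else v for v in t)
                for t in tris if not (a in t and b in t)]
        # Re-score the edges now incident to the surviving vertex a.
        for x, y in current_edges():
            if a in (x, y):
                heapq.heappush(heap, (edge_cost(verts, x, y), x, y))

    # S13: the loop ends when no collapsible edge remains; this naive
    # sketch simply runs until the triangle list is exhausted.
    return tris, records
```

A call such as `base_tris, records = build_continuous_lod(verts, tris)` would then produce the kind of data the real-time stage loads; in a progressive-mesh style system, each record would also carry enough connectivity to reverse the fusion at runtime.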



Abstract

The invention provides a real-time rendering method for super-large geometry based on continuous simplification, which comprises the following steps: preprocessing: creating a continuous simplification model for the geometry and converting it into another representation form to obtain continuous LOD data; real-time rendering: loading the continuous LOD data generated in the preprocessing stage, judging the target precision of each part of the scene for each frame, adjusting the data to the corresponding precision, and sending the adjusted data to the GPU for rendering. The invention provides the high performance of a real-time preview while preserving the original precision of the data.
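As a complement to the abstract, here is a hedged sketch of how the real-time stage could consume the continuous LOD data produced by the earlier sketch: each frame, every part of the scene gets a target precision from its camera distance, the collapse stream is replayed to that precision, and the result is handed to the GPU. The part dictionary keys, the distance-based heuristic, and the `submit_to_gpu` callback are illustrative assumptions; the patent does not prescribe them.

```python
def apply_collapses(tris, records, n):
    """Apply the first n recorded edge fusions to the full-resolution
    triangle list, giving the intermediate LOD; n == 0 keeps the data
    at its original precision."""
    tris = [tuple(t) for t in tris]
    for rec in records[:n]:
        a, b = rec["kept"], rec["removed"]
        tris = [tuple(a if v == b else v for v in t)
                for t in tris if not (a in t and b in t)]
    return tris

def target_collapses(part, camera_pos):
    """Distance-based precision choice: far-away parts keep more collapses
    applied (coarser), nearby parts are refined toward full precision."""
    dist = sum((p - c) ** 2 for p, c in zip(part["center"], camera_pos)) ** 0.5
    coarseness = min(1.0, dist / part["falloff_distance"])
    return int(coarseness * len(part["records"]))

def render_frame(scene_parts, camera_pos, submit_to_gpu):
    """Per-frame driver for the real-time stage: judge each part's target
    precision, adjust its data to that precision, and send it to the GPU."""
    for part in scene_parts:
        n = target_collapses(part, camera_pos)
        tris = apply_collapses(part["tris"], part["records"], n)
        submit_to_gpu(part["verts"], tris)   # backend-specific upload / draw call
```

Rebuilding each part from its full-resolution data every frame is done here only for clarity; a practical implementation would cache the current level and step incrementally through the collapse records as the target precision changes.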

Description

Technical field

[0001] The invention belongs to the technical field of visual scene processing, and in particular relates to a method for real-time rendering of super-large geometry based on continuous simplification.

Background technique

[0002] In the visual scenes and 3D tours of games, videos, and massive-data rendering, level of detail (usually called LOD) is often used to improve efficiency. However, real-time rendering of geometry depends on highly specialized GPU hardware, and with the ever-increasing amount of ultra-large-scale geometric data, the GPU cannot directly process such massive data. The industry usually reduces the number of samples to create a proxy that is similar to the original data but much smaller in size, which is then sent to the GPU for rendering. The biggest problem with this method is that it reduces the data accuracy, so it can only be used when the sampling requirement is not high, that is, when the camera is far away from the object...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T15/00; G06T19/20
CPC: G06T15/005; G06T19/20
Inventors: 赵锐; 侯志迎
Owner: 江苏原力数字科技股份有限公司