
An efficient global illumination rendering method based on deep learning

A deep-learning and global-illumination technology, applied to neural learning methods, 3D image processing, instruments, etc., achieving the effect of saving rendering time

Active Publication Date: 2019-02-26
PEKING UNIV

AI Technical Summary

Problems solved by technology

However, this method may still have the following problems. First, a high-precision, highly robust neural network usually requires a deeper structure, more parameters, and more computation, so how can an effective neural network be designed that, while guaranteeing time efficiency, generates renderings of the same quality as traditional non-deep-learning illumination methods? Second, a neural network is a data-driven method: how much photon data is needed as input to obtain a general photon-mapping-based global illumination algorithm?



Examples


Embodiment Construction

[0036] The implementation of the deep-learning-based illumination rendering method of the present invention is described in detail below with reference to the accompanying drawings.

[0037] Basic concepts of convolutional neural networks (CNNs) and deep learning used in the present invention:

[0038] a) stride, kernel, pooling in convolutional neural network (CNN)

[0039] Step size (stride), convolution kernel size (kernel), and pooling are concepts commonly used in designing convolutional neural network structures. Image convolution is generally 2-dimensional. In a convolution layer, the kernel refers to the size (height and width) of the convolution kernel, and the stride refers to the sampling step of the convolution. A stride can also be applied to image features (feature maps), where it denotes the sampling step along the height and width of the feature map; in this sense, a multi-layer neural network can also perform downsampling.
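As a minimal illustration of these concepts (a sketch, not code from the patent), the following NumPy snippet implements a valid 2-D convolution with a configurable stride, followed by max pooling. The 6x6 image, 3x3 averaging kernel, and 2x2 pooling window are arbitrary example choices.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid (no-padding) 2-D convolution with a square sampling stride."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1   # output height
    ow = (image.shape[1] - kw) // stride + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(feature_map, size=2, stride=2):
    """Max pooling over windows of the given size and stride."""
    oh = (feature_map.shape[0] - size) // stride + 1
    ow = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = feature_map[i*stride:i*stride+size,
                                    j*stride:j*stride+size].max()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.ones((3, 3)) / 9.0                        # 3x3 averaging kernel
fmap = conv2d(img, k, stride=1)                  # -> 4x4 feature map
pooled = max_pool(fmap)                          # -> 2x2 after 2x2 pooling
print(fmap.shape, pooled.shape)                  # (4, 4) (2, 2)
```

Note how a larger stride shrinks the output: with `stride=2` the same 6x6 input and 3x3 kernel yield a 2x2 feature map, which is why striding (like pooling) acts as downsampling.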

...


Abstract

The invention discloses an efficient deep-learning-based global illumination rendering method comprising the following steps: 1) select or generate several groups of images, each group containing k coarse color renderings computed with different photon-collection radii; for each group, stack the k color images along their three color channels to form the input of a neural network; 2) train the neural network on this input data to obtain a network model and its parameters; 3) for the viewpoint and three-dimensional scene to be rendered, run the photon mapping method to generate k coarse color renderings, stack them along the three channels as input data, and process this input with the network model and parameters trained in step 2) to obtain the final synthesized rendering. The invention needs only a few coarse illumination images to synthesize a high-quality photorealistic rendering.
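The channel-stacking step of the abstract can be sketched as follows. The group size k, image resolution, and random placeholder arrays are hypothetical values standing in for actual coarse photon-mapping renderings; the patent does not fix them.

```python
import numpy as np

# Hypothetical parameters (not specified by the patent):
k = 4          # number of coarse renderings with different photon-collection radii
H, W = 64, 64  # rendering resolution

# Step 1: k coarse RGB renderings of the same view, each computed with a
# different photon-collection radius (random stand-ins here).
coarse = [np.random.rand(H, W, 3) for _ in range(k)]

# Stack the k images along the channel axis: the network then sees a single
# (H, W, 3k) tensor instead of k separate 3-channel images.
net_input = np.concatenate(coarse, axis=-1)
print(net_input.shape)   # (64, 64, 12)
```

The same stacking is reused at inference time (step 3), so the trained network always receives inputs with 3k channels.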

Description

technical field

[0001] The invention belongs to the field of computer graphics and relates to an efficient deep-learning-based global illumination rendering method.

Background technique

[0002] Illumination calculation is a key problem in computer graphics. As games, movies, animation, and virtual reality increasingly pursue highly realistic scene rendering, ever faster algorithms are demanded. However, rendering algorithms based on global illumination converge slowly and often cannot meet real-time requirements.

[0003] As an advanced machine learning technology, deep learning, combined with the development of highly parallel hardware, is excelling in many fields of artificial intelligence. Deep learning is based on big data; compared with traditional hand-crafted-rule methods, it is more robust and can handle situations unforeseen at algorithm-design time. Among them, in the f...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T15/50, G06N3/04, G06N3/08
CPC: G06N3/08, G06T15/506, G06N3/045
Inventor 李胜, 高煜, 林泽辉, 汪国平
Owner PEKING UNIV