
Light field multi-view image super-resolution reconstruction method based on deep learning

A super-resolution reconstruction and deep-learning technology, applied in the field of deep-learning-based light field multi-view image super-resolution reconstruction. It addresses the problem that existing light field multi-view image super-resolution methods cannot meet the required technical indicators, and achieves the effect of enhanced accuracy.

Active Publication Date: 2021-05-04
VOMMA (SHANGHAI) TECH CO LTD


Problems solved by technology

[0004] The deep-learning-based super-resolution reconstruction method for light field multi-view images provided by the present invention aims to solve the problem that existing super-resolution methods for light field multi-view images cannot meet the required technical indicators.




Embodiment Construction

[0021] According to one or more embodiments, as shown in Figure 1, a light field multi-view image super-resolution method based on multi-scale fusion features includes the following steps:

[0022] A1. Construct a training set of high-resolution and low-resolution image pairs from light field camera multi-view images or light field camera array images (multi-view images distributed in an N×N array);

[0023] A2. Construct a multi-layer feature extraction network mapping the N×N light field multi-view image array to N×N light field multi-view feature images;

[0024] A3. Stack the feature images and build a feature fusion and enhancement multi-layer convolutional network to obtain 4D light field structural features that can be used to reconstruct light field multi-view images;

[0025] A4. Build an upsampling module to obtain the nonlinear mapping from the 4D light field structural features to high-resolution N×N light field multi-view images;

[0026] A5. Build a loss function based on the multi-scale feature fusion network, train the network, and fine-tune the network parameters.
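As a minimal illustration of step A1 and the stacking in step A3, the sketch below builds one high-resolution/low-resolution training pair from an N×N array of multi-view images and stacks the per-view results along a single axis for a fusion network. The average-pool downsampling, the 3×3 view grid, and all function names here are assumptions for illustration; the patent does not specify the degradation model, and a real pipeline might use bicubic downsampling or the camera's actual degradation.

```python
import numpy as np

def build_training_pair(hr_views, scale=2):
    """Build one HR/LR training pair from an N x N multi-view array (step A1).

    hr_views: array of shape (N, N, H, W) holding the high-resolution views.
    Returns (hr_views, lr_views) where each LR view is produced by simple
    average-pool downsampling by `scale` -- an assumed degradation model.
    """
    n1, n2, h, w = hr_views.shape
    assert h % scale == 0 and w % scale == 0, "image size must divide by scale"
    # Average-pool each view: group pixels into scale x scale blocks and mean them.
    lr_views = hr_views.reshape(
        n1, n2, h // scale, scale, w // scale, scale
    ).mean(axis=(3, 5))
    return hr_views, lr_views

# Hypothetical 3x3 grid of 8x8 single-channel views.
hr = np.arange(3 * 3 * 8 * 8, dtype=np.float64).reshape(3, 3, 8, 8)
hr_out, lr = build_training_pair(hr, scale=2)

# Step A3 stacks the N x N per-view (feature) maps into one tensor whose
# leading axis enumerates the views, ready for a fusion network.
stacked = lr.reshape(-1, lr.shape[2], lr.shape[3])

print(lr.shape)       # (3, 3, 4, 4)
print(stacked.shape)  # (9, 4, 4)
```

In practice the stacked views would be fed through the feature extraction and fusion networks of steps A2–A4; this sketch only shows the data layout those steps consume.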



Abstract

The invention discloses a light field multi-view image super-resolution reconstruction method based on deep learning. The method comprises the following steps: constructing a training set of high-resolution and low-resolution image pairs from multi-view images, distributed in an N×N array, obtained from a light field camera or a light field camera array; constructing a multi-layer feature extraction network mapping the N×N light field multi-view image array to N×N light field multi-view feature images; stacking the feature images and constructing a feature fusion and enhancement multi-layer convolutional network to obtain 4D light field structural features capable of being used for reconstructing light field multi-view images; constructing an upsampling module to obtain a nonlinear mapping from the 4D light field structural features to high-resolution N×N light field multi-view images; constructing a loss function based on the multi-scale feature fusion network, training the network, and fine-tuning the network parameters; and inputting a low-resolution N×N light field multi-view image into the trained network to obtain a high-resolution N×N light field multi-view image.

Description

Technical Field

[0001] The present invention relates to the technical field of image processing, and in particular to a method for super-resolution reconstruction of light field multi-view images based on deep learning.

Background Technique

[0002] Light field cameras can simultaneously capture the spatial position and incident angle of light rays. However, the recorded light field involves a trade-off between spatial resolution and angular resolution, and the limited spatial resolution of the multi-view images restricts, to a certain extent, the scope of application of light field cameras. Camera arrays are likewise constrained by cost and resolution, which limits the development of 3D light field display, 3D modeling, 3D measurement, and other fields. With the continuous development of the field of image processing, the demand for light field multi-view image super-resolution technology urgently needs to be met.

[0003] In recent years, the development and progress of ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40; G06T5/50; G06N3/04; G06N3/08
CPC: G06T3/4053; G06T5/50; G06N3/08; G06T2207/10052; G06T2207/20081; G06T2207/20084; G06T2207/20221; G06N3/045
Inventors: Zhao Yuanyuan (赵圆圆); Li Haotian (李浩天)
Owner: VOMMA (SHANGHAI) TECH CO LTD