Panoramic image fusion method based on deep convolutional neural network and depth information

A deep convolutional neural network technology, applied to biological neural network models, image enhancement, neural architectures, etc., which addresses problems such as stitching ghosting and gaps in image fusion regions.

Publication status: Inactive; Publication date: 2017-07-07
CHANGSHA PANODUX TECH CO LTD

AI Technical Summary

Problems solved by technology

Currently, two types of image fusion methods are in common use. The first is direct fusion (for example, the mean, weighted-average, and median-filter methods), in which differences in detail within the overlapping area cause the generated panorama to be blurred and to show obvious stitching seams. The second uses dynamic programming or graph-cut methods to find an optimal fusion seam, specifically by using the gray...
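For reference, below is a minimal sketch of the weighted-average variant of direct fusion (the function name, NumPy layout, and horizontal-overlap assumption are illustrative and not taken from the patent). The linear blend in the overlap region is exactly where detail differences between the two views produce the blur and visible seams described above.

import numpy as np

def weighted_average_blend(img1, img2, overlap_width):
    """Blend two horizontally adjacent images by linearly weighting
    their shared overlap region (a 'direct fusion' method)."""
    h, w1, _ = img1.shape
    _, w2, _ = img2.shape
    out = np.zeros((h, w1 + w2 - overlap_width, 3), dtype=np.float32)

    # Non-overlapping parts are copied through unchanged.
    out[:, :w1 - overlap_width] = img1[:, :w1 - overlap_width]
    out[:, w1:] = img2[:, overlap_width:]

    # In the overlap, weights ramp linearly from img1 to img2; any
    # misalignment between the two views shows up here as ghosting or blur.
    alpha = np.linspace(1.0, 0.0, overlap_width)[None, :, None]
    out[:, w1 - overlap_width:w1] = (alpha * img1[:, w1 - overlap_width:]
                                     + (1.0 - alpha) * img2[:, :overlap_width])
    return out.astype(np.uint8)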



Examples


Embodiment Construction

[0052] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

[0053] The present invention provides a panoramic image fusion method based on a deep convolutional neural network and depth information which, as shown in Figure 1, includes the following steps:

[0054] S1: Construct a deep learning training dataset.

[0055] Select the overlapping regions xe1 and xe2 of the two fisheye images to be fused that are used for training, together with the ideal fusion region ye of the panoramic image formed by fusing these two fisheye images, to construct a training set {xe1, xe2, ye} of image-block pairs to be fused and their corresponding panoramic image blocks.
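As a rough illustration of step S1 (the patch size, stride, and the assumption that both fisheye images have already been warped and registered into panorama coordinates are mine and not stated in the excerpt), the training set {xe1, xe2, ye} could be assembled from aligned image-block triples roughly as follows:

import numpy as np

def build_training_triples(fisheye1, fisheye2, panorama, overlap_box, patch=64, stride=32):
    """Cut the registered overlap region of the two fisheye images and the
    corresponding region of the ideal panorama into aligned block triples
    (xe1, xe2, ye) for supervised training."""
    x0, y0, x1, y1 = overlap_box          # overlap region in panorama coordinates
    xe1_list, xe2_list, ye_list = [], [], []
    for r in range(y0, y1 - patch + 1, stride):
        for c in range(x0, x1 - patch + 1, stride):
            xe1_list.append(fisheye1[r:r + patch, c:c + patch])
            xe2_list.append(fisheye2[r:r + patch, c:c + patch])
            ye_list.append(panorama[r:r + patch, c:c + patch])
    return np.stack(xe1_list), np.stack(xe2_list), np.stack(ye_list)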



Abstract

The invention discloses a panoramic image fusion method based on a deep convolutional neural network and depth information. The method comprises the steps of (S1) constructing a deep learning training data set by selecting the overlapping regions xe1 and xe2 of two fisheye images to be fused, used for training, and the ideal fusion region ye of the panoramic image formed after fusing the two fisheye images, and constructing a training set {xe1, xe2, ye} of image-block pairs to be fused and the corresponding panoramic image blocks; (S2) constructing a convolutional neural network model; and (S3) obtaining the fusion region of a test data set based on the test data set and the trained deep convolutional neural network model. According to the method, an image can be represented more comprehensively and deeply, image semantics are represented at multiple levels of abstraction, and the accuracy of image fusion is improved.
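To make steps S2 and S3 concrete, here is a minimal, hypothetical PyTorch sketch of a fusion network that maps the two overlap blocks to a fused block (the layer count, kernel sizes, and MSE loss are assumptions made for illustration; the patent's actual architecture and its use of depth information are not shown in this excerpt):

import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Toy fusion network: the two overlap blocks are stacked on the
    channel axis and mapped to the fused output block."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, x_e1, x_e2):
        return self.net(torch.cat([x_e1, x_e2], dim=1))

# Training step (S2): regress the network output toward the ideal fusion ye.
model = FusionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
x_e1 = torch.rand(8, 3, 64, 64)   # dummy batch; real data comes from step S1
x_e2 = torch.rand(8, 3, 64, 64)
y_e = torch.rand(8, 3, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(x_e1, x_e2), y_e)
loss.backward()
optimizer.step()

At test time (S3), the overlap blocks of the images to be fused would simply be passed through the trained model to obtain the fused region.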

Description

Technical field

[0001] The invention belongs to the technical field of image communication, relates to the field of image stitching, and in particular relates to a panoramic image fusion method based on a deep convolutional neural network and depth information.

Background technique

[0002] Image stitching is the technology of combining several partially overlapping images into a single large, seamless, high-resolution image. When an ordinary camera is used to capture a wide-field scene, the camera's resolution is fixed, so the larger the captured scene, the lower the resulting image resolution; panoramic cameras and wide-angle lenses, on the other hand, are very expensive and suffer from severe distortion. In order to obtain an ultra-wide field of view, or even a 360-degree panorama, without reducing image resolution, computer-based image stitching methods have emerged.

[0003] Image stitching is one of the key technologies in image processing, and it is the basis for oth...

Claims


Application Information

IPC(8): G06T3/40, G06N3/04, G06T5/00, G06T7/55
Inventor: Not disclosed (不公告发明人)
Owner CHANGSHA PANODUX TECH CO LTD