
A deep learning network training method for video satellite super-resolution reconstruction

A deep learning network training method for super-resolution reconstruction, applied in the field of machine learning, addresses problems such as the limitations of conventional loss functions, which do not consider the relative influence of pixel gray levels, and achieves the effect of improving the training of the network and promoting better reconstruction performance.

Active Publication Date: 2021-06-04
WUHAN UNIV

AI Technical Summary

Problems solved by technology

Although this loss function is simple to compute, it has obvious limitations in video satellite super-resolution applications.
First, it gives no special consideration to the edges of objects in satellite images, even though sharpening the edge contours of different types of ground objects is of particular value for satellite image interpretation.
Second, according to the luminance masking effect, the perceptible distortion (or reconstruction error) of pixels differs with their gray level: the higher the gray value, the more distortion can be tolerated, and vice versa. The conventional MSE error measure, however, computes only the absolute error and does not account for the relative influence of a pixel's own gray level.
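The text above does not disclose the patented loss formula itself, but a minimal PyTorch sketch of a loss that addresses both observations, weighting errors by an edge-strength map derived from a Sobel gradient and normalizing them by the reference gray level, might look like the following. The weighting scheme, constants, and function name are illustrative assumptions, not the patent's definition.

```python
import torch
import torch.nn.functional as F

def edge_weighted_relative_mse(sr, hr, edge_gain=1.0, eps=1e-3):
    """Illustrative loss: MSE weighted by edge strength and normalized by the
    reference gray level (brightness masking). Not the patented formula."""
    # Sobel kernels for horizontal/vertical gradients (assumes single-channel input [N, 1, H, W])
    kx = torch.tensor([[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]],
                      device=hr.device).unsqueeze(0)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(hr, kx, padding=1)
    gy = F.conv2d(hr, ky, padding=1)
    edge = torch.sqrt(gx ** 2 + gy ** 2)                   # edge-strength map of the HR reference
    weight = 1.0 + edge_gain * edge / (edge.max() + eps)   # emphasize object contours
    rel_err = (sr - hr) / (hr + eps)                       # relative error: brighter pixels tolerate more distortion
    return torch.mean(weight * rel_err ** 2)

# usage (single-channel tensors of shape [N, 1, H, W] scaled to [0, 1]):
# loss = edge_weighted_relative_mse(network_output, ground_truth)
```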


Detailed Description of the Embodiments

[0015] In order to facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below in conjunction with embodiments. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention, not to limit it.

[0016] Dynamic video captured by a video satellite suffers from inherently insufficient spatial resolution and blurring, so using its own frames as training samples cannot supply enough high-frequency information, which severely restricts how much detail the reconstructed high-resolution image can recover. Compared with dynamic satellite video, static satellite imagery acquired under the same sensor sampling and channel transmission throughput conditions has much higher spatial resolution and far richer ground-object detail. Therefore, static satellite im...
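The paragraph above motivates building the training set from high-resolution static satellite images rather than from the video frames themselves. A minimal sketch of such pair generation, cutting static HR tiles into patches and synthesizing LR inputs by bicubic downsampling, is shown below; the scale factor, patch size, and degradation model are assumptions for illustration, not values stated in the patent.

```python
import torch
import torch.nn.functional as F

def make_training_pairs(hr_images, scale=4, patch=96):
    """Cut HR static-satellite tiles into patches and synthesize LR inputs
    by bicubic downsampling (illustrative degradation model)."""
    lr_patches, hr_patches = [], []
    for img in hr_images:                          # img: tensor [1, H, W] scaled to [0, 1]
        _, h, w = img.shape
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                hr_p = img[:, y:y + patch, x:x + patch]
                lr_p = F.interpolate(hr_p.unsqueeze(0), scale_factor=1 / scale,
                                     mode="bicubic", align_corners=False).squeeze(0)
                hr_patches.append(hr_p)
                lr_patches.append(lr_p)
    return torch.stack(lr_patches), torch.stack(hr_patches)
```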


Abstract

The invention discloses a deep learning network training method for super-resolution reconstruction of video satellite imagery. First, a training sample set composed of high-resolution static satellite images is constructed; then a CNN network structure for super-resolution reconstruction is built and the network training parameters are set; finally, the loss function for training the deep CNN is established. The method takes into account the influence of object edges and pixel gray values on the reconstruction error measure, thereby improving the training of the deep CNN and ultimately promoting better performance of deep-learning-based image super-resolution.
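Tying the three steps of the abstract together, a minimal sketch of the training pipeline might read as follows; the network (a small SRCNN-style CNN), the optimizer settings, and the call to the loss sketched earlier are illustrative assumptions rather than the patent's specified configuration.

```python
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """Illustrative SRCNN-style network; the patent's actual architecture is not given here."""
    def __init__(self, scale=4):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, lr):
        return self.body(self.up(lr))          # upsample first, then refine details

def train(net, lr_batch, hr_batch, loss_fn, epochs=100, lr_rate=1e-4):
    """Toy training loop over one batch of (LR, HR) pairs."""
    opt = torch.optim.Adam(net.parameters(), lr=lr_rate)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(lr_batch), hr_batch)  # e.g. the edge-weighted relative loss sketched above
        loss.backward()
        opt.step()
    return net
```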

Description

Technical Field
[0001] The invention belongs to the technical field of machine learning and relates to a deep learning network training method, in particular to a deep learning network training method for video satellite super-resolution reconstruction.
[0002] Technical Background
[0003] The video satellites that have emerged in recent years provide an effective means for real-time observation of large dynamic targets by collecting continuous dynamic video, greatly making up for the lack of dynamic observation capability of traditional remote sensing satellites. The improved temporal resolution of video satellites, however, comes at the expense of spatial resolution: in general, the spatial resolution of a video satellite is lower than that of contemporaneous remote sensing satellites operating on static or sequence images. For example, the ground resolution of static images from the optical satellite of China's "Jilin-1" constellation reaches 0.72 meters, while the ground ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T3/40, G06N3/08
CPC: G06N3/08, G06T3/4046, G06T3/4053
Inventors: 王中元 (Wang Zhongyuan), 陈丹 (Chen Dan), 江奎 (Jiang Kui), 易鹏 (Yi Peng)
Owner: WUHAN UNIV