
U-shaped dilated (atrous) fully convolutional segmentation network recognition model based on remote sensing images

A remote-sensing image recognition technology in the field of biological neural network models, scene recognition, and character and pattern recognition. It addresses the problems that ground-object information in remote sensing images varies greatly in scale and that features such as overpasses have intricate structures, achieving accurate extraction, an expanded receptive field, and a reduced number of training parameters.

Active Publication Date: 2020-05-15
CHONGQING UNIV
Cites: 9 · Cited by: 28

AI Technical Summary

Problems solved by technology

[0003] Remote sensing images differ from natural images in that the scale of their ground-object information varies widely: objects such as buildings and vehicles are generally small in scale, while objects such as roads and rivers are narrow and elongated, and structures such as overpasses are intricate.


Examples


Embodiment 1

[0078] Embodiment 1: The invention is tested with a single scene of Gaofen-2 imagery. One Gaofen-2 scene covers about 506.25 square kilometers at a spatial resolution of 0.8 m; the image size is 29200×27200 pixels, and a single fused image is about 6 GB. In addition, the coverage of open-pit mines within a single scene is low, i.e., open-pit mines are sparse in the imagery. Training directly on the original image would therefore impose a heavy machine load and yield a low positive-sample ratio (a very low proportion of effective data), and training on such an unbalanced sample ratio would degrade the accuracy of image feature extraction and category discrimination. Therefore, a series of data preprocessing steps such as labeling, effective-area selection, mine-area cutting, and data amplification are performed on the fused remote sensing data to obtain a learning data set usable by the recognition model, including training sets, verification sets, and test...
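The cutting and rebalancing steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patch size, non-overlapping tiling, and one-negative-per-positive ratio are all assumptions chosen for clarity.

```python
import numpy as np

def tile_scene(image, mask, patch=256):
    """Cut a large scene and its label mask into non-overlapping patches."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append((image[y:y + patch, x:x + patch],
                          mask[y:y + patch, x:x + patch]))
    return tiles

def rebalance(tiles, neg_per_pos=1, seed=0):
    """Keep every positive patch and subsample negatives, so that sparse
    targets (e.g. open-pit mines) are not drowned out by background."""
    rng = np.random.default_rng(seed)
    pos = [t for t in tiles if t[1].any()]
    neg = [t for t in tiles if not t[1].any()]
    keep = rng.choice(len(neg), size=min(len(neg), len(pos) * neg_per_pos),
                      replace=False)
    return pos + [neg[i] for i in keep]
```

An equal-ratio split of the rebalanced patches into training, verification, and test subsets would then follow.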

Embodiment 2

[0087] Embodiment 2: The invention is tested again with Gaofen-2 urban-area data. The Gaofen-2 urban-area scene covers about 506.25 square kilometers at a spatial resolution of 0.8 m; the image size is 29200×27200 pixels, and a single fused image is about 6 GB. Training directly on the original image would therefore impose a heavy machine load and yield a low positive-sample ratio (a very low proportion of effective data), and training on such an unbalanced sample ratio would degrade the accuracy of image feature extraction and category discrimination. Therefore, the fused remote sensing data are subjected to a series of preprocessing steps such as label cutting and data amplification. Before that, a series of operations such as atmospheric correction, orthorectification, image registration, and image fusion must be performed. Here, the recognition algorithm operates on the images obtained after these pre-fusion preprocessing operations and fusion. The original images of TIF ...
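The image-fusion step named above (pan-sharpening the multispectral bands with the panchromatic band) can be illustrated with a Brovey transform, one common fusion scheme. This is only an assumed example, since the patent does not name the fusion algorithm it uses.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Fuse a multispectral stack ms of shape (H, W, B), already resampled
    to the panchromatic resolution, with the panchromatic band pan of
    shape (H, W) by rescaling each band with the pan/intensity ratio."""
    intensity = ms.sum(axis=2) + eps            # per-pixel band sum
    return ms * (pan / intensity)[..., None]    # inject pan spatial detail
```

By construction the band sum of the fused result matches the panchromatic band, which is why Brovey fusion preserves spatial detail at the cost of some spectral distortion.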



Abstract

The invention discloses a U-shaped dilated fully convolutional segmentation network recognition model based on remote sensing images. The model comprises a data preprocessing module, a model training module, and a model evaluation module. The data preprocessing module performs data preprocessing on a remote sensing image to obtain a data set, and samples the data set in equal ratios to generate a training set, a verification set, and a test set. The model training module establishes the U-shaped dilated fully convolutional segmentation network model, trains the model's parameters with the training-set data, performs model learning and updates the network weights, adjusts the model's hyper-parameters according to the difference between the verification-set data and the recognition results, and judges the model's convergence to achieve deep training. The model can be widely applied to remote sensing ground objects of different scales.
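The claimed effect of "expanding the receptive field and reducing the training parameters" is the standard property of dilated (atrous) convolution. A small sketch of the usual receptive-field arithmetic, with stride-1 layers and kernel sizes and dilation rates chosen purely for illustration:

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions,
    each layer given as (kernel_size, dilation)."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three 3x3 convolutions: identical weight count either way (3 * 9 weights),
# but dilation rates 1, 2, 4 more than double the receptive field.
plain = receptive_field([(3, 1), (3, 1), (3, 1)])   # -> 7
atrous = receptive_field([(3, 1), (3, 2), (3, 4)])  # -> 15
```

This is why a dilated U-shaped network can cover both small objects (buildings, vehicles) and elongated ones (roads, rivers) without stacking extra parameterized layers.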

Description

Technical field

[0001] The invention relates to a DUSegNet segmentation recognition model, and in particular to a U-shaped dilated fully convolutional segmentation network recognition model based on remote sensing images.

Background technique

[0002] Remote sensing monitoring is an important technical means for the supervision and management of surface resources. Ground-feature extraction from multispectral remote sensing images mainly exploits differences in the spectral reflectance characteristics of different ground features, distinguishing feature information through the response characteristics of different image bands. Ground-feature extraction from panchromatic (Pan) and RGB color images generally uses image texture features, geometric features, and the like to perform image segmentation. However, these traditional methods are applicable only to low-resolution images, and with the advancement of sens...
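The spectral-difference principle described above can be illustrated with the classic NDVI band ratio. This is an illustrative example of band-response-based extraction, not a method prescribed by the patent.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: vegetation reflects strongly
    in the near-infrared band and weakly in red, so this ratio separates
    vegetation (NDVI near +1) from water and bare ground (NDVI <= 0)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

It is exactly this kind of per-band reflectance contrast that traditional multispectral extraction relies on, and that becomes insufficient at high spatial resolutions.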

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/13, G06N3/045, G06F18/241, G06F18/214
Inventor: 周尚波, 齐颖, 张子涵, 王李闽, 朱淑芳
Owner CHONGQING UNIV