Remote sensing image cultivated land parcel extraction method combining edge detection and multi-task learning

A multi-task learning and edge detection technology, applied in the field of remote sensing image analysis. It addresses the problems of existing methods, namely the difficulty of obtaining continuous, closed parcel boundaries and low extraction accuracy, and achieves seamless mosaicking of predictions, improved geometric accuracy, and the elimination of stitching artifacts.

Pending Publication Date: 2022-07-29
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

Existing cultivated land parcel extraction techniques mostly use methods such as image segmentation, edge detection, or single-task convolutional neural networks. These methods have low extraction accuracy for parcels in areas with complex terrain (such as the hilly regions of southern China) and have difficulty obtaining continuous, closed parcel boundaries.



Examples


Embodiment 1

[0107] In this example,

[0108] In step S41, the root-mean-square error (the distance loss) is defined as follows:

[0109] L_dist = sqrt( (1/N) Σ_{x∈R} ( D_p(x) − D(x) )² )

[0110] In the formula, L_dist is the root-mean-square loss, and D_p(x) and D(x) are the distance value predicted by the model and the true distance value, respectively; x is a pixel position in the image space R, and N is the number of input samples, which is 2528.
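As a minimal sketch of how such a distance loss could be computed, assuming the predicted and ground-truth distance maps are tensors of the same shape (the function name and tensor shapes are illustrative, not taken from the patent):

```python
import torch

def distance_rmse_loss(d_pred: torch.Tensor, d_true: torch.Tensor) -> torch.Tensor:
    """Root-mean-square error between predicted and true distance maps.

    d_pred, d_true: tensors of shape (N, 1, H, W), where N is the number of
    input samples and each pixel stores a distance-to-boundary value.
    """
    # Mean squared difference over all pixels of all samples, then the square root.
    return torch.sqrt(torch.mean((d_pred - d_true) ** 2))

# Illustrative usage with random tensors standing in for real distance maps.
d_pred = torch.rand(4, 1, 256, 256)
d_true = torch.rand(4, 1, 256, 256)
loss = distance_rmse_loss(d_pred, d_true)
```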

[0111] In this embodiment, in step S41, the binary cross-entropy loss is defined as follows:

[0112]

[0113] In the formula, l_boun(W) is the total boundary loss, W represents all the learned parameters in the RCF model, and the activation value of the k-th (k = 5) stage of the RCF model enters the loss; |R| represents the number of pixels in the image R. For each boundary point, a hyperparameter ε is defined and set to 0.5: if the probability that a point is an edge point is greater than ε, the point is considered a boundary point; if the probability of the point is 0, it is not a boundary point; in addition, the p...
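A hedged sketch of a class-balanced binary cross-entropy of this kind, modeled on the loss used in the RCF edge-detection literature; since the exact formulation is truncated in the text above, the class-balancing weights and the handling of the ε threshold here are assumptions:

```python
import torch

def boundary_bce_loss(prob: torch.Tensor, label: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Class-balanced binary cross-entropy for one edge-detection stage (a sketch).

    prob  : Sigmoid activations of one RCF stage, shape (N, 1, H, W).
    label : boundary labels in [0, 1]; values greater than eps are treated as
            boundary pixels, exact zeros as background, and the remaining pixels
            are ignored -- an assumption about the thresholding described above.
    """
    pos = label > eps                      # confident boundary pixels
    neg = label == 0                       # confident background pixels
    n_pos = pos.sum().float()
    n_neg = neg.sum().float()
    total = n_pos + n_neg
    # Balance the two classes so the sparse boundary pixels are not drowned out.
    alpha = n_pos / total                  # weight on background terms
    beta = n_neg / total                   # weight on boundary terms
    loss = -(beta * torch.log(prob[pos] + 1e-8).sum()
             + alpha * torch.log(1.0 - prob[neg] + 1e-8).sum())
    return loss / total

# Illustrative usage with a random stage output and a sparse binary label map.
prob = torch.rand(2, 1, 256, 256)
label = (torch.rand(2, 1, 256, 256) > 0.9).float()
value = boundary_bce_loss(prob, label)
```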

Embodiment 2

[0131] This example uses the GF-1 PMS image of the 21st Regiment of the Second Agricultural Division of Xinjiang acquired in September 2018; the spatial resolution after preprocessing is 2 m, and the bands are red, green, and blue. The example uses 2528 image tiles with a pixel size of 256×256 and the corresponding parcel labels for model training.
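For illustration only, one way such 256×256 training tiles could be cut from a preprocessed scene; the array sizes and variable names below are hypothetical and not specified by the patent:

```python
import numpy as np

def tile_image(image: np.ndarray, size: int = 256):
    """Cut a (H, W, 3) red/green/blue array into non-overlapping size-by-size chips."""
    h, w, _ = image.shape
    chips = []
    for top in range(0, h - size + 1, size):
        for left in range(0, w - size + 1, size):
            chips.append(image[top:top + size, left:left + size])
    return chips

# Hypothetical preprocessed scene (2 m resolution, RGB bands) standing in for the GF-1 PMS data.
scene = np.zeros((2048, 2048, 3), dtype=np.float32)
training_chips = tile_image(scene)   # 64 chips, each 256 x 256 x 3
```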

[0132] As shown in Figure 2, which is the structural diagram of the edge detection model RCF used in this embodiment, RCF is an edge detection model with a VGG16 network backbone, and the model is composed of five stages. Each convolutional layer in the VGG16 network is followed by a convolution with 21 channels and a convolutional layer with a kernel size of 1×1. Next, an upsampling operation (the Deconv layer in Figure 2) is used to restore the resolution of the feature maps, and then the Sigmoid activation function is used to obtain the feature map of each stage. Finally, a 1×1 convolution...
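The following condensed sketch reproduces the structure described above (VGG16-style stages, 21-channel side convolutions, 1×1 score convolutions, upsampling, per-stage Sigmoid, and a final 1×1 fusion); layer counts and channel widths follow the standard VGG16 configuration and are assumptions wherever the text is silent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RCFStage(nn.Module):
    """One stage of an RCF-style edge head: several VGG-type 3x3 convolutions,
    each followed by a 21-channel side convolution; the side branches are summed
    and reduced to a single-channel edge score by a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, n_convs: int):
        super().__init__()
        self.convs = nn.ModuleList()
        self.sides = nn.ModuleList()
        ch = in_ch
        for _ in range(n_convs):
            self.convs.append(nn.Conv2d(ch, out_ch, kernel_size=3, padding=1))
            self.sides.append(nn.Conv2d(out_ch, 21, kernel_size=1))
            ch = out_ch
        self.score = nn.Conv2d(21, 1, kernel_size=1)

    def forward(self, x):
        side_sum = 0
        for conv, side in zip(self.convs, self.sides):
            x = F.relu(conv(x))
            side_sum = side_sum + side(x)
        return x, self.score(side_sum)

class RCFSketch(nn.Module):
    """Condensed RCF-style network: five stages, per-stage edge maps upsampled back
    to the input resolution (the Deconv step), Sigmoid-activated, then fused by a
    final 1x1 convolution."""

    def __init__(self):
        super().__init__()
        cfg = [(3, 64, 2), (64, 128, 2), (128, 256, 3), (256, 512, 3), (512, 512, 3)]
        self.stages = nn.ModuleList(RCFStage(i, o, n) for i, o, n in cfg)
        self.fuse = nn.Conv2d(5, 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        side_maps, feat = [], x
        for i, stage in enumerate(self.stages):
            feat, score = stage(feat)
            # Restore the resolution of the stage score map, then apply Sigmoid.
            side_maps.append(torch.sigmoid(
                F.interpolate(score, size=(h, w), mode="bilinear", align_corners=False)))
            if i < len(self.stages) - 1:
                feat = F.max_pool2d(feat, kernel_size=2)   # VGG16-style downsampling
        fused = torch.sigmoid(self.fuse(torch.cat(side_maps, dim=1)))
        return side_maps, fused

# Illustrative forward pass on a single 256x256 RGB tile.
model = RCFSketch()
stage_maps, fused_map = model(torch.rand(1, 3, 256, 256))
```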



Abstract

The invention provides a remote sensing image cultivated land parcel extraction method combining edge detection and multi-task learning. The method comprises the following steps: S1, performing a preprocessing operation on a high-resolution remote sensing image of the study area; S2, based on a multi-task learning framework, performing high-level image feature extraction of the parcel boundaries by setting tasks that constrain the parcel boundary and shape; S3, solving the discontinuity and non-closure problems in parcel boundary extraction by integrating multi-scale image edge information with high-level semantic boundary information; S4, achieving adaptive adjustment of the different task weights based on homoscedastic (same-variance) uncertainty theory; S5, performing model training and fine-tuning based on different optimizers; S6, performing sliding-window prediction and seamless mosaicking on the remote sensing image, improving the efficiency of extracting cultivated land parcels over large areas. The method can solve the problems of discontinuous and unclosed boundaries produced by traditional convolutional neural networks in cultivated land parcel extraction and effectively improves the geometric accuracy of cultivated land parcel extraction.
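Step S4 refers to weighting the tasks by homoscedastic (same-variance) uncertainty; a common realization of this idea, shown below as a sketch rather than the patent's exact formulation, learns one log-variance parameter per task and uses it to scale that task's loss:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine several task losses via learnable homoscedastic-uncertainty weights.

    Each task i gets a learnable log-variance s_i; its loss is scaled by exp(-s_i)
    and the regularizing term s_i is added, so noisier tasks are down-weighted
    automatically during training.
    """

    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, losses):
        total = 0.0
        for loss, log_var in zip(losses, self.log_vars):
            total = total + torch.exp(-log_var) * loss + log_var
        return total

# Illustrative use with three hypothetical task losses (e.g. mask, boundary, distance).
weighting = UncertaintyWeighting(n_tasks=3)
task_losses = [torch.tensor(0.8), torch.tensor(1.2), torch.tensor(0.5)]
total_loss = weighting(task_losses)
```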

Description

Technical Field

[0001] The invention relates to the technical field of remote sensing image analysis, and in particular to a remote sensing image cultivated land parcel extraction method combining edge detection and multi-task learning.

Background Art

[0002] Cultivated land is an important means of production for human beings; it underpins the food supply and is an important material basis for human life. Accurate cultivated land parcel information is the basis of precision agriculture research, the basic unit with which humans use land resources to carry out farming activities, and the minimum spatial granularity for the digital management and decision-making of precision agriculture. Obtaining accurate cultivated land parcel information facilitates crop classification, yield estimation, and planting area statistics at the parcel scale. The existing cultivated land parcel extraction techniques mostly use methods such as image segmentation, edge detection, or single-tas...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V20/10; G06V10/26; G06V10/44; G06V10/764; G06V10/82; G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/2415
Inventor: 李蒙蒙, 龙江, 汪小钦
Owner: FUZHOU UNIV