Semantic Generation Method of Remote Sensing Image Based on Fast Region Convolutional Neural Network

A convolutional neural network and remote sensing image technology, applied in the field of image semantic generation, addresses the problems that existing methods cannot obtain the relationships between targets in an image or between a target and the image as a whole, and are not systematic enough.

Active Publication Date: 2021-09-10
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

This method can obtain superficial semantic information to assist recognition, but it is not systematic enough: it stops at the stage of target localization and recognition, and can capture neither the relationships between targets in the image nor the relationship between a target and the image as a whole, which affects the accuracy of subsequent tasks such as image detection and scene classification.




Embodiment Construction

[0023] The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:

[0024] Referring to Figure 1, the implementation steps of the present invention are as follows.

[0025] Step 1: Construct the training sample set and test sample set.

[0026] Download the three remote sensing image semantic generation datasets, the UCM-Captions Data Set, the Sydney-Captions Data Set, and RSICD, from the website of the State Key Laboratory of Surveying, Mapping and Remote Sensing at Wuhan University. Use 60% of the image-text pairs in each dataset as training samples and the remaining 40% as test samples.
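The 60/40 split described above can be sketched as follows. This is a minimal illustration, assuming dataset loading has already produced a list of image-text pairs; the helper name and the shuffling seed are hypothetical, not part of the patent.

```python
import random

def split_image_text_pairs(pairs, train_ratio=0.6, seed=0):
    """Split (image, caption) pairs into train/test sets at train_ratio.

    Hypothetical helper: the patent only specifies the 60/40 split,
    not how the pairs are shuffled or loaded.
    """
    rng = random.Random(seed)
    shuffled = pairs[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# toy stand-in for real image-text pairs from UCM-Captions etc.
pairs = [(f"img_{i}.tif", f"caption {i}") for i in range(10)]
train, test = split_image_text_pairs(pairs)
print(len(train), len(test))  # 6 4
```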

[0027] Step 2: Use the fast region convolutional network to extract image features of the remote sensing images in the training samples:

[0028] As shown in Figure 2, the fast region convolutional network contains a region proposal network and a three-layer convolutional ...
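The idea of this stage, region proposals followed by a small convolutional feature extractor, can be sketched in plain PyTorch. Everything below is an illustrative assumption: the fixed boxes stand in for region-proposal-network output, and the layer sizes are not the patent's parameters.

```python
import torch
import torch.nn as nn

class RegionFeatureNet(nn.Module):
    """Three-layer convolutional network producing one feature per region.

    Illustrative sketch only; channel counts and strides are assumptions.
    """
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # one vector per region

    def forward(self, crops):                 # crops: (N, 3, H, W)
        return self.pool(self.conv(crops)).flatten(1)   # (N, feat_dim)

image = torch.rand(3, 256, 256)               # stand-in remote sensing image
boxes = [(0, 0, 64, 64), (128, 128, 224, 224)]  # placeholder proposals
crops = torch.stack([
    nn.functional.interpolate(
        image[:, y1:y2, x1:x2].unsqueeze(0), size=(64, 64)
    ).squeeze(0)
    for x1, y1, x2, y2 in boxes
])
features = RegionFeatureNet()(crops)
print(features.shape)                         # torch.Size([2, 128])
```

Each candidate box is resized to a common resolution before the convolutional stage, so every region yields a fixed-length feature vector regardless of its original size.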



Abstract

The present invention proposes a remote sensing image semantic generation method based on a fast region convolutional neural network, which mainly solves the problem that the existing technology cannot obtain the relationships between objects in an image or between an object and the image as a whole. The implementation scheme is: construct a training sample set and a test sample set; use a fast region convolutional neural network to extract image features from high-resolution remote sensing images; use a bidirectional recurrent neural network to extract text features from the corresponding sentences; use a probability-based image-text matching model to match the image features with the text features; and use the matched image-text features to train a long short-term memory network, thereby realizing semantic generation for high-resolution remote sensing images. The invention fully considers the complex backgrounds and diverse objects of remote sensing images, improves remote sensing image semantic generation results, and can be used for image retrieval or scene classification.
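The remaining stages named in the abstract, a bidirectional recurrent text encoder and an LSTM caption generator, can be sketched as below. All dimensions, the mean-pooling of encoder states, and the use of the image feature as the first decoder input are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128   # assumed sizes

embed = nn.Embedding(vocab_size, embed_dim)
text_encoder = nn.LSTM(embed_dim, hidden_dim,
                       bidirectional=True, batch_first=True)
decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
to_vocab = nn.Linear(hidden_dim, vocab_size)

# Text features: mean of the bidirectional hidden states over the sentence.
tokens = torch.randint(0, vocab_size, (1, 12))      # fake caption token ids
enc_out, _ = text_encoder(embed(tokens))            # (1, 12, 2*hidden_dim)
text_feat = enc_out.mean(dim=1)                     # (1, 2*hidden_dim)

# Caption generation: feed an image feature as the first decoder input,
# then read word logits from the decoder state.
image_feat = torch.rand(1, 1, hidden_dim)           # from the CNN stage
dec_out, state = decoder(image_feat)
first_word_logits = to_vocab(dec_out[:, -1])        # (1, vocab_size)
print(text_feat.shape, first_word_logits.shape)
```

At inference time the decoder would run step by step, feeding each predicted word's embedding back in; training would instead use the matched image-text pairs with teacher forcing.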

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and in particular relates to an image semantic generation method that can be used to automatically describe the contents of remote sensing images.

Background Technique

[0002] The understanding and description of remote sensing image content can provide decision-level support for remote sensing applications and has wide practical value. For example, in the field of military reconnaissance, existing algorithms can quickly identify important military targets such as ports, airports, and ships from remote sensing images; understanding and describing remote sensing image content makes it possible to interpret large, wide-area battlefield images accurately and comprehensively, enabling real-time interpretation of the battlefield geographical environment and dynamic intelligence generation. In civilian applications, the understanding and description of remote sensing image ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/04; G06N3/08; G06V10/757
Inventor: 张向荣, 李翔, 朱鹏, 焦李成, 唐旭, 侯彪, 马晶晶, 马文萍
Owner: XIDIAN UNIV