Scene detection reference model training method and system and application method

A benchmark-model and scene-detection technology applied in the field of remote sensing image processing. It addresses the problems of missing semantic concept definitions, small numbers of training samples, and significant intra-class differences, and achieves the effect of full-coverage supervision.

Pending Publication Date: 2022-05-03
CHANGJIANG RIVER SCI RES INST CHANGJIANG WATER RESOURCES COMMISSION

AI Technical Summary

Problems solved by technology

However, the category definitions of these ground objects are relatively clear, and the diversity within each category is relatively limited.
In contrast, complex ground scenes represented by production and construction projects lack widely accepted semantic concept definitions: the images contain a variety of artificial and natural features, are highly unstructured within the class, show significant intra-class differences, and ...



Examples


Embodiment 1

[0044] Referring to Figure 1, the invention provides a scene detection reference model training method, the training method comprising the following steps:

[0045] A1: Obtain the original remote sensing image;

[0046] A2: Annotate the production and construction projects in the original remote sensing image and the detailed ground features inside them, the annotations being candidate frames;

[0047] A3: Take the original remote sensing image as input, use the annotations of the production and construction projects in the original remote sensing image as labels, and train the production and construction project detector to obtain the production and construction project detection benchmark model, where the production and construction project detector is a Faster RCNN detection network;

[0048] A4: Calculate the amount of information of each detailed feature in the sample data set;

[0049] A5: Take the original remote sensing image as input, and use the labeling of the high-information det...
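
As an illustration of steps A1 to A5, the following is a minimal sketch of how such a Faster RCNN benchmark model could be trained and how a simple frequency-based "amount of information" could be computed. The dataset loader, class counts, and the specific information measure are assumptions for illustration, not the patent's exact definitions.

```python
# Minimal sketch (assumed details: torchvision Faster R-CNN backbone, a user-supplied
# data_loader yielding (images, targets) in torchvision detection format, and a
# frequency-based "amount of information" measure for detailed-feature classes).
import math
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_classes: int):
    # A3: a Faster R-CNN detection network whose classification head is resized
    # to the annotated categories (num_classes includes the background class).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def feature_information(class_counts: dict) -> dict:
    # A4: one plausible reading of "amount of information" is the self-information
    # -log p(c) of each detailed-feature class c, estimated from label frequencies.
    total = sum(class_counts.values())
    return {c: -math.log(n / total) for c, n in class_counts.items()}

def train_benchmark_model(model, data_loader, epochs: int = 10, lr: float = 0.005):
    # A3/A5: standard detection training loop; images are the original remote
    # sensing tiles, targets hold the candidate-frame annotations as labels.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)      # dict of RPN and ROI-head losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```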

Embodiment 2

[0070] Referring to Figure 4, the present invention provides an application method of the scene detection reference model, the application method comprising the following steps:

[0071] B1: Acquire remote sensing images;

[0072] B2: Input the remote sensing image into the production and construction project detection reference model and the detailed ground object detection reference model respectively, to obtain preliminary detection results for the actual production and construction projects and for the actual detailed ground objects, where the production and construction project detection benchmark model and the detailed ground object detection benchmark model are the scene detection benchmark models obtained through the training in Embodiment 1; the preliminary detection results of the actual production and construction projects are several actual production and construction project candidate frames and their corresponding confiden...
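
A hedged sketch of steps B1 and B2 follows: run the two trained benchmark models on one remote sensing image and keep candidate frames above a confidence threshold. The image loading, threshold value, and result layout are illustrative assumptions rather than the patent's prescribed procedure.

```python
# Hypothetical application of the two benchmark models (B1-B2); the model objects are
# assumed to be torchvision-style detectors returning boxes, labels and scores.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

@torch.no_grad()
def detect(model, image_tensor, score_threshold: float = 0.5):
    model.eval()
    output = model([image_tensor])[0]            # {"boxes", "labels", "scores"}
    keep = output["scores"] >= score_threshold
    return {k: v[keep] for k, v in output.items()}

def preliminary_detection(image_path, project_model, feature_model):
    image = TF.to_tensor(Image.open(image_path).convert("RGB"))   # B1: acquire image
    projects = detect(project_model, image)      # B2: project candidate frames + confidences
    features = detect(feature_model, image)      # B2: detailed-feature frames + confidences
    return projects, features
```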

Embodiment 3

[0125] Referring to Figure 8, the present invention provides a scene detection benchmark model training system, the training system comprising:

[0126] The original remote sensing image acquisition module 1 is used to acquire the original remote sensing image;

[0127] Annotation module 2, configured to annotate the production and construction projects in the original remote sensing images and the detailed features inside them, the annotations being candidate frames;

[0128] The production and construction project detection reference model training module 3 is used to take the original remote sensing image as input and use the annotations of the production and construction projects in the original remote sensing image as labels to train the production and construction project detector, obtaining the production and construction project detection benchmark model, where the production and construction project detector is a Faster RCNN detection network;

[0129] The amount of information calculat...
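
The module structure of Embodiment 3 could be composed roughly as below; the class and method names are hypothetical placeholders for the modules described above, not the patent's own terminology.

```python
# Illustrative composition of training-system modules 1-4 described above.
class SceneDetectionTrainingSystem:
    def __init__(self, image_source, annotator, project_trainer, info_calculator):
        self.image_source = image_source        # module 1: original image acquisition
        self.annotator = annotator              # module 2: candidate-frame annotation
        self.project_trainer = project_trainer  # module 3: Faster RCNN benchmark training
        self.info_calculator = info_calculator  # module 4: information-amount calculation

    def run(self):
        images = self.image_source.acquire()
        annotations = self.annotator.annotate(images)
        benchmark_model = self.project_trainer.train(images, annotations)
        info = self.info_calculator.compute(annotations)
        return benchmark_model, info
```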



Abstract

The invention relates to a scene detection reference model training method, a scene detection reference model training system and an application method, and aims to accurately and effectively identify production and construction projects under construction and check them against the water and soil conservation schemes and the water and soil conservation prevention and control responsibility ranges recorded and reported in the national water and soil conservation supervision and management system. It can thereby be determined whether a production and construction project violates regulations by being constructed without prior approval and whether its disturbance exceeds the prevention and control responsibility range, so that full-coverage supervision of production and construction projects is realized.

Description

Technical field

[0001] The invention relates to the field of remote sensing image processing, and in particular to a scene detection reference model training method, system and application method.

Background technique

[0002] With the development of computer vision technology and remote sensing image interpretation methods, significant progress has been made in target detection and semantic segmentation based on high-resolution optical remote sensing images. The detection of specific ground objects, such as buildings, aircraft, and vehicles, has achieved high accuracy. However, the category definitions of these objects are relatively clear, and the diversity within each category is relatively limited. In contrast, complex ground scenes represented by production and construction projects lack widely accepted semantic concept definitions: the images contain a variety of artificial and natural features, are highly unstructured within the class, show significant intra-class differences, and ...


Application Information

IPC(8): G06V20/10; G06V10/44; G06V10/26; G06V10/764; G06V10/82; G06V10/766; G06T7/62; G06T7/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/62; G06T7/0002; G06N3/08; G06N3/045; G06F18/241
Inventor 蒲坚王志刚许文盛李建明刘晨曦王一峰杨贺菲王可孙蓓沈盛彧牛俊刘纪根
Owner CHANGJIANG RIVER SCI RES INST CHANGJIANG WATER RESOURCES COMMISSION