
Complex scene recognition method and system based on multispectral image fusion

A multispectral-image, complex-scene technology, applied to neural learning methods, character and pattern recognition, instruments, etc. It addresses problems such as inaccurate information extraction, poor feature-extraction ability, and long computation time, with the effects of enabling intelligent scene recognition, enhancing feature-extraction capability, and reducing computational cost.

Pending Publication Date: 2020-12-01
THE THIRD RES INST OF CHINA ELECTRONICS TECH GRP CORP

AI Technical Summary

Problems solved by technology

[0009] When extracting geographic and scene information from image data, prior-art SVM and K-means segmentation and positioning algorithms are often heavily disturbed by scene content: shadows of trees and buildings, vehicles, roads, and temporary construction areas all degrade accurate information extraction. Neural network algorithms such as Mask-RCNN and DeepLab, meanwhile, are constrained by the computing power of the airborne platform and face the problems of long computation time and poor feature-extraction ability.

Method used



Examples


Embodiment 1

[0067] See Figure 1, which is a schematic flow chart of a complex scene recognition method based on multispectral image fusion provided by an embodiment of the present invention.

[0068] The method includes the following steps:

[0069] S100: Divide the image of the scene data set into a training data set and a test data set, where the image of the scene data set is an image obtained by fusing infrared light and visible light;

[0070] S200: Construct a DL-FME convolutional neural network for scene recognition according to the training data set, and use the DL-FME convolutional neural network to segment the fused image to obtain a segmented image;

[0071] S300: Using the DL-FME convolutional neural network to train the segmented image to generate a scene recognition training model;

[0072] S400: Input the images of the test data set into the scene recognition training model to generate a scene recognition model;

[0073] S500: Use the scene recognition model to recognize the scene to be recognized.
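Taken together, steps S100 to S500 describe a conventional supervised train-then-recognize pipeline. The sketch below shows how such a pipeline could be wired in PyTorch; `DLFMENet` is a placeholder stand-in (the DL-FME architecture itself is not disclosed in this excerpt), and the toy data, split sizes, optimizer, and loss function are assumptions for illustration only.

```python
# Hypothetical sketch of the S100-S500 pipeline; DLFMENet is a placeholder,
# not the actual DL-FME architecture described in the patent.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

class DLFMENet(nn.Module):
    """Placeholder segmentation/recognition network (stand-in for DL-FME)."""
    def __init__(self, in_channels=4, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(                    # simple encoder
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(64, num_classes, 1)   # per-pixel classes

    def forward(self, x):
        return self.classifier(self.features(x))

# S100: split the fused-image scene data set into training and test sets
fused = torch.rand(100, 4, 64, 64)            # toy fused IR+visible images
labels = torch.randint(0, 8, (100, 64, 64))   # toy per-pixel scene labels
train_set, test_set = random_split(TensorDataset(fused, labels), [80, 20])

# S200-S300: build the network and train it on the segmented fused images
model = DLFMENet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
for epoch in range(2):
    for x, y in DataLoader(train_set, batch_size=8, shuffle=True):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# S400: evaluate on the test set to obtain the final scene recognition model
model.eval()
with torch.no_grad():
    correct = total = 0
    for x, y in DataLoader(test_set, batch_size=8):
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"pixel accuracy: {correct / total:.3f}")

# S500: recognize a new scene with the trained model
new_scene = torch.rand(1, 4, 64, 64)
with torch.no_grad():
    segmentation = model(new_scene).argmax(dim=1)
```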

Embodiment 2

[0146] See Figure 6, which is a schematic structural diagram of a complex scene recognition system based on multispectral image fusion provided by an embodiment of the present invention. The system includes:

[0147] A fusion module, for dividing the images of the scene data set into a training data set and a test data set, where the images of the scene data set are images obtained by fusing infrared light and visible light;

[0148] A training module, for using the DL-FME convolutional neural network to train on the segmented images and generate a scene recognition training model;

[0149] An extraction module, an important module of the training stage, located within the DL-FME convolutional neural network constructed for scene recognition from the training data set, which uses the DL-FME convolutional neural network to perform feature extraction on the fused image and obtain the features of the fused image;

[0150] An enhancement module, for inputting the images...
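A rough object-oriented reading of these modules is sketched below; the class names, method signatures, and the simple split rule are hypothetical and only mirror the module responsibilities listed above, not the patented implementation.

```python
# Hypothetical layout of the modules named in Embodiment 2. Class and method
# names and the split rule are illustrative assumptions; the patent excerpt
# does not disclose the modules' internal implementation.
import numpy as np

class FusionModule:
    """Divides the fused IR+visible scene data set into train/test subsets."""
    def __init__(self, test_ratio=0.2):
        self.test_ratio = test_ratio

    def split(self, fused_images, labels):
        n_test = int(len(fused_images) * self.test_ratio)
        train = (fused_images[n_test:], labels[n_test:])
        test = (fused_images[:n_test], labels[:n_test])
        return train, test

class ExtractionModule:
    """Stand-in for the DL-FME feature extractor used during training."""
    def extract(self, fused_image):
        # a real extractor would return DL-FME feature maps
        return fused_image

class TrainingModule:
    """Trains on segmented images to produce the scene recognition model."""
    def __init__(self, extractor):
        self.extractor = extractor

    def train(self, train_set):
        images, _labels = train_set
        features = [self.extractor.extract(img) for img in images]
        # a real module would fit the DL-FME network weights here
        return {"num_samples": len(features)}

# toy wiring of the modules
fusion, extraction = FusionModule(), ExtractionModule()
train_set, test_set = fusion.split(np.random.rand(10, 4, 64, 64),
                                   np.random.randint(0, 8, (10, 64, 64)))
model = TrainingModule(extraction).train(train_set)
```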



Abstract

The invention discloses a complex scene recognition method based on multispectral image fusion. The method comprises the steps of: dividing the images of a scene data set into a training data set and a test data set, the images of the scene data set being images obtained by fusing infrared light and visible light; constructing a DL-FME convolutional neural network for scene recognition according to the training data set, and segmenting the fused image with the DL-FME convolutional neural network to obtain a segmented image; training on the segmented image with the DL-FME convolutional neural network to generate a scene recognition training model; inputting the images of the test data set into the training model to generate a scene recognition model; and recognizing the scene to be recognized with the scene recognition model. The invention further discloses a complex scene recognition system based on multispectral image fusion. The method reduces computational cost and achieves high scene recognition accuracy.
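The abstract hinges on images "obtained by fusing infrared light and visible light", but the excerpt does not specify the fusion rule. The snippet below illustrates one simple, generic possibility (pixel-wise weighted fusion of registered IR and visible frames stacked into a multichannel input); the weights and channel layout are assumptions, not the patent's method.

```python
# Illustrative pixel-level fusion of registered infrared and visible images.
# The 0.5/0.5 weighting and the channel stacking are assumptions; the patent
# excerpt above does not disclose its fusion rule.
import numpy as np

def fuse_ir_visible(ir: np.ndarray, visible: np.ndarray,
                    ir_weight: float = 0.5) -> np.ndarray:
    """Fuse a single-channel IR image with a 3-channel visible image.

    ir:       (H, W) float array in [0, 1]
    visible:  (H, W, 3) float array in [0, 1]
    returns:  (H, W, 4) array: weighted-average luminance + original RGB,
              a simple stand-in for a multispectral fused input.
    """
    luminance = visible.mean(axis=2)                     # visible intensity
    fused_gray = ir_weight * ir + (1 - ir_weight) * luminance
    return np.dstack([fused_gray, visible])              # 4-channel input

# toy usage with random data
ir = np.random.rand(64, 64)
vis = np.random.rand(64, 64, 3)
fused = fuse_ir_visible(ir, vis)
print(fused.shape)  # (64, 64, 4)
```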

Description

Technical field

[0001] The invention relates to the technical field of video processing, in particular to a complex scene recognition method and system based on multispectral image fusion.

Background technique

[0002] In a complex environment, identifying and extracting typical targets is an important area in the field of airborne photoelectric reconnaissance, with wide applications in military surveillance, target detection, damage assessment, and target navigation.

[0003] At present, airborne photoelectric reconnaissance equipment often needs to be equipped with multiple sensors covering different spectra. This combination of sensors greatly enriches observation and measurement of the ground and helps people recognize ground targets more effectively.

[0004] The aerial images obtained by the joint use of multispectral sensors contain richer spectral feature information, spatial structure, geometric texture, and other information. For example, i...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/52, G06N3/045, G06F18/24, G06F18/25, G06F18/214
Inventor 赵涛程勇策温明袁滔乔宇晨
Owner THE THIRD RES INST OF CHINA ELECTRONICS TECH GRP CORP