Feature extraction and fusion recognition of dual-source images based on convolution neural network

A convolutional neural network and feature extraction technology, applied in the field of dual-source image feature extraction and fusion recognition, addresses sensitivity to target transformations, susceptibility to environmental factors, and low classification recognition rates, and achieves good results.

Active Publication Date: 2019-02-05
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0007] Aiming at the problem that target recognition from a traditional UAV single-sensor source is strongly affected by environmental factors, and in order to improve recognition efficiency and expand the range of applicable scenarios, this invention proposes a dual-source image feature extraction and fusion



Examples


Embodiment Construction

[0035] The present invention will be further described below in conjunction with the accompanying drawings and examples.

[0036] As shown in Figure 1, the dual-source image feature extraction and fusion recognition method based on a convolutional neural network includes the following steps:

[0037] Step 1: Establish image databases from visible light and thermal infrared imaging sensor sources for multiple types of targets. Each library contains L mutually corresponding target classes, with n samples per class, giving a total of N = nL samples. The self-built database contains 15 target classes with 375 samples each, for a total of 5625 samples.
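As a rough illustration of how such a paired database could be organized, the sketch below assumes one sub-directory per target class and matching file names between the visible and infrared libraries; the paths and layout are hypothetical and not specified by the patent.

```python
from pathlib import Path

VIS_ROOT = Path("dataset/visible")      # hypothetical location of the visible-light library
IR_ROOT = Path("dataset/infrared")      # hypothetical location of the thermal-infrared library

def build_pair_index(vis_root: Path, ir_root: Path):
    """Collect (visible_path, infrared_path, class_label) triples for paired samples."""
    pairs = []
    for class_dir in sorted(p for p in vis_root.iterdir() if p.is_dir()):
        label = class_dir.name
        for vis_img in sorted(class_dir.glob("*.jpg")):
            ir_img = ir_root / label / vis_img.name   # same file name assumed in the IR library
            if ir_img.exists():
                pairs.append((vis_img, ir_img, label))
    return pairs

pairs = build_pair_index(VIS_ROOT, IR_ROOT)
# For the self-built database described above: 15 classes x 375 samples = 5625 pairs
print(f"{len(pairs)} paired samples across "
      f"{len({label for _, _, label in pairs})} target classes")
```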

[0038] Step 2: Build a deep convolutional neural network model. The model structure consists of an image input layer (Input Layer), convolution layers (Convolution Layer) with a total of 13 layers, pooling layers (Pooling Layer) with a total ...
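The described structure (an input layer followed by 13 convolutional layers interleaved with pooling layers) matches the VGG-16 convolutional backbone. The PyTorch sketch below assumes that correspondence and uses an ImageNet-pretrained VGG-16 purely as an illustration of extracting hidden features from either image source; it is not the patent's exact network.

```python
import torch
import torchvision.models as models

# Load a VGG-16 backbone pre-trained on a large visible-light database (ImageNet here),
# in the spirit of the transfer-learning step mentioned in the abstract.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
feature_extractor = vgg.features.eval()       # 13 conv layers + 5 max-pooling layers

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)           # stand-in for one visible or infrared image
    fmap = feature_extractor(x)               # hidden feature maps, shape (1, 512, 7, 7)
    feat = torch.flatten(fmap, 1)             # flattened 25088-dimensional feature vector
print(feat.shape)
```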



Abstract

The invention discloses a dual-source image feature extraction and fusion identification method based on a convolutional neural network, which comprises the following steps: using the transfer-learning capability of the convolutional neural network, the model parameters are trained on a large visible-light database; the trained model is used to automatically extract the hidden features of visible and thermal infrared target images, and a maximum downsampling method is used to reduce the feature dimension; the Fisher discriminant method and the principal component analysis algorithm are combined to carry out dimension reduction and fusion of the multi-source image features; a support vector machine classifier is used to classify and recognize the fused features of the target image. The method classifies and identifies multi-source image targets on an unmanned aerial vehicle (UAV) platform: image hidden features are extracted by the convolutional neural network, and the Fisher discriminant method and principal component analysis algorithm are combined for dimension reduction and fusion of the features, which provides a new and effective feature-level approach to classifying and identifying multi-source image targets.
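A minimal sketch of the later fusion-and-classification stage described above, assuming CNN features have already been extracted for both sources. Scikit-learn's PCA, LinearDiscriminantAnalysis (Fisher discriminant), and SVC stand in for the components named in the abstract; all dimensions, parameters, and the dummy data are illustrative assumptions, not values from the patent.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def fuse_and_classify(vis_feats, ir_feats, labels):
    # Per-source PCA dimension reduction, feature-level concatenation,
    # Fisher (LDA) projection of the fused features, then an SVM classifier.
    vis_red = PCA(n_components=32).fit_transform(vis_feats)
    ir_red = PCA(n_components=32).fit_transform(ir_feats)
    fused = np.concatenate([vis_red, ir_red], axis=1)
    fused = LinearDiscriminantAnalysis(n_components=14).fit_transform(fused, labels)  # <= L-1 = 14
    clf = SVC(kernel="rbf").fit(fused, labels)
    return clf.score(fused, labels)   # training accuracy only, for illustration

# Dummy data: 15 classes x 10 samples, 256-D features per source
# (far smaller than real CNN feature vectors, just to keep the sketch runnable).
rng = np.random.default_rng(0)
vis = rng.standard_normal((150, 256))
ir = rng.standard_normal((150, 256))
y = np.repeat(np.arange(15), 10)
print(fuse_and_classify(vis, ir, y))
```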

Description

Technical Field

[0001] The invention belongs to the field of image signal processing and pattern recognition, and relates to a dual-source image feature extraction and fusion recognition method based on a convolutional neural network.

Background Technique

[0002] In the past two decades, feature-based target recognition and classification technology has become a research hotspot in image signal processing and pattern recognition, and has been widely applied in military and civilian fields, for example sea ship detection, sea rescue, ground military target strikes, and suspect tracking. At present, single-sensor feature-level target recognition and classification technology is relatively mature, but due to the limitations of the sensor itself, its working environment and applicable objects are relatively limited and cannot meet application needs in complex environments.

[0003] Visible light sensors have high imaging resolution, rich target texture details, and clear ed...

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/2411; G06F18/253
Inventor: 冷阳, 张弓, 刘文波
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS