Complex background SAR vehicle target detection method based on CNN

A target detection technology for complex backgrounds, applied in the field of CNN-based complex background SAR vehicle target detection. It can solve problems such as the fixed input-image size and the impracticality of end-to-end target detection, so as to facilitate engineering application, avoid gradient vanishing, and achieve a good detection effect.

Inactive Publication Date: 2019-01-29
CHINA ELECTRONIC TECH GRP CORP NO 38 RES INST
Cites: 3 · Cited by: 21

AI Technical Summary

Problems solved by technology

However, its disadvantages are also obvious: the size of the input image must be fixed, and end-to-end target detection in large scenes cannot be achieved.



Examples


Embodiment 1

[0028] The CNN-based complex background SAR vehicle target detection method of the present invention comprises the following steps:

[0029] S1, collecting pattern data and processing to obtain a sample data set;

[0030] S2, fusing ResNet (a deep residual network model) with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-training weights;

[0031] S3, using the retrained fusion framework to perform target detection and recognition on the pattern data.
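Steps S1 to S3 can be sketched as a minimal pipeline skeleton. This is a hedged illustration only: `collect_samples`, `build_fusion_model`, and the returned `detect` function are hypothetical placeholder names, and their bodies are stubs, not the patent's actual implementation.

```python
# Minimal sketch of the S1-S3 pipeline described above.
# All function names are hypothetical placeholders.

def collect_samples():
    # S1: collect pattern (SAR image) data and build a sample data set.
    return [{"image": [[0.0] * 128] * 128, "label": "truck"}]

def build_fusion_model(pretrained_weights):
    # S2: fuse ResNet with the Faster-RCNN framework and retrain
    # starting from pre-trained weights (stubbed out here).
    def detect(image):
        # S3: run detection/recognition on the pattern data.
        return [{"box": (0, 0, 127, 127), "cls": "vehicle", "score": 0.9}]
    return detect

samples = collect_samples()
model = build_fusion_model(pretrained_weights=None)
detections = model(samples[0]["image"])
```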

[0032] Step S1 is specifically to image the same scene with airborne X-band radars on different flight passes and, after data preprocessing, to sample the pattern data at a ground distance of 0.3 m in both azimuth and range; the vehicle samples in the pattern data are then manually extracted as slices of a fixed 128×128 size (unit: pixel), giving a total of 500 original sample slice datasets containing original samples of various vehicles (trucks, buses and...
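The fixed-size 128×128 slicing can be illustrated with a small geometry helper. This is a sketch only: `chip_origins` is a hypothetical name, and the patent extracts vehicle slices manually rather than on a regular grid.

```python
def chip_origins(height, width, chip=128, stride=128):
    """Top-left (row, col) origins of fixed-size chips covering an image.

    Hypothetical helper illustrating the 128x128 slicing geometry;
    stride == chip gives non-overlapping tiles.
    """
    origins = []
    for r in range(0, height - chip + 1, stride):
        for c in range(0, width - chip + 1, stride):
            origins.append((r, c))
    return origins

# A 512x384 scene tiles into 4 x 3 = 12 non-overlapping 128-pixel chips.
tiles = chip_origins(512, 384)
```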

Embodiment 2

[0039] As shown in Figure 3, which is the framework diagram of Faster-RCNN, in step S2 the pattern data needs to be processed through the Faster-RCNN framework.

[0040] Specifically, the Faster-RCNN (Faster Region CNN) framework mainly includes a feature extraction layer, a Region Proposal Network (RPN), an ROI pooling layer, and a classification extraction layer.

[0041] The feature extraction layer mainly consists of a number of convolutional layers, activation layers and pooling layers; it extracts feature maps from the pattern data as the input of the classification recognition layer. According to the depth of the network model, the feature extraction layer can be either a ZF network model or a VGG network model: the VGG-16 network model includes 13 conv (convolutional) layers, 13 relu (activation) layers and 4 pooling layers, while the ZF network model includes 5 conv layers, 4 relu layers and 2 pooling layers; the size of the featur...
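The spatial size of the feature map follows from the layer counts quoted above: 'same'-padded 3×3 convolutions and ReLU layers preserve the spatial size, while each stride-2 pooling halves it, so VGG-16's 4 pooling layers shrink the input by 16×. A minimal sketch, under the simplifying assumption that only pooling layers downsample (real ZF also uses strided convolutions, so its true reduction differs):

```python
def feature_map_side(input_side, n_pool, pool_stride=2):
    # 'Same'-padded convolutions and ReLU layers keep the spatial size;
    # only the pooling layers shrink it, each by a factor of pool_stride.
    side = input_side
    for _ in range(n_pool):
        side //= pool_stride
    return side

# VGG-16 in the text: 13 conv + 13 relu + 4 pooling -> 1/16 spatial size.
vgg_out = feature_map_side(128, n_pool=4)   # 128 -> 8
# ZF in the text: 5 conv + 4 relu + 2 pooling (pooling-only assumption).
zf_out = feature_map_side(128, n_pool=2)    # 128 -> 32
```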

Embodiment 3

[0047] Preferably, in this embodiment, step S2 is specifically to fuse the ResNet-50 network model and the Faster RCNN framework to form a fusion framework; the fusion process of the ResNet-50 network model and the Faster RCNN framework mainly includes:

[0048] Fifty layers of residual blocks are obtained according to the ResNet-50 network model, the feature extraction layer of the fusion framework is set to 40 layers of residual blocks, and the size of the output feature map is kept at 1/16 of that of the pattern data;
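The residual blocks mentioned here compute y = F(x) + x: the identity shortcut lets gradients flow through the addition unchanged, which is how the residual network avoids gradient vanishing in deep stacks. A minimal sketch (names and the zero-transform example are illustrative, not from the patent):

```python
def residual_block(x, transform):
    """y = F(x) + x : the identity-shortcut form of a ResNet residual block.

    `transform` stands in for the block's conv/BN/ReLU stack; the
    element-wise addition is the shortcut connection.
    """
    fx = transform(x)
    return [a + b for a, b in zip(fx, x)]

# With a zero transform the block is an exact identity mapping, which is
# why adding residual blocks cannot degrade below a shallower network.
out = residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
```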

[0049] The region proposal network extracts candidate frames through a sliding window over the last feature map output by the feature extraction layer, setting 9 anchors of different sizes and aspect ratios for each pixel in the feature map, and, combined with bounding-box regression, preliminarily obtains the candidate frames of the pattern data.
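The 9 anchors per feature-map pixel arise as 3 scales × 3 aspect ratios, as in the standard Faster-RCNN RPN. A sketch of the shape enumeration (the scale and ratio values below are illustrative defaults, not parameters stated in the patent):

```python
def anchor_shapes(scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    # 3 scales x 3 aspect ratios = 9 anchor (width, height) shapes per
    # feature-map pixel; sqrt scaling keeps the anchor area ~ scale^2.
    shapes = []
    for s in scales:
        for r in ratios:
            w = s / (r ** 0.5)
            h = s * (r ** 0.5)
            shapes.append((round(w, 1), round(h, 1)))
    return shapes

anchors = anchor_shapes()
# On an 8x8 feature map the RPN therefore scores 8 * 8 * 9 = 576 anchors.
total_on_8x8 = 8 * 8 * len(anchors)
```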

[0050] The ROI pooling layer collects the input feature map and the candidate frame, extra...
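The role of ROI pooling is to turn candidate frames of arbitrary size into fixed-size outputs by splitting each frame into a fixed grid of cells and max-reducing each cell. A simplified single-channel sketch (the function name and grid size are illustrative, not from the patent):

```python
def roi_max_pool(feature, box, out=2):
    """Max-pool one ROI of a 2-D feature map to a fixed out x out grid.

    A simplified sketch of ROI pooling: the box (r0, c0, r1, c1) is
    split into out x out cells and each cell is max-reduced, so every
    candidate frame yields the same output size regardless of its own.
    """
    r0, c0, r1, c1 = box
    rows, cols = r1 - r0, c1 - c0
    pooled = []
    for i in range(out):
        row = []
        for j in range(out):
            ra, rb = r0 + i * rows // out, r0 + (i + 1) * rows // out
            ca, cb = c0 + j * cols // out, c0 + (j + 1) * cols // out
            row.append(max(feature[r][c]
                           for r in range(ra, rb)
                           for c in range(ca, cb)))
        pooled.append(row)
    return pooled

# A 4x4 feature map pooled over the full-map ROI down to a 2x2 output.
fmap = [[r * 4 + c for c in range(4)] for r in range(4)]
p = roi_max_pool(fmap, (0, 0, 4, 4), out=2)
```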



Abstract

The invention discloses a complex background SAR vehicle target detection method based on CNN, comprising the following steps: S1, collecting pattern data and processing it to obtain a sample data set; S2, fusing ResNet with the Faster-RCNN framework to form a fusion framework, and retraining the fusion framework on the basis of pre-training weights; S3, using the retrained fusion framework to perform target detection and recognition on the pattern data. The invention combines ResNet with the Faster-RCNN framework: the Faster-RCNN framework realizes an end-to-end target detection process, fully automating target detection and facilitating engineering application. At the same time, the residual network model solves the problem of network degradation in deep convolutional networks, and the phenomenon of gradient vanishing in deep convolutional networks is avoided.

Description

technical field

[0001] The invention relates to the technical field of vehicle target detection, in particular to a CNN-based complex background SAR vehicle target detection method.

Background technique

[0002] The image characteristics of Synthetic Aperture Radar (SAR) images change greatly with imaging parameters, imaging attitudes, ground object environments, etc., which makes target detection and recognition in SAR images very difficult. The traditional Constant False Alarm Rate (CFAR) algorithm and its derivatives work well when the contrast between target and background is high and the scene is simple, so that a detection threshold can separate the target from the background; but when many types of clutter with different scattering characteristics are present, detection performance usually decreases.

[0003] With the continuous development of artificial intelligence, deep learning methods have also been...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC (8): G06K9/00; G06K9/32; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/13; G06V10/25; G06V2201/08; G06N3/045; G06F18/214; G06F18/25; G06F18/24
Inventor: 常沛, 夏勇, 吴涛, 万红林, 李玉景
Owner CHINA ELECTRONIC TECH GRP CORP NO 38 RES INST