
Maneuvering target ISAR self-focusing imaging method based on deep learning

A radar technology for imaging maneuvering targets that addresses the problems of noise interference and low resolution in existing ISAR images and achieves the effect of high resolution.

Pending Publication Date: 2022-03-01
Applicant: HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to solve the problems of low resolution and noise interference in existing ISAR images of maneuvering targets, and to propose a deep learning-based ISAR self-focusing imaging method for maneuvering targets.

Method used



Examples


Specific Embodiment 1

[0040] Specific Embodiment 1: In this embodiment, the deep learning-based ISAR self-focusing imaging method for maneuvering targets proceeds as follows:

[0041] Step 1. Set the maximum number of cycles T, T≥2;

[0042] Set the cycle counter t = 1 and set initial values for the maneuvering target form and the motion parameters; the maneuvering target motion parameters include the angular velocity ω and the angular acceleration γ;

[0043] Step 2. The signal source transmits an LFM signal; after reflection from the maneuvering target, the signal is received by the radar as an echo, and the echo is dechirped to obtain the processed echo;

[0044] The maneuvering target form is composed of scattering points;

[0045] The maneuvering target is composed of randomly generated scattering points;

[0046] Perform range compression on the processed echoes to obtain multiple one-dimensional range profiles (to obtain r...
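The dechirp and range-compression steps above can be sketched as follows. This is an illustrative NumPy simulation under assumed radar parameters (carrier frequency, chirp rate, sampling rate, and scatterer layout are all hypothetical, not taken from the patent), and the rectangular pulse envelope is ignored for brevity:

```python
import numpy as np

# Illustrative sketch: dechirping an LFM echo and compressing it into a
# one-dimensional range profile via FFT. All parameter values are assumptions.
c = 3e8          # speed of light (m/s)
fc = 10e9        # assumed carrier frequency (Hz)
k = 1e12         # assumed chirp rate (Hz/s)
tau = 10e-6      # pulse width (s)
fs = 20e6        # sampling rate of the dechirped signal (Hz)

t = np.arange(0, tau, 1 / fs)                         # fast time
R_ref = 10e3                                          # reference range for dechirping (m)
scatterers = [(10e3 + 5.0, 1.0), (10e3 + 20.0, 0.7)]  # (range in m, amplitude)

# Echo as a sum of delayed LFM returns from each scattering point
echo = sum(a * np.exp(1j * 2 * np.pi * (fc * (t - 2 * R / c)
                                        + 0.5 * k * (t - 2 * R / c) ** 2))
           for R, a in scatterers)

# Dechirp: mix with the conjugate of a reference chirp centred on R_ref
ref = np.exp(1j * 2 * np.pi * (fc * (t - 2 * R_ref / c)
                               + 0.5 * k * (t - 2 * R_ref / c) ** 2))
dechirped = echo * np.conj(ref)

# Range compression: an FFT of the dechirped signal; the beat frequency of
# each scatterer maps linearly to its range offset from R_ref
profile = np.abs(np.fft.fftshift(np.fft.fft(dechirped)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
ranges = R_ref - freqs * c / (2 * k)   # beat frequency -> range
peak_range = ranges[np.argmax(profile)]
```

Stacking one such profile per pulse over slow time yields the range-profile history from which the ISAR image is formed.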

Specific Embodiment 2

[0061] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that the scattering-point coordinates P(x_i, y_i) and the motion parameters (angular velocity ω and angular acceleration γ) are used to calculate the Doppler frequency of each scattering point, thereby determining the range cell and Doppler cell in which the scattering point is imaged at that moment and obtaining the reference image of the target; the specific process is:
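A minimal sketch of this mapping, assuming a small-angle turntable model in which a scatterer's range cell is set by its coordinate y_i and its Doppler frequency by f_d = (2/λ)·x_i·(ω + γ·t_m). The wavelength, cell sizes, imaging moment, and scatterer layout below are hypothetical:

```python
import numpy as np

# Hedged sketch of Step 3: placing each scattering point P(x_i, y_i) into a
# range cell and a Doppler cell using the motion parameters (ω, γ).
lam = 0.03                  # wavelength (m), assumed
omega, gamma = 0.05, 0.01   # angular velocity (rad/s) and acceleration (rad/s^2)
t_m = 0.5                   # imaging moment (slow time, s)
dr, dfd = 0.5, 2.0          # range / Doppler cell sizes, assumed
n_r, n_d = 64, 64           # reference-image size

points = [(3.0, -2.0, 1.0), (-4.0, 5.0, 0.8)]   # (x_i, y_i, amplitude)

ref_image = np.zeros((n_r, n_d))
for x, y, a in points:
    # Small-angle model: R_i(t_m) ≈ R0 + y + x·(ω t_m + γ t_m²/2), so the
    # Doppler frequency of the scatterer is f_d = (2/λ)·x·(ω + γ·t_m)
    fd = 2.0 * x * (omega + gamma * t_m) / lam
    r_bin = int(round(y / dr)) + n_r // 2
    d_bin = int(round(fd / dfd)) + n_d // 2
    if 0 <= r_bin < n_r and 0 <= d_bin < n_d:
        ref_image[r_bin, d_bin] += a   # accumulate into the reference image
```

Each (ISAR image, reference image) pair produced this way serves as one training sample for the network.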

[0062] Step 31. Assume the carrier frequency is f_c; the radar transmits a linear frequency modulation (LFM) signal s(t) with chirp rate k;

[0063] s(t) = rect(t/τ) · exp{j2π(f_c·t + (k/2)·t²)}

[0064] where t is the fast time, τ is the pulse width, rect(·) is the rectangular window function, and j is the imaginary unit, j² = −1;

[0065] For a target at instantaneous range R(t_m) from the radar, the echo s(t, t_m) received by the radar is expressed as

[0066] s(t, t_m) = Σ_i σ_i · rect((t − 2R_i(t_m)/c)/τ) · exp{j2π[f_c·(t − 2R_i(t_m)/c) + (k/2)·(t − 2R_i(t_m)/c)²]}

[0067] where t_m is the slow time, σ_i denotes the echo amplitude of the i-th scattering point, and R_i(t_m) represents the i-...

Specific Embodiment 3

[0079] Specific Embodiment 3: This embodiment differs from Specific Embodiments 1 and 2 in that, in Step 5, the ResU-Net network (a traditional U-Net structure improved with residual blocks) includes an input layer, an encoding unit, a connection unit, a decoding unit, and an output layer;

[0080] The encoding unit sequentially includes a first encoding subunit, a first max-pooling layer, a second encoding subunit, a second max-pooling layer, a third encoding subunit, a third max-pooling layer, a fourth encoding subunit, and a fourth max-pooling layer;

[0081] The first encoding subunit sequentially includes a first convolutional layer, a first ReLU activation layer, and a first residual block;

[0082] The second encoding subunit sequentially includes a second convolutional layer, a second ReLU activation layer, and a second residual block;

[0083] The third encoding subunit sequentially includes a third convolutio...
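A hedged PyTorch sketch of the encoder described above (convolution → ReLU → residual block, followed by max pooling). The patent excerpt does not fix channel widths, kernel sizes, or the residual block's internals, so those are assumptions here:

```python
import torch
import torch.nn as nn

# Residual block: two 3x3 convolutions with an identity skip connection
class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection

# One encoding subunit: convolution -> ReLU -> residual block -> max pooling
class EncodingSubunit(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.res = ResidualBlock(out_ch)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.res(self.relu(self.conv(x))))

# Four such subunits form the encoding unit; each halves the spatial size.
# The channel progression 1 -> 32 -> 64 -> 128 -> 256 is an assumption.
enc = nn.Sequential(EncodingSubunit(1, 32), EncodingSubunit(32, 64),
                    EncodingSubunit(64, 128), EncodingSubunit(128, 256))
out = enc(torch.zeros(1, 1, 64, 64))
```

In the full ResU-Net, the decoding unit would mirror this structure with upsampling, and skip connections would link each encoding subunit's output to the corresponding decoder stage, as in a standard U-Net.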



Abstract

The invention relates to the technical field of radar, and in particular to a maneuvering target ISAR self-focusing imaging method based on deep learning. The objective of the invention is to solve the problems of low resolution and noise interference in ISAR images of existing maneuvering targets. The method comprises the following steps: 1, set the maximum cycle count T; set the cycle count to 1, and set the maneuvering target form and initial motion parameter values; 2, obtain an ISAR image; 3, obtain a reference image of the target; 4, judge whether T has been reached; if not, add 1 to the cycle count, randomly change the maneuvering target form and the initial motion parameter values, and repeat steps 2 and 3 until T is reached, yielding T sets of target ISAR images and reference images; 5, construct a ResU-Net network; 6, obtain echoes of the target to be measured, process them as in step 2, input them into the network, and output the compensated ISAR imaging result of the maneuvering target. The method is applied in the technical field of radar.
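The data-generation loop in steps 1–4 can be sketched as follows. The helper functions `simulate_isar_image` and `build_reference_image` are hypothetical placeholders standing in for steps 2 and 3, and the parameter ranges are assumptions:

```python
import random

# Hedged sketch of the training-data loop (steps 1-4 of the abstract): each
# cycle randomly re-draws the target's scatterer layout and motion parameters,
# then produces one (ISAR image, reference image) training pair.

def simulate_isar_image(points, omega, gamma):
    return ("isar", len(points), omega, gamma)      # placeholder for step 2

def build_reference_image(points, omega, gamma):
    return ("ref", len(points), omega, gamma)       # placeholder for step 3

T = 5                                               # maximum cycle count
dataset = []
for t in range(1, T + 1):
    # Randomly change the maneuvering target form and motion parameters
    points = [(random.uniform(-5, 5), random.uniform(-5, 5))
              for _ in range(random.randint(3, 10))]
    omega = random.uniform(0.01, 0.1)               # angular velocity
    gamma = random.uniform(0.0, 0.02)               # angular acceleration
    dataset.append((simulate_isar_image(points, omega, gamma),
                    build_reference_image(points, omega, gamma)))
# dataset now holds T (ISAR image, reference image) pairs for training (step 5)
```

Because both the defocused ISAR image and its ideal reference are generated from the same simulated target, the network can be trained fully on synthetic pairs before being applied to measured echoes in step 6.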

Description

Technical Field

[0001] The invention belongs to the technical field of radar, and in particular relates to a deep learning-based ISAR self-focusing imaging method for maneuvering targets.

Background

[0002] Due to the time-varying Doppler frequency of maneuvering targets, clear ISAR images cannot be obtained using traditional range-Doppler methods. Imaging methods that can capture the instantaneous attitude of maneuvering targets, such as parameter estimation methods and time-frequency analysis methods, have therefore been proposed. However, parameter estimation methods require parameters related to the target motion state to be set, and in actual radar operation the target motion state generally cannot be estimated, so the imaging effect is poor for maneuvering targets with complex motion. Time-frequency analysis imaging is closely tied to the particular time-frequency analysis method used, and the existing common time-frequency analysis methods cannot meet the r...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G01S13/90
CPC: G01S13/9064
Inventors: 王勇, 王海容, 许荣庆
Owner HARBIN INST OF TECH