Aerial photography data target positioning method and system based on deep learning

A technology of target positioning and deep learning, applied in the field of image processing and target detection of aviation systems, which can solve the problems of low automatic detection rate and low accuracy, and achieve the effects of high accuracy, high target positioning precision, and good robustness.

Pending Publication Date: 2022-02-01
CHENGDU AIRCRAFT INDUSTRY GROUP

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a deep learning-based aerial photography data target positioning method and system to address the problems of low automatic detection rate and low accuracy of traditional target detection and positioning methods in the prior art.


Examples


Embodiment 1

[0048] As shown in Figure 1, a deep learning-based aerial photography data target positioning method includes the following steps:

[0049] S1. Obtain aerial photography data, and preprocess the aerial photography data;

[0050] S2. Input the aerial photography data into the pre-trained neural network; output the target type and target position;

[0051] S3. Acquire the body orientation information, image capture information, and target position; calculate the target positioning information from the body orientation information, image capture information, and target position.

[0052] Further, the aerial photography data may be picture data and/or video data.

[0053] The body orientation information represents the attitude and position of the airframe when the UAV captures the current aerial data, and can be obtained directly from the airframe's inertial navigation system; the image shooting information represents the attitude and position of the shooting equipment...
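Steps S1 to S3 describe a detect-then-geolocate pipeline. The minimal sketch below shows one way such a pipeline could be wired together; the dataclass fields, helper names, and detector interface are illustrative assumptions, since this excerpt does not specify the network or the data structures actually used.

```python
# Hypothetical sketch of the S1-S3 flow from Embodiment 1 (not the patent's code).
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np


@dataclass
class BodyOrientation:
    """Airframe attitude and position, read from the inertial navigation system."""
    lat_deg: float
    lon_deg: float
    altitude_m: float
    heading_deg: float
    pitch_deg: float
    roll_deg: float


@dataclass
class CameraInfo:
    """'Image shooting information': camera attitude/intrinsics (assumed fields)."""
    fov_h_deg: float
    image_w: int
    image_h: int


# A detection is (target_type, (center_x, center_y, width, height)) in pixels.
Detection = Tuple[str, Tuple[float, float, float, float]]


def preprocess(frame: np.ndarray) -> np.ndarray:
    """S1: placeholder preprocessing (scale pixel values to [0, 1])."""
    return frame.astype(np.float32) / 255.0


def locate_targets(
    frame: np.ndarray,
    body: BodyOrientation,
    cam: CameraInfo,
    detector: Callable[[np.ndarray], List[Detection]],
    pixel_to_ground: Callable[..., Tuple[float, float]],
) -> List[Tuple[str, float, float]]:
    """Run S1 (preprocess), S2 (detect type and pixel position), S3 (geolocate)."""
    img = preprocess(frame)                              # S1
    results = []
    for target_type, (cx, cy, _w, _h) in detector(img):  # S2
        lat, lon = pixel_to_ground(cx, cy, body, cam)    # S3: hypothetical routine
        results.append((target_type, lat, lon))
    return results
```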

Embodiment 2

[0086] This embodiment provides a deep learning-based aerial photography positioning system, which may include: a raw data management module, used to collect and manage aerial photography data such as images and videos for model training, providing storage, query, access, and extraction functions, and supporting files in .jpg, .png, .tif, .avi, .mp4, .tfw, .xml and other formats; a data enhancement and expansion module, used for data enhancement and expansion of aerial data such as images and videos; a data processing and feature extraction module, used for data analysis, preprocessing, and auxiliary network training to achieve feature extraction; a feature data management module, used to store and manage feature and model data for model training; a model training module, used to build the deep learning network layers and perform training optimization; a target recognition and detection output module, used to apply the feature data and the deep convolutional neural network to complete target recognition, po...
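As a rough illustration of this modular decomposition, the skeleton below names one class per module. Only the module responsibilities and the supported file formats come from the paragraph above; the class names, the single method, and everything else about the interfaces are hypothetical.

```python
# Hypothetical skeleton of the Embodiment 2 modules (illustrative only).
SUPPORTED_FORMATS = {".jpg", ".png", ".tif", ".avi", ".mp4", ".tfw", ".xml"}


class RawDataManager:
    """Collects and manages aerial images/videos; storage, query, access, extraction."""

    def is_supported(self, filename: str) -> bool:
        # Accept only the formats listed in the embodiment.
        return any(filename.lower().endswith(ext) for ext in SUPPORTED_FORMATS)


class DataAugmenter:
    """Performs data enhancement and expansion of aerial images and videos."""


class FeatureExtractionPipeline:
    """Analyzes and preprocesses data and assists network training for feature extraction."""


class FeatureDataManager:
    """Stores and manages feature and model data used for model training."""


class ModelTrainer:
    """Builds the deep learning network layers and runs training optimization."""


class TargetDetectionOutput:
    """Applies the trained deep CNN to recognize targets and output their positions."""
```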

Embodiment 3

[0088] To test the actual effect of the present invention, a test was carried out with Google Maps simulation software. The attitude of the drone is shown in Figure 4, and the parameter information of the UAV is shown in Figure 5. The latitude of the drone is set to 30.63080924 and its longitude to 104.08204021, the heading is 10° east of north, the pitch angle is 5° from the vertical, the roll angle is 0°, and the cruising altitude is 3000 meters. A university track and field stadium is selected as the detection target, and the relevant tools are used to export the aerial image together with the airframe orientation information and image capture information. The exported aerial image is 1280*720 pixels, the target center pixel is (649, 463), and the target width and height are 44*71 pixels. The method proposed by the present invention calculates the latitude and longitude of the image center point as: latitude 30.63013788, longitude 104.08251741. ...
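The excerpt does not give the projection formula used for step S3, so the following is only a simplified, flat-earth sketch of the kind of calculation Embodiment 3 describes. It assumes a pinhole camera with a hypothetical 60° horizontal field of view, a camera aimed along the heading with zero roll, and a pitch measured from nadir; with the test inputs above it yields an indicative coordinate, not the patent's reported result.

```python
# Simplified flat-earth ground projection (illustrative assumptions, not the patent's formula).
import math


def pixel_to_ground(u, v, img_w, img_h, lat_deg, lon_deg, alt_m,
                    heading_deg, pitch_from_nadir_deg, fov_h_deg=60.0):
    # Focal length in pixels for a pinhole camera with the given horizontal FOV.
    f_px = (img_w / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)

    # Angular offsets of the pixel from the optical axis.
    dx = math.atan((u - img_w / 2.0) / f_px)   # to the right of the axis (rad)
    dy = math.atan((v - img_h / 2.0) / f_px)   # below the axis (rad)

    # Tilt of the ray from nadir, composing camera pitch and the pixel offset.
    tilt = math.radians(pitch_from_nadir_deg) + dy
    forward_m = alt_m * math.tan(tilt)                         # along the heading
    right_m = alt_m * math.tan(dx) / max(math.cos(tilt), 1e-6)

    # Rotate body-frame offsets into north/east components using the heading.
    psi = math.radians(heading_deg)
    north_m = forward_m * math.cos(psi) - right_m * math.sin(psi)
    east_m = forward_m * math.sin(psi) + right_m * math.cos(psi)

    # Convert metric offsets to latitude/longitude (local flat-earth approximation).
    lat = lat_deg + north_m / 111_320.0
    lon = lon_deg + east_m / (111_320.0 * math.cos(math.radians(lat_deg)))
    return lat, lon


# Embodiment 3 test inputs; the FOV is an assumption, so the printed coordinate is
# only indicative and does not reproduce the patent's reported result.
print(pixel_to_ground(649, 463, 1280, 720, 30.63080924, 104.08204021,
                      3000.0, 10.0, 5.0))
```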



Abstract

The invention relates to the field of aviation system image processing and target detection, and in particular to a deep learning-based target positioning method and system. The invention provides a deep learning-based target positioning method comprising the following steps: S1, obtaining aerial photography data and preprocessing the aerial photography data; S2, inputting the aerial photography data into a pre-trained neural network and outputting a target type and a target position; S3, acquiring body azimuth information, image shooting information, and the target position, and calculating target positioning information from the body azimuth information, image shooting information, and target position. The method provided by the invention is robust, achieves high detection accuracy and speed for tiny, weak, small, and inconspicuous targets in the high-altitude aerial photography environment, takes into account the influence of different attitudes of the unmanned aerial vehicle on aerial photography positioning, and can effectively improve target positioning precision.

Description

Technical field

[0001] The invention relates to the field of image processing and target detection of aviation systems, and in particular to a deep learning-based target positioning method and system.

Background technique

[0002] With the development of UAV image reconnaissance technology, a large number of high-altitude aerial ground optical images, photoelectric video and other information can be obtained by using reconnaissance equipment, and ground targets can be accurately identified and positioned through precise aerial photography target detection algorithms. It has great application value in many fields, such as marine oil spill detection, post-disaster search and rescue, and crop pest detection.

[0003] In the high-altitude detection environment, affected by the angle and intensity of light, shooting hardware, cloud and fog occlusion, and the flight speed of the airborne platform, ships, vehicles or other objects of interest often have tiny, inconspicuous, and eas...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V10/44; G06V10/764; G06V20/20; G06V10/50; G06V10/82; G06K9/62; G06T7/70; G06N3/04; G06N3/08
CPC: G06T7/70; G06N3/08; G06T2207/10032; G06N3/045; G06F18/2414
Inventor: 张周贤, 秦方亮, 钱晓琼, 王丽, 李顺, 张志翱
Owner CHENGDU AIRCRAFT INDUSTRY GROUP