
Ship target detection method based on joint training of deep learning features and visual features

A technology combining deep learning and target detection, applied to neural learning methods, instruments, biological neural network models, etc. It addresses problems such as poor interpretability of features, unequal degrees of feature retention for ships of different sizes, and inconsistent ship detection results, achieving high accuracy, fast speed, and high robustness.

Active Publication Date: 2021-04-16
WUHAN UNIV

AI Technical Summary

Problems solved by technology

However, such black-box features are poorly interpretable, and ships of different sizes retain features to different degrees after convolution, which also leads to inconsistent detection results across ships.

Method used



Examples


Embodiment Construction

[0040] The present invention will be further described below in conjunction with the accompanying drawings and examples.

[0041] Referring to figure 1, the method of the embodiment of the present invention includes the following steps:

[0042] Step 1: sample data collection.

[0043] The data required by the present invention are mainly visible-light surveillance video frames of a coastal region. For the collected video data, each frame image can be obtained by decoding and extraction; the frame size is 1920 × 1080 pixels. Following the Pascal VOC standard, the images containing ship targets are annotated; the resulting label file contains, for each image, the four vertex coordinates of the minimum enclosing rectangle of each ship target together with the corresponding image, thereby constructing a ship image sample library.
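As a hedged illustration of this step, the sketch below decodes frames from a surveillance video with OpenCV and writes one minimal Pascal VOC-style XML annotation for a hand-labeled ship; the file names, paths, sampling interval, and box coordinates are hypothetical placeholders, not values taken from the patent.

```python
# Minimal sketch (assumptions: OpenCV installed, hypothetical paths and boxes).
# Decodes 1920 x 1080 surveillance video into frames and writes one
# Pascal VOC-style annotation for a hand-labeled ship bounding box.
import cv2
import xml.etree.ElementTree as ET

def extract_frames(video_path, out_dir, step=25):
    """Decode every `step`-th frame of the surveillance video to disk."""
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            name = f"{out_dir}/frame_{idx:06d}.jpg"
            cv2.imwrite(name, frame)          # 1920 x 1080 frame
            saved.append(name)
        idx += 1
    cap.release()
    return saved

def write_voc_annotation(xml_path, image_name, box, size=(1920, 1080)):
    """Write a minimal VOC-style XML with one ship bounding box.

    `box` is (xmin, ymin, xmax, ymax), the minimum enclosing rectangle.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = image_name
    sz = ET.SubElement(ann, "size")
    ET.SubElement(sz, "width").text = str(size[0])
    ET.SubElement(sz, "height").text = str(size[1])
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = "ship"
    bb = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bb, tag).text = str(val)
    ET.ElementTree(ann).write(xml_path)

# Hypothetical usage:
# frames = extract_frames("coastal_cam.mp4", "frames")
# write_voc_annotation("frames/frame_000000.xml", "frame_000000.jpg", (400, 300, 700, 520))
```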

[0044] Step 2: CNN feature extraction.

[0045] The samples obtained in step 1 are resized to a unified 224 × 224 size and then fed into the convolutional neural ...
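The paragraph is truncated here, but as a hedged sketch of the step it describes, the code below resizes a sample to 224 × 224 and extracts a feature vector with a pretrained convolutional backbone. The patent does not name a specific network; the ResNet-18 backbone and the 512-dimensional output are assumptions made only for illustration.

```python
# Sketch only: the patent does not specify the CNN; ResNet-18 is an assumed stand-in.
import torch
from torchvision import models, transforms
from PIL import Image

# Resize every sample to the unified 224 x 224 input size, then normalize.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep the 512-d feature
backbone.eval()

def cnn_features(image_path):
    """Return the CNN feature vector for one ship image sample."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)            # shape (1, 3, 224, 224)
    with torch.no_grad():
        feat = backbone(x)                      # shape (1, 512)
    return feat.squeeze(0)

# Hypothetical usage:
# f = cnn_features("frames/frame_000000.jpg")   # 512-dimensional CNN feature
```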



Abstract

The invention provides a ship target detection method based on joint training of deep learning features and visual features, comprising the following steps: sample data collection, CNN feature extraction, extraction of traditional invariant moment features and LOMO features, feature dimensionality reduction, and construction of the feature fusion network FCNN; finally, the sample data are used to train the network and the test data are used to evaluate the model. Compared with the prior art, the visual feature extraction process of the present invention comprehensively considers ship shape, color and texture, making the detection process interpretable, and it guides the CNN back-propagation process toward learning features beyond the traditional ones. The method is fast, efficient and highly accurate; it retains good detection performance in complex scenes such as cloud cover, overcast weather and rain, and is highly robust. Because it extracts features complementary to the traditional features and runs extremely fast, it can achieve real-time monitoring.
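To make the abstract's pipeline concrete, here is a hedged sketch of how the handcrafted and CNN features could be combined: Hu invariant moments stand in for the invariant-moment features, PCA stands in for the dimensionality reduction, and a small fully connected network stands in for the fusion network FCNN. The layer sizes, the 32-dimensional reduced feature, and the use of OpenCV, scikit-learn and PyTorch are illustrative assumptions, not the patent's exact construction.

```python
# Hedged sketch: Hu moments + PCA + a small fully connected fusion head.
# Dimensions and libraries are illustrative assumptions, not the patented design.
import cv2
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def hu_moment_features(image_path):
    """Seven Hu invariant moments (log-scaled) of the grayscale ship chip."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

class FusionFCNN(nn.Module):
    """Concatenate CNN features with reduced visual features, then classify."""
    def __init__(self, cnn_dim=512, visual_dim=32, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(cnn_dim + visual_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, cnn_feat, visual_feat):
        return self.head(torch.cat([cnn_feat, visual_feat], dim=1))

# Hypothetical flow: stack the handcrafted features (invariant moments, LOMO, ...),
# reduce them with PCA, then train the fusion head jointly with the CNN features.
# visual_raw = np.stack([...])                                  # (N, raw_visual_dim)
# visual_32 = PCA(n_components=32).fit_transform(visual_raw)    # (N, 32)
# model = FusionFCNN()
# logits = model(cnn_batch, torch.from_numpy(visual_32).float())
```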

Description

Technical field

[0001] The present invention belongs to the field of ship detection in computer vision, and specifically relates to a ship target detection method based on joint training of deep learning features and visual features.

Background technique

[0002] China has a long coastline, vast sea areas, and rich marine resources. As the economy grows, the number of sea-going ships keeps increasing, and ship detection has an urgent practical demand. Ship target detection uses computer vision and image processing technology to detect targets of interest in an image and to further extract a large amount of useful information, and it has broad application prospects in both military and civilian fields. For example, in the civilian field, by obtaining information such as the location, size, heading and speed of ships, one can monitor specific sea areas, bays and ports, and supervise transportation, illegal fishing, illegal smuggling, illegal oil dumping...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/08
CPC: G06N3/084; G06V20/40; G06V10/50; G06V10/56; G06F18/214
Inventor: 邵振峰, 吴文静, 张瑞倩, 王岭钢, 李成源
Owner: WUHAN UNIV