
Channel and space fusion perception-based deep learning target detection method

A channel-and-space fusion perception and target detection technology, applied in the field of image recognition, which addresses the problem that prior methods gain accuracy at the cost of significantly reduced speed, achieving guaranteed real-time performance and accuracy, strong portability, and wide applicability.

Status: Pending | Publication Date: 2020-02-14
FUZHOU UNIV
Cites: 4 | Cited by: 6

AI Technical Summary

Problems solved by technology

[0004] In addition to simply relying on network depth, many current methods improve network performance by designing functional modules that strengthen feature representation: FPN combines deep features with shallow features, reinforcing the spatially strong shallow-layer features with the richer semantic information of deeper layers; DSSD builds on SSD with the deeper base network ResNet-101 and deconvolution-layer feature fusion, while skip connections give the shallow feature maps stronger representation capability, but the significant performance improvement comes with a significant drop in speed. These methods no longer deepen the model to enhance the network's feature representation; instead, they directly strengthen the learning of deep features in the convolutional neural network by superimposing, sampling, and connecting feature maps to improve performance.
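As a concrete illustration of the lateral-fusion idea described above (not part of the patent itself), a minimal PyTorch sketch of an FPN-style top-down pathway might look like the following; the channel counts and module names are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """FPN-style fusion sketch: upsample a deep, semantically rich map
    and add it to a 1x1-projected shallow map (channel sizes assumed)."""
    def __init__(self, shallow_ch=256, deep_ch=512, out_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)  # project shallow features
        self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)      # match deep channel count
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)  # smooth after fusion

    def forward(self, shallow, deep):
        deep = self.reduce(deep)
        # Upsample deep semantics to the shallow map's spatial size, then add.
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(shallow) + deep)
```

This is exactly the pattern the background critiques: the fusion improves representation but adds extra layers and upsampling work on every forward pass.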




Embodiment Construction

[0054] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0055] It should be pointed out that the following detailed description is exemplary and is intended to provide a further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0056] It should be noted that the terminology used here is only for describing specific implementations and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular is intended to include the plural, and it should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.



Abstract

The invention relates to a channel and space fusion perception-based deep learning target detection method, which comprises the following steps: constructing a channel and space fusion perception module, embedding the module into a deep neural network architecture, and performing target detection on a target picture using the reconstructed deep neural network architecture. The construction of the channel and space fusion perception module specifically comprises: first performing channel perception on the originally input feature map, and then cascading spatial perception on the result. The method deepens neither the depth nor the width of the network and introduces no extra space vectors, while guaranteeing both real-time performance and precision.
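The abstract does not disclose the module's exact internal operations. One plausible reading of "channel perception followed by cascaded spatial perception" is an attention module in the style of CBAM; the sketch below is a minimal PyTorch interpretation under that assumption, with the reduction ratio and kernel size chosen for illustration only.

```python
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    """Hypothetical channel-then-spatial perception module: per-channel
    weights are learned first, then a spatial mask is cascaded on top."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel perception: squeeze to a per-channel descriptor, re-weight channels.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial perception: collapse channels to statistics, learn a per-location mask.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_fc(x)                # channel sensing on the input feature map
        avg = x.mean(dim=1, keepdim=True)         # per-location average over channels
        mx, _ = x.max(dim=1, keepdim=True)        # per-location max over channels
        x = x * self.spatial_conv(torch.cat([avg, mx], dim=1))  # cascaded spatial sensing
        return x
```

Consistent with the abstract's claims, such a module neither deepens nor widens the backbone and adds only lightweight re-weighting operations, so real-time performance is plausible.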

Description

Technical Field

[0001] The invention relates to the technical field of image recognition, in particular to a deep learning object detection method based on channel and space fusion perception.

Background Technique

[0002] At present, target detection frameworks based on deep learning fall mainly into two categories: two-stage detectors and single-stage detectors. Two-stage target detection is named for its two-stage processing of pictures and is also known as the region-based approach, which abstracts detection into two processes. The first proposes, from the picture, several areas that may contain objects, i.e. local crops of the picture, called candidate areas; the second generates a feature vector for each area through a deep convolutional neural network and, after encoding, uses it to predict the category of each candidate area, thereby obtaining the category of the objects in each area. The two-stage detector algo...
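To make the two-stage flow in the background concrete, the following sketch shows the control flow only; `propose_regions`, `backbone`, and `classify` are assumed callables standing in for a region-proposal step, a CNN feature extractor, and a per-region classifier head, none of which are specified by the patent.

```python
def crop(image, region):
    """Extract a candidate region (x0, y0, x1, y1) from an image array."""
    x0, y0, x1, y1 = region
    return image[y0:y1, x0:x1]

def two_stage_detect(image, propose_regions, backbone, classify):
    """Hypothetical two-stage pipeline: stage 1 proposes candidate
    areas, stage 2 classifies each area from its CNN feature vector."""
    detections = []
    for region in propose_regions(image):          # stage 1: candidate areas (local crops)
        features = backbone(crop(image, region))   # encode the crop as a feature vector
        label, score = classify(features)          # stage 2: predict the region's category
        detections.append((region, label, score))
    return detections
```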


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045
Inventors: 吴林煌, 杨绣郡, 范振嘉, 陈志峰
Owner: FUZHOU UNIV