
Multi-model integrated target detection method with rich space information

A technology combining spatial information and target detection, applied in the image field. It addresses problems such as the relatively coarse features produced by YOLO and the accuracy that single-stage frameworks sacrifice for real-time speed, with the effects of verified effectiveness and improved detector performance.

Active Publication Date: 2019-10-18
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

However, YOLO and SSD also demonstrated that the real-time performance of single-stage frameworks comes at the expense of accuracy.
Moreover, because of repeated downsampling, YOLO produces relatively coarse features, and both YOLO and SSD are insensitive to small objects.



Examples


Embodiment

[0059] Figure 1 is a flowchart of the multi-model integrated target detection method with rich spatial information according to the present invention.

[0060] In this embodiment, as shown in Figure 1, the multi-model integrated target detection method with rich spatial information of the present invention comprises the following steps:

[0061] S1. Build a network model

[0062] S1.1. Build feature extraction module

[0063] For the feature extraction module, we chose three configurations: we built the ImageNet-pretrained VGG16 model framework and the MobileNet-V1 model framework on PyTorch, and integrated the VGG16 and MobileNet-V1 frameworks as the feature extraction module;

[0064] S1.2. We built a context module by combining dilated convolution and Incepation-Resnet structure, as shown in Figure 2. The specific operations are as follows:

[0065] Based on the dilated convolution and Inception-ResNet structure, construct three context blocks with the same structure, and then cascade th...
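A minimal PyTorch sketch of what one such context block and the three-block cascade could look like (the dilation rates, channel counts, and exact branch layout are assumptions; the patent's Figure 2 defines the real structure):

```python
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    """Illustrative context block: parallel dilated-conv branches
    (Inception-style) merged through a residual connection (ResNet-style)."""
    def __init__(self, channels):
        super().__init__()
        # Two parallel 3x3 branches with different dilation rates; padding is
        # chosen so spatial size is preserved (padding = dilation for k=3).
        self.branch1 = nn.Conv2d(channels, channels // 2, 3, padding=1, dilation=1)
        self.branch2 = nn.Conv2d(channels, channels // 2, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(channels, channels, 1)  # 1x1 conv merges branches
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return self.relu(x + self.fuse(y))  # residual merge

# Three structurally identical context blocks, cascaded as in step S1.2
context_module = nn.Sequential(*[ContextBlock(256) for _ in range(3)])
```

Because every branch preserves spatial resolution, the cascade maps a feature tensor to another of the same shape, so it can be dropped between the backbone and the detection head without changing the surrounding architecture.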



Abstract

The invention discloses a multi-model integrated target detection method with rich space information. The method combines a single-stage framework, context modeling, and multi-scale representation, applying them in an integrated way to a network model for target detection. Specifically, a new context modeling approach is adopted: dilated convolution, commonly used in the semantic segmentation field, is applied to target detection, and a context detection module is constructed by exploiting the fact that dilated convolution can expand the receptive field without increasing the amount of computation. Meanwhile, fine-grained details are captured through multi-scale representation, enhancing the representation capability of the model; and the idea of ensemble learning is incorporated to further improve the performance of the detector.
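The receptive-field claim can be illustrated with a small back-of-the-envelope calculation (standard receptive-field arithmetic for stacked stride-1 convolutions, not code from the patent): dilation enlarges the receptive field while the kernel size, and hence the parameter and per-output FLOP count, stays the same.

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs, applied in order.
    Each layer grows the receptive field by (k - 1) * dilation.
    """
    r = 1
    for k, d in layers:
        r += (k - 1) * d
    return r

plain = receptive_field([(3, 1)] * 3)                 # three 3x3 convs, no dilation
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])   # dilation rates 1, 2, 4
# plain -> 7, dilated -> 15: same number of 3x3 kernels (same weights and
# compute per output pixel), more than double the receptive field
```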

Description

Technical field

[0001] The invention belongs to the field of image technology, and more specifically relates to a multi-model integrated target detection method with rich spatial information.

Background technique

[0002] In recent years, deep learning has been widely used to solve problems in computer vision, speech recognition, and natural language processing. As an important branch of computer vision, object detection has seen a number of its problems gradually solved by deep learning. At the same time, ensemble learning has become a popular learning method and is widely used to improve the performance of a single learner. Driven in particular by competitions such as ImageNet and Kaggle, the combination of ensemble deep learning and computer vision has become a research hotspot and challenge. In fact, these high-profile competitions also demonstrate the effectiveness and feasibility of combining ensemble learning with computer vision.

[0003] Ensemble lea...

Claims


Application Information

IPC(8): G06K9/34, G06K9/62, G06N3/04
CPC: G06V10/267, G06V2201/07, G06N3/045, G06F18/214, G06F18/253
Inventor: 徐杰, 汪伟, 王菡苑, 方伟政
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA