
An Object Detection Method Based on Fully Convolutional Split Network

A target detection and segmentation network technology, applied in the field of target detection based on a fully convolutional split (segmentation) network. It addresses the high complexity of existing methods and achieves the effects of improving feature extraction capability, improving detection accuracy, and balancing speed and accuracy.

Active Publication Date: 2020-10-09
成都快眼科技有限公司 (Chengdu Kuaiyan Technology Co., Ltd.)

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is to overcome the high complexity of existing deep-learning-based target detection algorithms and to provide a target detection method based on a fully convolutional split network that improves the real-time performance of target detection while maintaining detection and recognition accuracy.



Examples


Specific Embodiment 1

[0024] A target detection method based on a fully convolutional split network; the specific method includes:

[0025] Preprocess the pictures: randomly select and crop pictures from the collected data set. The specific cropping method is: take a preset frame whose size is set relative to the picture's length and width, and crop the picture at 5 positions at the size of the preset frame, namely the four corners and the center of the picture; map the target boxes corresponding to the targets onto the processed pictures to obtain the training pictures;
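As a concrete illustration of the cropping in paragraph [0025], here is a minimal Python/NumPy sketch. It assumes images are H×W×3 arrays, target boxes are (x1, y1, x2, y2) pixel coordinates, and the preset frame is 1/3 of the picture's height and width as in Specific Embodiment 2; the function name and the box-clipping policy are illustrative and not taken from the patent.

```python
import numpy as np

def five_position_crops(image, boxes, frame_scale=1.0 / 3.0):
    """Crop a preset-size frame at the four corners and the center of the picture,
    and remap each target box into the coordinate frame of every crop."""
    h, w = image.shape[:2]
    fh, fw = int(h * frame_scale), int(w * frame_scale)   # preset frame size

    # Top-left corners of the five crop windows: four corners and the center.
    origins = [
        (0, 0),                          # top-left corner
        (0, w - fw),                     # top-right corner
        (h - fh, 0),                     # bottom-left corner
        (h - fh, w - fw),                # bottom-right corner
        ((h - fh) // 2, (w - fw) // 2),  # center
    ]

    crops = []
    for oy, ox in origins:
        patch = image[oy:oy + fh, ox:ox + fw]
        remapped = []
        for x1, y1, x2, y2 in boxes:
            # Shift the box into crop coordinates and clip it to the crop window.
            nx1, ny1 = max(x1 - ox, 0), max(y1 - oy, 0)
            nx2, ny2 = min(x2 - ox, fw), min(y2 - oy, fh)
            if nx2 > nx1 and ny2 > ny1:  # keep boxes that still fall inside the crop
                remapped.append((nx1, ny1, nx2, ny2))
        crops.append((patch, remapped))
    return crops
```

In training, one of the five windows could be chosen at random per sample, consistent with the "randomly select and crop" wording; the patent text as shown does not fix this detail.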

[0026] The structure of the feature extraction network used in the feature extraction stage is: the feature extraction network consists of 9 convolutional layers; among them, n of the convolutional layers are each followed by a pooling layer for downsampling; the two filter sizes used are 1x1 (unit: pixel) and 3x3 (unit: pixel)...
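Paragraph [0026] is truncated, but its stated ingredients (9 convolutional layers, pooling after some of them for downsampling, only 1x1 and 3x3 filters, and max pooling per Specific Embodiment 3) allow a rough sketch. The PyTorch code below is only an illustration under those assumptions; the channel widths and the exact placement of the pooling layers are not specified by the patent and are chosen here arbitrarily.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel):
    # kernel is 3 (padded) or 1; both keep the spatial size unchanged.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=kernel, padding=kernel // 2),
        nn.ReLU(inplace=True),
    )

class FeatureExtractor(nn.Module):
    """Nine convolutional layers using only 3x3 and 1x1 filters, with max
    pooling after some layers for downsampling (widths are illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 16, 3),    nn.MaxPool2d(2),  # conv 1, downsample /2
            conv_block(16, 32, 3),   nn.MaxPool2d(2),  # conv 2, downsample /4
            conv_block(32, 64, 3),   nn.MaxPool2d(2),  # conv 3, downsample /8
            conv_block(64, 64, 1),                     # conv 4, 1x1 channel mixing
            conv_block(64, 128, 3),  nn.MaxPool2d(2),  # conv 5, downsample /16
            conv_block(128, 128, 1),                   # conv 6, 1x1 channel mixing
            conv_block(128, 256, 3),                   # conv 7
            conv_block(256, 256, 1),                   # conv 8, 1x1 channel mixing
            conv_block(256, 256, 3),                   # conv 9
        )

    def forward(self, x):
        return self.features(x)

# Quick shape check on a dummy 320x320 RGB input.
if __name__ == "__main__":
    out = FeatureExtractor()(torch.randn(1, 3, 320, 320))
    print(out.shape)  # torch.Size([1, 256, 20, 20])
```

With four pooling layers (n = 4 in the paragraph's wording), a 320x320 input is reduced to a 20x20 feature map. A backbone of this size is consistent with the abstract's claim of much lower computational cost than VGG, though the exact configuration shown here is not the patented one.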

Specific Embodiment 2

[0034] On the basis of Specific Embodiment 1, the preset frame of set size is a preset frame whose size is 1/3 of the picture's length and width.

Specific Embodiment 3

[0036] On the basis of Specific Embodiment 1 or 2, in the feature extraction network, the pooling layers are selected to be max pooling.



Abstract

The present invention provides a target detection method based on a fully convolutional split network. The pictures are preprocessed: pictures in the collected data set are randomly selected and cropped. The specific cropping method is: take a preset frame whose size is set relative to the picture's length and width, crop the picture at 5 positions at the size of the preset frame, namely the four corners and the center of the picture, and map the target boxes corresponding to the targets onto the processed pictures to obtain the training pictures. Compared with existing technology, the method consumes about 1/100 of the computing resources of current network models (such as the VGG network) and speeds up computation by a factor of 300, which greatly improves computing efficiency. In terms of detection performance, the method can effectively detect both large and small targets appearing on the road, achieving a balance between speed and accuracy.

Description

Technical field

[0001] The invention relates to the field of computer vision target detection, and in particular to a target detection method based on a fully convolutional split network.

Background technique

[0002] Vision is the main way for humans to obtain information; about 70% of the information humans acquire is visual. With the development of society, intelligent sensing devices are distributed ever more widely, and a great deal of information can be obtained from them. Humans can accurately locate and detect objects in complex environments, which is a basic function of human vision. Object detection in computer vision aims to use computers to detect and locate objects in natural pictures; it is the basis of object tracking and much subsequent work, and has extremely important research value. In academia and industry, research on target detection algorithms is very important, but in the field of computer vision, the existi...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/46; G06N3/04
CPC: G06V10/44; G06N3/045
Inventor: 李宏亮 (Li Hongliang)
Owner: 成都快眼科技有限公司 (Chengdu Kuaiyan Technology Co., Ltd.)