
Improved target detection method based on residual network

A target detection technology based on a residual network, applied in the field of image recognition. It addresses the weak feature extraction ability of lightweight detectors, which often cannot obtain good detection results, while keeping the amount of computation small and ensuring real-time performance.

Active Publication Date: 2019-09-06
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

The feature extraction layer of YOLOv3-tiny consists of seven convolutional layers and six pooling layers; the parameters of each layer are shown in Table 1. Although YOLOv3-tiny is a target detection network that can run on low-performance hardware, its feature extraction ability is weak, and it often cannot obtain good detection results.
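For reference, the sketch below reproduces a backbone of this shape, seven 3x3 convolutional layers interleaved with six max-pooling layers, in PyTorch. The channel widths follow the widely used open-source YOLOv3-tiny configuration and are assumptions here, since Table 1 is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch):
    # 3x3 convolution + batch norm + leaky ReLU, the basic Darknet-style unit
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )

class TinyBackbone(nn.Module):
    """Seven convolutional layers interleaved with six max-pooling layers."""
    def __init__(self):
        super().__init__()
        channels = [3, 16, 32, 64, 128, 256, 512, 1024]  # assumed channel widths
        layers = []
        for i in range(7):
            layers.append(conv_bn_leaky(channels[i], channels[i + 1]))
            if i < 5:
                layers.append(nn.MaxPool2d(2, 2))           # stride-2 pooling halves the map
            elif i == 5:
                layers.append(nn.ZeroPad2d((0, 1, 0, 1)))   # pad so the last pool keeps 13x13
                layers.append(nn.MaxPool2d(2, 1))           # stride-1 pooling
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x)

# A 416x416 input is reduced by the five stride-2 poolings to a 13x13 feature map.
print(TinyBackbone()(torch.zeros(1, 3, 416, 416)).shape)  # torch.Size([1, 1024, 13, 13])
```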



Detailed Description of Embodiments

[0049] Specific embodiments of the present invention are described below with reference to the accompanying drawings.

[0050] The principle of the YOLOv3-tiny algorithm is to extract features through successive convolutions and other operations and finally divide the picture into a 13*13 grid; for each grid cell, three anchor boxes are used to predict the detection boxes of objects whose center points fall within that cell.
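As an illustration of how such a prediction is read out, the sketch below decodes a raw 13*13 output tensor into boxes in the standard YOLOv3 way (sigmoid cell offsets for the center, exponentially scaled anchors for the size). It is a generic decoding routine under assumed tensor shapes and anchor values, not code taken from the patent.

```python
import torch

def decode_predictions(raw, anchors, num_classes, stride=32):
    """Decode one YOLO output scale.

    raw:     (batch, 3 * (5 + num_classes), 13, 13) network output
    anchors: (3, 2) anchor widths/heights in pixels (assumed values)
    """
    b, _, gy, gx = raw.shape
    raw = raw.view(b, 3, 5 + num_classes, gy, gx).permute(0, 1, 3, 4, 2)

    # integer offsets of each grid cell
    ys, xs = torch.meshgrid(torch.arange(gy), torch.arange(gx), indexing="ij")

    # box center: sigmoid offset inside the cell the object's center falls in
    cx = (torch.sigmoid(raw[..., 0]) + xs) * stride
    cy = (torch.sigmoid(raw[..., 1]) + ys) * stride
    # box size: anchor dimensions scaled by the exponentiated prediction
    pw = anchors[:, 0].view(1, 3, 1, 1) * torch.exp(raw[..., 2])
    ph = anchors[:, 1].view(1, 3, 1, 1) * torch.exp(raw[..., 3])
    conf = torch.sigmoid(raw[..., 4])          # objectness confidence
    cls_scores = torch.sigmoid(raw[..., 5:])   # per-class scores
    return cx, cy, pw, ph, conf, cls_scores

# Example with one class: the output has 3 * (1 + 5) = 18 channels per cell.
anchors = torch.tensor([[81., 82.], [135., 169.], [344., 319.]])
boxes = decode_predictions(torch.randn(1, 18, 13, 13), anchors, num_classes=1)
```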

[0051] The overall flow chart of the present invention is shown in Figure 1.

[0052] In the first step, the number of target classes to be recognized, m, is determined; the number of filters in the last layer is then n = 3*(m+5), where "3" represents the three anchor boxes and "5" represents the five predicted quantities of each detection box: the x and y coordinates of the center point, the width, the height, and the confidence.
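A one-line illustration of this filter count; the class counts below are hypothetical values chosen only for the example.

```python
def last_layer_filters(m: int) -> int:
    # Each of the 3 anchor boxes predicts x, y, width, height and confidence
    # (5 values) plus one score per class, so the last layer needs 3*(m+5) filters.
    return 3 * (m + 5)

# e.g. a single-class detector needs 3 * (1 + 5) = 18 filters,
# a hypothetical 4-class detector needs 3 * (4 + 5) = 27.
assert last_layer_filters(1) == 18
assert last_layer_filters(4) == 27
```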

[0053] In the second step, pictures containing the targets are collected, the position of the target in each picture is marked, and a data set is formed ...
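The excerpt does not specify the annotation format; a common convention for YOLO-style training (assumed here, not stated in the patent) is one text label file per image, with each marked object on its own line as a class index followed by the box center, width and height normalised to the image size.

```python
def parse_yolo_label(line: str):
    # "<class_id> <x_center> <y_center> <width> <height>", box values in [0, 1]
    cls_id, x, y, w, h = line.split()
    return int(cls_id), float(x), float(y), float(w), float(h)

# Hypothetical label line: class 0, a box roughly centred in the image.
print(parse_yolo_label("0 0.512 0.431 0.210 0.385"))
```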



Abstract

The invention discloses an improved target detection method based on a residual network. Building on the YOLOv3-tiny network, features are extracted through successive convolution operations and the picture is finally divided into a 13*13 grid; for each grid cell, three anchor boxes predict the detection boxes of targets whose center points fall within that cell. The method specifically comprises the following steps: determining the number of target classes to be identified and forming a data set; establishing a target detection neural network; and obtaining trained weight files. Because the lightweight target detection network YOLOv3-tiny requires a small amount of computation, the target detection task can be carried out on embedded hardware while real-time performance is ensured. In the method, the original feature extraction network is replaced by the residual network resnet18; for feature extraction networks with the same number of layers, adding residual structures improves the feature extraction capability, so the target detection precision can be improved without reducing the detection speed.
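The abstract's claim rests on the residual (shortcut) structure: a block computes F(x) + x, so added depth can at worst approximate an identity mapping instead of degrading the features. Below is a minimal sketch of the standard ResNet-18 basic block for illustration; the patent's exact layer configuration is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Standard ResNet basic block: output = ReLU(F(x) + shortcut(x))."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the shortcut matches shape when stride/channels change
        self.shortcut = (
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                          nn.BatchNorm2d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity()
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))   # residual connection

# Example: a downsampling block halves the spatial size and doubles the channels.
y = BasicBlock(64, 128, stride=2)(torch.randn(1, 64, 52, 52))
print(y.shape)  # torch.Size([1, 128, 26, 26])
```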

Description

Technical Field

[0001] The invention belongs to the technical field of image recognition, and in particular relates to an optimization method for the target detection neural network YOLOv3-tiny algorithm, which is particularly suitable for performing target detection tasks on hardware with weak computing capability, such as embedded platforms.

Background Technique

[0002] In recent years, with the development of artificial intelligence and deep learning technology, the use of convolutional neural networks for image understanding tasks has gradually replaced methods that build classifiers from manually extracted features. For a convolutional neural network model, as the number of network layers increases, the network's understanding of images deepens and the accuracy of target detection and recognition rises, but the amount of computation also increases. Currently, target detection algorithms are generally computed on servers with GPU accelera...

Claims


Application Information

IPC(8): G06N 3/08; G06N 3/04
CPC: G06N 3/082; G06N 3/045
Inventors: 郭烈, 何丹妮, 姚宝珍, 秦增科, 赵一兵, 李琳辉, 岳明
Owner: DALIAN UNIV OF TECH