Target detection method based on global convolution and local deep convolution fusion

A target detection technology based on local depth information, applied in the field of computer vision, which can solve problems such as poor detection performance.

Active Publication Date: 2020-07-17
WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0007] The present invention proposes a target detection method based on global convolution and local deep convolution fusion, which is used to solve, or at least partially solve, the technical problem of poor detection performance in prior-art methods.



Examples


Embodiment 1

[0084] This embodiment provides a target detection method based on global convolution and local deep convolution fusion. Referring to figure 1, the method includes:

[0085] S1: Build a target detection network based on global convolution and local deep convolution fusion. The target detection network includes a backbone network, a global network, and a depth-aware convolutional region proposal network. The backbone network is used to extract features from input images. The global network is used to extract global features from the images processed by the backbone network, and the depth-aware convolutional region proposal network is used to extract local features from the images processed by the backbone network.
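The patent page does not disclose the exact layer configuration, so the following is only a minimal sketch of how the three components named in S1 could be wired together, assuming a PyTorch-style implementation; the module sizes, the plain convolution standing in for the local branch, and the additive fusion are illustrative assumptions, not the claimed design.

```python
import torch
import torch.nn as nn

class GlobalLocalDetector(nn.Module):
    """Illustrative wiring of the components named in S1: a shared backbone,
    a global-convolution branch, and a local (depth-aware) region proposal
    branch whose outputs are fused before the detection heads."""

    def __init__(self, in_ch=3, feat_ch=64, num_anchors=9, num_classes=3):
        super().__init__()
        # Backbone: extracts a shared feature map from the input image.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Global branch: one convolution shared over the whole feature map.
        self.global_branch = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        # Local branch: placeholder for the depth-aware convolution
        # (a row-binned variant is sketched after the Abstract below).
        self.local_branch = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        # Region-proposal style heads on the fused features.
        self.cls_head = nn.Conv2d(feat_ch, num_anchors * num_classes, 1)
        self.reg_head = nn.Conv2d(feat_ch, num_anchors * 4, 1)

    def forward(self, images):
        feats = self.backbone(images)                                  # shared features
        fused = self.global_branch(feats) + self.local_branch(feats)   # additive fusion (assumption)
        return self.cls_head(fused), self.reg_head(fused)

# Usage: a 1x3x256x256 image yields per-location class and box outputs.
cls_out, reg_out = GlobalLocalDetector()(torch.randn(1, 3, 256, 256))
print(cls_out.shape, reg_out.shape)
```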

[0086] Specifically, 3D visual analysis of targets plays an important role in the visual perception system of autonomous driving vehicles. Object detection in 3D space uses lidar and image data to achieve highly accurate localization and recognition of obje...



Abstract

The invention discloses a target detection method based on the fusion of global convolution and local deep convolution. It modifies the original three-dimensional region proposal network and proposes an ASD network structure based on asymmetric segmentation depth perception for target detection, so that features at every level and depth of the feature map can be extracted more fully. In addition, horizontal-and-vertical convolution fusion networks, a distillation network, and an angle optimization algorithm are introduced, further improving the detection performance.
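The ASD structure itself is not detailed on this page. As a rough illustration of the underlying idea of depth-aware convolution with asymmetric segmentation, the sketch below splits the feature map along its height into row bins of unequal size (image rows loosely correlate with scene depth in driving imagery) and learns a separate kernel per bin; the class name, bin fractions, and the independent padding of each slab are assumptions, not the patented ASD design.

```python
import torch
import torch.nn as nn

class AsymmetricDepthAwareConv2d(nn.Module):
    """Depth-aware convolution sketch: the feature map is cut into
    horizontal slabs of unequal height and each slab is filtered
    with its own convolution kernel."""

    def __init__(self, in_ch, out_ch, bin_fractions=(0.5, 0.25, 0.15, 0.1)):
        super().__init__()
        self.bin_fractions = bin_fractions
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=1) for _ in bin_fractions
        )

    def forward(self, x):
        h = x.shape[2]
        # Turn fractional bin heights into row counts; the last bin takes the remainder.
        sizes = [int(round(f * h)) for f in self.bin_fractions[:-1]]
        sizes.append(h - sum(sizes))
        outputs, start = [], 0
        for conv, size in zip(self.convs, sizes):
            # Each slab is padded and convolved independently
            # (a simplification at bin boundaries).
            outputs.append(conv(x[:, :, start:start + size, :]))
            start += size
        return torch.cat(outputs, dim=2)

# Usage: spatial size is preserved, but kernels differ per depth bin.
out = AsymmetricDepthAwareConv2d(64, 64)(torch.randn(1, 64, 60, 80))
print(out.shape)  # torch.Size([1, 64, 60, 80])
```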

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to a target detection method based on global convolution and local deep convolution fusion.

Background technique

[0002] Object detection is one of the classic problems in computer vision. Its task is to mark the position of each object in an image with a box and give the object's category. From the traditional framework of hand-crafted features plus shallow classifiers to end-to-end detection frameworks based on deep learning, object detection has matured step by step. Object detection is not difficult for the human eye, but a computer faces an RGB pixel matrix, and it is difficult to obtain abstract concepts such as dogs and cats directly from the image and to locate their positions; object pose, lighting, and complex backgrounds mixed together make object detection even harder. The detection algorithm usually includes three par...
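As a concrete illustration of the task definition in [0002] (a box marking the object's position plus its category), a minimal record for one detection might look as follows; the field names and values are hypothetical, not part of the invention.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: an axis-aligned box in pixel coordinates,
    a class label, and a confidence score."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str
    score: float

# Example: a dog detected in a 640x480 image.
print(Detection(x_min=40.0, y_min=120.0, x_max=300.0, y_max=460.0, label="dog", score=0.92))
```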


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/62G06N3/04G06N3/08
CPCG06N3/08G06V2201/07G06N3/045G06F18/2415
Inventor 高戈杜能余星源李明常军陈怡
Owner WUHAN UNIV