
Target detection method and system based on lightweight deformable convolution

A target detection and lightweight-network technology, applied in the field of target detection, which addresses the problems that changes in the apparent characteristics of object instances make visual recognition difficult and that existing algorithms require heavy computation, with the effect of reducing the amount of data computation, improving recognition accuracy and reducing the burden on hardware.

Inactive Publication Date: 2020-02-04
SHANDONG UNIV
Cites: 2 · Cited by: 22

AI Technical Summary

Benefits of technology

Through mutual learning between the source network and a lightweight target network, this technology improves recognition accuracy while reducing the amount of data computation and the burden on hardware compared with existing techniques. It also makes new models easier and faster to train.

Problems solved by technology

Detecting specific targets such as people or cars in images captured under varying environmental conditions (for example, changes in lighting) alters the apparent characteristics of object instances and makes accurate identification difficult for visual recognition algorithms. Many methods have been developed over time, including convolutional neural networks trained for such environments, fuzzy logic systems and decision trees, but they all face similar difficulties when trying to identify unknown objects at low resolution.



Examples


Embodiment 1

[0039] In one or more embodiments, a new target detection method based on a deep learning framework for feature learning is disclosed, including:

[0040] (1) Deformable convolution and deformable ROI pooling are both used to replace single or multiple 3×3 convolutional layers and pooling layers in existing feature extraction networks, such as the traditional networks VGG, GoogLeNet, ResNet, DenseNet and ResNeXt or the lightweight feature networks MobileNet and ShuffleNet, to construct a depthwise separable feature extraction network as the source network.
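
The patent does not give an implementation of this replacement, but as a rough sketch of the idea in step (1), the block below swaps a plain 3×3 convolution for a deformable 3×3 convolution. PyTorch and torchvision's DeformConv2d are assumptions here, not something the text specifies; the offset branch is an ordinary convolution that predicts two offsets (dx, dy) for each of the nine kernel sampling positions.

```python
# Illustrative sketch only, not the patent's exact architecture.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class Deformable3x3(nn.Module):
    """Drop-in replacement for a 3x3 convolution using deformable sampling."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # 3x3 kernel -> 9 sampling points, 2 offsets (dx, dy) each = 18 channels
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3,
                                     stride=stride, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3,
                                        stride=stride, padding=1)

    def forward(self, x):
        offsets = self.offset_conv(x)        # learned per-position sampling offsets
        return self.deform_conv(x, offsets)  # deformable 3x3 convolution

if __name__ == "__main__":
    layer = Deformable3x3(32, 64)
    y = layer(torch.randn(1, 32, 56, 56))
    print(y.shape)  # torch.Size([1, 64, 56, 56])
```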

[0041] In this embodiment, the feature extraction network used is MobileNetV1, which serves as the basic network of both the source network and the target network. Its specific network structure is as follows:

[0042] The core of MobileNetV1 is to split a standard convolution into two parts: a depthwise convolution followed by a pointwise convolution.

[0043] To explain MobileNetV1, assume an input of size N×H×W×C, where N is the batch size, H and W are the spatial height and width, and C is the number of channels.
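
For illustration only, the following sketch shows the Depthwise + Pointwise split described in [0042]-[0043] as a MobileNetV1-style block; PyTorch is an assumed framework, and the BatchNorm/ReLU placement follows the original MobileNet paper rather than anything stated in this text.

```python
# Sketch of a depthwise separable block: per-channel 3x3 depthwise convolution
# followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                      padding=1, groups=in_ch, bias=False),  # one filter per channel
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # channel mixing
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (N, C, H, W) in PyTorch's channel-first layout
        return self.pointwise(self.depthwise(x))

# A 3x3 depthwise + 1x1 pointwise pair costs roughly 1/9 + 1/out_ch of the
# multiply-adds of a standard 3x3 convolution, which is the source of the
# reduction in computation.
```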



Abstract

The invention discloses a target detection method and system based on lightweight deformable convolution. The method comprises the following steps: a depthwise separable feature extraction network is created as the source network; a target network is created using the same algorithm structure as the source network but without the deformable convolution layer replacement; a distance-loss approximation is performed on the outputs of the source network and the target network by simulating the features extracted by multiple network layers of the source network, so that at the last feature extraction layer before the classification layer the source network approaches the output of the target network layer; a new feature extraction network model is obtained after joint training through a mutual learning framework of the source network and the target network; and the new feature extraction network model is used as a feature extractor to extract features from image data and complete target detection. Through mutual learning between the target network and the source network, recognition accuracy is improved, the amount of data computation is reduced, and the burden on hardware equipment is reduced.
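
As a minimal sketch of the feature-distance term described above, assuming an L2 distance and hypothetical names for the networks, detection loss and optimizer (the abstract fixes none of these), the source network's intermediate features are pushed toward the target network's during joint training:

```python
# Sketch only: L2 distance between corresponding feature maps of two networks.
import torch
import torch.nn.functional as F

def mutual_feature_loss(source_feats, target_feats, weight=1.0):
    """Sum of mean-squared distances between corresponding feature maps."""
    loss = sum(F.mse_loss(s, t) for s, t in zip(source_feats, target_feats))
    return weight * loss

# One hypothetical joint training step (source_net, target_net, detection_loss,
# images, targets and optimizer are placeholder names):
# feats_s, preds_s = source_net(images)
# feats_t, preds_t = target_net(images)
# loss = (detection_loss(preds_s, targets) + detection_loss(preds_t, targets)
#         + mutual_feature_loss(feats_s, [f.detach() for f in feats_t]))
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```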

