Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)

A salient target detection method based on a fully convolutional network, applicable to biological neural network models, computer components, character and pattern recognition, and similar fields. It addresses the problems of ignored image information, inaccurate detection results, and poor performance in complex scenes, with the effect of improving accuracy and reducing training complexity.

Active Publication Date: 2017-02-22
重庆商勤科技有限公司

AI Technical Summary

Problems solved by technology

These methods have two main shortcomings: first, they rely on hand-crafted features, so much of the information contained in the image itself is ignored; second, saliency priors are combined only through simple heuristics, with no clearly optimal combination method, so detection results in complex scenes are not accurate enough.
In the document "Deep Networks for Saliency Detection via Local Estimation and Global Search", a deep convolutional network is used to extract features for saliency detection. The local estimation takes a 51×51 image patch centered on each superpixel as input for patch-level classification, which requires a large amount of training data; the global search relies on hand-crafted features, so the resulting global features cannot fully represent the deep information in the data, and performance suffers in complex scenes.

Method used




Detailed Description of the Embodiments

[0021] The present invention will now be further described in conjunction with the embodiments and the accompanying drawings:

[0022] Step 1. Construct the FCN network structure

[0023] The FCN consists of thirteen convolutional layers, five pooling layers, and two deconvolutional layers. The model is fine-tuned from the VGG-16 model pre-trained on ImageNet: the fully connected layers of VGG-16 are removed, and two bilinear interpolation layers are added as deconvolutional layers. The first deconvolutional layer performs 4× interpolation and the second performs 8× interpolation, expanding the network output to the same size as the original image. The number of classes is set to two, so that each pixel undergoes binary classification.
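The patent gives no code, but the bilinear deconvolutional layers described above are conventionally initialized with a fixed bilinear interpolation kernel. The following is an illustrative sketch under that assumption (the function name `bilinear_kernel` is chosen here, not taken from the patent):

```python
import numpy as np

def bilinear_kernel(factor):
    """Bilinear interpolation kernel commonly used to initialize a
    transposed-convolution (deconvolution) layer that upsamples by `factor`."""
    size = 2 * factor - factor % 2                 # kernel size for this factor
    center = (size - 1) / 2.0 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

# Kernels matching the two deconvolutional layers described above:
k4 = bilinear_kernel(4)   # first layer: 4x interpolation, 8x8 kernel
k8 = bilinear_kernel(8)   # second layer: 8x interpolation, 16x16 kernel
```

Because the kernel is separable and symmetric, its entries sum to `factor**2`, so upsampling preserves the overall magnitude of the feature map.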

[0024] Step 2. Training network structure

[0025] The training samples are fed into the network, and each pixel in the image is classified according to the output of the logistic regression classifier, using the s...
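As an illustrative sketch of the per-pixel two-class classification described in this step (the function name, shapes, and loss choice are assumptions, not taken from the patent), a pixel-wise cross-entropy over the network's output could look like:

```python
import numpy as np

def pixelwise_log_loss(logits, labels):
    """Mean cross-entropy over all pixels.
    logits: (H, W, 2) raw two-class scores per pixel.
    labels: (H, W) ground-truth class in {0, 1} (non-salient / salient)."""
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    # pick the log-probability of the true class at every pixel
    return -log_probs[np.arange(h)[:, None], np.arange(w), labels].mean()

# With all-zero logits every pixel is a 50/50 guess, so the loss is log(2):
loss = pixelwise_log_loss(np.zeros((4, 4, 2)), np.zeros((4, 4), dtype=int))
```

Minimizing such a loss over the training set drives the classifier toward the binary saliency labels at every pixel.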



Abstract

The invention relates to a salient target detection method based on an FCN (fully convolutional network) and a CNN (convolutional neural network). The method first extracts deep semantic information with the FCN: no fixed input-image size is required, prediction is performed end-to-end, and training complexity is reduced. The coarse detection result of the FCN is then refined for accuracy by a CNN that extracts local features. The method extracts the semantic information in an image accurately and efficiently and improves salient target detection accuracy in complex scenes.

Description

Technical Field

[0001] The invention belongs to the technical field of salient target detection, and in particular relates to a salient target detection method based on global and local convolutional networks.

Background

[0002] Existing salient target detection methods are mainly local or global bottom-up, data-driven methods that compute saliency maps from color contrast, background priors, texture information, and the like. These methods have two main shortcomings: first, they rely on hand-crafted features, so much of the information contained in the image itself is ignored; second, saliency priors are combined only through simple heuristics, with no clearly optimal combination method, so detection results in complex scenes are not accurate enough.

[0003] Using a deep neural network to extract image features autonomously can effectively solve the above problems. In the docum...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/00; G06K9/46; G06K9/62; G06N3/04
CPC: G06N3/04; G06T2207/20081; G06T2207/10024; G06V10/56; G06V2201/07; G06F18/232
Inventor: 李映, 崔凡, 徐隆浩
Owner: 重庆商勤科技有限公司