Visual saliency detection method based on semantic enhanced convolutional neural network

A convolutional neural network detection technology, applied in the field of visual saliency detection, which solves the problem that existing methods cannot extract deep image features, and achieves the effects of faster training, reduced overfitting, and enhanced feature semantics.

Pending Publication Date: 2019-11-05
Applicant: UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to solve the problem that existing image saliency detection methods based on convolutional neural networks cannot extract the deep features of an image. To this end, a visual saliency detection method based on a semantically enhanced convolutional neural network is proposed. The method enhances the detail information of the image and adaptively weights the extracted features, thereby reducing the loss of the image's main feature information and the interference of noise during network propagation.

Embodiment Construction

[0029] Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be understood that the implementations shown and described in the drawings are exemplary only, intended to explain the principle and spirit of the present invention rather than to limit its scope.

[0030] Embodiments of the present invention provide a visual saliency detection method based on a semantically enhanced convolutional neural network. As shown in Figure 1, the method includes the following steps S1-S3:

[0031] S1. Construct a semantically enhanced convolutional neural network based on the VGG16 network.

[0032] In the embodiment of the present invention, the semantically enhanced convolutional neural network is obtained by improving the VGG16 network. As a classic convolutional neural network model, VGG16 performs well in image classification and semantic segmentation. The mode...
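The description is cut off at this point, but the Abstract below states the modifications concretely: convolution layers replace VGG16's fully connected layers, a BN layer and a dropout layer follow the added convolution, and an SENet unit is embedded after the final convolution layer. The following PyTorch sketch is an illustration of that recipe only, not the patent's exact network; the channel widths, kernel sizes, dropout rate, and the single-convolution prediction head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16  # torchvision >= 0.13 API


class SEBlock(nn.Module):
    """Squeeze-and-Excitation unit: adaptively re-weights feature channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global average pool per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # adaptive channel weighting


class SemanticEnhancedVGG16(nn.Module):
    """VGG16 backbone; FC layers replaced by conv + BN + dropout, then an SE unit."""

    def __init__(self):
        super().__init__()
        self.backbone = vgg16(weights=None).features  # conv1_1 ... pool5
        self.head = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=3, padding=1),  # replaces the FC layers
            nn.BatchNorm2d(512),   # BN after the conv layer: faster training
            nn.ReLU(inplace=True),
            nn.Dropout2d(p=0.5),   # dropout after the added conv: less overfitting
        )
        self.se = SEBlock(512)     # SE unit after the final convolution layer
        self.predict = nn.Conv2d(512, 1, kernel_size=1)  # 1-channel saliency score

    def forward(self, x):
        f = self.head(self.backbone(x))
        f = self.se(f)
        s = torch.sigmoid(self.predict(f))  # coarse saliency map (e.g. 7x7)
        # upsample back to the input resolution
        return nn.functional.interpolate(s, size=x.shape[2:], mode="bilinear",
                                         align_corners=False)


if __name__ == "__main__":
    net = SemanticEnhancedVGG16()
    out = net(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1, 224, 224])
```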

Abstract

The invention discloses a visual saliency detection method based on a semantically enhanced convolutional neural network. The method improves the classic VGG16 model: convolution layers are introduced to replace the fully connected layers, better preserving the detail information of the image; a BN layer is added after the convolution layer to accelerate the training of the network, and a dropout layer is added after the added convolution layer to alleviate overfitting. An SENet unit is embedded after the final convolution layer to further improve network performance and enhance feature semantics. The method solves the problem that traditional methods cannot extract the deep features of the image, enhances the detail information of the image, and, by adaptively weighting the extracted features, reduces both the loss of the image's main feature information and the interference of noise during network propagation. The invention yields visual saliency maps with more accurate target regions and less noise.
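To make the abstract's "self-adaptive weighting" concrete: an SE unit squeezes each feature channel to a single descriptor by global average pooling, derives a weight in (0, 1) per channel from those descriptors, and rescales every channel by its weight, so informative channels are amplified and noisy ones suppressed. A toy numeric sketch follows; the plain sigmoid gate is a stand-in for SENet's learned two-layer bottleneck MLP, and the tensors are hypothetical.

```python
import torch

# Toy feature map: 1 sample, 3 channels, 2x2 spatial grid.
feats = torch.tensor([[[[4.0, 4.0], [4.0, 4.0]],   # strong, consistent channel
                       [[0.1, 0.0], [0.0, 0.1]],   # weak / noisy channel
                       [[2.0, 2.0], [2.0, 2.0]]]]) # moderate channel

# Squeeze: global average pooling -> one descriptor per channel.
z = feats.mean(dim=(2, 3))          # tensor([[4.00, 0.05, 2.00]])

# Excitation (stand-in): a sigmoid gate on the centered descriptors;
# the real SE unit learns this mapping with a two-layer MLP.
w = torch.sigmoid(z - z.mean())     # higher weight for stronger channels

# Scale: re-weight every channel by its own gate value.
out = feats * w.view(1, -1, 1, 1)
print(w)   # approx. tensor([[0.88, 0.12, 0.50]]) -> noisy channel suppressed
```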

Description

Technical field

[0001] The invention belongs to the technical field of visual saliency, and in particular relates to the design of a visual saliency detection method based on a semantically enhanced convolutional neural network.

Background technique

[0002] In recent years, with the continuous improvement of hardware computing power, deep learning technology has been widely applied in computer vision and plays an important role in its many subfields. Compared with traditional algorithms, deep learning does not require cumbersome manual feature extraction or hand-built models; features are extracted automatically by the network. In the field of visual saliency, traditional methods typically extract effective, experimentally validated features, compute them at different scales, and then fuse the resulting feature maps to obtain the corresponding saliency map. After adopting deep learning technology, however, the neural network ...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/32, G06K9/62, G06N3/04
CPC: G06V10/25, G06N3/045, G06F18/2193
Inventors: 李建平, 顾小丰, 胡健, 王晓明, 张建国, 赖志龙, 娄泽宇
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA