
Multi-modal fusion saliency detection method based on spatial pyramid pool

A technology relating to spatial pyramids and detection methods, applied in neural learning methods, character and pattern recognition, biological neural network models, etc.

Active Publication Date: 2020-01-17
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0004] Most existing saliency detection methods rely on deep learning, and many models combine convolutional layers with pooling layers. However, the feature map obtained by simply stacking convolution and pooling operations is single-scale and not representative, so the extracted features carry little image information, which ultimately yields a poor saliency prediction map and low prediction accuracy.
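For concreteness, here is a minimal sketch of spatial pyramid pooling, the multi-scale operation the method's name refers to. The bin sizes, the use of adaptive average pooling, and PyTorch itself are illustrative assumptions, not details taken from the patent:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialPyramidPool(nn.Module):
        """Pool a feature map at several grid sizes and concatenate the
        results, so one layer carries multi-scale context instead of the
        single-scale output of a plain convolution + pooling pair."""

        def __init__(self, bin_sizes=(1, 2, 4)):  # bin sizes are assumed
            super().__init__()
            self.bin_sizes = bin_sizes

        def forward(self, x):
            n, c, h, w = x.shape
            pooled = [x]
            for size in self.bin_sizes:
                # Adaptive pooling yields a size-by-size grid for any h, w.
                p = F.adaptive_avg_pool2d(x, output_size=size)
                # Upsample back so all scales concatenate channel-wise.
                p = F.interpolate(p, size=(h, w), mode='bilinear',
                                  align_corners=False)
                pooled.append(p)
            return torch.cat(pooled, dim=1)  # (n, c * 4, h, w) here

    feats = torch.randn(2, 64, 28, 28)    # e.g. a backbone feature map
    out = SpatialPyramidPool()(feats)     # -> torch.Size([2, 256, 28, 28])

Because every scale sees the whole map at a different granularity, the concatenated output is richer than any single pooled map, which is exactly the deficiency paragraph [0004] attributes to plain convolution-plus-pooling models.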




Embodiment Construction

[0040] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0041] The multi-modal fusion saliency detection method based on spatial pyramid pooling proposed by the present invention comprises two phases: a training phase and a testing phase.

[0042] The specific steps of the training phase are as follows:

[0043] Step 1_1: Select the left viewpoint images, depth images and real human eye gaze maps of M original stereoscopic images to form a training set; denote the depth image and the real human eye gaze map corresponding to the i-th original stereoscopic image in the training set as {D_i(x,y)} and {Y_i(x,y)} respectively; then use the existing HHA encoding technique to process the depth image of each original stereoscopic image in the training set so that, like an RGB image, it has an R channel component, a G channel component and a B channel component of the same size; wherein, M is ...
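The full HHA encoding (horizontal disparity, height above ground, angle with gravity) needs camera geometry that this excerpt does not give, so the sketch below shows only the shape contract Step 1_1 relies on: a single-channel depth map becomes a three-channel, RGB-like image that the depth sub-input layer can consume. Filling all three channels with normalized depth is a stand-in assumption, not the patent's encoding:

    import numpy as np

    def depth_to_three_channels(depth: np.ndarray) -> np.ndarray:
        # Stand-in for HHA: replicate normalized depth into R, G and B
        # channel components of the same size, as Step 1_1 requires.
        d = depth.astype(np.float32)
        d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # scale to [0, 1]
        return np.stack([d, d, d], axis=-1)

    depth = np.random.rand(480, 640)           # placeholder depth map
    rgb_like = depth_to_three_channels(depth)  # shape (480, 640, 3)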



Abstract

The invention discloses a multi-modal fusion saliency detection method based on a spatial pyramid pool. In the training stage, a convolutional neural network is constructed comprising an input layer, a hidden layer and an output layer: the input layer comprises two sub-input layers; the hidden layer comprises 10 neural network blocks, 2 spatial pyramid pool multi-modal fusion layers, 4 convolutional layers, 3 deconvolution layers and 3 transition convolutional layers; and the output layer comprises 3 sub-output layers. The three-channel components of each left viewpoint image and each depth image in the training set are input into the convolutional neural network for training, yielding three saliency detection maps for each left viewpoint image; the trained model is obtained by computing a loss function value between the set of saliency detection maps and the set of real human eye gaze maps. In the testing stage, the trained convolutional neural network model is used for prediction. The method has the advantages of high detection accuracy and high detection efficiency.
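The exact arrangement of the 10 blocks, 2 fusion layers and transition layers is fixed in the claims, which this page does not show; the skeleton below is therefore only an illustrative reading of the abstract's topology (two sub-input streams, spatial-pyramid-pool multi-modal fusion, deconvolution upsampling, three sub-outputs), with every channel count, bin size and layer depth an assumption:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(cin, cout):
        # Stand-in for one "neural network block"; the real depths and
        # widths are specified in the claims, not reproduced here.
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    def spp_fuse(a, b, bins=(1, 2, 4)):
        # Spatial-pyramid-pool multi-modal fusion: pool both modality
        # maps at several grid sizes, upsample, concatenate channel-wise.
        h, w = a.shape[2:]
        feats = [a, b]
        for x in (a, b):
            for s in bins:
                p = F.adaptive_avg_pool2d(x, s)
                feats.append(F.interpolate(p, size=(h, w), mode='bilinear',
                                           align_corners=False))
        return torch.cat(feats, dim=1)

    class TwoStreamSaliencyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.rgb_enc = nn.Sequential(conv_block(3, 64), nn.MaxPool2d(2))
            self.depth_enc = nn.Sequential(conv_block(3, 64), nn.MaxPool2d(2))
            self.transition = conv_block(64 * 8, 64)   # 8 = 2 + 2 * 3 bins
            self.deconv = nn.ConvTranspose2d(64, 64, 2, stride=2)
            self.heads = nn.ModuleList(nn.Conv2d(64, 1, 1) for _ in range(3))

        def forward(self, rgb, depth):
            f = spp_fuse(self.rgb_enc(rgb), self.depth_enc(depth))
            f = self.deconv(self.transition(f))
            # Three sub-outputs, one saliency map each, as in the abstract.
            return [torch.sigmoid(h(f)) for h in self.heads]

    net = TwoStreamSaliencyNet()
    maps = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
    # -> three tensors of shape (1, 1, 224, 224)

Training such a skeleton against the real human eye gaze maps would use a pixel-wise loss such as binary cross-entropy; the abstract only states that a loss between the predicted and ground-truth sets is computed, without naming it.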

Description

Technical Field

[0001] The invention relates to visual saliency detection technology, and in particular to a multi-modal fusion saliency detection method based on a spatial pyramid pool.

Background

[0002] In recent years, saliency detection has become a very attractive research topic in computer vision. Visual saliency detection identifies the most conspicuous objects or regions in an image, and it has been used with great success as a preprocessing step in vision applications such as object retargeting, scene classification, visual tracking, image retrieval and semantic segmentation. Inspired by the human visual attention mechanism, many early visual saliency detection methods exploit low-level visual features (such as color, texture and contrast) and heuristic priors to simulate and approximate human saliency. These traditional techniques are considered useful because they maintain good image structure and reduce computation...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/46, G06K9/62, G06T7/11, G06N3/04, G06N3/08
CPC: G06T7/11, G06N3/08, G06T2207/10012, G06T2207/10024, G06T2207/20081, G06T2207/20084, G06T2207/20016, G06V10/462, G06N3/045, G06F18/214
Inventors: 周武杰, 刘文宇, 雷景生, 钱亚冠, 王海江, 何成
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY