A Multimodal Fusion Saliency Detection Method Based on Spatial Pyramid Pooling

A spatial pyramid pooling and saliency detection technology, applied to neural learning methods, character and pattern recognition, biological neural network models, etc. It addresses problems such as poor saliency prediction maps, limited image feature information, and unrepresentative features, achieving the effects of reduced computational complexity, preserved spatial characteristics, and improved accuracy.

Active Publication Date: 2021-07-13
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

Problems solved by technology

[0004] Most existing saliency detection methods rely on deep learning, and many models combine convolutional layers with pooling layers. However, feature maps obtained simply from convolution and pooling operations are homogeneous and unrepresentative, which yields little image feature information and ultimately leads to poor saliency prediction maps and low prediction accuracy.


Embodiment Construction

[0040] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0041] The multimodal fusion saliency detection method based on spatial pyramid pooling proposed by the present invention comprises two phases: a training phase and a testing phase.

[0042] The specific steps of the training phase are as follows:

[0043] Step 1_1: Select the left-viewpoint images, depth images, and real human-eye gaze maps of M original stereoscopic images to form a training set; denote the depth image and real human-eye gaze map corresponding to the i-th original stereoscopic image in the training set as {D_i(x,y)} and {Y_i(x,y)}, respectively; then process the depth image of each original stereoscopic image in the training set with the existing HHA encoding technique (horizontal disparity, height above ground, angle with gravity) so that it has an R channel component, a G channel component, and a B channel component; wherein, M is ...
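The HHA encoding referenced above is not detailed in this excerpt. As a rough illustration of the idea (mapping a single-channel depth image to three image-like channels), the following is a simplified sketch, not the exact encoding used by the patent; the `hha_encode` helper, the row-index height proxy, and the gradient-based angle proxy are all assumptions for illustration:

```python
import numpy as np

def hha_encode(depth, eps=1e-6):
    """Simplified HHA-style encoding: map a single-channel depth image
    to three channels (disparity, height proxy, angle proxy).
    Illustrative sketch only, not the patent's exact procedure."""
    d = depth.astype(np.float64)
    # Channel 1: horizontal disparity, inversely proportional to depth.
    disparity = 1.0 / (d + eps)
    # Channel 2: height proxy -- here simply the flipped image row index,
    # standing in for true height above the ground plane.
    rows = np.arange(d.shape[0], dtype=np.float64)[:, None]
    height = np.broadcast_to(d.shape[0] - 1 - rows, d.shape)
    # Channel 3: angle proxy from depth gradients (surface orientation).
    gy, gx = np.gradient(d)
    angle = np.arctan2(np.hypot(gx, gy), 1.0)
    # Normalize each channel to [0, 255] like an RGB image.
    chans = []
    for c in (disparity, height, angle):
        c = c - c.min()
        rng = c.max() if c.max() > 0 else 1.0
        chans.append(255.0 * c / rng)
    return np.stack(chans, axis=-1).astype(np.uint8)

depth = np.random.default_rng(0).uniform(0.5, 5.0, size=(8, 8))
encoded = hha_encode(depth)
print(encoded.shape)  # (8, 8, 3)
```

The point of the encoding is that the resulting three-channel array can be fed to the same kind of convolutional input layer as an RGB image, which is how the training set's depth images gain R, G, and B channel components.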


Abstract

The invention discloses a multimodal fusion saliency detection method based on spatial pyramid pooling. In the training stage, a convolutional neural network is constructed, comprising an input layer, a hidden layer, and an output layer; the input layer comprises two sub-input layers; the hidden layer comprises 10 neural network blocks, 2 spatial pyramid pooling multimodal fusion layers, 4 convolutional layers, 3 deconvolutional layers, and 3 transitional convolutional layers; the output layer comprises 3 sub-output layers. The three-channel components of each left-viewpoint image and each depth image in the training set are input to the convolutional neural network for training, yielding three saliency detection maps corresponding to each left-viewpoint image; the loss function value between the set formed by the saliency detection maps and the set formed by the real human-eye gaze maps is computed to obtain the trained convolutional neural network model. In the testing phase, the trained convolutional neural network model is used for prediction. The advantages are high detection accuracy and high detection efficiency.
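The abstract's spatial pyramid pooling layers are not specified further in this excerpt. A minimal numpy sketch of the core SPP idea, pooling a feature map over several grid sizes and concatenating the results into a fixed-length descriptor, is shown below; the grid sizes `(1, 2, 4)` are illustrative, not taken from the patent:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (H, W, C) feature map over an n x n grid for each
    pyramid level and concatenate per-cell results, yielding a
    fixed-length descriptor regardless of H and W."""
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:
        # Split rows/cols into n roughly equal bins and max-pool each cell.
        row_edges = np.linspace(0, h, n + 1).astype(int)
        col_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[row_edges[i]:row_edges[i + 1],
                                   col_edges[j]:col_edges[j + 1]]
                pooled.append(cell.max(axis=(0, 1)))  # per-channel max
    return np.concatenate(pooled)  # length = C * sum(n*n for n in levels)

fm = np.random.default_rng(1).normal(size=(13, 17, 8))
desc = spatial_pyramid_pool(fm)
print(desc.shape)  # (168,) -- 8 channels * (1 + 4 + 16) cells
```

Because the descriptor length depends only on the channel count and the pyramid levels, feature maps of differing spatial sizes can be fused downstream, which is what makes SPP attractive as a multimodal fusion point.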

Description

Technical Field

[0001] The invention relates to a visual saliency detection technology, in particular to a multimodal fusion saliency detection method based on spatial pyramid pooling.

Background Technique

[0002] In recent years, saliency detection has become a very attractive research topic in computer vision. Visual saliency detection is a method of identifying the most salient objects or regions in an image, and it has been used as a preprocessing step in computer vision, with great success in applications such as object retargeting, scene classification, visual tracking, image retrieval, and semantic segmentation. Inspired by the human visual attention mechanism, many early visual saliency detection methods exploit low-level visual features (such as color, texture, and contrast) and heuristic priors to simulate and approximate human saliency. These traditional techniques are considered useful because they maintain good image structure and reduce computation...
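The low-level contrast heuristics mentioned above can be illustrated with a minimal center-surround contrast saliency sketch; this is a generic illustration of the idea, not any specific published algorithm, and the box-filter window size `k` is an arbitrary choice:

```python
import numpy as np

def contrast_saliency(gray, k=9):
    """Naive contrast-based saliency: each pixel's saliency is the
    absolute difference between its intensity and the mean of a k x k
    surround, computed with an integral image (box filter)."""
    g = gray.astype(np.float64)
    pad = k // 2
    padded = np.pad(g, pad, mode='edge')
    # Integral image for O(1) box sums per pixel.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = g.shape
    # Mean of the k x k window centered on each pixel.
    mean = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)
    sal = np.abs(g - mean)
    return sal / sal.max() if sal.max() > 0 else sal

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # bright square on a dark background
sal = contrast_saliency(img)
print(sal.shape)  # (32, 32); highest values along the square's border
```

Heuristics of this kind preserve image structure cheaply, which is the advantage the background section credits them with, but they capture none of the learned multi-scale features that the invention's network provides.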

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/46; G06K9/62; G06T7/11; G06N3/04; G06N3/08
CPC: G06T7/11; G06N3/08; G06T2207/10012; G06T2207/10024; G06T2207/20081; G06T2207/20084; G06T2207/20016; G06V10/462; G06N3/045; G06F18/214
Inventors: 周武杰, 刘文宇, 雷景生, 钱亚冠, 王海江, 何成
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY