Asymmetric multi-modal fusion saliency detection method based on attention mechanism

An attention-based, asymmetric technology, applied in neural learning methods, computer components, character and pattern recognition, etc., which can solve problems such as low accuracy, poor saliency prediction maps, and loss of image feature information.

Inactive Publication Date: 2020-08-21
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY


Problems solved by technology

Most existing saliency detection methods adopt deep learning, extracting image features with a combination of convolutional and pooling layers; however, the image features obtained by simply using the convolutio...




Embodiment Construction

[0065] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0066] The process of the embodiment of the present invention is shown in Figure 1 and includes two phases: a training phase and a testing phase.

[0067] The specific steps of the training phase are as follows:

[0068] Step ①_1: Select the RGB image, depth image, and corresponding real human-eye fixation map of n original stereo images to form the training set, n ∈ {N⁺ | n ≥ 200}. Denote the RGB image of the i-th (1 ≤ i ≤ n) original stereo image in the training set as …, the depth map corresponding to that original stereo image as …, and the real human-eye fixation map corresponding to that original stereo image and depth map as {G_i(x, y)}, where (x, y) denotes the coordinate position of a pixel, W denotes the width of the original stereo image, and H denotes its height; then...
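The training-set construction in Step ①_1 can be sketched as a simple pairing routine. This is a hypothetical illustration only: the directory layout, file extensions, and function name are assumptions, not taken from the patent; only the n ≥ 200 requirement comes from the text.

```python
import os

MIN_TRAINING_IMAGES = 200  # the patent requires n >= 200 original stereo images


def build_training_set(rgb_dir, depth_dir, fixation_dir):
    """Pair each RGB image with its depth map and real human-eye fixation
    map by shared file stem (an assumed naming scheme, for illustration)."""
    stems = sorted(os.path.splitext(f)[0] for f in os.listdir(rgb_dir))
    triples = []
    for stem in stems:
        rgb = os.path.join(rgb_dir, stem + ".jpg")
        depth = os.path.join(depth_dir, stem + ".png")
        gaze = os.path.join(fixation_dir, stem + ".png")
        # keep only stereo images for which all three modalities exist
        if all(os.path.exists(p) for p in (rgb, depth, gaze)):
            triples.append((rgb, depth, gaze))
    n = len(triples)
    if n < MIN_TRAINING_IMAGES:
        raise ValueError(f"need at least {MIN_TRAINING_IMAGES} stereo images, got {n}")
    return triples
```

Each returned triple corresponds to one (RGB, depth, {G_i(x, y)}) training example; actual image loading and resizing to W × H would follow in a data loader.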



Abstract

The invention discloses an asymmetric multi-modal fusion saliency detection method based on an attention mechanism. The method comprises: inputting the RGB image and the depth image of an original three-dimensional image into a convolutional neural network for training to obtain a corresponding saliency detection image; computing a loss function between the set formed by the saliency detection images generated by the model and the set formed by the corresponding real human-eye fixation images, to obtain the optimal weight vector and bias term of the convolutional neural network training model; and inputting the three-dimensional images of the selected data set into the trained convolutional neural network model to obtain saliency detection images. The method fully extracts RGB and depth-map features with an asymmetric encoding structure, effectively exploits the rich image information of the RGB branch through an added internal perception module, and adds channel and spatial attention mechanisms to enhance the expression of salient regions and salient features, improving the accuracy of visual saliency detection.
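The channel and spatial attention mechanisms mentioned in the abstract can be illustrated with a minimal, stdlib-only sketch. This is not the patent's actual module: the learned MLP and convolution of a typical (e.g., CBAM-style) attention block are replaced here with simple sigmoid gates over channel means and cross-channel means, purely to show how each mechanism reweights a C × H × W feature map.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def channel_attention(fmap):
    """Reweight each channel by a gate computed from its global average
    (a stand-in for the learned channel-attention MLP)."""
    weights = [sigmoid(sum(sum(row) for row in ch) / (len(ch) * len(ch[0])))
               for ch in fmap]
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(fmap, weights)]


def spatial_attention(fmap):
    """Reweight each spatial position by a gate computed from the
    cross-channel mean at that position (a stand-in for the learned conv)."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * gate[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

Applied in sequence (channel attention first, then spatial attention), informative channels and salient spatial positions are amplified while uninformative ones are suppressed, which is the effect the abstract attributes to the added attention mechanisms.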

Description

technical field
[0001] The invention relates to a visual saliency detection method based on deep learning, and in particular to an asymmetric multi-modal fusion saliency detection method based on an attention mechanism.
Background technique
[0002] When looking for objects of interest in images, humans automatically capture semantic information between objects and their context, pay close attention to salient objects, and selectively suppress unimportant factors. This precise mechanism of visual attention has been explained in various biological models. The goal of saliency detection is to automatically detect the most informative and attractive parts of an image. In many image applications, such as image quality assessment, semantic segmentation, and image recognition, identifying salient objects can not only reduce computational cost but also improve the performance of saliency models. Early saliency detection methods used hand-crafted features, that is, mainly for imag...


Application Information

IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/10, G06V20/00, G06V20/53, G06V2201/08, G06N3/045
Inventor: 周武杰, 张欣悦, 雷景生, 靳婷, 史文彬
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY