
Visual Saliency Extraction Method of Stereo Image Based on Parameter Sharing Deep Learning Network

A stereoscopic image processing technology, applied in the field of stereoscopic image visual saliency extraction based on a parameter-sharing deep learning network, which addresses problems such as limited input features, inability to handle complex scenes, and limited ability to distinguish visual saliency.

Active Publication Date: 2020-12-25
经易文化科技集团有限公司

AI Technical Summary

Problems solved by technology

Although this type of method can quickly obtain a visual saliency map of reasonable quality, its limited input features restrict its ability to distinguish visual saliency, so it cannot cope with complex scenes.


Image

  • Visual Saliency Extraction Method of Stereo Image Based on Parameter Sharing Deep Learning Network

Examples


Embodiment Construction

[0033] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0034] The stereoscopic image visual saliency extraction method based on a parameter-sharing deep learning network proposed by the present invention, whose overall flow chart is shown in Figure 1, is characterized in that it comprises two processes: a training phase and a testing phase.

[0035] The specific steps of the training phase are as follows:

[0036] Step 1_1: Select N stereoscopic images with width R and height L, and denote the left viewpoint image, left disparity image and human gaze map of the nth stereoscopic image as {I L,n (x,y)}, {I D,n (x,y)} and {I F,n (x,y)}, respectively; then scale the left viewpoint image of each stereoscopic image to 480 × 640 pixels, obtaining a 480 × 640 pixel image corresponding to the left viewpoint image of each stereoscopic image, and {I L,n (x, y)} correspond...
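Step 1_1 rescales every left viewpoint image to the fixed 480 × 640 network input size. A minimal sketch of this preprocessing step follows; the patent does not specify the interpolation method, so nearest-neighbour resampling is assumed here purely for illustration.

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int = 480, out_w: int = 640) -> np.ndarray:
    """Nearest-neighbour rescale of an H x W (x C) image to out_h x out_w."""
    in_h, in_w = img.shape[:2]
    # Map each output row/column back to its nearest source row/column.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows][:, cols]

# A synthetic 512 x 512 "left viewpoint image" scaled to the 480 x 640 network input.
left = np.random.rand(512, 512, 3)
print(resize_nearest(left).shape)  # (480, 640, 3)
```

In practice any library resampler (bilinear, bicubic) would serve the same purpose; the essential point is only that all inputs reach a uniform 480 × 640 size before entering the network.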



Abstract

The invention discloses a stereoscopic image visual saliency extraction method based on a parameter-sharing deep learning network. In the training stage, a parameter-sharing deep learning network is constructed, comprising an input layer, a parameter-sharing feature extraction framework and a saliency map generation framework. The parameter-sharing feature extraction framework consists of the first, second, third, fourth and fifth resnet-50 convolutional network blocks of the resnet-50 network, set up in sequence, and is used to extract both the color map features and the disparity map features. In the testing phase, the trained parameter-sharing deep learning network model predicts on the left viewpoint image and left disparity image of the stereoscopic image to be tested, yielding a human gaze prediction map, which is the visual saliency image. The advantages are that the extracted stereoscopic features conform to salient semantics, and the method has strong extraction stability and high extraction accuracy.
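The key idea of the abstract is that one set of weights extracts features from both the color (left viewpoint) image and the disparity image. The sketch below illustrates parameter sharing with a single hand-rolled convolution standing in for the five shared ResNet-50 blocks; the kernel, image sizes and loop-based convolution are hypothetical simplifications, not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared 3x3 kernel standing in for the shared ResNet-50 blocks
# (hypothetical simplification of the patented architecture).
shared_kernel = rng.standard_normal((3, 3))

def conv2d_valid(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Plain 'valid' 2-D correlation with a single kernel."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

color = rng.standard_normal((8, 8))      # stand-in left viewpoint image
disparity = rng.standard_normal((8, 8))  # stand-in left disparity image

# Parameter sharing: the *same* weights produce both feature maps,
# so gradients from either branch would update one weight set.
color_feat = conv2d_valid(color, shared_kernel)
disp_feat = conv2d_valid(disparity, shared_kernel)
print(color_feat.shape, disp_feat.shape)  # (6, 6) (6, 6)
```

The design benefit suggested by the abstract is that sharing forces both modalities into a common feature space while halving the parameter count relative to two independent branches.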

Description

Technical field

[0001] The invention relates to an image visual saliency extraction technology, in particular to a stereoscopic image visual saliency extraction method based on a parameter-sharing deep learning network.

Background technique

[0002] Image saliency detection is a technique for locating the more important regions in an image; it is mostly used in the image preprocessing stage to find the image parts to be processed preferentially, and it is a bionic mechanism that imitates the human visual mechanism. Stereo vision saliency detection is an extension of image saliency detection, and the problem it faces is how to use the additional depth information to assist saliency detection.

[0003] Classic stereo vision saliency detection methods based on hand-crafted features directly extract depth features with filtering and similar operations when using depth information, such as the stereo vision saliency detection method proposed by Qi Feng et al. This method first converts...

Claims


Application Information

Patent Timeline: no application
Patent Type & Authority: Patent (China)
IPC(8): G06K9/46, G06T7/10
CPC: G06T7/10, G06T2207/10021, G06T2207/20081, G06T2207/20084, G06V10/462
Inventor: 周武杰, 蔡星宇, 钱亚冠, 王海江, 何成, 邱薇薇
Owner: 经易文化科技集团有限公司