Saliency detection method with cross-modal enhancement and an improved loss function

A loss-function and detection-method technology, applied in the field of deep-learning saliency detection, addressing the problems of suboptimal results, rough predictions, and low detection accuracy.

Inactive Publication Date: 2021-01-15
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0004] The above-mentioned second fusion approach, for example PDNet (Prior-model guided Depth-enhanced Network for salient object detection), detects saliency from color plus depth information by applying an element-wise addition between the color features and the depth features. However, this simple operation cannot fully exploit the complementarity between the two modalities, which ultimately leads to rough prediction results and low detection accuracy.
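For illustration, here is a minimal sketch (the module names, channel sizes, and the concatenation-based alternative are assumptions, not the patent's design) contrasting the element-wise addition fusion criticized above with a fusion that can learn cross-modal mixing:

```python
import torch
import torch.nn as nn

class AdditionFusion(nn.Module):
    """The naive fusion criticized above: element-wise addition of two modalities."""
    def forward(self, color_feat, depth_feat):
        return color_feat + depth_feat  # cannot model cross-modal interactions

class ConcatFusion(nn.Module):
    """A common richer alternative: concatenate, then let a 1x1 conv learn the mixing."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, color_feat, depth_feat):
        return self.mix(torch.cat([color_feat, depth_feat], dim=1))

# usage on dummy feature maps
c = torch.randn(1, 64, 32, 32)
d = torch.randn(1, 64, 32, 32)
print(AdditionFusion()(c, d).shape, ConcatFusion(64)(c, d).shape)
```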
[0005] In addition, existing saliency detection methods based on convolutional neural networks generally use cross-entropy as their loss function. In real scenes, however, background and foreground are not balanced: sometimes the background dominates, and sometimes the foreground (the objects to be detected) does. One would like the network to pay more attention to the foreground than to the background, yet the cross-entropy loss treats foreground and background pixels equally, so the final result is not optimal.
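As one illustration of this imbalance argument (the weighting scheme below is an assumption, not the patent's improved loss), a pixel-wise binary cross-entropy that up-weights the rarer foreground class could look like:

```python
import torch
import torch.nn.functional as F

def weighted_bce(pred_logits, target):
    """Binary cross-entropy that weights foreground pixels by their rarity.

    pred_logits: (N, 1, H, W) raw network outputs
    target:      (N, 1, H, W) ground-truth saliency map in {0, 1}
    """
    fg_ratio = target.mean().clamp(1e-6, 1 - 1e-6)  # fraction of foreground pixels
    pos_weight = (1 - fg_ratio) / fg_ratio          # rarer foreground -> larger weight
    return F.binary_cross_entropy_with_logits(
        pred_logits, target, pos_weight=pos_weight)

# usage on dummy tensors with sparse foreground
logits = torch.randn(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.8).float()
print(weighted_bce(logits, gt).item())
```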



Embodiment Construction

[0053] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0054] The present invention proposes a saliency detection method with cross-modal enhancement and an improved loss function, which comprises two phases: a training phase and a testing phase.

[0055] The specific steps of the training phase are as follows:

[0056] Step 1_1: Select Q original color images, the depth image corresponding to each original color image, and the corresponding ground-truth saliency detection images to form a training set, and denote the qth original color image in the training set, its corresponding depth image, and its corresponding ground-truth saliency detection image accordingly. Similarly, select Q' original color images, the depth images corresponding to each original color image, and the corresponding ground-truth saliency detection images to form a validation...
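Here is a minimal sketch of loading one training triple as described in Step 1_1; the file paths, image formats, and helper name are assumptions, and the three-channel depth replication follows what the abstract describes:

```python
import numpy as np
from PIL import Image

def load_training_sample(color_path, depth_path, gt_path):
    """Load one RGB-D training triple: color image, depth map, ground-truth saliency."""
    color = np.asarray(Image.open(color_path).convert("RGB"))  # (H, W, 3)
    depth = np.asarray(Image.open(depth_path).convert("L"))    # (H, W), single channel
    gt = np.asarray(Image.open(gt_path).convert("L")) > 127    # binary saliency mask
    # replicate the depth map into three channels, as described in the abstract
    depth3 = np.repeat(depth[..., np.newaxis], 3, axis=2)      # (H, W, 3)
    return color, depth3, gt.astype(np.float32)
```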


Abstract

The invention discloses a saliency detection method with cross-modal enhancement and an improved loss function. In the training stage, a convolutional neural network is constructed whose input layer comprises a color-image input layer and a depth-image input layer, whose hidden layer comprises an encoding framework composed of a color encoding stream, a depth encoding stream and an encoding feature-extraction stream, together with a decoding framework composed of a decoding stream, and whose output layer is composed of five sub-output layers. The R, G and B channels of each color image in the training set, together with a three-channel depth map obtained by copying the corresponding depth image, are input into the convolutional neural network for training. The weight vector and bias term of the trained model are obtained by computing the loss-function value between the predicted saliency detection map and the ground-truth saliency detection map, and the validation set is then used to select the optimal weight vector and optimal bias term. In the testing stage, the optimal weight vector and optimal bias term are used for prediction to obtain a saliency prediction image. The invention has the advantage of high detection accuracy.
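The abstract names the network's streams but not their internals. As a structural sketch only (every layer choice below is an assumption, not the patent's architecture), the color/depth two-stream encoder with a decoding stream and five sub-output layers could be skeletonized as:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TwoStreamSaliencyNet(nn.Module):
    """Skeleton only: color stream + depth stream -> fused encoding features ->
    decoding stream -> five sub-output layers, as named in the abstract."""
    def __init__(self):
        super().__init__()
        self.color_stream = conv_block(3, 64)      # encodes the RGB input
        self.depth_stream = conv_block(3, 64)      # encodes the 3-channel depth input
        self.encode_extract = conv_block(128, 64)  # encoding feature-extraction stream
        self.decode_stream = conv_block(64, 64)    # decoding stream
        # five sub-output layers, each producing a single-channel saliency map
        self.sub_outputs = nn.ModuleList(
            [nn.Conv2d(64, 1, kernel_size=1) for _ in range(5)])

    def forward(self, color, depth3):
        fused = torch.cat([self.color_stream(color), self.depth_stream(depth3)], dim=1)
        feat = self.decode_stream(self.encode_extract(fused))
        return [head(feat) for head in self.sub_outputs]

# usage on dummy inputs
net = TwoStreamSaliencyNet()
outs = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print([o.shape for o in outs])  # five (1, 1, 224, 224) saliency maps
```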

Description

Technical Field

[0001] The invention relates to a deep-learning saliency detection method, and in particular to a saliency detection method with cross-modal enhancement and an improved loss function.

Background Technique

[0002] The purpose of saliency detection is to detect the most striking objects in a scene, and it has been widely used in computer vision and robot vision. Traditional saliency detection methods perform poorly because they are limited by hand-crafted features; with the rise of convolutional neural networks, saliency detection has developed rapidly. At present, the most commonly used approach performs saliency detection on color images alone. With the development of depth sensors such as the Microsoft Kinect and Intel RealSense, depth information has become increasingly convenient to obtain, and adding depth information to saliency detection improves the accuracy of pixel-level detection tasks.

[0003] Existing salienc...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/46; G06K9/62; G06N3/04; G06T7/50; G06T7/90
CPC: G06T7/50; G06T7/90; G06V10/462; G06N3/045; G06F18/214
Inventor: 周武杰, 朱赟, 雷景生, 郭翔, 强芳芳, 王海江, 何成
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY