RGB-D image saliency target detection method based on cross-modal feature fusion

A technique for RGB-D images, applied in the field of salient object detection based on cross-modal feature fusion, which addresses problems such as the insufficient use of Depth cues.

Pending Publication Date: 2021-07-06
HENAN UNIVERSITY


Problems solved by technology

The main problem addressed is: because of the differences between the RGB and Depth modalities, direct concatenation or simple cross-modal fusion of RGB and Depth cannot make full use of the depth cues that Depth provides.



Examples


Embodiment 1

[0037] As shown in Figures 1-4, an RGB-D image salient object detection method based on cross-modal feature fusion comprises the following steps:

[0038] Step 1. Based on a U-Net network with cross-layer connections, input the RGB and Depth images into ResNet-50 backbone networks to extract image features. The five-stage features extracted from the RGB image are R1, R2, R3, R4, and R5; the five-stage features extracted from the Depth image are D1, D2, D3, D4, and D5. The U-Net network follows a U-shaped encoder-decoder architecture in which the encoder is split into two paths: one ResNet-50 processes the RGB image and the other ResNet-50 processes the Depth image.
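The two-path encoder in Step 1 can be illustrated by the stage shapes a standard ResNet-50 produces. The patent does not list exact tensor shapes, so the channel counts, strides, and 224x224 input below are illustrative assumptions based on the standard ResNet-50 design, and `resnet50_stage_shapes` is a hypothetical helper:

```python
# Hedged sketch: stage output shapes of a standard ResNet-50 backbone,
# as assumed for the two encoder paths (R1..R5 and D1..D5).

def resnet50_stage_shapes(h, w):
    """Return (channels, height, width) for the five feature stages of a
    standard ResNet-50 (conv1 output, then layer1..layer4 outputs)."""
    channels = [64, 256, 512, 1024, 2048]   # standard ResNet-50 widths
    strides = [2, 4, 8, 16, 32]             # cumulative downsampling factors
    return [(c, h // s, w // s) for c, s in zip(channels, strides)]

# Both encoder paths share this layout: one ResNet-50 processes the RGB
# image, a second ResNet-50 processes the Depth map.
rgb_feats = resnet50_stage_shapes(224, 224)    # R1..R5
depth_feats = resnet50_stage_shapes(224, 224)  # D1..D5
print(rgb_feats[-1])   # R5: (2048, 7, 7)
```

Because the two paths have identical layouts, features at each stage (Ri, Di) are spatially aligned, which is what makes the stage-wise cross-modal fusion in Step 2 possible.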

[0039] Step 2. Input the top-level features R5 and D5 of the two modalities into the cross-modal channel refinement module to obtain the cross-modal feature RD; through the cross-modal guidance module, use D1-D5 and RD from the Depth modality to guide RGB feature extraction; considering the strong com...
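The text truncates before specifying the cross-modal channel refinement module, so the following is only a sketch of one common way such a fusion could work: squeeze-and-excitation-style channel attention over the concatenated top-level features R5 and D5, projected back to a single cross-modal feature RD. The shapes, the sigmoid gating, and the 1x1 projection `w_proj` are illustrative assumptions, not the patented design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_refine(r5, d5, w_proj):
    """Fuse R5 and D5 (each C x H x W) into a cross-modal feature RD
    using channel attention (an assumed, not confirmed, mechanism)."""
    x = np.concatenate([r5, d5], axis=0)      # (2C, H, W) cross-modal stack
    squeeze = x.mean(axis=(1, 2))             # global average pool -> (2C,)
    gates = sigmoid(squeeze)                  # per-channel weights in (0, 1)
    x = x * gates[:, None, None]              # reweight each channel
    # 1x1 convolution, written here as a channel-mixing matmul, back to C
    return np.einsum('oc,chw->ohw', w_proj, x)

C, H, W = 8, 7, 7
r5 = rng.normal(size=(C, H, W))
d5 = rng.normal(size=(C, H, W))
w_proj = rng.normal(size=(C, 2 * C)) / np.sqrt(2 * C)
rd = channel_refine(r5, d5, w_proj)
print(rd.shape)   # (8, 7, 7)
```

Channel attention is one plausible reading of "channel refinement": it lets the network emphasize RGB channels or Depth channels adaptively per image, which matches the patent's stated goal of strengthening the model's per-channel discrimination.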

Embodiment 2

[0076] Step 1. Based on a U-Net network with cross-layer connections, input the RGB and Depth images into ResNet-50 backbone networks to extract image features. The five-stage features extracted from the RGB image are R1, R2, R3, R4, and R5; those extracted from the Depth image are D1, D2, D3, D4, and D5.

[0077] Step 2. Input the top-level features R5 and D5 of the two modalities into the cross-modal channel refinement module to obtain the cross-modal feature RD; through the cross-modal guidance module, use D1-D5 and RD from the Depth modality to guide RGB feature extraction.

[0078] Step 3. Use R1-R5 from the RGB modality together with RD in the residual adaptive selection module to further retain the foreground salient information of the image and discard interfering background information, obtaining U1, U2, U3, U4, and U5. Through five cross-entropy loss functions, perform supervised learning on U1-U5 respectively, and guide the network to ...
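The multi-level supervision in Step 3 pairs each of U1-U5 with its own cross-entropy loss. A minimal sketch of that loss structure follows; it assumes (the patent does not specify) that each Ui is projected to a single-channel saliency map in (0, 1) and compared against the binary ground-truth mask with pixel-wise binary cross-entropy, with the five terms summed:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-averaged binary cross-entropy between a predicted saliency
    map in (0, 1) and a binary ground-truth mask."""
    pred = np.clip(pred, eps, 1.0 - eps)   # avoid log(0)
    return float(-(target * np.log(pred)
                   + (1.0 - target) * np.log(1.0 - pred)).mean())

rng = np.random.default_rng(1)
gt = (rng.random((32, 32)) > 0.5).astype(float)   # toy binary mask
preds = [rng.random((32, 32)) for _ in range(5)]  # stand-ins for U1..U5 maps

# One cross-entropy term per level, as the patent describes; summing them
# supervises every decoder stage, not only the final output.
total_loss = sum(bce(p, gt) for p in preds)
```

Supervising every level (deep supervision) is a common choice in U-Net-style saliency networks: it gives the shallower stages a direct gradient signal instead of relying only on the loss at the final saliency map.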



Abstract

The invention discloses an RGB-D image salient object detection method based on cross-modal feature fusion. The method comprises: step 1, based on a U-Net network with cross-layer connections, inputting RGB and Depth into ResNet-50 backbone networks to extract image features, wherein features of five stages are extracted from the RGB image and features of five stages are extracted from the Depth image; step 2, inputting the top-level features R5 and D5 of the two modalities into a cross-modal channel refinement module to obtain a cross-modal feature RD, and guiding RGB feature extraction with D1-D5 and RD from the Depth modality through a cross-modal guidance module; and step 3, further retaining the foreground salient information of the image from R1-R5 and RD in the RGB modality through a residual adaptive selection module, discarding interfering background information, and finally generating a saliency result map with a multi-level loss function guiding the network. The method can make full use of the depth cues provided by the Depth information, strengthens feature fusion between the RGB and Depth modalities, and enhances the model's ability to discriminate among channel features.

Description

Technical field

[0001] The invention relates to the technical field of deep-learning image processing, and in particular to a method for detecting salient objects in RGB-D images based on cross-modal feature fusion.

Background technique

[0002] Salient object detection (SOD) aims to separate the most salient objects in an image from the background. It has been applied in various computer vision tasks such as image understanding, image segmentation, object tracking, and image compression. In recent years, the growing availability of depth information (Depth) has continuously improved saliency detection performance on RGB-D images: performance is improved by exploiting the complementary feature information of the Depth and RGB modalities.

[0003] Early RGB-D salient object detection methods employ an early-fusion strategy to combine appearance information and depth cues. However, there are great differences between the two modalities of RGB and ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/46; G06K9/62; G06N3/04; G06N3/08; G06T3/40
CPC: G06T3/4007; G06N3/08; G06V10/44; G06V2201/07; G06N3/048; G06N3/045; G06F18/253
Inventor 王俊赵正云杨尚钦张苗辉柴秀丽张婉君
Owner HENAN UNIVERSITY