
RGB-T image significance target detection method based on multi-level depth feature fusion

A technology combining RGB-T images and deep features, applied in the field of image processing, which solves the problem that salient objects cannot be detected completely and consistently.

Active Publication Date: 2019-09-06
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

It mainly solves the problem that existing technologies cannot detect salient targets completely and consistently in complex and changeable scenes




Embodiment Construction

[0063] Specific embodiments of the present invention will be described in detail below.

[0064] Referring to Figure 1, a multi-level deep feature fusion RGB-T image salient target detection method includes the following steps:

[0065] Step 1) Extract rough multi-level features from the input image:

[0066] For an RGB image or a thermal infrared image, features at 5 different depths in the VGG16 network are extracted as the rough single-modal features, respectively:

[0067] Conv1-2 (64 feature maps of size 256×256);

[0068] Conv2-2 (128 feature maps of size 128×128);

[0069] Conv3-3 (256 feature maps of size 64×64);

[0070] Conv4-3 (512 feature maps of size 32×32);

[0071] Conv5-3 (512 feature maps of size 16×16);

[0072] Wherein: n=...
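As a concrete illustration of step 1), the sketch below shows one way the five coarse feature levels could be tapped from a VGG16 backbone in PyTorch. The layer indices follow torchvision's VGG16 layout and reproduce the feature-map sizes listed above; the helper name `extract_multilevel_features` and the note about replicating the thermal image to 3 channels are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch (PyTorch/torchvision assumed) of tapping the five coarse feature
# levels Conv1-2 ... Conv5-3 from VGG16 for one modality (RGB or thermal infrared).
# The indices below are where those ReLU-activated outputs sit in
# torchvision.models.vgg16().features; everything else is illustrative.
import torch
import torchvision.models as models

# features-index -> level name (output after the ReLU following each listed conv layer)
VGG16_TAPS = {3: "conv1_2", 8: "conv2_2", 15: "conv3_3", 22: "conv4_3", 29: "conv5_3"}

def extract_multilevel_features(image: torch.Tensor, backbone=None):
    """Return the 5 rough single-modal feature maps for a (B, 3, 256, 256) input.

    A 1-channel thermal image can be replicated to 3 channels before calling this
    (an assumption; the patent text shown here does not specify the thermal input).
    """
    if backbone is None:
        # ImageNet-pretrained weights; the `weights` argument needs torchvision >= 0.13.
        backbone = models.vgg16(weights="IMAGENET1K_V1").features
    feats, x = {}, image
    for idx, layer in enumerate(backbone):
        x = layer(x)
        if idx in VGG16_TAPS:
            feats[VGG16_TAPS[idx]] = x
        if idx == max(VGG16_TAPS):  # no need to run the deeper pooling layers
            break
    return feats

# For a 256x256 input the shapes match the sizes listed above:
# conv1_2 (1, 64, 256, 256), conv2_2 (1, 128, 128, 128), conv3_3 (1, 256, 64, 64),
# conv4_3 (1, 512, 32, 32), conv5_3 (1, 512, 16, 16).
rgb = torch.randn(1, 3, 256, 256)
for name, f in extract_multilevel_features(rgb).items():
    print(name, tuple(f.shape))
```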



Abstract

The invention discloses an RGB-T image significance target detection method based on multi-level depth feature fusion, which mainly solves the problem that in the prior art a salient target cannot be completely and consistently detected in a complex and changeable scene. The implementation scheme comprises the following steps: 1, extracting rough multi-level features from an input image; 2, constructing an adjacent depth feature fusion module and improving the single-modal features; 3, constructing a multi-branch-group fusion module and fusing the multi-modal features; 4, obtaining a fused output feature map; 5, training the algorithm network; 6, predicting a pixel-level saliency map of the RGB-T image. Supplementary information from different modal images can be effectively fused, image salient targets can be completely and consistently detected in a complex and changeable scene, and the method can be used for image preprocessing in computer vision.
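The adjacent depth feature fusion module of step 2 and the multi-branch-group fusion module of step 3 are not spelled out in the text shown here. Purely as a hypothetical sketch of the general idea behind fusing adjacent depth levels (resample the neighbouring levels to the current level's resolution, concatenate, and fuse with a convolution), one reading might look like the following; the class name, channel counts, and layer structure are assumptions, not the patented design.

```python
# Hypothetical sketch of an "adjacent depth feature fusion" step: refine the feature
# map at level n using its shallower (n-1) and deeper (n+1) neighbours. This is an
# assumed reading of the idea, not the module defined in the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentFeatureFusion(nn.Module):
    def __init__(self, ch_prev: int, ch_cur: int, ch_next: int, ch_out: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(ch_prev + ch_cur + ch_next, ch_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_prev, f_cur, f_next):
        # Bring the larger (shallower) and smaller (deeper) maps to f_cur's spatial size.
        size = f_cur.shape[-2:]
        f_prev = F.interpolate(f_prev, size=size, mode="bilinear", align_corners=False)
        f_next = F.interpolate(f_next, size=size, mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([f_prev, f_cur, f_next], dim=1))

# Example: refine the conv3_3 level (256 ch, 64x64) with conv2_2 (128 ch) and conv4_3 (512 ch).
m = AdjacentFeatureFusion(ch_prev=128, ch_cur=256, ch_next=512, ch_out=256)
out = m(torch.randn(1, 128, 128, 128),
        torch.randn(1, 256, 64, 64),
        torch.randn(1, 512, 32, 32))
print(tuple(out.shape))  # (1, 256, 64, 64)
```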

Description

Technical Field

[0001] The invention belongs to the field of image processing and relates to a method for detecting salient objects in RGB-T images, in particular to a method for detecting salient objects in RGB-T images with multi-level deep feature fusion, which can be used for image preprocessing in computer vision.

Background Technique

[0002] Salient object detection aims to use models or algorithms to detect and segment salient object regions in images. As an image preprocessing step, salient object detection plays a vital role in vision tasks such as visual tracking, image recognition, image compression, and image fusion.

[0003] Existing salient target detection methods can be divided into two categories: traditional salient target detection methods and salient target detection methods based on deep learning. Traditional salient target detection algorithms complete saliency prediction through manually extracted features s...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/62G06K9/46G06N3/04G06N3/08
CPCG06N3/084G06V10/464G06V2201/07G06N3/048G06N3/045G06F18/253
Inventor 张强黄年昌姚琳刘健韩军功
Owner XIDIAN UNIV