Deep learning saliency detection method based on global a priori and local context

A deep learning detection technology applied in the fields of image processing and computer vision. It addresses the problems that salient objects in complex-background images cannot be detected effectively and that noise in extracted high-level features causes false detection, while achieving good robustness, reduced learning ambiguity, and accurate detection results.

Active Publication Date: 2017-10-20
BEIJING UNIV OF TECH


Problems solved by technology

[0006] The problems to be solved by the present invention are twofold: in salient object detection, salient objects in complex background images cannot be detected effectively by relying only on manually designed features and prior knowledge; and existing deep-learning-based saliency detection methods take only the original image, or local regions of it, as the model input, so noise in the extracted high-level features may cause false detection.




Embodiment Construction

[0033] The invention provides a deep learning saliency detection method based on a global prior and local context. The method first performs superpixel segmentation on the color image and the depth image, and computes a global prior saliency map through a global-prior deep learning model from middle-level superpixel features such as compactness, uniqueness, and background prior. It then combines the global prior saliency map with local context information from the color and depth images to obtain an initial saliency map through a second deep learning model. Finally, the initial saliency map is optimized according to spatial consistency and appearance similarity to obtain the final saliency map. The invention is suitable for image saliency detection, has good robustness, and produces accurate detection results.
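Paragraph [0033] describes a three-stage pipeline. The sketch below shows only the data flow between those stages; every callable it accepts (`segment`, `mid_features`, `global_prior_net`, `context_net`, `refine`) is a hypothetical placeholder standing in for a component the patent implements with its own deep learning models, not the patent's actual networks.

```python
import numpy as np

def saliency_pipeline(color_img, depth_img, segment, mid_features,
                      global_prior_net, context_net, refine):
    """Data flow of the three-stage method; all callables are
    placeholders for the patent's own components."""
    # Stage 1: superpixel segmentation of the color and depth images,
    # then a global prior saliency map from middle-level features
    # (compactness, uniqueness, background prior).
    labels = segment(color_img, depth_img)
    feats = mid_features(color_img, depth_img, labels)
    global_prior = global_prior_net(feats)
    # Stage 2: combine the global prior with local context from the
    # color and depth images to obtain an initial saliency map.
    initial = context_net(color_img, depth_img, global_prior)
    # Stage 3: optimize using spatial consistency and appearance
    # similarity to obtain the final saliency map.
    return refine(initial, labels, color_img)
```

A concrete instantiation would plug in SLIC for `segment` and trained networks for the two model callables.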

[0034] As shown in Figure 1, the present invention comprises the following steps:

[0035] 1) Use the SLIC superpixel segmen...
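Step 1 relies on SLIC, which clusters pixels over joint color-and-position features with grid-spaced seeds. The toy version below illustrates that idea with plain k-means; it is not the full SLIC algorithm, which additionally restricts each center's search window to a local region and enforces superpixel connectivity.

```python
import numpy as np

def toy_slic(image, n_segments=16, n_iters=5, compactness=10.0):
    """Toy SLIC-style superpixels: k-means over (color, y, x) features
    with evenly spaced seeds. Illustrative only."""
    h, w, c = image.shape
    s = np.sqrt(h * w / n_segments)        # expected superpixel spacing
    yy, xx = np.mgrid[0:h, 0:w]
    # Joint feature: color plus spatially scaled coordinates; the
    # compactness weight trades color fidelity against regular shape.
    feats = np.concatenate(
        [image.reshape(-1, c).astype(float),
         (compactness / s) * np.stack([yy.ravel(), xx.ravel()], axis=1)],
        axis=1)
    # Seed centers at evenly spaced pixel indices.
    centers = feats[np.linspace(0, h * w - 1, n_segments).astype(int)].copy()
    for _ in range(n_iters):
        # Assign each pixel to the nearest center, then recompute centers.
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_segments):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```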



Abstract

The invention discloses a deep learning saliency detection method based on the global a priori and local context. The method first performs superpixel segmentation on a color image and a depth image, obtains a global a priori feature map for each superpixel from middle-level features such as compactness, uniqueness, and background prior, and then derives a global a priori saliency map through a deep learning model. Next, it combines the global a priori saliency map with the local context information in the color and depth images to obtain an initial saliency map through a deep learning model. Finally, it optimizes the initial saliency map based on spatial consistency and appearance similarity to obtain the final saliency map. The method solves the problem that traditional saliency detection methods cannot effectively detect salient objects in complex background images, as well as the problem that conventional deep-learning-based saliency detection methods produce false detections due to noise in the extracted high-level features.
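The final optimization step combines spatial consistency with appearance similarity. One common way to realize such a step, shown here purely as an illustrative sketch rather than the patent's actual optimization, is a smoothing pass in which each superpixel's saliency is blended with that of its spatial neighbors, weighted by color similarity. The `sigma` and `alpha` parameters are invented for this sketch.

```python
import numpy as np

def refine_saliency(sal, colors, adjacency, sigma=0.1, alpha=0.5):
    """One smoothing pass over per-superpixel saliency values.

    Spatial consistency: only adjacent superpixels influence each other.
    Appearance similarity: influence decays with color distance.
    Simplified stand-in; sigma and alpha are illustrative, not from
    the patent.
    """
    refined = sal.astype(float).copy()
    for i in range(len(sal)):
        nbrs = np.flatnonzero(adjacency[i])
        if nbrs.size == 0:
            continue
        # Gaussian weight on color distance between superpixel means.
        w = np.exp(-np.sum((colors[nbrs] - colors[i]) ** 2, axis=1)
                   / (2 * sigma ** 2))
        if w.sum() == 0:
            continue
        # Blend own saliency with the similarity-weighted neighbor average.
        refined[i] = alpha * sal[i] + (1 - alpha) * (w @ sal[nbrs]) / w.sum()
    return refined
```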

Description

Technical Field

[0001] The invention belongs to the fields of image processing and computer vision, and in particular relates to a deep learning saliency detection method based on a global prior and local context.

Background Technique

[0002] When human eyes perceive the external environment, they can always extract interesting content from scenes containing a large amount of information. This ability is called visual attention. Visual attention is a research hotspot in computer vision, with two main research directions: one studies eye gaze based on the visual attention mechanism, and the other studies the extraction of salient target regions, that is, saliency detection. The purpose of saliency detection is to separate the most eye-catching target region from the background of an image and then extract the target and the information it carries; it is widely used in image segmentation, image recognition, video anomaly detection, and other fields.

[0003] At present, the research on s...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/11, G06T7/162, G06K9/46
CPC: G06T7/11, G06T7/162, G06T2207/20081, G06T2207/10028, G06T2207/10024, G06V10/462
Inventors: 付利华, 丁浩刚, 李灿灿, 崔鑫鑫
Owner: BEIJING UNIV OF TECH