A Salient Region Detection Method Based on Joint Sparse Multi-Scale Fusion

A multi-scale fusion, joint sparse technique applied in image enhancement, image analysis, image data processing and related fields. It addresses problems such as the difficulty of detecting entire salient objects, low saliency assigned to object interiors, and detection results that concentrate on object edges.

Inactive Publication Date: 2017-02-22
XIDIAN UNIV


Problems solved by technology

[0004] Although current bottom-up salient region detection algorithms have achieved good results, most methods, such as the classic Itti method and the SR method, share a serious defect in their computation process: the detection results tend to concentrate on the edges of the target, and it is difficult to detect the entire salient object. The reason is that most bottom-up methods rely on a center-surround difference operation. Pixels on an object's edge differ strongly from their surrounding pixels in feature space and therefore receive high saliency, while pixels in the central region of the object differ little from their surroundings and therefore receive low saliency.




Detailed Description of the Embodiments

[0065] The specific implementation steps and effects of the present invention will be described in further detail below in conjunction with the accompanying drawings:

[0066] Referring to figure 1, the implementation steps of the present invention are as follows:

[0067] Step 1: Preprocess the training image set by converting each RGB color image into a grayscale image; all subsequent processing is performed on the grayscale images.
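A minimal sketch of this preprocessing step, assuming OpenCV is available; the helper name preprocess_training_set and the normalization to [0, 1] are illustrative choices, not specified by the patent.

```python
import cv2
import numpy as np

def preprocess_training_set(image_paths):
    """Load each training image and convert it to a grayscale image."""
    gray_images = []
    for path in image_paths:
        img = cv2.imread(path)                        # loaded as a BGR color image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # keep luminance only
        gray_images.append(gray.astype(np.float32) / 255.0)  # scale to [0, 1] (assumption)
    return gray_images
```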

[0068] Step 2: For each image in the training image set, construct its multi-scale Gaussian pyramid to obtain a multi-scale training set {T_1, T_2, ..., T_n}, where T_i is the image at scale i and n is the number of scales.

[0069] In this embodiment, the training image set contains 65 images, and the number of scales n is set to 3, the scales being 1/4, 1/8, and 1/16 of the original image size.
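A minimal sketch of the pyramid construction under the embodiment's settings (n = 3, relative scales 1/4, 1/8, 1/16, read here as linear scale factors, which is an assumption); cv2.pyrDown, which Gaussian-smooths and halves the image, stands in for the pyramid operator.

```python
import cv2

def gaussian_pyramid(gray, n=3):
    """Return n Gaussian pyramid levels at 1/4, 1/8, and 1/16 of the original size."""
    level = cv2.pyrDown(cv2.pyrDown(gray))  # two halvings give the 1/4 scale
    levels = [level]
    for _ in range(n - 1):
        level = cv2.pyrDown(level)          # 1/8, then 1/16
        levels.append(level)
    return levels
```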

[0070] The multi-scale representation method of images was first proposed by Rosenfeld and Thurston in 1971. They found that the edge detection effect of the image with different size operators is be...



Abstract

The invention belongs to the technical field of image salient region detection and discloses a method for detecting image salient regions based on joint sparse multi-scale fusion. The steps include: (1) constructing a multi-layer Gaussian pyramid for the training image set to obtain multiple scales, and training a dictionary at each scale; (2) taking an image block around each pixel of the test image and jointly solving the sparse representation coefficients of the image block at each scale; (3) using the sparse representation coefficients as features to compute saliency; (4) fusing the saliency results at the multiple scales to obtain the final saliency map. The invention extracts the region of interest of the human eye in any given image. Its advantages are: first, the multi-scale operation overcomes the influence of varying object scales in the image; second, the joint sparse framework benefits the subsequent saliency calculation. Experiments show that the results of this method are robust and outperform most existing methods.
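A minimal sketch of the final fusion step (4), assuming the per-scale saliency maps have already been computed; averaging after upsampling is used only as an illustrative fusion rule, since the abstract does not specify the exact fusion operator.

```python
import cv2
import numpy as np

def fuse_saliency_maps(saliency_maps, out_shape):
    """Upsample each per-scale saliency map to the original size and fuse them."""
    h, w = out_shape
    resized = [cv2.resize(s, (w, h), interpolation=cv2.INTER_LINEAR)
               for s in saliency_maps]
    fused = np.mean(resized, axis=0)        # simple average as an assumed fusion rule
    # Normalize to [0, 1] so the result can be viewed as a saliency map.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```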

Description

Technical field

[0001] The present invention belongs to the technical field of image salient region detection. It can be used to extract the region of interest of the human eye in any given image and to provide reference information for subsequent video and image compression, image segmentation, target recognition, image inpainting, image retrieval, and so on. Specifically, it is a method of image salient region detection based on joint sparse multi-scale fusion.

Background technique

[0002] 80% of the information that humans obtain from the external environment comes from the visual system. When facing a complex scene, the human eye quickly turns toward the regions of interest and gives priority to processing them further; this special processing mechanism of the human eye is called the visual attention mechanism. In daily life, the human eye obtains a large amount of information every day and processes it automatically and efficiently. The visual attentio...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/00, G06T7/174, G06K9/62
CPC: G06T7/13, G06T2207/20221, G06V10/40, G06V10/513, G06V30/194
Inventors: 张小华, 焦李成, 孟珂, 田小林, 朱虎明, 马文萍, 刘红英
Owner: XIDIAN UNIV