Scale-selection-based top-down visual saliency extraction method

A top-down, scale-selection-based technique in the field of visual saliency that reduces time complexity and noise interference

Status: Inactive | Publication Date: 2013-03-27
SHANGHAI JIAO TONG UNIV
Cites: 0 | Cited by: 10
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] Traditional object detection methods based on local features require a large number of sliding-window sweeps over the image, which results in high time complexity.


Examples


Embodiment Construction

[0040] The present invention will be described in detail below in conjunction with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be pointed out that for those of ordinary skill in the art, a number of modifications and improvements can be made without departing from the concept of the present invention. These all belong to the protection scope of the present invention.

[0041] Embodiments of the present invention are described below in conjunction with the drawings and the method described above.

[0042] The feature descriptor selected in this embodiment is the Scale-Invariant Feature Transform (SIFT) descriptor, which is insensitive to illumination, scale, and rotation; the coding method is locality-constrained linear coding (LLC). The training set consists of original images of the target object (vehicles) together with the corresponding ground-truth labels...
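Paragraph [0042] names the building blocks (SIFT descriptors and locality-constrained linear coding) but gives no implementation details. The following is a minimal sketch, assuming OpenCV's SIFT implementation and a visual codebook learned elsewhere (e.g. by k-means over training descriptors); the grid step, patch size, neighbourhood size k, and regularisation weight beta are illustrative assumptions, not parameters taken from the patent.

```python
import cv2
import numpy as np

def dense_sift(gray, step=8, size=16):
    """Compute SIFT descriptors on a regular grid (dense SIFT)."""
    sift = cv2.SIFT_create()
    h, w = gray.shape
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step, h - step, step)
           for x in range(step, w - step, step)]
    _, desc = sift.compute(gray, kps)
    positions = np.array([kp.pt for kp in kps])
    return positions, desc            # (N, 2) grid positions, (N, 128) descriptors

def llc_code(descriptors, codebook, k=5, beta=1e-4):
    """Locality-constrained linear coding: each descriptor is reconstructed
    from its k nearest codebook atoms by solving a small regularised
    least-squares problem; the sparse coefficient vector is the code."""
    n = descriptors.shape[0]
    m = codebook.shape[0]
    codes = np.zeros((n, m))
    # squared Euclidean distance from each descriptor to every codebook atom
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    knn = np.argsort(dists, axis=1)[:, :k]
    for i in range(n):
        B = codebook[knn[i]] - descriptors[i]       # shift atoms to the origin
        C = B @ B.T                                 # local covariance
        C += beta * np.trace(C) * np.eye(k)         # regularise for stability
        w = np.linalg.solve(C, np.ones(k))
        codes[i, knn[i]] = w / w.sum()              # enforce sum-to-one constraint
    return codes
```

In a full pipeline, the patch-level codes would typically be pooled over local windows before being fed to the nonlinear model learned in the training stage.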



Abstract

The invention discloses a scale-selection-based top-down visual saliency extraction method. The method comprises a training stage, in which a nonlinear model is learned and the optimal scale for saliency computation is selected from a multi-scale combination, and a saliency computation stage, in which a saliency map is extracted according to the optimal scale and the nonlinear model obtained in the training stage. The method fully takes the observer's intention into account and exploits context at multiple scales, so that visually salient regions related to that intention are extracted effectively. The method is applicable to object detection and related fields.
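The abstract specifies a training stage (learn a nonlinear model, select the best scale from a multi-scale combination) and a saliency computation stage, but names neither the model nor the selection criterion. Below is a minimal sketch of one plausible reading, assuming an RBF-kernel SVM as the nonlinear model, a fixed set of candidate patch scales, and a simple accuracy score as the selection criterion; all of these, and the function names, are illustrative assumptions rather than the patent's own choices.

```python
import numpy as np
from sklearn.svm import SVC

SCALES = [16, 24, 32]   # assumed candidate patch sizes for the multi-scale combination

def train_stage(patch_features, patch_labels):
    """Training stage: learn a nonlinear model per scale and keep the scale
    whose model best separates salient from non-salient patches."""
    best_scale, best_score, models = None, -np.inf, {}
    for s in SCALES:
        X, y = patch_features[s], patch_labels[s]       # features coded at scale s
        model = SVC(kernel="rbf", probability=True).fit(X, y)
        score = model.score(X, y)                       # stand-in selection criterion
        models[s] = model
        if score > best_score:
            best_scale, best_score = s, score
    return models[best_scale], best_scale

def saliency_stage(model, scale, image_patches, positions, shape):
    """Saliency computation stage: score every patch at the selected scale
    and write the salient-class probabilities back into a saliency map."""
    saliency = np.zeros(shape, dtype=np.float32)
    probs = model.predict_proba(image_patches)[:, 1]    # P(salient) per patch
    for (x, y), p in zip(positions, probs):
        y0, x0 = int(y), int(x)
        saliency[y0:y0 + scale, x0:x0 + scale] = np.maximum(
            saliency[y0:y0 + scale, x0:x0 + scale], p)
    return saliency
```

In practice the selection score would be computed on held-out data rather than on the training set, and the per-patch features would be the pooled LLC codes described in the embodiment above.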

Description

Technical field

[0001] The present invention relates to a visual saliency computation method, and in particular to a top-down visual saliency extraction method based on scale selection, belonging to the field of visual saliency.

Background technique

[0002] Visual attention is an important mechanism that helps the human visual system recognize scenes accurately and efficiently. Obtaining salient regions in an image is an important research topic in computer vision, as it helps an image processing system allocate computing resources reasonably in subsequent processing steps. Salient region extraction is widely used in many computer vision applications, such as object segmentation, object recognition, adaptive image compression, content-aware image scaling, and image retrieval.

[0003] There are two types of visual saliency detection: fast, task-independent, data-driven bottom-up saliency detection, and slower, task-related, target-driven top-down saliency detection...

Claims


Application Information

IPC(8): G06K9/62
Inventors: 张瑞, 仇媛媛, 朱俊, 付赛男, 邹维嘉, 朱玉琨
Owner: SHANGHAI JIAO TONG UNIV