
Cross-domain vision search method based on significance detection

A cross-domain retrieval technology applied in the fields of image processing and computer vision. It addresses the scarcity of research on cross-domain visual retrieval algorithms, their limited scope of application, and the loss of matching accuracy as scene complexity grows, and it achieves improved scale robustness, a reduced influence of the background region on retrieval, and a narrowed search scope.

Active Publication Date: 2017-06-09
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

In recent years, visual retrieval technology has been continuously improved, but research on retrieval algorithms for cross-domain images is still scarce.
In 2008, the Second Artillery Equipment Research Institute proposed a region-based matching retrieval algorithm and a feature-based matching retrieval algorithm for cross-domain images produced by different sensors (visible light, infrared, radar), but these two methods apply only to those three specific image domains; their scope of application is limited and they are not suitable for retrieving cross-domain images in complex scenes.
In 2011, a research team at Carnegie Mellon proposed a data-driven cross-domain matching retrieval method that uses machine learning to train and optimize feature vectors, but its single feature-extraction scheme and increasing scene complexity greatly reduce matching accuracy.
Although this method improves retrieval accuracy, interference from a complex background often causes the target to be wrongly retrieved as a background region.
This happens mainly because existing cross-domain retrieval techniques do not take into account the different importance that the target region and the background region of an image have for retrieval.




Embodiment Construction

[0023] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.

[0024] The application principle of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0025] As shown in Figure 1, the cross-domain visual retrieval method based on saliency detection provided by the embodiment of the present invention includes the following steps:

[0026] S101: performing saliency detection on the image, and retaining the subject target area in the image;

[0027] S102: Multi-scale processing is performed on the target image in the database, and a feature template is extracted for the subject target area. Feature extraction and linear classifi...
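
As a concrete illustration of step S101, the sketch below segments an image into SLIC superpixels and scores each one with a simple border-contact proxy for the boundary-connectivity value mentioned in the abstract, keeping only the weakly connected (foreground-like) regions as the subject target area. It is a minimal sketch, not the patented procedure: the scoring formula, SLIC parameters, threshold, and file names are illustrative assumptions.

```python
# Minimal sketch of step S101: keep the subject target area by scoring SLIC
# superpixels with a simple boundary-contact proxy for boundary connectivity.
# The formula, parameters, and file names are illustrative assumptions,
# not the patent's exact definitions.
import numpy as np
from skimage import io, segmentation


def subject_mask(image_rgb, n_segments=300, sigma=0.5, keep_thresh=0.5):
    """Boolean mask covering the superpixels judged to belong to the subject."""
    labels = segmentation.slic(image_rgb, n_segments=n_segments,
                               compactness=10, start_label=0)
    h, w = labels.shape
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = True
    border[:, 0] = border[:, -1] = True

    mask = np.zeros((h, w), dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        # Border contact normalised by region size: large for superpixels
        # glued to the image border, which are treated as background.
        bnd_con = np.logical_and(region, border).sum() / np.sqrt(region.sum())
        saliency = np.exp(-bnd_con ** 2 / (2 * sigma ** 2))  # 1.0 = far from border
        if saliency > keep_thresh:
            mask |= region
    return mask


if __name__ == "__main__":
    img = io.imread("query.jpg")                  # hypothetical RGB input image
    mask = subject_mask(img)
    io.imsave("subject_only.png", (img * mask[..., None]).astype(np.uint8))
```

The patented method assigns graded saliency values to all superpixel regions rather than making a hard keep/drop decision; the hard threshold here only keeps the example short.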



Abstract

The invention discloses a cross-domain visual retrieval method based on saliency detection. The method comprises the following steps: first, the boundary connectivity value of each superpixel region is used to assign a different saliency value to every region, so as to obtain the main object region; multi-scale processing is performed on the target images in the database; features of the main object region are extracted to obtain target-image feature templates; feature extraction is performed on the main object region of the query image and a linear classifier is trained; iterative training over a large number of negative samples then yields an optimized query-image feature template; finally, during retrieval, the region with the highest response score, determined by the matching degree between each target-image feature template and the query-image feature template, is returned as the final retrieval result. Because saliency detection isolates the main region, the influence of the background region on the retrieval result is reduced; the method effectively improves retrieval precision and efficiency in cross-domain visual retrieval and has good robustness.
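
To make the remaining steps of the abstract concrete, the hedged Python sketch below extracts multi-scale HOG feature templates from the (already saliency-masked) subject regions of the database images, trains a linear classifier on the query's subject feature against a pool of negative descriptors, and returns the database image whose template receives the highest response score. HOG features, LinearSVC, and a single round of negative training are illustrative stand-ins; the patent's feature extraction and iterative negative-sample training may differ.

```python
# Hedged sketch of the retrieval loop from the abstract: multi-scale feature
# templates for database subject regions, a linear classifier trained on the
# query's subject feature, and retrieval by highest response score.
# HOG + LinearSVC + one negative-sampling round are illustrative stand-ins.
import numpy as np
from skimage import transform
from skimage.feature import hog
from sklearn.svm import LinearSVC

SCALES = (0.5, 1.0, 2.0)  # assumed multi-scale factors


def template(gray_region, size=(64, 64)):
    """Fixed-length HOG descriptor for one subject region at one scale."""
    patch = transform.resize(gray_region, size, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)


def database_templates(db_gray_regions):
    """One descriptor per (database image, scale) pair, keeping the image index."""
    feats, owners = [], []
    for idx, region in enumerate(db_gray_regions):
        for s in SCALES:
            feats.append(template(transform.rescale(region, s, anti_aliasing=True)))
            owners.append(idx)
    return np.vstack(feats), np.array(owners)


def retrieve(query_gray_region, db_gray_regions, negatives):
    """Index of the database image whose template responds most strongly.

    `negatives` is a 2-D array of HOG descriptors of background patches,
    built with the same template() helper.
    """
    q = template(query_gray_region)
    X = np.vstack([q[None, :], negatives])
    y = np.concatenate([[1], np.zeros(len(negatives))])
    clf = LinearSVC(C=1.0).fit(X, y)            # query-exemplar linear classifier
    feats, owners = database_templates(db_gray_regions)
    scores = clf.decision_function(feats)       # response score of each template
    return owners[int(np.argmax(scores))]
```

Here the classifier is trained in a single pass over a fixed negative pool; the abstract instead describes iterating the training over a large number of negative samples to refine the query-image feature template before matching.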

Description

Technical field
[0001] The invention belongs to the technical field of image processing and computer vision, and in particular relates to a cross-domain visual retrieval method based on saliency detection.
Background technique
[0002] Cross-domain visual retrieval is one of the most promising technologies in the field of computer vision. With the rapid improvement of imaging sensors and the continuous enrichment of their types, the means of acquiring images of the same object are becoming increasingly diverse, and the number of images of every type is growing exponentially. In order to make full use of these digital resources, it is often necessary to match and retrieve cross-domain images of the same object acquired under different imaging conditions or on different carriers. For example: retrieving, from an oil painting of a building, natural photographs of the same building on the Internet; the police need to match the sketches of suspects with the...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30; G06K9/62
CPC: G06F16/5838; G06V10/757
Inventor: 李静, 郝学韬, 李聪聪
Owner: XIDIAN UNIV