
Multi-class target cooperative positioning method based on graph regularization multi-view feature embedding

A co-localization, multi-view technology applied in image analysis, image data processing, instruments, etc.; it addresses the problem that existing methods cannot provide effective and accurate localization results.

Active Publication Date: 2020-10-30
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

On the one hand, a set of images may contain multiple objects simultaneously, but existing methods tend to locate only the dominant object rather than all of them.
On the other hand, as the number of images or the complexity of image scenes increases, existing localization methods often fail to provide effective and accurate localization results.




Embodiment Construction

[0033] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0034] It should be pointed out that the following detailed description is exemplary and is intended to provide further explanation to the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0035] It should be noted that the terminology used here is only for describing specific implementations, and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular is intended to include the plural; it should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.



Abstract

The invention relates to a multi-class target cooperative positioning (co-localization) method based on graph-regularized multi-view feature embedding. The method comprises the steps of: fusing the co-saliency maps of each input image using graph-regularized multi-view feature embedding, obtaining a fused saliency map with regularization terms; performing coarse segmentation on the fused saliency map using the learned foreground prior knowledge and the GrabCut algorithm; and finally refining the segmentation through the connectivity of the graph to obtain the target co-localization result. The invention can improve the positioning precision of multiple classes of targets.
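The fusion step in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the patent's actual algorithm: it averages per-view RBF affinities into a single shared graph, then takes the bottom eigenvectors of the graph Laplacian as an embedding in which graph-adjacent samples stay close. The function name, the `sigma` bandwidth, and the view-averaging scheme are all assumptions for illustration.

```python
import numpy as np

def graph_regularized_embedding(views, k=2, sigma=1.0):
    """Toy multi-view embedding with a shared graph regularizer.

    views : list of (n, d_i) feature matrices, one per view.
    Returns an (n, k) embedding from the Laplacian's bottom eigenvectors.
    """
    n = views[0].shape[0]
    # Fuse the views by averaging per-view RBF similarity matrices.
    W = np.zeros((n, n))
    for X in views:
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W += np.exp(-d2 / (2.0 * sigma ** 2))
    W /= len(views)
    np.fill_diagonal(W, 0.0)
    # Graph Laplacian L = D - W; minimizing tr(Y^T L Y) keeps
    # graph-adjacent samples close in the embedding.
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = np.linalg.eigh(L)          # eigenvalues ascending
    return vecs[:, 1:k + 1]              # skip the trivial constant vector
```

In the patent's setting the "samples" would be region or superpixel features from several views (e.g. different descriptors), and the embedding would drive the saliency-map fusion; here the sketch only shows the graph-regularization mechanics.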

Description

Technical Field

[0001] The invention relates to the technical field of target positioning in images, and in particular to a multi-class target collaborative positioning method based on graph regularization and multi-view feature embedding.

Background

[0002] Separating objects of interest from the background is a key step in many computer vision tasks, such as object-based image retrieval, image classification, and object recognition. When working with a single image, human annotation and visual saliency are common means of extracting the foreground. However, image annotation is time-consuming and labor-intensive, and when dealing with images with cluttered backgrounds or indistinct foregrounds, the saliency of a single image may yield limited performance gains. Therefore, a variety of methods that process multiple images simultaneously, such as co-saliency, co-segmentation, and co-localization, have emerged in recent years. The common idea of these methods is to compensate for ...
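The final refinement step named in the abstract, refining the segmentation through the connectivity of the graph, can likewise be sketched. This is a toy stand-in, not the patented procedure: it simply drops connected foreground components smaller than a size threshold. `refine_by_connectivity` and `min_size` are invented names for illustration.

```python
import numpy as np
from scipy import ndimage

def refine_by_connectivity(mask, min_size=20):
    """Keep only sufficiently large connected foreground regions.

    mask : 2-D boolean array, True = foreground (e.g. GrabCut output).
    Returns a mask with small, likely spurious components removed.
    """
    labeled, n_components = ndimage.label(mask)  # 4-connectivity by default
    refined = np.zeros_like(mask, dtype=bool)
    for i in range(1, n_components + 1):
        component = labeled == i
        if component.sum() >= min_size:          # keep only large regions
            refined |= component
    return refined
```

In practice such a pass would run after the GrabCut coarse segmentation, cleaning up isolated false-positive pixels before producing the final co-localization boxes.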


Application Information

IPC (8): G06T7/73; G06T7/136; G06T7/194; G06K9/62
CPC: G06T7/73; G06T7/136; G06T7/194; G06F18/25
Inventor: 赵铁松, 暨书逸, 黄爱萍, 陈炜玲, 罗芳
Owner FUZHOU UNIV