
Labelling image scene clustering method based on vision and labelling character related information

A technology relating to tag words and images, applied in the field of image processing; it addresses problems such as the limitations of clustering on tag information alone, and achieves the effect of alleviating tag sparsity

Inactive Publication Date: 2014-03-26
HARBIN ENG UNIV


Problems solved by technology

However, the inherent ambiguity of tag words (e.g., polysemy and synonymy) also limits the effectiveness of image clustering that relies only on image tag information.


Embodiment Construction

[0020] The present invention is described in more detail below with reference to the accompanying drawings:

[0021] Step 1: Use the NCut (Normalized Cut) image segmentation algorithm to segment the training images (annotated images used for learning) and the test images respectively, obtaining a visual description of each image region.
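The NCut step can be approximated with off-the-shelf spectral clustering. A minimal sketch on a synthetic image, using scikit-learn's SpectralClustering as a stand-in for the patent's NCut implementation (the feature scaling below is an illustrative choice, not from the patent):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_regions(image, n_regions=2):
    """Cluster pixels into regions via spectral clustering,
    a stand-in for the NCut segmentation named in Step 1."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel feature: scaled (row, col) plus intensity; the
    # intensity term dominates, so regions follow brightness.
    feats = np.column_stack([0.1 * ys.ravel(),
                             0.1 * xs.ravel(),
                             5.0 * image.ravel()])
    labels = SpectralClustering(n_clusters=n_regions,
                                random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

# Tiny synthetic image: bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
regions = segment_regions(img)
```

Each region found this way would then be summarized by visual features (color, texture) to serve as the "visual description" of the image area.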

[0022] Step 2: Construct the visual nearest-neighbor graph G(V, E) over all images {J1, …, Jl} ⊂ Ctrain used for learning. Each vertex in V corresponds to one image, and the edge set E represents visual distances between images. For the visual distance we use a similarity measure based on multi-region integrated matching, the Earth Mover's Distance (EMD); the weight on the edge connecting two vertices is the EMD visual distance between the corresponding images.
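A sketch of the graph construction, assuming each image is summarized by a set of 1-D region feature values and using SciPy's 1-D Wasserstein distance in place of the patent's multi-region EMD matching (the function and variable names here are illustrative):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def build_visual_graph(image_features, k=2):
    """k-nearest-neighbour graph over images; edge weights are
    1-D Wasserstein (EMD) distances between the images' region
    feature samples.  A simplification of the patent's
    multi-region integrated matching."""
    n = len(image_features)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = wasserstein_distance(image_features[i], image_features[j])
            dist[i, j] = dist[j, i] = d
    edges = {}
    for i in range(n):
        # The k visually nearest neighbours of image i (index 0 of
        # the argsort is i itself at distance 0 -- skip it).
        for j in np.argsort(dist[i])[1:k + 1]:
            edges[tuple(sorted((i, int(j))))] = dist[i, int(j)]
    return edges

feats = [np.random.RandomState(s).rand(8) for s in range(5)]
graph = build_visual_graph(feats, k=2)
```

Restricting edges to the k nearest neighbours keeps later tag propagation local, so only visually similar images exchange tag information.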

[0023] Step 3: In the training image set, each image has an initial normalized tag-word weight vector. The normalization method of the weight vecto...
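The normalization itself is truncated in the source text; a minimal sketch assuming L1 normalization (weights divided by their sum), a common choice for tag-weight vectors:

```python
import numpy as np

def normalize_tag_weights(tag_counts):
    """L1-normalise a tag-word weight vector so the weights sum
    to 1.  The patent's exact normalisation is cut off in the
    source text; L1 is an assumption here."""
    v = np.asarray(tag_counts, dtype=float)
    s = v.sum()
    return v / s if s > 0 else v

w = normalize_tag_weights([2, 1, 1])  # -> [0.5, 0.25, 0.25]
```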



Abstract

The invention provides a labelling image scene clustering method based on vision and labelling character related information. The method comprises the following steps of: segmenting the training images and the test images respectively by using the NCut (Normalized Cut) image segmentation algorithm; constructing a vision nearest-neighbour graph G(V, E) of all images {J1, …, Jl} ⊂ Ctrain used for learning, wherein in the training image set each image has one group of initial normalized labelling character weight vectors; spreading the labelling characters of each training image among its vision nearest neighbours, with each receiving image weighting the received labelling characters according to the normalized EMD (Earth Mover's Distance) between the images; for each training image, normalizing the accumulated labelling character weights; after the vision characteristics of the images are converted into groups of weighted labelling characters, carrying out scene semantic clustering by using a PLSA (Probabilistic Latent Semantic Analysis) model; learning the vision space of each scene semantic by using a Gaussian mixture model; and carrying out scene classification by using the vision characteristics. With the invention, the coupling precision between the vision characteristics of an image and its labelling characters can be increased, and the method can be directly used for automatic semantic labelling of images.
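The propagation and re-normalization steps in the abstract can be sketched as one round of neighbour-to-neighbour tag spreading. The similarity kernel 1/(1+EMD) below is an assumption, since the abstract only says received weights follow the normalized EMD:

```python
import numpy as np

def propagate_tags(tag_vectors, neighbors, emd):
    """One round of tag propagation: each image accumulates its
    visual neighbours' tag weights, scaled by a similarity derived
    from EMD distance, then re-normalises.  Sketch of the
    abstract's propagation step; the exact kernel is assumed."""
    out = tag_vectors.copy()
    for i in range(len(tag_vectors)):
        for j in neighbors[i]:
            sim = 1.0 / (1.0 + emd[i][j])  # smaller EMD -> larger weight
            out[i] += sim * tag_vectors[j]
        out[i] /= out[i].sum()             # re-normalise accumulated weights
    return out

# Three images, two tag words; image 1 is visually between 0 and 2.
tags = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
emd = [[0.0, 0.2, 0.9], [0.2, 0.0, 0.3], [0.9, 0.3, 0.0]]
new_tags = propagate_tags(tags, nbrs, emd)
```

After propagation, each image's weighted tag vector feeds the PLSA clustering, and a Gaussian mixture model is fit per scene cluster for classification by visual features alone.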

Description

Technical field

[0001] The invention relates to an image processing method; specifically, a method for the automatic scene classification of images to be analyzed.

Background technique

[0002] In fields of image understanding such as automatic semantic annotation of images, classifying non-annotated images by visual features requires that the proposed semantic scene categories be consistent in their visual distribution. On the one hand, the semantic content an image can express is very rich, and an image placed in different contexts may present different levels of information. On the other hand, owing to their limited descriptive ability, the visual features of images carry obvious semantic ambiguity, and visually similar images cannot guarantee consistent semantic content.

[0003] As a concise and efficient way to describe the high-level semantic content of images, image annotations provide a large number of reliable learning sam...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/66
Inventor: 刘咏梅
Owner: HARBIN ENG UNIV