
Method for marking picture semantics based on Gauss mixture model

A Gaussian-mixture-model-based semantic annotation technology, applied in the fields of character and pattern recognition, instruments, computer components and the like, which can solve problems such as the semantic gap.

Publication status: Inactive; publication date: 2015-08-05
常熟苏大低碳应用技术研究院有限公司

AI Technical Summary

Problems solved by technology

[0004] However, traditional semantic-based image retrieval relies on the extraction, analysis and matching of the low-level features of the image, and therefore cannot overcome the huge difference between the low-level image features and the expression of high-level semantic concepts, that is, the "semantic gap" problem.



Examples


Embodiment 1

[0095] Example 1: Partial example of the image segmentation, low-level feature extraction, and feature selection modules

[0096] Specific steps:

[0097] 1. Select a semantic concept from the provided image library; each semantic concept includes 200 images.

[0098] 2. Analyze the image colors and quantize them.

[0099] 3. Carry out spatial segmentation on the color-quantized image to obtain sets of similar points of arbitrary shape, that is, the segmented image regions.

[0100] 4. Extract color features: the 3-dimensional main color plus the mean and variance of the H, S, and V components of each image region, giving a 9-dimensional feature vector (see the feature-extraction sketch after this list).

[0101] 5. Extract texture features: the mean and variance of the Gabor wavelet coefficients over 4 scales and 6 orientations, giving a 48-dimensional regional texture feature vector.

[0102] 6. Integrate the image features and store them in the image feature library.

[0103] 7. Sampling the image feature library to genera...
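A minimal sketch of the region feature extraction in steps 4 and 5 is given below, assuming the regions already come from the segmentation in step 3 and using scikit-image for the HSV conversion and Gabor filtering. The function names, the 8-level color quantization, and the Gabor frequency spacing are illustrative assumptions, not values stated in the patent.

```python
# Sketch only: region-level color (9-dim) and texture (48-dim) features,
# assuming scikit-image is available; parameter choices are illustrative.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import gabor

def color_features(region_rgb, bins=8):
    """9-dim color vector: 3-dim main (most frequent quantized) color
    plus the mean and variance of the H, S and V components."""
    pixels = rgb2hsv(region_rgb).reshape(-1, 3)           # H, S, V in [0, 1]
    quantized = np.clip((pixels * bins).astype(int), 0, bins - 1)
    codes = quantized[:, 0] * bins * bins + quantized[:, 1] * bins + quantized[:, 2]
    main_code = np.bincount(codes).argmax()
    main_color = np.array([main_code // (bins * bins),
                           (main_code // bins) % bins,
                           main_code % bins]) / bins       # 3-dim main color
    return np.concatenate([main_color,
                           pixels.mean(axis=0),            # 3-dim mean of H, S, V
                           pixels.var(axis=0)])            # 3-dim variance of H, S, V

def texture_features(region_gray, n_scales=4, n_orientations=6):
    """48-dim texture vector: mean and variance of the Gabor response
    magnitude over 4 scales x 6 orientations."""
    feats = []
    for s in range(n_scales):
        frequency = 0.05 * (2 ** s)                        # assumed scale spacing
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            real, imag = gabor(region_gray, frequency=frequency, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.asarray(feats)                               # 4 * 6 * 2 = 48 dims
```

Each region's 9-dimensional color vector and 48-dimensional texture vector would then be collected into the image feature library, as in step 6.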

Embodiment 2

[0107] Example 2: Partial example of the training process

[0108] The specific steps are shown in Figure 6:

[0109] (1) Obtain the optimized image feature library according to the method of embodiment 1;

[0110] (2) Input the feature parameters extracted in (1) into the trained GMM and compute their likelihoods;

[0111] (3) Compute the weighted sum of the matching probabilities of the feature parameters obtained in (2) to obtain the total matching degree Match;

[0112] (4) Compare the obtained Match with the threshold: if it is greater than the threshold, calculate the contribution rate of each feature; if it is less than the threshold, count it and pass it to the termination check;

[0113] (5) Features whose Match in (4) is below the threshold and which do not meet the termination condition are sent back for a new round of training; if the termination condition is met, training ends (a training-loop sketch follows this list).
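The sketch below illustrates one plausible reading of this training loop, assuming one GMM per feature type (color and texture) for each semantic concept, EM training via scikit-learn's GaussianMixture, equal 0.5/0.5 weights, and dropping below-threshold regions as noise (consistent with the abstract's noise-elimination step). None of these specific choices are stated in the patent.

```python
# Sketch only: per-concept GMM training with EM and a Match-based refit loop;
# weights, threshold, and stopping rule are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_concept_gmms(color_feats, texture_feats, n_components=5, seed=0):
    """Fit one GMM per feature type for a single semantic concept (EM under the hood)."""
    gmm_color = GaussianMixture(n_components=n_components, random_state=seed).fit(color_feats)
    gmm_texture = GaussianMixture(n_components=n_components, random_state=seed).fit(texture_feats)
    return gmm_color, gmm_texture

def match_degree(gmm_color, gmm_texture, color_feat, texture_feat, weights=(0.5, 0.5)):
    """Step (3): weighted sum of the per-feature matching probabilities."""
    p_color = np.exp(gmm_color.score_samples(color_feat.reshape(1, -1))[0])
    p_texture = np.exp(gmm_texture.score_samples(texture_feat.reshape(1, -1))[0])
    return weights[0] * p_color + weights[1] * p_texture

def train_concept(color_feats, texture_feats, threshold=1e-6, max_rounds=10):
    """Steps (4)-(5), one plausible reading: drop below-threshold (noise)
    regions and refit until nothing is dropped or the round limit is hit."""
    for _ in range(max_rounds):
        gmm_color, gmm_texture = fit_concept_gmms(color_feats, texture_feats)
        match = np.array([match_degree(gmm_color, gmm_texture, c, t)
                          for c, t in zip(color_feats, texture_feats)])
        keep = match >= threshold
        if keep.all() or keep.sum() < 10:   # crude termination check; also guards against too few samples
            break
        color_feats, texture_feats = color_feats[keep], texture_feats[keep]
    return gmm_color, gmm_texture
```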

Embodiment 3

[0114] Example 3: Partial example of semantic annotation

[0115] The specific steps are shown in Figure 7;

[0116] (1) Obtain the optimized image feature library according to the method of embodiment 1;

[0117] (2) Input the feature parameters extracted in (1) into the trained GMM matrix to obtain the likelihood corresponding to each feature;

[0118] (3) From the likelihoods obtained in (2), obtain the matching probability for each type of feature;

[0119] (4) The weighted sum of the probabilities of all features is taken as the total matching degree Match;

[0120] (5) Compare the obtained Match with the threshold: if it is greater than the threshold, calculate the contribution rate of each feature; if it is less than the threshold, count it and pass it to the termination check (a concept-scoring sketch follows this list).
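The annotation-time scoring can be sketched as follows, assuming a dictionary mapping each semantic concept to its trained (color, texture) GMM pair from Embodiment 2. The 0.5/0.5 weights and the keep-above-threshold rule are illustrative assumptions rather than values from the patent.

```python
# Sketch only: scoring one region's features against every concept's GMMs
# and turning the per-concept Match values into candidate labels.
import numpy as np

def score_concepts(concept_gmms, color_feat, texture_feat, weights=(0.5, 0.5)):
    """Steps (2)-(4): per-feature likelihoods weighted into a Match per concept."""
    scores = {}
    for concept, (gmm_color, gmm_texture) in concept_gmms.items():
        p_color = np.exp(gmm_color.score_samples(color_feat.reshape(1, -1))[0])
        p_texture = np.exp(gmm_texture.score_samples(texture_feat.reshape(1, -1))[0])
        scores[concept] = weights[0] * p_color + weights[1] * p_texture
    return scores

def candidate_labels(concept_gmms, color_feat, texture_feat, threshold=1e-6):
    """Step (5): keep concepts whose total Match exceeds the threshold."""
    scores = score_concepts(concept_gmms, color_feat, texture_feat)
    return sorted((c for c, m in scores.items() if m > threshold),
                  key=scores.get, reverse=True)
```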


Abstract

The invention discloses a method for marking picture semantics based on a Gauss mixture model, which belongs to the technical field of image retrieval and automatic image annotation. The method comprises the following steps: S1, obtaining the relationship between the low-level visual features of an image and the semantic concepts through supervised Bayesian learning, and obtaining an image feature set; S2, establishing two Gauss mixture models for each semantic concept by means of an expectation-maximization algorithm, and adding a step for eliminating noise regions; and S3, according to the image feature set, calculating the color posterior probability and the pattern posterior probability at the region level, arranging the calculated posterior probabilities of the image with respect to all concepts in descending order to obtain the color ranking value of each concept, similarly arranging the pattern posterior probabilities to obtain the pattern ranking value of each concept, and selecting the concept class with the smallest sum of the first R ranking values to annotate the image. According to the method of the invention, the difference between the low-level visual features of the image and the expression of high-level semantic concepts is remarkably reduced, thereby effectively addressing the semantic gap problem.
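The rank-combination rule in S3 can be sketched as follows, under one plausible reading: each concept receives a descending-order rank for its color posterior and another for its pattern posterior, and the concepts with the smallest rank sums are chosen as the annotation. The posterior dictionaries and the parameter R are assumed inputs, not details given in the abstract.

```python
# Sketch only: combine per-concept color and pattern posterior rankings (step S3);
# `color_post` and `pattern_post` map concept -> posterior probability.
def rank_descending(posteriors):
    """1-based rank of each concept when sorted by posterior, highest first."""
    ordered = sorted(posteriors, key=posteriors.get, reverse=True)
    return {concept: rank for rank, concept in enumerate(ordered, start=1)}

def select_concepts(color_post, pattern_post, R=3):
    """Annotate with the R concepts whose color-rank + pattern-rank sum is smallest."""
    color_rank = rank_descending(color_post)
    pattern_rank = rank_descending(pattern_post)
    total = {c: color_rank[c] + pattern_rank[c] for c in color_post}
    return sorted(total, key=total.get)[:R]
```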

Description

Technical field

[0001] The invention relates to the technical fields of image retrieval and automatic image annotation, in particular to a method for image semantic annotation based on an optimized Gaussian mixture model.

Background technique

[0002] With the rapid growth of digital images and image databases, image retrieval has become an important research direction in the field of information retrieval. Its purpose is to quickly extract the images or image sequences related to a query from an image database, so that users can quickly obtain the specific images they want.

[0003] Automatic image semantic annotation is a key link in semantic-based image retrieval and has become a research hotspot in image retrieval. Automatic annotation of image semantics adds keywords to an image to represent its semantic content, which converts the visual features of the image into annotation-word information and inherits the high efficiency of keyword retriev...


Application Information

IPC(8): G06K9/62
CPC: G06F18/213, G06F18/2163, G06F18/214
Inventor: 张晓俊, 曹毅, 陶智
Owner: 常熟苏大低碳应用技术研究院有限公司