
Semantic relationship network-based cross-mode information retrieval method

A technology of semantic association and information retrieval, applied in the field of information retrieval. It addresses the problem that existing content-based search engines do not improve semantic matching, and achieves fast cross-modal retrieval, reduced errors, and improved retrieval accuracy.

Inactive Publication Date: 2010-11-24
WUHAN UNIV
Cites: 3 · Cited by: 32

AI Technical Summary

Problems solved by technology

These search engines match mainly on low-level physical features such as color, texture, and shape. They offer a much richer visual experience than traditional keyword search engines, but they bring no improvement in semantic matching.




Embodiment Construction

[0056] The present invention proposes a cross-modal information retrieval method based on a semantic association network. Its principle is as follows:

[0057] Traditional multimedia search engines mainly build indexes with feature vectorization and vector hashing, and then perform retrieval by vector matching. In cross-modal retrieval, however, data of different modalities differ greatly in structure and characteristics, so their feature vectors have different dimensionalities. Even if dimensionality reduction is used to force the vectors of all modalities to the same length, the meaning of each dimension, and of the feature space as a whole, still differs across modalities, so direct vector matching is meaningless. To achieve cross-modal indexing, this patent therefore uses the previously acquired cross-modal association knowledge to obtain multi-modal data sets that share the same semantics at different granularities...
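As an illustration of the dimensionality problem described in [0057], the following Python sketch shows one way matching can stay within a single modality while the crossover to another modality happens at the semantic-cluster level. The cluster labels, toy feature vectors, and function names are all assumptions for illustration, not the patent's actual data model:

```python
import numpy as np

# Hypothetical semantic clusters: each groups data of different modalities
# that share the same semantics. Note the text and image feature vectors
# deliberately have different dimensionalities (3-D vs 5-D).
clusters = {
    "tiger": {
        "text":  [np.array([0.9, 0.1, 0.0])],
        "image": [np.array([0.2, 0.8, 0.1, 0.7, 0.3])],
    },
    "ocean": {
        "text":  [np.array([0.1, 0.2, 0.9])],
        "image": [np.array([0.7, 0.1, 0.9, 0.2, 0.8])],
    },
}

def nearest_cluster(query, modality):
    """Match the query only against typical vectors of its OWN modality;
    the shared cluster label is what crosses into other modalities."""
    best, best_d = None, float("inf")
    for label, vecs in clusters.items():
        for v in vecs[modality]:
            d = np.linalg.norm(query - v)
            if d < best_d:
                best, best_d = label, d
    return best

def cross_modal_retrieve(query, query_modality, target_modality):
    label = nearest_cluster(query, query_modality)
    return label, clusters[label][target_modality]

# A 3-D text query retrieves 5-D image vectors: dimensions never need to
# match, because matching is intra-modal and crossover is cluster-level.
label, results = cross_modal_retrieve(np.array([0.85, 0.15, 0.05]),
                                      "text", "image")
```

The point of the sketch is that no vector of one modality is ever compared against a vector of another, which is exactly what [0057] argues is meaningless.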



Abstract

The invention relates to the technical field of information retrieval, in particular to a semantic relationship network-based cross-modal information retrieval method. In the method, cross-modal association knowledge is acquired through web-page visual-layout analysis, multimedia search-engine tag-relationship analysis, Deep Web interface-mode analysis, analysis of associations among different-modality data within composite multimedia, use of direct and latent user feedback, and association reasoning, and a cross-modal association network is constructed. Using the acquired cross-modal association knowledge and hierarchical fuzzy clustering, multi-modal data sets with the same semantics at different granularities are obtained. From each semantic cluster (SC), typical vectors of the different modalities are selected, corresponding semantic vector packets are built, and mapping relations are established among the SCs, the typical vectors, and the corresponding semantic vector packets. The method can reduce possible errors in each channel, effectively improve retrieval accuracy, support cross-modal retrieval at user-defined semantic granularities, and support retrieval using multi-modal data files as samples.
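The abstract's hierarchical fuzzy clustering step can be illustrated with a minimal fuzzy c-means sketch in plain NumPy. The function name, parameters, and toy data are assumptions for illustration, not the patent's implementation; the fractional memberships are what let one data item participate in several semantic clusters:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns the membership matrix U (n x c)
    and the cluster centers. Each point belongs to every cluster with a
    fractional degree, rather than to exactly one."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        um = U ** m
        # Weighted centers, then standard FCM membership update.
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two tight groups in 2-D: memberships should split cleanly.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, centers = fuzzy_c_means(X, c=2)
```

Running the clustering at several cluster counts `c` (or cutting a cluster hierarchy at several levels) is one plausible reading of how "different granularities" of semantic clusters could be produced.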

Description

Technical Field [0001] The invention relates to the technical field of information retrieval, in particular to a cross-modal information retrieval method based on a semantic association network. Background Art [0002] According to 2009 statistics from Radio Network, mainstream domestic websites update about 310 GB of text, pictures, and videos per day. Forbes reported that the total of humanity's written records over 5,000 years amounts to 5 EB, while in 2009 alone the digital content generated worldwide exceeded 450 EB, of which multimedia data accounted for a considerable proportion. The "information explosion" is intensifying. Applications such as Facebook, Twitter, and microblogs drive the exponential growth of new information, but the current Internet still cannot resolve the contradiction between excessive information growth and accurate information positioning, especially for multimedia information; even Google, Baidu, and...

Claims


Application Information

IPC(8): G06F17/30
Inventor: 曾承 (Zeng Cheng)
Owner: WUHAN UNIV