Cross-media retrieval method based on uniform sparse representation

A cross-media retrieval technology based on sparse representation, applied in network data retrieval, other database retrieval, network data indexing, etc. It addresses the problem that existing methods do not consider sparsity to reduce noise in cross-media data, and achieves enhanced analysis and mining capability, high cross-media retrieval accuracy, and improved effectiveness of the unified feature representation.

Inactive Publication Date: 2014-11-26
PEKING UNIV

AI Technical Summary

Problems solved by technology

However, this method only considers the relationship between two media types during the learning process and does not use sparsity to reduce noise in the cross-media data; moreover, its two learning steps are carried out independently.




Embodiment Construction

[0022] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0023] A cross-media retrieval method based on unified sparse representation of the present invention, the flow of which is shown in Figure 1, specifically includes the following steps:

[0024] (1) Establish a cross-media database containing multiple media types, divide the database into a training set and a test set, and extract the feature vector of each media type data.
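Step (1) above can be sketched as follows. This is a minimal illustration of organizing a cross-media database and dividing it into training and test sets; the media-type names follow the embodiment, but the field layout, split ratio, and function names are assumptions for illustration only, not taken from the patent.

```python
# Hypothetical sketch of step (1): split each media type's samples into
# training and test sets. The 80/20 ratio is an illustrative assumption.
import random

MEDIA_TYPES = ["text", "image", "video", "audio", "3d"]

def split_database(database, train_ratio=0.8, seed=0):
    """Split each media type's sample list into training and test sets."""
    rng = random.Random(seed)
    train, test = {}, {}
    for media in MEDIA_TYPES:
        samples = list(database.get(media, []))
        rng.shuffle(samples)
        cut = int(len(samples) * train_ratio)
        train[media], test[media] = samples[:cut], samples[cut:]
    return train, test

# Usage: 10 samples per media type -> 8 training / 2 test for each type.
db = {m: [f"{m}_{i}" for i in range(10)] for m in MEDIA_TYPES}
train, test = split_database(db)
```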

[0025] In this embodiment, the multiple media types are five media types, including text, image, video, audio and 3D.

[0026] For text data, extract its latent Dirichlet allocation (LDA) feature vector; for image data, extract its bag-of-words feature vector; for video data, extract its bag-of-words feature vector; for audio data, extract its Mel-frequency cepstral coefficient (MFCC) feature vector; for 3D data, extract its light field feature vector. The method of...
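The per-media extractors named in paragraph [0026] (LDA for text, bag-of-words for image and video, MFCC for audio, light field for 3D) are heavyweight in practice; the sketch below only shows the dispatch structure, stubbing each extractor as a fixed-dimension vector derived deterministically from the input. All dimensions and function names are illustrative assumptions, not the patent's.

```python
# Hypothetical sketch: dispatch raw data to a per-media feature extractor.
# Each extractor is stubbed; real systems would call LDA, bag-of-words,
# MFCC, or light-field code here. Dimensions are illustrative guesses.
import numpy as np

FEATURE_DIMS = {"text": 100, "image": 500, "video": 500, "audio": 39, "3d": 256}

def extract_features(media_type, raw_data):
    """Return a feature vector of the dimension assigned to this media type."""
    dim = FEATURE_DIMS[media_type]
    # Stand-in extractor: seed a generator from the input bytes so the
    # pseudo-feature vector is deterministic for a given input.
    rng = np.random.default_rng(sum(str(raw_data).encode()))
    return rng.random(dim)

vec = extract_features("audio", "clip_001.wav")
```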



Abstract

The invention relates to a cross-media retrieval method based on uniform sparse representation. The method comprises the following steps: a cross-media database comprising a plurality of media types is established, and a feature vector is extracted for each kind of media data; a feature mapping matrix for cross-media uniform sparse representation is learned for each media type, taking into account the correlations among all the different media types as well as the sparsity of the cross-media data in the mapped space, while keeping the sample distribution of the feature space approximately unchanged by the mapping; the probability that two media data items belong to the same category serves as the similarity between different media; the similarity between the media data in the query samples and the query target set is calculated, and the most similar cross-media retrieval results, covering all the media types, are output. The method fully considers the correlations, the sparsity, and the sample distribution of the cross-media data, which improves the effectiveness of the uniform feature representation and thereby the accuracy of cross-media retrieval.
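The retrieval flow the abstract describes can be sketched as follows. Note the patent learns one mapping matrix per media type jointly, with sparsity and distribution-preserving constraints; in this sketch the matrices are random stand-ins and sparsity is imposed by simple soft-thresholding, purely to show how heterogeneous media are ranked in a shared sparse space. All names and dimensions are assumptions.

```python
# Illustrative sketch of cross-media retrieval in a unified sparse space.
# Mapping matrices here are random stand-ins for the learned ones.
import numpy as np

def soft_threshold(x, lam=0.1):
    """Shrink small coefficients to zero to obtain a sparse code."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def to_unified(features, mapping):
    """Project media-specific features into the shared sparse space."""
    return soft_threshold(mapping @ features)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(a @ b) / denom

def retrieve(query_vec, query_map, targets, k=3):
    """Rank target items (of any media type) by similarity to the query."""
    q = to_unified(query_vec, query_map)
    scored = [(cosine(q, to_unified(f, m)), name) for name, f, m in targets]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# Usage: query with an image, retrieve the most similar text items.
rng = np.random.default_rng(0)
d_shared, d_img, d_txt = 64, 500, 100
P_img = rng.standard_normal((d_shared, d_img)) * 0.05  # stand-in matrices
P_txt = rng.standard_normal((d_shared, d_txt)) * 0.05
query = rng.random(d_img)
targets = [(f"text_{i}", rng.random(d_txt), P_txt) for i in range(5)]
top = retrieve(query, P_img, targets, k=3)
```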

Description

Technical field

[0001] The invention relates to the technical field of multimedia retrieval, in particular to a cross-media retrieval method based on unified sparse representation.

Background technique

[0002] With the advent of the era of big data, multimedia data on the Internet has grown rapidly, including text, image, video, audio and other media data. However, existing search engines such as Google and Baidu still rely on keyword-based retrieval. On the one hand, this retrieval method ignores the content of multimedia data such as images, videos, and audio; on the other hand, when there is no text surrounding the multimedia data, the search cannot be performed. Although some research work focuses on content-based single-media retrieval, such as image search, it cannot support content-based cross-media retrieval, such as using an image sample to retrieve all relevant media data, including not only related images but also text, video, audio, 3D, etc. This ret...


Application Information

IPC(8): G06F17/30
CPC: G06F16/903; G06F16/951
Inventor: 翟晓华 (Xiaohua Zhai), 彭宇新 (Yuxin Peng), 肖建国 (Jianguo Xiao)
Owner PEKING UNIV