Dictionary learning method, visual word bag characteristic extracting method and retrieval system

A dictionary learning and visual bag-of-words (BoW) technology, applied in the field of electronic digital data processing, special data processing applications, instruments, etc. It addresses the problems that the processor performance, memory and power resources of mobile terminals are limited, that the vocabulary occupies a large amount of memory, and that descriptor discrimination is reduced, and achieves the effects of shortening training time and feature extraction time, reducing memory usage, and improving retrieval performance.

Active Publication Date: 2014-09-10
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0006] (3) The processor performance, memory and power resources of a mobile terminal are limited, so it is necessary to study feature extraction and representation algorithms suited to the mobile terminal in order to meet the requirements on memory occupation, processing speed and accuracy in practical applications.
At the same time, because K-Means clustering tends to concentrate on regions of high data density, regions that carry distinctive features but contain few samples are merged, which greatly reduces the discrimination of the descriptors.
[0014] To sum up, although the large-vocabulary BoW has achieved great success in PC-based visual retrieval, none of the current large-vocabulary BoW methods can be applied to mobile terminals with limited computing resources. The biggest obstacle is that the vocabulary occupies too much memory.
With existing large-vocabulary BoW methods, a 1M-word vocabulary (1M×128) occupies up to 512 MB of memory; even if a mobile phone's memory could hold such a large vocabulary, loading it into memory and computing with it would still be a serious problem.
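For reference, the 512 MB figure is consistent with storing 128-dimensional descriptors as 4-byte floats; a minimal sketch of the arithmetic, assuming "1M" denotes 2^20 visual words:

```python
# Memory footprint of a flat "1M x 128" visual vocabulary,
# assuming float32 (4-byte) storage -- an assumption consistent
# with the 512 MB figure quoted above.
words = 2**20            # "1M" visual words
dims = 128               # descriptor dimensionality (e.g. SIFT)
bytes_per_value = 4      # float32

total_bytes = words * dims * bytes_per_value
print(total_bytes // 2**20, "MB")   # -> 512 MB
```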




Embodiment Construction

[0054] According to an embodiment of the present invention, a high-dimensional visual bag-of-words feature representation method based on segmented sparse coding is proposed. Visual bag-of-words feature representation refers to using vector quantization to map the high-dimensional local feature vectors of an image to visual keywords in a large-vocabulary BoW, thereby reducing terminal-to-server transmission traffic, network delay, and server-side feature storage. The high-dimensional visual bag-of-words feature quantization method of this embodiment pioneers the idea of a "small code table, large vocabulary", which greatly reduces the memory occupation and the time consumed by feature quantization on the terminal, and solves the problem that existing methods cannot be used on mobile terminals because they occupy too much memory, making it possible for BoW to be widely used on mobile terminals.
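To illustrate the "small code table, large vocabulary" idea, the following sketch quantizes each half of a descriptor against its own small dictionary and combines the two segment codes into an id in an implicit k×k vocabulary. It is a simplified illustration under stated assumptions (hard nearest-atom assignment, toy random dictionaries), not the claimed method: the patent itself describes segment-wise sparse coding with dictionaries learned by sparse NMF.

```python
import numpy as np

def quantize_segmented(descriptor, dict1, dict2):
    """Map a 128-d descriptor to a word id in an implicit k*k vocabulary."""
    k = dict1.shape[0]
    seg1, seg2 = descriptor[:64], descriptor[64:]
    i = np.argmin(np.linalg.norm(dict1 - seg1, axis=1))  # code for the first segment
    j = np.argmin(np.linalg.norm(dict2 - seg2, axis=1))  # code for the second segment
    return i * k + j                                     # combined visual-word id

k = 1024                                         # atoms per segment dictionary
rng = np.random.default_rng(0)
dict1 = rng.random((k, 64)).astype(np.float32)   # toy stand-ins for learned dictionaries
dict2 = rng.random((k, 64)).astype(np.float32)
desc = rng.random(128).astype(np.float32)

word = quantize_segmented(desc, dict1, dict2)
print(word)  # id in an implicit vocabulary of k*k (~1M) words
# Memory held on the terminal: 2 * k * 64 * 4 bytes = 512 KB, instead of ~512 MB
# for an explicit 1M x 128 vocabulary.
```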



Abstract

The invention provides a dictionary learning method. The dictionary learning method includes: 1) dividing the local feature vectors of images into first segments and second segments on the basis of dimensionality; 2) building a first data matrix from the first segments of a plurality of local feature vectors, and building a second data matrix from the second segments of the plurality of local feature vectors; 3) subjecting the first data matrix to sparse non-negative matrix factorization to obtain a first dictionary for sparsely coding the first segments of the local feature vectors, and subjecting the second data matrix to sparse non-negative matrix factorization to obtain a second dictionary for sparsely coding the second segments of the local feature vectors. The invention further provides a visual bag-of-words feature extraction method that sparsely represents the local feature vectors of images segment by segment on the basis of these dictionaries, and a corresponding retrieval system. Memory usage is greatly reduced, vocabulary training time and feature extraction time are shortened, and the method is particularly suitable for mobile terminals.
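The segment-wise dictionary learning step described in the abstract can be sketched as follows, under simplifying assumptions: descriptors are split into two 64-dimensional halves and each half is factorized with plain multiplicative-update NMF; the sparsity penalty that the patent specifies ("sparse non-negative matrix factorization") is omitted here for brevity, and the data is a random non-negative stand-in for real SIFT descriptors.

```python
import numpy as np

def nmf(V, k, iters=100, eps=1e-9):
    """Factor V (n x d) ~= H @ W, returning the dictionary W (k x d)."""
    n, d = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((k, d)) + eps      # dictionary (small code table)
    H = rng.random((n, k)) + eps      # coding coefficients
    for _ in range(iters):            # Lee-Seung multiplicative updates
        H *= (V @ W.T) / (H @ W @ W.T + eps)
        W *= (H.T @ V) / (H.T @ H @ W + eps)
    return W

descriptors = np.random.default_rng(1).random((5000, 128))  # toy non-negative descriptors
V1, V2 = descriptors[:, :64], descriptors[:, 64:]           # first / second segments
dict1 = nmf(V1, k=256)   # first dictionary, codes the first segments
dict2 = nmf(V2, k=256)   # second dictionary, codes the second segments
print(dict1.shape, dict2.shape)   # (256, 64) (256, 64)
```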

Description

Technical field

[0001] The present invention relates to the technical field of multimedia content analysis and retrieval. Specifically, the present invention relates to a dictionary learning method, a visual bag-of-words feature extraction method, and a retrieval system.

Background technique

[0002] Simply put, visual search is "searching for pictures with pictures". To realize visual search, features must first be extracted from a large-scale image library to build an index library. When a user searches, features are extracted from the query image, the feature index library is searched quickly, the results are sorted by relevance (i.e. similarity), and returned. The result is a ranked list of images from the library, where each result image may carry information related to the query image, combined with user characteristics and the search scenario. At present, traditional PC-oriented visual search has accumulated a large number of algorithms and technical solutions to choose from.
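The search loop described above can be sketched roughly as follows, under simplifying assumptions: each library image is already represented as a BoW histogram over a shared vocabulary, and relevance is plain cosine similarity (practical systems typically add tf-idf weighting and an inverted index for speed).

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two BoW histograms
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def search(query_hist, index):
    """Return library image ids sorted by decreasing similarity to the query."""
    scores = {img_id: cosine(query_hist, hist) for img_id, hist in index.items()}
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(2)
vocab_size = 1000
index = {f"img_{i}": rng.random(vocab_size) for i in range(5)}   # offline-built index
query = rng.random(vocab_size)                                   # BoW of the query image
print(search(query, index))   # ranked result list, most similar first
```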


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30
CPC: G06F16/583
Inventors: 唐胜, 张勇东, 李锦涛, 徐作新
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI