An Online Cross-Modal Retrieval Method and System for Streaming Data

A cross-modal retrieval technology for streaming data, applied to still image data retrieval, digital data information retrieval, metadata still image retrieval, etc. It addresses the problems of unguaranteed performance and loss of data information, with the effects of saving computation, facilitating dynamic changes, and ensuring retrieval accuracy.

Active Publication Date: 2022-04-01
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

However, using low-bit hash codes leads to severe loss of data information, and most existing online cross-modal retrieval methods cannot guarantee good performance.



Examples


Embodiment 1

[0048] This embodiment discloses an online cross-modal retrieval method for streaming data, as shown in Figure 1, comprising the following steps:

[0049] Step 1: Obtain the data to be queried, and map it with the pre-trained hash function to obtain the corresponding hash code; the data to be queried is an image or a text;

[0050] Step 2: Obtain the retrieval result by comparing the hash code of the data to be queried with the hash codes of the samples in the database.
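
As an illustration of the hash-code comparison in Step 2, the sketch below ranks database samples by Hamming distance to the query code. It assumes ±1-valued codes stored as NumPy arrays; the function name `hamming_rank` and the 64-bit code length are hypothetical, not taken from the patent.

```python
import numpy as np

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Rank database samples by Hamming distance to the query hash code.

    query_code: shape (r,), entries in {-1, +1}
    db_codes:   shape (n, r), entries in {-1, +1}
    Returns indices of the top_k closest database samples.
    """
    # For ±1 codes, Hamming distance = (r - <q, b>) / 2, so ranking by the
    # inner product is equivalent to ranking by Hamming distance.
    similarities = db_codes @ query_code       # shape (n,)
    return np.argsort(-similarities)[:top_k]   # most similar first

# Toy usage: a 64-bit query code against 1000 random database codes.
rng = np.random.default_rng(0)
db = np.sign(rng.standard_normal((1000, 64)))
q = np.sign(rng.standard_normal(64))
print(hamming_rank(q, db, top_k=5))
```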

[0051] Wherein, as shown in Figure 2, the training method of the hash function comprises:

[0052] S1: Acquire data and divide it into training data and test data. The training data includes pairs of image and text data. Since the network resources available for retrieval (such as image and text data) are continuously updated in the form of data streams, in order to adapt to online retrieval tasks, the training data is divided into multiple rounds, which are used to simulate the arrival of streaming da...
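
As a rough illustration of the round-wise setup in S1, the sketch below splits paired image/text training data into consecutive chunks to simulate a data stream and processes them one round at a time. The least-squares refit inside the loop is only a placeholder for the patent's actual hash-learning step, and all function names are assumptions.

```python
import numpy as np

def split_into_rounds(image_feats, text_feats, labels, num_rounds):
    """Split paired image/text data into consecutive rounds (chunks)
    to simulate the chunk-by-chunk arrival of streaming data."""
    chunks = np.array_split(np.arange(len(labels)), num_rounds)
    return [(image_feats[i], text_feats[i], labels[i]) for i in chunks]

def train_online(rounds, code_length=32):
    """Process each round in turn; after every round the (placeholder)
    linear hash functions W_img / W_txt are refit on the new chunk."""
    W_img = W_txt = None
    for img_chunk, txt_chunk, _labels in rounds:
        # Placeholder: draw hash codes for the current chunk and fit the
        # hash functions by least squares (illustrative only).
        codes = np.sign(np.random.standard_normal((len(img_chunk), code_length)))
        W_img, *_ = np.linalg.lstsq(img_chunk, codes, rcond=None)
        W_txt, *_ = np.linalg.lstsq(txt_chunk, codes, rcond=None)
    return W_img, W_txt  # new samples are encoded as sign(x @ W)
```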

Embodiment 2

[0134] The purpose of this embodiment is to provide an online cross-modal retrieval system for streaming data, including:

[0135] A hash mapping module, configured to obtain the data to be queried and map it with the pre-trained hash function to obtain the corresponding hash code, where the data to be queried is an image or a text;

[0136] A cross-modal retrieval module, configured to obtain a retrieval result by comparing the hash code of the data to be queried with the hash codes of the samples in the database;

[0137] Wherein, the training method of the hash function comprises:

[0138] Obtain training data comprising pairs of images and text, and divide the training data into rounds;

[0139] Starting from the first round, hash code learning is performed on the training data of each round in turn to obtain the corresponding hash function.

[0140] The steps involved in the above second embodiment correspond to the first method embodiment, and for the specifi...
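
A minimal sketch of how the two modules of this embodiment could be organized, assuming linear hash functions of the form sign(x·W); the class and method names are illustrative, not taken from the patent.

```python
import numpy as np

class HashMappingModule:
    """Maps a query feature vector (image or text) to a binary hash code
    using a pre-trained linear hash function (illustrative form)."""
    def __init__(self, W_img: np.ndarray, W_txt: np.ndarray):
        self.W = {"image": W_img, "text": W_txt}

    def encode(self, feature: np.ndarray, modality: str) -> np.ndarray:
        return np.sign(feature @ self.W[modality])

class CrossModalRetrievalModule:
    """Compares the query hash code with the database hash codes and
    returns the indices of the closest samples."""
    def __init__(self, db_codes: np.ndarray):
        self.db_codes = db_codes

    def retrieve(self, query_code: np.ndarray, top_k: int = 10) -> np.ndarray:
        return np.argsort(-(self.db_codes @ query_code))[:top_k]
```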



Abstract

The invention belongs to the technical field of large-scale streaming data retrieval, and provides an online cross-modal retrieval method and system for streaming data. The method includes the following steps: obtaining the data to be queried, and mapping it with a pre-trained hash function to obtain the corresponding hash code, where the data to be queried is an image or a text; and obtaining the retrieval result by comparing the hash code of the data to be queried with the hash codes of the samples in the database. The training method of the hash function includes: obtaining training data including paired images and text, and dividing the training data into rounds; and, starting from the first round, performing hash code learning on each round of training data in turn to obtain the corresponding hash function. The present invention divides the training data into rounds and learns hash codes sequentially, which better suits the requirements of cross-modal retrieval of online streaming data.

Description

technical field

[0001] The invention belongs to the technical field of large-scale streaming data retrieval, and in particular relates to an online cross-modal retrieval method and system for streaming data.

Background technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] With the explosive growth of data composed of multiple heterogeneous modalities, datasets become larger and larger, and it is unrealistic to load all the data into memory, which poses higher requirements for online cross-modal retrieval methods. Although efficient search of dynamic image databases has been achieved by online hashing methods, the resource consumption of the model is a concern as the dataset continues to grow. Although hashing methods map high-dimensional data into binary strings, which can greatly reduce the consumption of storage resources, the dimension o...
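
To make the storage argument concrete, here is a back-of-the-envelope comparison; the feature dimension, code length, and collection size are illustrative assumptions, not figures from the patent.

```python
# Illustrative only: storage for real-valued features vs. binary hash codes.
n_samples = 1_000_000
float_feature_bytes = 4096 * 4      # 4096-dim float32 feature per sample
hash_code_bytes = 64 // 8           # 64-bit binary hash code per sample

print(f"raw features: {n_samples * float_feature_bytes / 1e9:.1f} GB")  # ~16.4 GB
print(f"hash codes:   {n_samples * hash_code_bytes / 1e6:.1f} MB")      # ~8.0 MB
```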

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/31; G06F16/33; G06F16/383; G06F16/51; G06F16/58; G06F16/583
CPC: G06F16/325; G06F16/3347; G06F16/383; G06F16/51; G06F16/583; G06F16/5866
Inventor: 罗昕, 宋佩璇, 詹雨薇, 许信顺
Owner: SHANDONG UNIV