A Large-Scale Image Retrieval Method Based on Deep Strong Correlational Hashing Learning

An image retrieval technology based on strong correlation hash learning, applied in the field of image processing, which solves the problems that existing methods increase computational overhead and are unsuitable for large-scale image retrieval, and achieves the effects of efficient large-scale image retrieval, fast computation, and prevention of overfitting.

Active Publication Date: 2022-06-21
KUNMING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Such methods inevitably increase the computational overhead and are not suitable for large-scale image retrieval.


Examples


Embodiment 1

[0035] Embodiment 1: As shown in Figures 1-4, a large-scale image retrieval method based on deep strong correlation hash learning comprises the following specific steps:

[0036] Step 1: Extract data from the image data set to form the training image data, and perform preprocessing operations on the images. The input image passes through the convolutional sub-network, which maps the image information into the feature space to obtain a local feature representation;
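The patent does not publish source code; as a minimal illustrative sketch of Step 1 in PyTorch, a convolutional sub-network of this kind could look as follows (layer sizes and names are hypothetical, not taken from the patent):

```python
import torch
import torch.nn as nn

class ConvSubNetwork(nn.Module):
    """Hypothetical convolutional sub-network for Step 1: maps a
    preprocessed input image into the feature space, producing a
    local feature representation."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):
        # x: batch of preprocessed training images, e.g. (B, 3, 224, 224)
        return self.features(x)  # local feature representation

# Example: 8 preprocessed 224x224 RGB images -> (8, 128, 56, 56) features
local_features = ConvSubNetwork()(torch.randn(8, 3, 224, 224))
```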

[0037] Step 2: The fully connected layer then maps the local feature representation obtained from the previous layer into the sample label space, after which the hash layer performs dimensionality reduction and hash coding;
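Continuing the sketch, Step 2 might be realized with a fully connected layer followed by a hash layer; the dimensions below (10 classes, 48-bit codes) are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Hypothetical Step-2 head: a fully connected layer maps the
    flattened local features into the sample label space, then the
    hash layer reduces the dimension to N bits."""
    def __init__(self, in_dim, num_classes, n_bits):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)       # label-space mapping
        self.hashing = nn.Linear(num_classes, n_bits)  # hash layer, N bits

    def forward(self, feats):
        z = self.fc(feats.flatten(1))
        h = torch.tanh(self.hashing(z))  # relaxed codes in (-1, 1)
        return h, torch.sign(h)          # binary hash codes for retrieval

head = HashHead(in_dim=128 * 56 * 56, num_classes=10, n_bits=48)
h, codes = head(torch.randn(8, 128, 56, 56))  # codes: (8, 48) in {-1, +1}
```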

[0038] Step 3: The strong correlation loss layer then uses the strong correlation loss function to calculate the loss value of the current iteration; finally, the loss value is returned and the network parameters are updated a...
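The exact form of the strong correlation loss function is not reproduced in this excerpt. The stand-in below only mirrors the stated effect (increase inter-class distance, reduce intra-class distance) with a generic contrastive-style pairwise loss, and shows how a returned loss value drives the parameter update:

```python
import torch
import torch.nn.functional as F

def correlation_style_loss(h, labels, margin=2.0):
    """Stand-in for the strong correlation loss (exact formula not given
    in this excerpt): pulls same-class relaxed codes together and pushes
    different-class codes at least `margin` apart."""
    d = torch.cdist(h, h)                              # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    intra = (d[same] ** 2).mean()                      # reduce intra-class distance
    inter = (F.relu(margin - d[~same]) ** 2).mean()    # increase inter-class distance
    return intra + inter

# One illustrative iteration: compute the loss and backpropagate it.
h = torch.randn(8, 48, requires_grad=True)   # relaxed codes from the hash layer
labels = torch.randint(0, 10, (8,))
loss = correlation_style_loss(h, labels)
loss.backward()  # gradients flow back to update the network parameters
```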

Embodiment 2

[0063] Embodiment 2: As shown in Figures 1-4, a large-scale image retrieval method based on deep strong correlation hash learning comprises the following specific steps:

[0064] This embodiment is the same as Embodiment 1, except that:

[0065] The model trained in Step 3 of this embodiment uses AlexNet; the deep strong correlation hash learning method is applied to AlexNet to obtain a deep strong correlation hash model.
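A minimal sketch of how the hash layer might be grafted onto AlexNet with torchvision; the placement and the 48-bit value are assumptions, since the actual configuration is given by Table 1 in the patent:

```python
import torch.nn as nn
from torchvision import models

def alexnet_hash_model(n_bits=48):
    """Sketch of Embodiment 2: replace AlexNet's final classifier layer
    with a hash layer of N = n_bits outputs (placement is an assumption;
    the patent's Table 1 gives the actual configuration)."""
    net = models.alexnet(weights=None)
    in_dim = net.classifier[6].in_features  # 4096
    net.classifier[6] = nn.Sequential(
        nn.Linear(in_dim, n_bits),  # hash layer ("Hashing" in Table 1)
        nn.Tanh(),                  # smooth relaxation of sign()
    )
    return net
```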

[0066] In Steps 1 and 2, the configurations of the convolutional sub-network, the fully connected layer, and the hash layer are shown in Table 1, where Hashing denotes the hash layer and N is the number of hash bits.

[0067] Table 1 Network structure of strong correlation hash learning model based on AlexNet

[0068]

[0069] Further, the method of this embodiment and the comparative method use a unified network structure, as shown in Table 1. The model adopts the p...

Embodiment 3

[0071] Embodiment 3: As shown in Figures 1-4, a large-scale image retrieval method based on deep strong correlation hash learning comprises the following specific steps:

[0072] This embodiment is the same as Embodiment 1, except that:

[0073] The model trained in Step 3 of this embodiment adopts Vgg16NET, and the deep strong correlation hash learning method is applied to Vgg16NET to obtain a deep strong correlation hash model.

[0074] In Step 2, since Vgg16 cannot directly output a hash code, the output matrix of the second fully connected layer of Vgg16 (of dimension 1×4096) is extracted for retrieval.
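As an illustration with torchvision's VGG16, the 1×4096 output of the second fully connected layer can be obtained by truncating the classifier (the indices below refer to torchvision's layer ordering, not to anything stated in the patent):

```python
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=None).eval()
# classifier[0..3] = fc1, ReLU, Dropout, fc2: stop at the second FC layer
fc2_extractor = nn.Sequential(
    vgg.features,
    vgg.avgpool,
    nn.Flatten(1),
    vgg.classifier[:4],
)
with torch.no_grad():
    feat = fc2_extractor(torch.randn(1, 3, 224, 224))  # shape (1, 4096)
```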

[0075] In Step 4, top-q = 100 is used during retrieval, and Vgg16NET uses the Euclidean distance to calculate similarity. The experimental results are shown in Table 2, where Bits is the number of bits of the current output matrix and time is the time taken to calculate the similarity and retu...
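A sketch of the top-q Euclidean retrieval step described here (function and variable names are illustrative):

```python
import torch

def retrieve_top_q(query_feat, db_feats, q=100):
    """Rank the database by Euclidean distance to the query feature and
    return the indices of the top-q nearest entries (here q = 100)."""
    dists = torch.cdist(query_feat.unsqueeze(0), db_feats).squeeze(0)
    return torch.topk(dists, k=q, largest=False).indices

# Example: 1000 database features of dimension 4096
top100 = retrieve_top_q(torch.randn(4096), torch.randn(1000, 4096))
```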



Abstract

The invention relates to a large-scale image retrieval method based on deep strong correlation hash learning, belonging to the technical field of image processing. The invention maps the feature information obtained by passing the input image through the convolutional sub-network and the fully connected layer into the feature space, and adds a hash layer to obtain the hash code. A strong correlation loss function then changes the model's sensitivity to the weight matrix so as to adjust the distances between features, increasing the inter-class distance and reducing the intra-class distance, and fast image retrieval is completed by calculating the Hamming distance between low-dimensional hash codes. The method of the invention can realize fast and accurate large-scale image retrieval and can be widely applied to various convolutional neural networks.
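For the retrieval step described in the abstract, the Hamming distance between ±1 hash codes reduces to a dot product; a minimal sketch:

```python
import torch

def hamming_distance(codes_a, codes_b):
    """Pairwise Hamming distances between (n, N)-shaped hash codes in
    {-1, +1}: for such codes, dot = N - 2 * hamming."""
    n_bits = codes_a.size(1)
    return (n_bits - codes_a @ codes_b.t()) / 2

# Example: 48-bit codes for 4 queries against 1000 database entries
q = torch.randint(0, 2, (4, 48)).float() * 2 - 1
db = torch.randint(0, 2, (1000, 48)).float() * 2 - 1
d = hamming_distance(q, db)  # (4, 1000) integer-valued distances
```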

Description

Technical field

[0001] The invention relates to a large-scale image retrieval method based on deep strong correlation hash learning, and belongs to the technical field of image processing.

Background technique

[0002] With the rapid development of mobile devices and the Internet, a large number of images are uploaded to the Internet every day. Image collections of millions or even tens of millions make it increasingly difficult to retrieve the images that users need accurately and quickly. Large-scale image retrieval is a foundation of computer vision research and is directly related to the practical application of computer vision. Image retrieval is mainly divided into text-based image retrieval (Text-Based Image Retrieval, TBIR) and content-based image retrieval (Content-Based Image Retrieval, CBIR). The general approach of TBIR is to annotate the images and then perform keyword-based retrieval on the annotated text. The advantage of TBIR is that users ...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F16/583, G06F16/55, G06V10/764, G06V10/82, G06K9/62, G06N3/04, G06N3/08
CPC: G06F16/583, G06F16/55, G06N3/084, G06N3/048, G06N3/045, G06F18/241
Inventor: 黄青松, 单文琦, 刘利军, 冯旭鹏
Owner: KUNMING UNIV OF SCI & TECH