Large-scale image retrieval method for deep strong correlation hash learning

An image retrieval and strong-correlation hashing technology, applied in the field of image processing, that addresses the problems of increased computational overhead and unsuitability for large-scale image retrieval, and achieves fast computation, efficient large-scale image retrieval, and resistance to overfitting.

Active Publication Date: 2020-05-08
KUNMING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Such methods will inevitably increase the computational...


Examples


Embodiment 1

[0035] Embodiment 1: As shown in Figures 1-4, the specific steps of the large-scale image retrieval method based on deep strong correlation hash learning are as follows:

[0036] Step 1. Extract data from the image dataset to form the training images, then apply preprocessing operations to each image. The input image is passed through the convolutional sub-network, which maps the image information into a feature space to obtain a local feature representation;
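
As a minimal sketch of Step 1 (not taken from the patent, whose layer configuration is given in Table 1), the following PyTorch-style code shows a hypothetical preprocessing pipeline and convolutional sub-network producing a local feature representation; all layer sizes and normalization constants here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Hypothetical preprocessing; the resize target and normalization
# statistics are assumptions, not values from the patent.
preprocess = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Illustrative convolutional sub-network that maps image information
# into a feature space; the real configuration is the one in Table 1.
conv_subnetwork = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
)

images = torch.randn(8, 3, 227, 227)          # a preprocessed batch
local_features = conv_subnetwork(images)      # shape (8, 192, 13, 13)
```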

[0037] Step 2. Pass the local feature representation obtained from the previous layer through the fully connected layer, mapping it into the sample label space, and then enter the hash layer for dimensionality reduction and hash coding;
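
A minimal sketch of Step 2, assuming the hash layer is a linear projection to N units followed by a tanh activation (a common deep-hashing construction; the patent's exact hash layer configuration is in Table 1, not reproduced here):

```python
import torch
import torch.nn as nn

N = 48                       # number of hash bits; the value is an assumption
feat_dim = 192 * 13 * 13     # flattened size of the local features above

# Fully connected layer: maps local features toward the sample label space.
fc_layer = nn.Sequential(nn.Flatten(), nn.Linear(feat_dim, 4096), nn.ReLU())

# Hash layer: dimensionality reduction to N units, squashed into (-1, 1).
hash_layer = nn.Sequential(nn.Linear(4096, N), nn.Tanh())

local_features = torch.randn(8, 192, 13, 13)
h = hash_layer(fc_layer(local_features))   # continuous relaxation of the code
binary_codes = torch.sign(h)               # {-1, +1} hash codes for retrieval
```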

[0038] Step 3. Enter the strong correlation loss layer and use the strong correlation loss function to calculate the loss value of the current iteration; finally, return the loss value and update the network parameters a...
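
This excerpt does not give the strong correlation loss formula, so the sketch below substitutes a generic pairwise loss with the behavior the abstract describes: reducing intra-class distance and enlarging inter-class distance between codes. The function name and margin value are placeholders, not the patent's loss.

```python
import torch
import torch.nn.functional as F

def pairwise_code_loss(codes, labels, margin=2.0):
    """Stand-in for the strong correlation loss (formula not in this
    excerpt): pulls same-class codes together and pushes different-class
    codes at least `margin` apart, per the behavior the abstract describes."""
    dist = torch.cdist(codes, codes)                     # pairwise distances
    same = (labels[:, None] == labels[None, :]).float()  # 1 where labels match
    intra = (same * dist.pow(2)).mean()                  # shrink intra-class distance
    inter = ((1 - same) * F.relu(margin - dist).pow(2)).mean()  # grow inter-class
    return intra + inter

# One toy iteration with placeholder codes and labels:
codes = torch.randn(8, 48, requires_grad=True)
labels = torch.randint(0, 4, (8,))
loss = pairwise_code_loss(codes, labels)
loss.backward()   # return the loss value and backpropagate to update parameters
```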

Embodiment 2

[0063] Embodiment 2: As shown in Figures 1-4, the specific steps of the large-scale image retrieval method based on deep strong correlation hash learning are as follows:

[0064] This embodiment is the same as Embodiment 1, except that:

[0065] The model trained in Step 3 of this embodiment uses AlexNet, and the deep strong correlation hash learning method is applied to AlexNet to obtain a deep strong correlation hash model.
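
A hedged sketch of this step, assuming torchvision's AlexNet: the final 4096-to-1000 classifier layer is replaced by a hash layer so the network emits an N-bit code. The code length and the tanh activation are assumptions, not values from Table 1.

```python
import torch.nn as nn
from torchvision import models

N = 48  # hash code length; an assumption, not a value from Table 1

# Start from pre-trained AlexNet and swap its last classifier layer
# (4096 -> 1000 classes) for a hash layer emitting an N-bit code.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier[6] = nn.Sequential(nn.Linear(4096, N), nn.Tanh())
```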

[0066] In Steps 1 and 2, the configuration of the convolutional sub-network, the fully connected layer, and the hash layer is shown in Table 1, where Hashing denotes the hash layer and N is the number of hash code bits.

[0067] Table 1 Network structure of AlexNet-based strong correlation hashing learning model


[0069] Further, the method of this embodiment and the comparison method use a unified network structure, as shown in Table 1. The model uses the pre-t...

Embodiment 3

[0071] Embodiment 3: As shown in Figures 1-4, the specific steps of the large-scale image retrieval method based on deep strong correlation hash learning are as follows:

[0072] This embodiment is the same as Embodiment 1, except that:

[0073] The model trained in Step 3 of this embodiment uses Vgg16NET, and the deep strong correlation hash learning method is applied to Vgg16NET to obtain a deep strong correlation hash model.

[0074] In Step 2, since Vgg16 cannot directly output a hash code, we extract the output matrix of the second fully connected layer of Vgg16 (with dimension 1×4096) for retrieval.
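
A sketch of this extraction, assuming torchvision's VGG16 layout, where the second fully connected layer is classifier[3]:

```python
import torch
from torchvision import models

vgg16 = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# torchvision's VGG16 classifier is [Linear, ReLU, Dropout, Linear, ...];
# slicing up to index 3 inclusive ends at the second fully connected layer.
fc2_extractor = torch.nn.Sequential(
    vgg16.features,
    vgg16.avgpool,
    torch.nn.Flatten(),
    *list(vgg16.classifier[:4]),
)

with torch.no_grad():
    feats = fc2_extractor(torch.randn(1, 3, 224, 224))  # shape (1, 4096)
```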

[0075] In Step 4, top-q=100 is used for retrieval, and Vgg16NET uses the Euclidean distance to calculate similarity. The experimental results are shown in Table 2, where Bits is the number of bits in the current output matrix and time is the time taken to calculate the similarity and return the fir...
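
A minimal sketch of the top-q Euclidean retrieval used for the Vgg16NET baseline (the feature dimension follows the 1×4096 output above; the database here is random placeholder data):

```python
import torch

def retrieve_top_q(query_feat, database_feats, q=100):
    """Return indices of the q nearest database entries to the query
    under Euclidean distance (top-q=100 in this embodiment)."""
    dists = torch.cdist(query_feat.unsqueeze(0), database_feats).squeeze(0)
    return torch.topk(dists, k=q, largest=False).indices

database_feats = torch.randn(1000, 4096)   # placeholder fc2 features
query_feat = torch.randn(4096)
top100 = retrieve_top_q(query_feat, database_feats, q=100)
```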



Abstract

The invention relates to a large-scale image retrieval method based on deep strong correlation hash learning, and belongs to the technical field of image processing. According to the invention, the feature information of an input image obtained by a convolutional sub-network and a fully connected layer is mapped into a feature space; a hash layer is added to obtain hash codes; the sensitivity of the model to the weight matrix is then changed through a strong correlation loss function to adjust the distance between features, increasing the inter-class distance and reducing the intra-class distance; and the Hamming distance between low-dimensional hash codes is calculated to complete rapid image retrieval. The method enables fast and accurate large-scale image retrieval and can be widely applied to various convolutional neural networks.
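
As a sketch of the retrieval step described here, assuming hash codes stored as ±1 vectors, the Hamming distance reduces to a dot product, which is what makes retrieval over low-dimensional codes fast:

```python
import torch

def hamming_distance(query_code, database_codes):
    """Hamming distances between one query code and all database codes,
    assuming codes are {-1, +1} vectors: distance = (N - dot product) / 2."""
    n_bits = query_code.numel()
    return (n_bits - database_codes @ query_code) / 2

database_codes = torch.sign(torch.randn(1000, 48))  # placeholder code database
query_code = torch.sign(torch.randn(48))
ranked = torch.argsort(hamming_distance(query_code, database_codes))  # nearest first
```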

Description

Technical field

[0001] The invention relates to a large-scale image retrieval method for deep strong correlation hash learning, which belongs to the technical field of image processing.

Background technique

[0002] With the rapid development of mobile devices and the Internet, a large number of images are uploaded to the Internet every day. Image collections of millions or even tens of millions make it increasingly difficult to retrieve the images users need accurately and quickly. Large-scale image retrieval is a foundation of computer vision research and is directly related to the practical application of computer vision. Image retrieval is mainly divided into text-based image retrieval (Text-Based Image Retrieval, TBIR) and content-based image retrieval (Content-Based Image Retrieval, CBIR). The general approach of TBIR is to annotate images and then perform keyword-based retrieval on the annotated text. The advantage of TBIR is that users ...


Application Information

IPC(8): G06F16/583; G06F16/55; G06K9/62; G06N3/04; G06N3/08
CPC: G06F16/583; G06F16/55; G06N3/084; G06N3/048; G06N3/045; G06F18/241
Inventor: 黄青松, 单文琦, 刘利军, 冯旭鹏
Owner: KUNMING UNIV OF SCI & TECH