
Deep learning-based multi-view image retrieval method

A technology combining image retrieval and deep learning, applied in the field of deep learning-based multi-view image retrieval, which improves the intuitiveness of retrieval results.

Inactive Publication Date: 2017-04-19
GUANGDONG POLYTECHNIC NORMAL UNIV

AI Technical Summary

Problems solved by technology

The method greatly improves the accuracy of image retrieval results and solves the problems of feature association between multi-view images and optimal feature representation in the multi-view image retrieval process.



Examples


Embodiment 1

[0031] Embodiment 1: referring to Figure 1, a deep learning-based multi-view image retrieval method in which the training process includes:

[0032] Step 1. Multi-view image preprocessing: normalize the scale and dimensionality of the multi-view images, and at the same time classify each view of the multi-view image by view category; divide the multi-view image data set into two parts, a test data set and a training data set.
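The patent gives no code for this step; the following is a minimal sketch of scale normalization and the train/test split, assuming images arrive as NumPy arrays. All function names are illustrative, and the nearest-neighbour resize stands in for a real interpolation (e.g. via OpenCV or PIL).

```python
import numpy as np

def normalize_scale(img, size=227):
    """Nearest-neighbour resize to size x size (illustrative; a real
    pipeline would use bilinear interpolation)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def split_dataset(samples, test_ratio=0.2, seed=0):
    """Shuffle the multi-view sample list and split it into a
    training set and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_test = int(len(samples) * test_ratio)
    test_set = [samples[i] for i in idx[:n_test]]
    train_set = [samples[i] for i in idx[n_test:]]
    return train_set, test_set

# usage
img = np.zeros((480, 640, 3), dtype=np.uint8)
print(normalize_scale(img).shape)          # (227, 227, 3)
train_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(test_set))       # 80 20
```

The 227×227 target size matches the scale normalization described in Embodiment 3.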

[0033] Step 2. Construct a multi-view deep convolutional neural network: for each view category, use the VGG-M network parameters, but replace the Softmax classification output with newSoftmax, and use the pre-trained network parameters as the initial weights of the network.
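The patent does not define newSoftmax in this excerpt, so the sketch below only illustrates the general pattern of keeping pre-trained weights while re-initializing the classification head for a new class count, with a standard softmax as a stand-in. The parameter dictionary and layer names are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def replace_output_layer(pretrained, n_new_classes, feat_dim=4096, seed=0):
    """Keep all pre-trained weights as initial values, but swap the final
    classifier for a freshly initialised one sized to the new classes."""
    rng = np.random.default_rng(seed)
    params = dict(pretrained)                 # copy pre-trained weights
    params["fc_out_W"] = rng.normal(0, 0.01, (feat_dim, n_new_classes))
    params["fc_out_b"] = np.zeros(n_new_classes)
    return params

# toy "pre-trained" parameters (real VGG-M weights would go here)
pretrained = {"conv1_W": np.ones((3, 3)), "fc_out_W": np.ones((4096, 1000))}
params = replace_output_layer(pretrained, n_new_classes=40)
print(params["fc_out_W"].shape)   # (4096, 40)

feat = np.zeros(4096)             # a 4096-d FC2 feature (see Embodiment 3)
probs = softmax(feat @ params["fc_out_W"] + params["fc_out_b"])
print(round(probs.sum(), 6))      # 1.0
```

In practice each view category would get its own copy of this network, all starting from the same pre-trained initial weights as the abstract describes.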

[0034] Step 3. Fine-tune the network parameters: use the training data set to adjust the network parameters through back-propagation, and further use the labeled test data set to fine-tune the network parameters to obtain the ...

Embodiment 2

[0036] Embodiment 2: referring to Figure 2, a deep learning-based multi-view image retrieval method in which the retrieval process includes:

[0037] Step 201. Image preprocessing: normalize the scale and dimensionality of the image to be retrieved.

[0038] Step 202. Pass the image through any channel of the multi-view deep convolutional neural network to compute its features.

[0039] Step 203. Compare the features of the image with the image features in the database, output image index numbers in order of increasing distance, and extract the optimal-viewing-angle images corresponding to those index numbers from the image database.
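The distance-based ranking in Step 203 can be sketched as a nearest-neighbour search over stored feature vectors. The patent does not name the distance metric; Euclidean distance is assumed here, and the toy 2-d features stand in for the 4096-d FC2 features of Embodiment 3.

```python
import numpy as np

def retrieve(query_feat, db_feats, top_k=3):
    """Rank database images by Euclidean distance to the query feature
    (ascending) and return the top-k index numbers with their distances."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    order = np.argsort(dists)
    return order[:top_k], dists[order[:top_k]]

# toy database of 4 feature vectors
db = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
idx, d = retrieve(np.array([0.9, 0.1]), db)
print(idx)   # [1 0 2] -- nearest first
```

The returned index numbers would then be used to look up the corresponding optimal-viewing-angle images in the image database, as the step describes.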

[0040] Step 204. Sort the search results by similarity and output them, grouping the retrieved optimal-viewing-angle images together with the similar images.

Embodiment 3

[0041] Embodiment 3: referring to Figure 3, a deep learning-based multi-view image retrieval method in which:

[0042] Step 301. Normalize the image scale, resizing the image to 227×227.

[0043] Step 302. Normalize the image dimensionality: if the image is a three-channel RGB image, it remains unchanged; if the image is a grayscale or binary image, expand it into a three-channel image analogous to RGB, with each newly added channel identical to the original image.
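The dimensionality normalization of Step 302 amounts to replicating a single-channel image into three identical channels. A minimal sketch (function name illustrative):

```python
import numpy as np

def normalize_dimension(img):
    """Leave RGB images unchanged; replicate a grayscale/binary image
    into three identical channels so every image is H x W x 3."""
    if img.ndim == 3 and img.shape[2] == 3:
        return img
    if img.ndim == 2:                       # grayscale or binary
        return np.stack([img, img, img], axis=-1)
    raise ValueError("unsupported image shape: %r" % (img.shape,))

gray = np.arange(6, dtype=np.uint8).reshape(2, 3)
rgb_like = normalize_dimension(gray)
print(rgb_like.shape)                           # (2, 3, 3)
print(np.array_equal(rgb_like[..., 0], gray))   # True
```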

[0044] Step 303. Match each view of the image with its corresponding convolutional neural network.

[0045] Step 304. CNN: compute the image features through the CNN convolution layers.

[0046] Step 305. FC2: extract the image features through the FC2 fully connected layer; the features after FC2 extraction are 4096-dimensional.

[0047] Step 3051. Feature1: extract the feature vectors of each view after FC2 as the first ...
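The FC2 feature extraction of steps 304–305 can be sketched as fully connected layers ending in a 4096-dimensional output. The layer sizes before FC2 and the use of ReLU are assumptions for illustration; only the 4096-d output dimension comes from the patent.

```python
import numpy as np

def fc2_features(x, W1, b1, W2, b2):
    """Two fully connected layers with ReLU; the FC2 output is the
    4096-dimensional image feature used for retrieval."""
    h = np.maximum(x @ W1 + b1, 0.0)        # FC1 + ReLU
    return np.maximum(h @ W2 + b2, 0.0)     # FC2 output (4096-d)

rng = np.random.default_rng(0)
x = rng.normal(size=256)                    # flattened conv features (toy size)
W1, b1 = rng.normal(size=(256, 512)) * 0.01, np.zeros(512)
W2, b2 = rng.normal(size=(512, 4096)) * 0.01, np.zeros(4096)
feat = fc2_features(x, W1, b1, W2, b2)
print(feat.shape)                           # (4096,)
```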


Abstract

The invention provides a deep learning-based multi-view image retrieval method. The training process comprises: normalizing the multi-view images in dimensionality and scale; building a multi-view deep convolutional neural network that processes the views in parallel paths, with each path initialized to the same parameters; fine-tuning the pre-trained network parameters with a labeled data set; and, after training, classifying the images in the image base, computing the optimal viewing angle, and extracting and storing the image feature and the optimal viewing angle of each group of multi-view images. During retrieval, after a single view is input, the returned result comprises both similar images and the image with the optimal viewing angle. By retrieving multi-view images from a single view, the method improves both retrieval accuracy and the intuitiveness of the displayed results.

Description

Technical field

[0001] The invention relates to the field of image retrieval, in particular to a multi-view image retrieval method based on deep learning.

[0002] Technical background

[0003] The traditional image retrieval method uses a single view for retrieval. Images of the same target object taken from different perspectives form a group that describes the target object more vividly; such a group is called a multi-view image. Among these views, some can accurately represent the target object while others lack such expressive ability; the view that best represents the target is called the optimal viewing angle. Multi-view images are widely used: for example, items on e-commerce platforms are usually displayed in multiple views, and design patent drawings likewise use multiple views to represent the appearance of a product.

[0004] The trad...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30, G06K9/62
CPC: G06F16/583, G06F18/2411
Inventors: 雷方元, 戴青云, 赵慧民, 蔡君, 魏文国, 罗建桢
Owner GUANGDONG POLYTECHNIC NORMAL UNIV