Large-scale image multi-scale semantic retrieval method

A large-scale image retrieval technology, applied to still image data retrieval, metadata-based still image retrieval, character and pattern recognition, etc. It addresses problems such as incomplete semantic representation and the large investment of manpower and material resources required by existing methods.

Status: Inactive
Publication Date: 2018-05-22
Applicant: FOCUS TECH +1

AI Technical Summary

Problems solved by technology

[0005] In order to overcome the disadvantages of incomplete semantic representation and the need for a large amount of manpower and material resources in existing methods, the invention provides a multi-scale semantic retrieval method for large-scale images.

Method used



Examples


Embodiment Construction

[0027] The present invention will be further described below in conjunction with the drawings. As shown in the drawings, the specific implementation is divided into two parts: training and the production environment. The training part mainly trains the generative adversarial network, using the TensorFlow platform. The discriminator network is a convolutional neural network, and the generator network is a deconvolutional neural network; 64 images are used per training iteration. The main structure is shown in Figure 2.
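A minimal sketch of such a setup, assuming a DCGAN-style architecture in TensorFlow/Keras: the image size, latent dimension, and layer widths below are illustrative assumptions; only the convolutional discriminator, the deconvolutional generator, and the batch size of 64 come from the embodiment.

```python
# Sketch of a GAN in TensorFlow/Keras: convolutional discriminator and
# deconvolutional (transposed-convolution) generator. Layer sizes, the 64x64
# image resolution, and the latent dimension are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100   # assumed size of the generator's input noise vector
BATCH_SIZE = 64    # 64 images per iteration, as stated in the embodiment

def build_discriminator():
    # Convolutional network mapping a 64x64 RGB image to a real/fake logit.
    return tf.keras.Sequential([
        layers.Conv2D(64, 5, strides=2, padding="same", input_shape=(64, 64, 3)),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),
    ])

def build_generator():
    # Deconvolutional network mapping a noise vector to a 64x64 RGB image.
    return tf.keras.Sequential([
        layers.Dense(16 * 16 * 128, input_shape=(LATENT_DIM,)),
        layers.Reshape((16, 16, 128)),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])
```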

[0028] After training is completed, the trained model is obtained and used to build a standard TensorFlow Model Server. In practical applications, a single picture or a batch of pictures can be sent to the server at a time to obtain the feature vectors of those pictures.
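A sketch of what a client call to such a server might look like, assuming the standard TensorFlow Serving REST API; the host, port, model name (image_encoder), and input shape are hypothetical and not specified in the patent.

```python
# Illustrative client call to a TensorFlow Serving REST endpoint. The URL,
# model name, and preprocessing are assumptions; only the idea of sending one
# picture or a batch and getting back vectors comes from the embodiment.
import json
import numpy as np
import requests

def get_image_vectors(images: np.ndarray,
                      url: str = "http://localhost:8501/v1/models/image_encoder:predict"):
    # `images` is a batch of preprocessed pictures, e.g. shape (N, 64, 64, 3).
    payload = {"instances": images.tolist()}
    response = requests.post(url, data=json.dumps(payload))
    response.raise_for_status()
    # TensorFlow Serving returns {"predictions": [...]}: one vector per picture.
    return np.array(response.json()["predictions"])
```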

[0029] After obtaining the picture vectors, the similarity between each stored picture and the picture to be searched is calculated, and the pictures with a similarity greater than 0.5 are returned...
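A sketch of this retrieval step; the patent gives only the 0.5 threshold, so the use of cosine similarity here is an assumption.

```python
# Retrieval sketch: score gallery vectors against the query vector and keep
# those above the 0.5 threshold named in the embodiment. Cosine similarity is
# assumed; the patent does not state which similarity measure is used.
import numpy as np

def retrieve(query_vec: np.ndarray, gallery: np.ndarray, threshold: float = 0.5):
    # Normalise so that a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    keep = np.where(sims > threshold)[0]
    order = np.argsort(-sims[keep])          # sort kept pictures by similarity
    return keep[order], sims[keep][order]
```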



Abstract

A large-scale image semantic retrieval method is provided. The method uses an unsupervised deep learning model to train the network and obtain feature vectors of pictures, and takes the semantic relationship between the pictures' text descriptions into account to achieve large-scale picture retrieval. To process the feature vectors of the pictures, a generative adversarial network composed of a discriminator network with 4 to 6 layers and a generator network with 4 to 6 layers is used to extract picture features. To process the texts of the pictures, a distributed representation of word vectors is used, and word embeddings describe the semantic information of a picture. A clustering method clusters the retrieved pictures, and only one representative of each class of commodities is displayed to the user, reducing the time the user spends looking for commodities. Picture text description vectors are obtained from the trained word vectors; the text vector and the picture vector are concatenated as the feature representation of the picture; and the pictures are clustered with k-means++.
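A sketch of the abstract's final steps (concatenating the text-description vector with the picture vector, then clustering with k-means++), using scikit-learn's KMeans, whose default initialisation is k-means++; the cluster count and how the text vectors are produced (e.g. averaging trained word vectors) are assumptions.

```python
# Joint feature representation and k-means++ clustering, per the abstract.
# `text_vecs` is assumed to come from the trained word vectors (e.g. averaged
# per description); the number of clusters is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pictures(image_vecs: np.ndarray, text_vecs: np.ndarray, n_clusters: int = 10):
    # Concatenate [picture vector | text vector] as each picture's feature.
    features = np.concatenate([image_vecs, text_vecs], axis=1)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(features)
    # Pick one representative per cluster: the picture closest to its centroid,
    # matching the idea of showing only one commodity of each class to the user.
    reps = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])
    return labels, reps
```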

Description

Technical Field

[0001] The invention relates to a large-scale picture semantic retrieval technology, and in particular to a multi-scale semantic retrieval method for large-scale e-commerce pictures.

Background

[0002] Existing image retrieval technologies are mainly divided into text-based image retrieval and content-based image retrieval. Text-based retrieval describes a picture's characteristics with text, typically its author, age, genre, and size; this cannot reflect the semantic similarity between pictures. Content-based image retrieval analyzes and retrieves images through their color, texture, and layout; it requires manually extracting image features, which costs manpower and material resources. In recent years, deep learning has achieved great success in the field of computer vision, ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30; G06K9/62
CPC: G06F16/58; G06F18/23213; G06F18/22
Inventors: 田腾飞, 李仁勇, 崇志宏, 张云
Owner: FOCUS TECH