Combined commodity retrieval method and system based on multi-modal pre-training model

A multi-modal pre-training technology, applied to biological neural network models, commerce, and character and pattern recognition. It addresses the problem of low retrieval accuracy and achieves the effects of improved accuracy, better feature representation, and strong generalization.

Pending Publication Date: 2022-05-06
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problem of low accuracy caused by current commodity retrieval relying on single-modal data and image-level retrieval, the present invention provides a combined commodity retrieval method and system based on a multi-modal pre-training model.



Examples


Example Embodiment

[0046] Example 1

[0047] As shown in Figure 1, a combined commodity retrieval method based on a multi-modal pre-training model comprises the following steps:

[0048] S1: divide commodity images into single-product images and combined-product images, where a single-product image depicts only one commodity and a combined-product image contains a plurality of independent commodities;

[0049] S2: train a combined-product image detector to detect each independent commodity in a combined-product image;

[0050] S3: obtain and combine the feature codes, position codes and segment codes of the text modality and image modality in the combined-product image, so as to learn an embedded representation;

[0051] S4: build a multi-modal pre-training model and take the learned embedded representation as its input;

[0052] S5: use the bounding boxes and bounding-box features extracted by the commodity detector as image features, combine them with text features, and input them into the multi-modal pre-training model for self-supervised training.
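Step S3 names three codes but does not spell out how they are combined. A minimal BERT-style sketch, in which the three codes are lookup tables summed element-wise, is shown below; all sizes, names, and the random initialisation are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, MAX_POS, N_SEG, DIM = 1000, 128, 2, 16  # toy sizes, not from the patent

# Lookup tables for the three codes named in step S3.
feature_emb = rng.normal(size=(VOCAB, DIM))     # feature (token/region) code
position_emb = rng.normal(size=(MAX_POS, DIM))  # position code
segment_emb = rng.normal(size=(N_SEG, DIM))     # segment code: 0 = text, 1 = image

def embed(token_ids, segment_ids):
    """Combine the three codes by element-wise sum (BERT-style)."""
    pos = np.arange(len(token_ids))
    return (feature_emb[token_ids]
            + position_emb[pos]
            + segment_emb[segment_ids])

# Three text tokens followed by two image-region tokens of one product image.
tokens = np.array([5, 17, 42, 7, 9])
segments = np.array([0, 0, 0, 1, 1])
x = embed(tokens, segments)
print(x.shape)  # (5, 16)
```

The sum of the three tables yields one vector per token, which is the "learned embedded representation" that step S4 feeds into the pre-training model.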

Example Embodiment

[0112] Example 2

[0113] Based on the combined commodity retrieval method of embodiment 1, this embodiment also provides a combined commodity retrieval system based on the multi-modal pre-training model. The system includes a sample construction module, an image detector training module, an embedded-representation learning module, a multi-modal pre-training model module, a single-product feature extraction module and a combined-product feature extraction module. Among them,

[0114] the sample construction module is used to divide commodity images into single-product images and combined-product images;

[0115] the image detector training module is used to train an image detector that detects each independent commodity in a combined-product image;

[0116] the embedded-representation learning module is used to obtain and combine the feature codes, position codes and segment codes of the text modality and image modality in the combined-product image, so as to learn an embedded representation;
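The paragraphs above name the modules but not how they hand data to one another. The toy sketch below wires a detector, an encoder and a retrieval library together; the lambda detector and encoder are placeholder stand-ins, not the patent's trained models, and all names are illustrative:

```python
import numpy as np

class RetrievalSystem:
    """Hypothetical wiring of the modules above (names are illustrative)."""

    def __init__(self, detector, encoder):
        self.detector = detector  # yields per-item crops of a combined image
        self.encoder = encoder    # fuses an image crop and text into a vector
        self.library = []         # single-product feature library

    def index_single(self, image, text):
        # Single-product feature extraction module: fill the retrieval library.
        self.library.append(self.encoder(image, text))

    def query_combined(self, image, text):
        # Combined-product feature extraction module: encode each detected
        # item and return the index of its most similar single product.
        lib = np.stack(self.library)
        results = []
        for crop in self.detector(image):
            q = self.encoder(crop, text)
            dist = np.linalg.norm(lib - q, axis=1)  # distance as similarity
            results.append(int(np.argmin(dist)))
        return results

# Toy stand-ins: "images" are vectors; the detector splits a combined image
# into two item crops; the encoder appends a scalar text feature.
detector = lambda img: [img[:2], img[2:]]
encoder = lambda im, tx: np.append(im, tx)

system = RetrievalSystem(detector, encoder)
system.index_single(np.array([1.0, 0.0]), 0.5)  # single product 0
system.index_single(np.array([0.0, 1.0]), 0.5)  # single product 1
matches = system.query_combined(np.array([0.9, 0.1, 0.2, 1.1]), 0.5)
print(matches)  # [0, 1]
```

Each detected item of the combined image is matched independently against the single-product library, mirroring the per-module division of labour described above.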

Example Embodiment

[0124] Example 3

[0125] A computer system includes a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the following method steps are performed:

[0126] S1: divide commodity images into single-product images and combined-product images, where a single-product image depicts only one commodity and a combined-product image contains a plurality of independent commodities;

[0127] S2: train a combined-product image detector to detect each independent commodity in a combined-product image;

[0128] S3: obtain and combine the feature codes, position codes and segment codes of the text modality and image modality in the combined-product image, so as to learn an embedded representation;

[0129] S4: build a multi-modal pre-training model and take the learned embedded representation as its input;

[0130] S5: use the bounding boxes and bounding-box features extracted by the commodity detector as image features, combine them with text features, and input them into the multi-modal pre-training model for self-supervised training.



Abstract

The invention discloses a combined commodity retrieval method and system based on a multi-modal pre-training model. The method comprises the following steps: dividing commodity images into single-product images and combined-product images; training a combined-product image detector; obtaining and combining the feature codes, position codes and segment codes of the text modality and image modality in the combined-product image, learning an embedded representation, and inputting it into the constructed multi-modal pre-training model; using the bounding boxes and bounding-box features extracted by the commodity detector as image features, combining them with text features, and inputting them into the multi-modal pre-training model for self-supervised training; using the multi-modal pre-training model to extract retrieval features of the image modality and text modality of single-product images and storing them in a retrieval library; and having the multi-modal pre-training model extract image-text fused retrieval features from the bounding box and bounding-box features of each target commodity in the combined-product image, calculate the distance between the combined-product features and the single-product features in the retrieval library as the commodity similarity, and return the most similar single products as the result.
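The final step of the abstract (distance between combined-product and single-product features as similarity) amounts to a nearest-neighbour search over the retrieval library. A minimal sketch follows; the L2-normalisation, which makes Euclidean distance monotone in cosine similarity, is an assumption of this sketch and is not stated in the abstract:

```python
import numpy as np

def build_library(single_item_feats):
    # L2-normalise so Euclidean distance is monotone in cosine similarity.
    f = np.asarray(single_item_feats, dtype=float)
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def retrieve(combined_feats, library, top_k=1):
    """Return, for each detected item of a combined image, the indices of
    the most similar single products (smallest distance) in the library."""
    q = np.asarray(combined_feats, dtype=float)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    dist = np.linalg.norm(q[:, None, :] - library[None, :, :], axis=-1)
    return np.argsort(dist, axis=1)[:, :top_k]

library = build_library([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
matches = retrieve([[0.9, 0.1], [0.1, 2.0]], library)
print(matches.ravel().tolist())  # [0, 1]
```

In the patent's pipeline the query rows would be the image-text fused features of each detected commodity, and the library rows the stored single-product features.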

Description

Technical Field

[0001] The present invention relates to the technical field of commodity retrieval, and more specifically, to a combined commodity retrieval method and system based on a multi-modal pre-training model.

Background

[0002] The development of Internet technology has led to the rapid expansion of online e-commerce platforms. Due to their convenience, e-commerce platforms are favored by more and more people, and both the variety of goods and users' shopping needs have grown greatly. Online products are diverse, and an increasing number are presented as packages, that is, multiple different products combined in one set. When users browse a set of products, they may want to look up the single products corresponding to the set for price comparison or individual purchase. In real scenarios with large data scale and scarce annotation, how to perform multi-modal combined product retrieval...

Claims


Application Information

IPC(8): G06Q30/06
CPC: G06Q30/0631; G06F16/5866; G06F16/583; G06N3/045; G06F18/253; G06F18/214
Inventor: 詹巽霖, 吴洋鑫, 董晓, 梁小丹
Owner: SUN YAT SEN UNIV