
Combined commodity retrieval method and system based on multi-modal pre-training model

A multi-modal pre-training technology, applied in biological neural network models, commerce, and character and pattern recognition, that addresses the problem of low retrieval accuracy and achieves the effects of improved accuracy, improved feature representation, and strong generalization.

Pending Publication Date: 2022-05-06
SUN YAT SEN UNIV

Problems solved by technology

[0005] In order to solve the problem of low accuracy caused by current commodity retrieval relying on single-modal data and image-level retrieval, the present invention provides a combined commodity retrieval method and system based on a multi-modal pre-training model, which has the advantages of high generalization, high availability, and high accuracy.



Examples


Embodiment 1

[0047] As shown in Figure 1, a combined commodity retrieval method based on a multi-modal pre-training model includes the following steps:

[0048] S1: Divide the product images into single product images and combined product images, wherein a single product image shows only one product and a combined product image shows multiple independent products;

[0049] S2: Train a combined product image detector to detect each independent product in the combined product image;

[0050] S3: Obtain and combine the feature encoding, position encoding and segment encoding of the text modality and image modality in the combined product image, thereby learning the embedded representation;

[0051] S4: Construct a multi-modal pre-training model, and use the learned embedding representation as the input of the multi-modal pre-training model;

[0052] S5: The bounding boxes and bounding box features extracted by the product detector are used as image features, combined with the text features, and input into the multi-modal pre-training model for self-supervised training (a minimal sketch of this input construction follows below); ...
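
The combination of feature, position and segment encodings described in S3-S5 follows the usual recipe for image-text transformer inputs: text tokens and detector-derived image regions each receive a content embedding, a position encoding, and a segment (modality) encoding, and the two sequences are concatenated into one joint input. The following is a minimal PyTorch sketch under that reading; the dimensions, the box-geometry encoding, and all names are illustrative assumptions, not the patented implementation.

# Minimal sketch (assumption, not the patented implementation): building the joint
# embedding from feature, position and segment encodings for text tokens and
# detector-derived image regions, as described in steps S3-S5.
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, max_text_len=64,
                 region_feat_dim=2048, num_segments=2):
        super().__init__()
        # Text side: token (feature) embedding plus a learned position embedding.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.text_pos_emb = nn.Embedding(max_text_len, d_model)
        # Image side: project detector region features and their bounding-box
        # geometry (x1, y1, x2, y2, w, h normalised to [0, 1]) into the model dimension.
        self.region_proj = nn.Linear(region_feat_dim, d_model)
        self.box_pos_proj = nn.Linear(6, d_model)
        # Segment embedding distinguishes the text modality from the image modality.
        self.segment_emb = nn.Embedding(num_segments, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids, region_feats, region_boxes):
        # token_ids:    (B, T) integer ids of the product text
        # region_feats: (B, R, region_feat_dim) detector bounding-box features
        # region_boxes: (B, R, 6) normalised box geometry
        B, T = token_ids.shape
        pos = torch.arange(T, device=token_ids.device).unsqueeze(0)
        text = (self.token_emb(token_ids) + self.text_pos_emb(pos)
                + self.segment_emb(torch.zeros_like(token_ids)))
        img = (self.region_proj(region_feats) + self.box_pos_proj(region_boxes)
               + self.segment_emb(torch.ones(region_feats.shape[:2],
                                              dtype=torch.long,
                                              device=region_feats.device)))
        # Concatenate the two modalities into one sequence for the multi-modal
        # pre-training model (step S4).
        return self.norm(torch.cat([text, img], dim=1))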

Embodiment 2

[0113] Building on the combined product retrieval method based on a multi-modal pre-training model described in Embodiment 1, this embodiment also provides a combined product retrieval system based on a multi-modal pre-training model. The system includes a sample construction module, an image detector training module, a learning-embedding-representation module, a multi-modal pre-training model module, a single product feature extraction module, and a combined product feature extraction module; among them,

[0114] The sample construction module is used to divide the product images into single product images and combined product images;

[0115] The image detector training module is used to train an image detector for detecting each independent commodity in the combined commodity image;

[0116] The learning-embedding-representation module is used to obtain and combine the feature encoding, position encoding and segment encoding of the text modality and image modality in the combined product image, thereby learning the embedded representation (an illustrative wiring of these modules is sketched below); ...
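
Read as software components, the modules of Embodiment 2 form a straightforward pipeline: build samples, train the detector, learn embeddings, pre-train the multi-modal model, index single products, and extract query features for combined products. The sketch below shows one hypothetical wiring of such modules; every class and method name here is illustrative and not taken from the patent.

# Hypothetical wiring of the modules listed above (names are illustrative only).
class CombinedProductRetrievalSystem:
    def __init__(self, sample_builder, detector_trainer, embedder,
                 pretrained_model, single_extractor, combined_extractor):
        self.sample_builder = sample_builder          # splits single vs. combined product images
        self.detector_trainer = detector_trainer      # trains the combined-image product detector
        self.embedder = embedder                      # feature + position + segment encodings
        self.pretrained_model = pretrained_model      # multi-modal pre-training model
        self.single_extractor = single_extractor      # retrieval features for single products
        self.combined_extractor = combined_extractor  # fused features for combined products

    def build_library(self, single_images, texts):
        # Extract image and text retrieval features for every single product
        # and keep them as the retrieval library.
        return [self.single_extractor(img, txt) for img, txt in zip(single_images, texts)]

    def query(self, combined_image, text, library):
        # Extract fused image-text features for the combined product and rank
        # library entries by squared Euclidean distance (smaller = more similar).
        q = self.combined_extractor(combined_image, text)
        dists = [sum((a - b) ** 2 for a, b in zip(q, feat)) for feat in library]
        return sorted(range(len(library)), key=dists.__getitem__)

In this sketch the library entries and query features are plain sequences of floats; in practice they would be the vectors produced by the multi-modal pre-training model.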

Embodiment 3

[0125] A computer system includes a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the following steps of the method are carried out:

[0126] S1: Divide the product images into single product images and combined product images, wherein a single product image shows only one product and a combined product image shows multiple independent products;

[0127] S2: Train a combined product image detector to detect each independent product in the combined product image;

[0128] S3: Obtain and combine the feature encoding, position encoding and segment encoding of the text modality and image modality in the combined product image, thereby learning the embedded representation;

[0129] S4: Construct a multi-modal pre-training model, and use the learned embedding representation as the input of the multi-modal pre-training model;

[0130] S5: The bounding boxes and bounding box features extracted by the product detector are used as image features, combined with the text features, and input into the multi-modal pre-training model for self-supervised training (a sketch of one possible self-supervised objective follows below); ...
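
The excerpt does not name the self-supervised objectives used in S5. A common choice for models of this kind is masked language modelling over the fused image-text sequence; the sketch below illustrates only that assumed objective, and the encoder sizes and all names are placeholders rather than the patented design.

# Illustrative self-supervised objective (assumption: masked language modelling
# over the fused image-text sequence; the excerpt does not specify the losses).
import torch
import torch.nn as nn

class MaskedPretrainer(nn.Module):
    def __init__(self, embedder, d_model=768, vocab_size=30522, n_layers=6, n_heads=12):
        super().__init__()
        self.embedder = embedder  # e.g. a joint embedding module as sketched under Embodiment 1
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)

    def forward(self, masked_token_ids, region_feats, region_boxes, target_ids, mlm_mask):
        # masked_token_ids: text ids with some positions replaced by a [MASK] id
        # mlm_mask:         boolean tensor marking the masked text positions
        seq = self.embedder(masked_token_ids, region_feats, region_boxes)  # (B, T+R, d)
        hidden = self.encoder(seq)
        text_hidden = hidden[:, : masked_token_ids.shape[1]]               # text part only
        logits = self.mlm_head(text_hidden[mlm_mask])                      # predict masked tokens
        return nn.functional.cross_entropy(logits, target_ids[mlm_mask])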



Abstract

The invention discloses a combined commodity retrieval method and system based on a multi-modal pre-training model. The method comprises the following steps: dividing the commodity images into single commodity images and combined commodity images; training a combined commodity image detector; acquiring and combining the feature encoding, position encoding and segment encoding of the text modality and image modality in the combined commodity image, learning an embedded representation, and inputting it into the constructed multi-modal pre-training model; the bounding boxes and bounding box features extracted by a commodity detector are used as image features, combined with text features, and input into the multi-modal pre-training model for self-supervised training; the multi-modal pre-training model extracts retrieval features of the image modality and text modality of each single commodity image, which are stored in a retrieval library; and, given the bounding box and bounding box features of each target commodity in the combined commodity image, the multi-modal pre-training model extracts image-text fused retrieval features, computes the distance between the combined-commodity features and the single-commodity features in the retrieval library as the commodity similarity, and returns the most similar single commodities as the result.
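
The final step of the abstract reduces to a nearest-neighbour search between the fused features of each product detected in the combined image and the single-product features stored in the retrieval library. The abstract does not specify the distance measure; the sketch below assumes cosine similarity over L2-normalised features, with NumPy used only for the vector arithmetic.

# Minimal retrieval sketch (assumption: cosine similarity as the commodity
# similarity; the abstract only states that a distance is computed).
import numpy as np

def build_library(single_feats):
    # single_feats: (N, D) retrieval features of the single-product images/texts.
    # L2-normalise so that a dot product equals cosine similarity.
    feats = np.asarray(single_feats, dtype=np.float32)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def retrieve(combined_feats, library, top_k=1):
    # combined_feats: (M, D) fused image-text features, one row per target
    # product detected in the combined image. Returns, for each detected
    # product, the indices of the most similar single products in the library.
    q = np.asarray(combined_feats, dtype=np.float32)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    sims = q @ library.T                      # (M, N) cosine similarities
    return np.argsort(-sims, axis=1)[:, :top_k]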

Description

Technical field

[0001] The present invention relates to the technical field of commodity retrieval, and more specifically, to a combined commodity retrieval method and system based on a multi-modal pre-training model.

Background technique

[0002] The development of Internet technology has led to the rapid expansion of online e-commerce platforms, which are favored by more and more people because of their convenience. The variety of goods on e-commerce platforms and the shopping needs of users have grown greatly. Online products are diverse, and more and more products are presented in the form of packages, that is, multiple different products combined in one package. At the same time, when users browse a packaged set of products, they may want to look up the single products that make up the set for price comparison or individual purchase. In real-world settings with large-scale data and a lack of annotation, how to perform multi-modal combined product retrieval...


Application Information

IPC(8): G06Q30/06
CPC: G06Q30/0631; G06F16/5866; G06F16/583; G06N3/045; G06F18/253; G06F18/214
Inventors: 詹巽霖, 吴洋鑫, 董晓, 梁小丹
Owner: SUN YAT SEN UNIV