
Model training method, cross-modal representation method and unsupervised image text matching method and device

An unsupervised model training technology applied in the computer field. It addresses the problems that annotated image-sentence pairs are expensive to obtain, that sampling bias arises during instance extraction, and that the model cannot distinguish instances with high semantic similarity, thereby reducing sampling bias and improving cross-modal representation.

Pending Publication Date: 2021-12-31
ZHEJIANG LAB +1

AI Technical Summary

Problems solved by technology

However, supervised image-text matching models require annotated image-sentence pairs, which are expensive to obtain.
[0003] Current research explores the use of document-level structural information to extract positive and negative instances for model training, but this introduces sampling bias, which leaves the model unable to distinguish instances with high semantic similarity.




Detailed Description of the Embodiments

[0023] The technical solutions in the embodiments of this specification will be described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are only some, not all, of the implementations of this specification. All other implementations obtained by persons of ordinary skill in the art from the implementations in this specification, without creative effort, shall fall within the protection scope of this application.

[0024] This specification provides a model training method applied to an unsupervised image-text matching model; the method may comprise the following steps.

[0025] Step S10: Calculate pairwise similarity values between pictures and sentences in the training document.

[0026] Step S12: Based on the similarity values, determine a positive sample pair set and a negative sample pair set; wherein the positive sample pair set has a...
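
This excerpt does not specify the similarity metric or the selection rule, so the following is a minimal sketch under assumptions: cosine similarity over precomputed image and sentence embeddings for step S10, and a top-k / bottom-k split of the resulting pairs into positive and negative sets for step S12. The function names, embedding sources, and the value of k are all illustrative, not the patent's specified design.

    import numpy as np

    def pairwise_similarity(img_embs, sent_embs):
        # Step S10 (assumed metric): cosine similarity between every picture
        # and every sentence in a document; shape (num_images, num_sentences).
        img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
        sent = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
        return img @ sent.T

    def select_pairs(sim, k):
        # Step S12 (assumed rule): the k highest-similarity (image, sentence)
        # pairs become positives, the k lowest become negatives.
        order = np.argsort(sim, axis=None)  # ascending over the flat matrix
        negatives = [np.unravel_index(i, sim.shape) for i in order[:k]]
        positives = [np.unravel_index(i, sim.shape) for i in order[-k:]]
        return positives, negatives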



Abstract

The invention provides a model training method, a cross-modal representation method, and an unsupervised image-text matching method and device. The method comprises the following steps: calculating pairwise similarity values between the pictures and sentences in a training document; and determining a positive sample pair set and a negative sample pair set based on the similarity values, wherein the positive sample pair set contains a preset number of positive sample pairs and the negative sample pair set contains a preset number of negative sample pairs. The positive and negative sample pair sets are used to further train the model until the average similarity value of the positive sample pairs is greater than the average similarity value of the negative sample pairs and the difference between the two meets a preset condition. According to the embodiments, sampling bias can be reduced, and pictures and sentences are matched by a better-trained model.
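
The abstract's stopping rule — train until the average positive-pair similarity exceeds the average negative-pair similarity and their difference meets a preset condition — can be sketched as below. Modeling the "preset condition" as a fixed margin is an assumption; the patent excerpt does not state its exact form.

    import numpy as np

    def preset_condition_met(sim, positives, negatives, margin=0.2):
        # Stop further training once the mean similarity of the preset number
        # of positive pairs exceeds that of the negative pairs by at least
        # `margin` (the margin value and form are assumptions).
        pos_mean = np.mean([sim[i, j] for i, j in positives])
        neg_mean = np.mean([sim[i, j] for i, j in negatives])
        return pos_mean > neg_mean and pos_mean - neg_mean >= margin

A training loop would take optimization steps, recompute the similarity matrix, and repeat until this check returns True.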

Description

Technical Field

[0001] The present invention relates to the field of computers, and in particular to a model training method, a cross-modal representation method, and an unsupervised image-text matching method and device.

Background Technique

[0002] Image-text matching is one of the fundamental problems in the field of vision and language, where the main goal is to learn to align the semantic spaces of the two modalities. Previous research on image-text matching has mainly been supervised, requiring a large number of annotated image-sentence pairs. However, in supervised image-text matching models, annotated image-sentence pairs are expensive to obtain.

[0003] Current research explores the use of document-level structural information to extract positive and negative instances for model training, but this introduces sampling bias, which leaves the model unable to distinguish instances with high semantic similarity.

Contents of the Invention

[0004] The purpose of the embodiments of this spe...
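
The background above frames image-text matching as aligning the semantic spaces of two modalities. As a purely illustrative reference point, a common realization is a dual encoder that projects both modalities into a shared space; the architecture, feature dimensions, and projection layers below are assumptions, not the design claimed by this patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualEncoder(nn.Module):
        # Projects precomputed image and sentence features into one shared
        # semantic space so their similarity can be compared directly.
        # The feature dimensions here are illustrative assumptions.
        def __init__(self, img_dim=2048, txt_dim=768, shared_dim=256):
            super().__init__()
            self.img_proj = nn.Linear(img_dim, shared_dim)
            self.txt_proj = nn.Linear(txt_dim, shared_dim)

        def forward(self, img_feats, txt_feats):
            img = F.normalize(self.img_proj(img_feats), dim=-1)
            txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
            return img @ txt.T  # pairwise cosine similarities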


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/58; G06F16/33; G06F40/211; G06F40/289; G06N3/04; G06N3/08
CPC: G06F16/5866; G06F16/3344; G06F40/211; G06F40/289; G06N3/08; G06N3/045
Inventor: 魏忠钰, 李泽君
Owner: ZHEJIANG LAB