Cross-modal understanding and generating method and device based on multi-modal pre-training model

A multi-modal, cross-modal technology applied in the computer field to improve the reliability and accuracy of cross-modal understanding and generation

Active Publication Date: 2021-11-02
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0004] The present invention provides a cross-modal understanding and generation method and device based on a multi-modal pre-training model to solve the problems existing in current cross-modal understanding and generation approaches.



Examples


Embodiment Construction

[0040] In order to make the purpose, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0041] Multimodal pre-training refers to training a general model on a large-scale multimodal dataset, after which the model can handle various downstream cross-modal tasks with or without fine-tuning. In the field of natural language processing, pre-training based on the Transformer model architecture has achieved great success. Subsequently, this research paradigm was introduced into the multimodal field.
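To make the idea concrete, here is a minimal PyTorch sketch of Transformer-based tri-modal fusion, assuming image region features, text token ids and audio frame features as inputs. The module name TriModalFusion, the dimensions and all hyper-parameters are illustrative assumptions, not the patent's actual implementation.

```python
# A minimal sketch of tri-modal fusion with a Transformer encoder.
# All names, dimensions and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class TriModalFusion(nn.Module):
    def __init__(self, img_dim=2048, txt_vocab=30522, audio_dim=128,
                 d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, d_model)
        self.txt_embed = nn.Embedding(txt_vocab, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Learned modality-type embeddings distinguish the three streams.
        self.type_embed = nn.Embedding(3, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

    def forward(self, img_feats, txt_ids, audio_feats):
        # img_feats:   (B, Ni, img_dim)   image region/patch features
        # txt_ids:     (B, Nt)            text token ids
        # audio_feats: (B, Na, audio_dim) e.g. log-mel frames
        img = self.img_proj(img_feats) + self.type_embed.weight[0]
        txt = self.txt_embed(txt_ids) + self.type_embed.weight[1]
        aud = self.audio_proj(audio_feats) + self.type_embed.weight[2]
        # Concatenate the three sequences and let self-attention learn
        # cross-modal correlations; the output is the fusion representation.
        tokens = torch.cat([img, txt, aud], dim=1)
        return self.encoder(tokens)

# Example: batch of 2, 36 image regions, 16 text tokens, 100 audio frames.
model = TriModalFusion()
fused = model(torch.randn(2, 36, 2048),
              torch.randint(0, 30522, (2, 16)),
              torch.randn(2, 100, 128))
print(fused.shape)  # torch.Size([2, 152, 512])
```

Concatenating the three token streams and relying on self-attention is one common way to let a pre-trained model learn cross-modal correlations without hand-crafted alignment between modalities.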



Abstract

The invention provides a cross-modal understanding and generation method and device based on a multi-modal pre-training model. The method comprises the steps of: determining the multi-modal information to be processed, which comprises an image, a text and an audio; inputting the multi-modal information into the multi-modal pre-training model, which learns the correlations among the modalities to obtain a fusion representation of the multi-modal information; and inputting the fusion representation into the understanding and/or generation unit to execute a cross-modal understanding and/or generation task and obtain an understanding result and/or a generation result. By combining the three modalities of image, text and audio for understanding and generation, the method and device make full use of the available information. By combining the two tasks of cross-modal understanding and cross-modal generation, the multi-modal pre-training model can perform feature extraction and build cross-modal associations more comprehensively, further improving the accuracy of cross-modal understanding and generation.
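As a rough illustration of the pipeline described in the abstract, the hedged sketch below shows how a fusion representation might feed an understanding unit (a pooled classification head, e.g. for visual question answering) and a generation unit (an autoregressive Transformer decoder over a text vocabulary). The head sizes, vocabulary size and sequence lengths are assumptions for illustration only, not the patent's actual design.

```python
# Sketch of understanding and generation units consuming a fusion
# representation; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

d_model, vocab_size = 512, 30522

# Understanding unit: pool the fused sequence and classify
# (e.g. pick an answer from a closed answer set for VQA).
understanding_unit = nn.Sequential(
    nn.LayerNorm(d_model),
    nn.Linear(d_model, 3129),  # 3129: assumed answer-set size
)

# Generation unit: an autoregressive Transformer decoder that attends
# to the fusion representation and predicts tokens over a vocabulary.
decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
generation_unit = nn.TransformerDecoder(decoder_layer, num_layers=6)
to_vocab = nn.Linear(d_model, vocab_size)

# fused: fusion representation produced by the multi-modal encoder.
fused = torch.randn(2, 152, d_model)
understanding_result = understanding_unit(fused.mean(dim=1))       # (2, 3129)
tgt = torch.randn(2, 20, d_model)  # embedded target tokens (teacher forcing)
generation_result = to_vocab(generation_unit(tgt, memory=fused))   # (2, 20, 30522)
print(understanding_result.shape, generation_result.shape)
```

Sharing one fusion representation between the two heads is what lets the understanding and generation tasks regularize each other during pre-training.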

Description

Technical Field

[0001] The invention relates to the field of computer technology, and in particular to a method and device for cross-modal understanding and generation based on a multi-modal pre-training model.

Background Technique

[0002] Multimodal pre-training is an interdisciplinary subject that spans multiple domains and involves multiple modalities of information. The task aims to train a unified framework on large-scale data that can perform various cross-modal understanding and generation tasks, such as image recognition, image generation, visual question answering and text generation.

[0003] At present, common multimodal pre-training methods and frameworks consider only a single modality or two modalities, such as image and text or video and text, and therefore easily ignore other information that is prevalent in the surrounding environment and its effect on cross-modal understanding and generation. Moreover, current multimodal approaches usually focus only on cross-...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/214; G06F18/256
Inventors: 刘静, 朱欣鑫, 刘飞, 郭龙腾
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI