
Semantic representation model pre-training method and device, electronic equipment and storage medium

A semantic representation pre-training technology, applied to computing models, semantic analysis, speech analysis, and related fields. It addresses the problem of poor model accuracy and improves the expression accuracy of the semantic representation model.

Active Publication Date: 2020-08-07
BEIJING BAIDU NETCOM SCI & TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] However, directly pre-training the multi-modal semantic representation model on multi-modal alignment training data, as existing methods do, results in poor accuracy of the trained multi-modal semantic representation model.




Detailed Description of Embodiments

[0027] Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings. Various details of the embodiments are included to facilitate understanding and should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.

[0028] Figure 1 is a schematic diagram of the first embodiment of the present application. As shown in Figure 1, this embodiment provides a pre-training method based on a multimodal semantic representation model, which may specifically include the following steps:

[0029] S101. Collect a plurality of single-modal training data sets of different modalities, and a multi-modal training data set containing a plurality of modalities at the same time.
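The description is cut off after S101, but the Abstract below spells out the remaining steps. As a rough, non-authoritative illustration of the data-collection step only, the following Python sketch shows one way the collected corpora could be organized; the field names, modality keys, and stand-in loaders are assumptions for illustration, not details from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrainingCorpora:
    """S101 (sketch): one unpaired corpus per modality, plus one aligned
    corpus that contains several modalities at the same time."""
    single_modal: Dict[str, List] = field(default_factory=dict)  # e.g. {"text": [...], "image": [...]}
    multi_modal: List[Tuple] = field(default_factory=list)       # e.g. aligned (image, caption) pairs

def collect_corpora() -> TrainingCorpora:
    """Stand-in loaders; a real system would read large-scale corpora."""
    corpora = TrainingCorpora()
    corpora.single_modal["text"] = ["a sentence", "another sentence"]
    corpora.single_modal["image"] = ["img_0001.jpg", "img_0002.jpg"]
    corpora.multi_modal = [("img_0001.jpg", "a caption describing the image")]
    return corpora

corpora = collect_corpora()
```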



Abstract

The invention discloses a pre-training method and device for a semantic representation model, electronic equipment, and a storage medium, and relates to the field of artificial intelligence. The specific implementation scheme is as follows: collect a plurality of single-modal training data sets of different modalities and a multi-modal training data set containing a plurality of modalities at the same time; pre-train the corresponding single-modal semantic representation model with each single-modal training data set; and, based on the trained single-modal semantic representation models, train the fused multi-modal semantic representation model with the multi-modal training data set, where the multi-modal semantic representation model comprises a plurality of single-modal semantic representation models and a fusion model. With this method, the multi-modal semantic representation model can be trained in stages, which effectively improves its expression accuracy.
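To make the staged scheme concrete, here is a minimal PyTorch sketch, assuming toy encoders, a simple concatenation-based fusion model, and placeholder losses; none of the architectures or objectives below are specified by the patent.

```python
import torch
import torch.nn as nn

DIM = 64  # illustrative embedding size

class SingleModalEncoder(nn.Module):
    """Stand-in for one single-modal semantic representation model."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, x):
        return self.net(x)

class FusionModel(nn.Module):
    """Stand-in for the fusion model that merges single-modal representations."""
    def __init__(self, n_modalities: int):
        super().__init__()
        self.proj = nn.Linear(n_modalities * DIM, DIM)

    def forward(self, reps):
        return self.proj(torch.cat(reps, dim=-1))

def pretrain_single_modal(encoder, data, steps=20):
    """Stage 1 (sketch): pre-train one encoder on its own unpaired data.
    The loss is a placeholder for the real self-supervised objective."""
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = encoder(data).pow(2).mean()  # placeholder objective
        opt.zero_grad()
        loss.backward()
        opt.step()

def train_multimodal(encoders, fusion, paired, steps=20):
    """Stage 2 (sketch): starting from the pre-trained encoders, train the
    fused multi-modal model on aligned multi-modal data."""
    params = list(fusion.parameters())
    for enc in encoders:
        params += list(enc.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(steps):
        reps = [enc(x) for enc, x in zip(encoders, paired)]
        loss = fusion(reps).pow(2).mean()  # placeholder alignment objective
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy run: two modalities with different raw feature sizes.
text_enc, image_enc = SingleModalEncoder(32), SingleModalEncoder(48)
pretrain_single_modal(text_enc, torch.randn(16, 32))   # stage 1, per modality
pretrain_single_modal(image_enc, torch.randn(16, 48))
fusion = FusionModel(n_modalities=2)
train_multimodal([text_enc, image_enc], fusion,        # stage 2, aligned data
                 [torch.randn(8, 32), torch.randn(8, 48)])
```

The point of the staging is that each encoder already carries single-modal semantics before the comparatively scarce aligned data is used, which is what the abstract credits for the improved expression accuracy.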

Description

technical field

[0001] The present application relates to the field of computer technology, in particular to the field of artificial intelligence, and specifically to a pre-training method, device, electronic equipment, and storage medium based on a multimodal semantic representation model.

Background technique

[0002] With the widespread application of semantic representation technology in the field of Natural Language Processing (NLP), the paradigm of pre-training (pre-train) a general semantic representation on large-scale data and then fine-tuning (fine-tune) it on downstream tasks has refreshed the state of the art on multiple NLP tasks. Semantic representation technology for multimodal tasks has also received extensive attention recently and has improved results on typical multimodal tasks such as visual question answering and image retrieval.

[0003] In the current multi-modal semantic representation technology, for pre-training methods,...
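Paragraph [0002] refers to the generic pre-train/fine-tune paradigm. As a hedged illustration of that paradigm only (not of this patent's method), a pre-trained encoder is typically reused and a small task head is trained on the downstream data; the sizes and the task below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic pre-train/fine-tune sketch (illustrative; not the patent's method).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
# ... assume `encoder` was already pre-trained on large-scale data ...

task_head = nn.Linear(64, 2)  # small head for a downstream binary task
opt = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-4)

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))  # toy downstream batch
loss = F.cross_entropy(task_head(encoder(x)), y)      # fine-tune end to end
opt.zero_grad()
loss.backward()
opt.step()
```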


Application Information

IPC(8): G06F40/30; G06K9/00; G10L15/18; G06N20/00
CPC: G06F40/30; G10L15/1815; G06N20/00; G06V20/41
Inventor: 尹维冲, 于菲, 唐霁霁, 孙宇
Owner: BEIJING BAIDU NETCOM SCI & TECH CO LTD