A multimodal vocabulary representation method based on dynamic fusion mechanism

A multimodal lexical representation technology, applied in unstructured text data retrieval, semantic tool creation, natural language analysis, etc. It addresses problems such as inaccurate representation results, inaccurate lexical weights in each modality, and the failure to account for differences between words.

Active Publication Date: 2020-02-07
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

The multimodal vocabulary representation methods in the prior art do not take the differences between words into account. In practical applications, the semantic representation of more abstract words depends more on the text modality, while the semantic representation of more concrete words relies more on the visual modality, so different types of words should carry different weights in different modalities. Not distinguishing between words leads to inaccurate modality weights and, in turn, to inaccurate final representation results.




Detailed Description of the Embodiments

[0040] Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention, and are not intended to limit the protection scope of the present invention.

[0041] As shown in Figure 1, which is a flowchart of the multimodal vocabulary representation method based on a dynamic fusion mechanism provided by the present invention, the method includes Step 1, Step 2 and Step 3, wherein:

[0042] Step 1: Calculate the text representation vector of the vocabulary to be represented in the text modality and the image representation vector of the vocabulary to be represented in the visual modality;

[0043]The purpose of calculating the text representation vector and image representation vector of the vocabulary is to convert the vocabulary into a form that can be recognized by the computer. In practical applications, ...
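
For concreteness, below is a minimal sketch of one way Step 1 could be carried out, assuming a pretrained word-embedding table supplies the text modality and precomputed CNN features of images retrieved for the word supply the visual modality. The lookup structures (`word_embeddings`, `image_features`) and the averaging of image features are illustrative assumptions, not details given in the patent text.

```python
# Hypothetical Step 1 sketch: obtain a text representation vector and an image
# representation vector for a word. The data sources below are placeholders.
import numpy as np

word_embeddings = {                       # text modality: word -> 300-d embedding
    "apple": np.random.rand(300),
}
image_features = {                        # visual modality: word -> CNN features of associated images
    "apple": [np.random.rand(4096) for _ in range(10)],
}

def text_representation(word: str) -> np.ndarray:
    """Text representation vector of the word in the text modality."""
    return word_embeddings[word]

def image_representation(word: str) -> np.ndarray:
    """Image representation vector: here taken as the mean of the CNN
    features of the images associated with the word (one common choice)."""
    return np.mean(image_features[word], axis=0)

v_text = text_representation("apple")     # shape (300,)
v_image = image_representation("apple")   # shape (4096,)
```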



Abstract

The invention provides a multimodal vocabulary representation method. The method comprises the steps of: calculating a text representation vector of the vocabulary to be represented in the text modality and an image representation vector of the vocabulary to be represented in the visual modality; inputting the text representation vector into a pre-established text-modality weight model to obtain the weight of the text representation vector in the text modality; inputting the image representation vector into a pre-established visual-modality weight model to obtain the weight of the image representation vector in the visual modality; and calculating the multimodal vocabulary representation vector from the text representation vector, the image representation vector and their respective weights. The text-modality weight model is a neural network model whose input is the text representation vector and whose output is the weight of that vector in the text modality; the visual-modality weight model is a neural network model whose input is the image representation vector and whose output is the weight of that vector in the visual modality.
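
To make the fusion step concrete, the sketch below treats each weight model as a single feed-forward layer with a sigmoid output that maps a modality vector to a scalar weight, and then forms the multimodal vector as the weighted sum of the two modality vectors. The layer shapes, the sigmoid gating, and the projection that brings the image vector into the text dimension are assumptions made for illustration; the patent only states that each weight model is a neural network from the modality vector to its weight.

```python
# Illustrative dynamic-fusion sketch (untrained weights); architecture details are assumed.
import numpy as np

rng = np.random.default_rng(0)
D_TEXT, D_IMAGE = 300, 4096

# Assumed text-modality weight model: text vector -> scalar weight.
W_t, b_t = 0.01 * rng.normal(size=(1, D_TEXT)), np.zeros(1)
# Assumed visual-modality weight model: image vector -> scalar weight.
W_v, b_v = 0.01 * rng.normal(size=(1, D_IMAGE)), np.zeros(1)
# Assumed projection so both modality vectors live in the same space before summing.
P = 0.01 * rng.normal(size=(D_TEXT, D_IMAGE))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(v_text: np.ndarray, v_image: np.ndarray) -> np.ndarray:
    """Multimodal vocabulary representation vector as a weighted sum of modalities."""
    w_text = sigmoid(W_t @ v_text + b_t)     # weight of the text vector in the text modality
    w_image = sigmoid(W_v @ v_image + b_v)   # weight of the image vector in the visual modality
    return w_text * v_text + w_image * (P @ v_image)

multimodal_vec = fuse(rng.normal(size=D_TEXT), rng.normal(size=D_IMAGE))
print(multimodal_vec.shape)  # (300,)
```

In a real system the two weight models would be trained jointly (for example, against a similarity or downstream-task objective) so that abstract words learn to lean more on the text modality and concrete words more on the visual modality.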

Description

Technical Field

[0001] The invention belongs to the technical field of natural language processing, and specifically provides a multimodal vocabulary representation method based on a dynamic fusion mechanism.

Background

[0002] Multimodal vocabulary representation is a basic task of natural language processing that directly affects the performance of the entire natural language processing system. Here, a modality refers to a particular method or angle of collecting data about the thing to be described. Multimodal vocabulary representation fuses information from multiple modalities and maps words with similar semantics in different modalities into a high-dimensional space. Compared with single-modality vocabulary representation, multimodal vocabulary representation is closer to the human process of learning lexical concepts and achieves better performance in natural language processing tasks...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/36
CPC: G06F16/36; G06F40/20
Inventors: 王少楠, 张家俊, 宗成庆
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI