
Knowledge base completion method based on multi-modal representation learning

A multi-modal knowledge base technology, applied in the field of knowledge base completion, which addresses problems such as unstable knowledge base completion performance, performance limited by explicitly stored knowledge, and reliance on single-modality information.

Active Publication Date: 2021-02-09
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

Most existing methods that integrate external information consider only a single modality, usually text, and fail to exploit the complementarity between modalities to learn more comprehensive features.

[0004] At present, most knowledge graph representation learning considers only the structural knowledge between entities and relations. The performance of such models is limited by the explicitly stored knowledge, which makes the completion results unstable. In reality, entities possess knowledge in multiple modalities, such as text, images, audio, and video. External knowledge from these different modalities can enrich and expand the existing knowledge base and thereby provide richer semantic information for downstream tasks such as question answering and link prediction. However, existing representation learning methods that incorporate external information mostly consider a single modality and fail to use the complementary characteristics between modalities to learn more comprehensive features.




Embodiment Construction

[0057] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0058] It should be pointed out that the following detailed description is exemplary and intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

[0059] It should be noted that the terminology used here is only for describing specific implementations and is not intended to limit the exemplary implementations according to the present application. As used herein, unless the context clearly dictates otherwise, the singular is intended to include the plural. It should also be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.



Abstract

The invention relates to a knowledge base completion method based on multi-modal representation learning. The method comprises the steps of: giving a knowledge base KB comprising two parts, a known knowledge set and an unknown knowledge set; performing data preprocessing on the data in the knowledge base; proposing a knowledge base completion model, ConvAt, which first generates multi-modal representations of the head entity and the tail entity from the acquired data; concatenating the multi-modal representation of the head entity, the structural feature vector of the relation, and the multi-modal representation of the tail entity by columns, processing them in turn through a convolutional neural network module, a channel attention module, and a spatial attention module, and finally multiplying by a weight matrix to obtain a score for the triple (h, r, t); and training the completion model with a loss function and performing knowledge base completion with the trained model. The algorithm provided by the invention can fuse external information and exploit richer semantic information.
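The patent text gives no implementation details, but the scoring pipeline the abstract describes (column-wise concatenation, convolution, channel attention, spatial attention, weight matrix) can be sketched minimally. In the sketch below the convolution is reduced to a single 1×3 filter per channel and the attention gates are simple squeeze-style averages; the names (`conv_at_score`, `filters`, `w_out`) and all dimensions are illustrative assumptions, not the actual ConvAt model.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8          # embedding dimension (assumed)
n_filters = 4  # number of convolution filters (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_at_score(h, r, t, filters, w_out):
    """Score a triple (h, r, t) following the pipeline in the abstract:
    column-wise concatenation -> convolution -> channel attention ->
    spatial attention -> output weight matrix."""
    # 1. Stack the three d-dimensional vectors as columns of a d x 3 "image".
    x = np.stack([h, r, t], axis=1)                    # (d, 3)

    # 2. Convolution: each 1x3 filter slides over the rows, producing one
    #    feature map per filter (a simplified stand-in for the CNN module).
    maps = np.stack([x @ f for f in filters], axis=0)  # (n_filters, d)

    # 3. Channel attention: gate each feature map by a sigmoid of its
    #    global average (squeeze-and-excitation style simplification).
    channel_gate = sigmoid(maps.mean(axis=1))          # (n_filters,)
    maps = maps * channel_gate[:, None]

    # 4. Spatial attention: gate each position by a sigmoid of the mean
    #    across channels.
    spatial_gate = sigmoid(maps.mean(axis=0))          # (d,)
    maps = maps * spatial_gate[None, :]

    # 5. Flatten and multiply by an output weight matrix for a scalar score.
    return float(maps.reshape(-1) @ w_out)

# Random embeddings and parameters just to exercise the pipeline.
h, r, t = rng.normal(size=(3, d))
filters = rng.normal(size=(n_filters, 3))
w_out = rng.normal(size=d * n_filters)

print(conv_at_score(h, r, t, filters, w_out))
```

In a real model the embeddings, filters, and output weights would be learned jointly via the loss function mentioned in the abstract, with the multi-modal entity representations fused from text and image features rather than drawn at random.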

Description

technical field

[0001] The invention relates to the field of knowledge base completion, in particular to a knowledge base completion method based on multi-modal representation learning.

Background technique

[0002] In recent years, a variety of knowledge base completion methods have emerged, among which methods based on knowledge representation learning form an active research area. A key problem in representation learning is learning low-dimensional distributed embeddings of entities and relations.

[0003] At present, there are mainly two types of information that can be used for knowledge representation learning. The first is the triples that already exist in knowledge graphs. Such methods mainly include: translation-based knowledge graph representation learning methods, such as TransE; methods based on tensor/matrix decomposition, such as the RESCAL model; and representation learning models based on neural networks, such as ConvE....
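The translation-based scoring that TransE uses can be shown in a few lines; this is a toy illustration of the published TransE idea (relations as translations in embedding space), not part of the patent's method, and the embeddings are made up for the example.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE models a relation as a translation in embedding space:
    for a true triple, h + r should be close to t, so the negative
    distance -||h + r - t|| serves as a plausibility score (higher is
    more plausible)."""
    return -np.linalg.norm(h + r - t)

# Toy 4-dimensional embeddings: t is exactly h translated by r, so the
# true triple scores higher than a corrupted one.
h = np.array([0.1, 0.2, 0.3, 0.4])
r = np.array([0.5, -0.1, 0.0, 0.2])
t = h + r
t_wrong = np.array([1.0, 1.0, 1.0, 1.0])

assert transe_score(h, r, t) > transe_score(h, r, t_wrong)
```

Decomposition models such as RESCAL instead score a triple as a bilinear form over a relation matrix, and neural models such as ConvE apply convolutions to reshaped embeddings; all three families use only the structural triples, which is the limitation paragraph [0004] addresses.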

Claims


Application Information

IPC(8): G06N5/02
CPC: G06N5/022, G06N5/027
Inventor: 汪璟玢, 苏华
Owner FUZHOU UNIV