
Neural network visual dialogue model and method based on KR product fusion multi-modal information

A neural network and multimodal technology, applied to visual dialogue and multimodal fusion

Active Publication Date: 2021-07-27
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0003] To build a better visual dialogue model, the main current challenge is that the visual dialogue task must model both the image content and the dialogue history, and must obtain information useful for answer prediction from the question vector, the visual vector, and the history vector.

Method used



Examples


Detailed Description of Embodiments

[0030] The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

[0031] A neural network visual dialogue model that fuses multi-modal information based on the KR product, comprising a modality feature extraction module, a different-modality information fusion module, and a candidate answer prediction module;

[0032] The modality feature extraction module extracts the semantic features of the question, the visual features of the image, and the features of the dialogue history. First, a vector representation of the question is obtained through an LSTM network, and a set of entity feature vectors for the image is obtained with a Faster R-CNN network. The historical dialogue information is regarded as a whole or the content of each round of dia...
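The abstract mentions that an attention mechanism selects the visual features relevant to the question. A minimal sketch of that step, using plain dot-product attention over pre-extracted entity features, could look like the following. This is an illustration only, not the patent's exact formulation; all function names, dimensions, and the choice of dot-product scoring are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def question_guided_attention(question_vec, entity_feats):
    """Attend over image entity features using the question vector.

    question_vec : (d,)   sentence embedding (e.g. a final LSTM state)
    entity_feats : (n, d) per-entity features (e.g. Faster R-CNN boxes)
    returns      : (d,)   question-relevant visual feature
    """
    scores = entity_feats @ question_vec   # (n,) relevance of each entity
    weights = softmax(scores)              # attention distribution over entities
    return weights @ entity_feats          # weighted sum of entity features

rng = np.random.default_rng(0)
q = rng.normal(size=8)           # stand-in for an LSTM question embedding
ents = rng.normal(size=(5, 8))   # stand-in for 5 detected entity features
v = question_guided_attention(q, ents)
print(v.shape)  # (8,)
```

The attended vector is a convex combination of the entity features, so entities whose features align with the question dominate the resulting visual representation.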



Abstract

The invention discloses a neural network visual dialogue model and method that fuse multi-modal information based on the KR product. The model comprises a modality feature extraction module, a different-modality information fusion module, and a candidate answer prediction module. The modality feature extraction module extracts features of the question text and of the dialogue history through an LSTM network, extracts entity features of the picture with a Faster R-CNN network, and uses an attention mechanism to extract the visual features relevant to the question. The different-modality information fusion module captures feature information within each modality with a late-fusion method, captures the associations among different modalities through a KR-product-based feature fusion method, and fuses the intra-modality and inter-modality information. The candidate answer prediction module predicts answers with the fused vector combining intra-modality and inter-modality information, so that the relevant answer can be located more accurately. The method overcomes the shortcoming of traditional visual dialogue models, in which late fusion insufficiently captures the associations among different modalities.
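The KR-product fusion step can be illustrated with a small sketch. Assuming "KR product" here denotes the Khatri-Rao (column-wise Kronecker) product, as is common in multimodal fusion work, a minimal NumPy implementation looks like this; the variable names and shapes are illustrative, not taken from the patent.

```python
import numpy as np

def khatri_rao(a, b):
    """Khatri-Rao (column-wise Kronecker) product.

    a: (m, k), b: (n, k) -> (m * n, k)
    Output column j is kron(a[:, j], b[:, j]), so each column pairs
    every component of one modality with every component of the other.
    """
    m, k = a.shape
    n, k2 = b.shape
    assert k == k2, "both factors need the same number of columns"
    # Form all pairwise per-column products, then flatten the row axes.
    return np.einsum("ik,jk->ijk", a, b).reshape(m * n, k)

# Toy "modality" matrices: columns index features shared across modalities.
q = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # question-side factors, 2 x 2
v = np.array([[5.0, 6.0],
              [7.0, 8.0]])   # visual-side factors, 2 x 2
fused = khatri_rao(q, v)
print(fused.shape)  # (4, 2)
```

Unlike simple concatenation (late fusion alone), every output component is a product of one question component and one visual component, which is what lets the fused vector encode cross-modal associations.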

Description

Technical field

[0001] The present invention relates to the technical field of visual dialogue and multimodal fusion, and specifically to a model and a method for selecting the true answer from candidate answers given a picture, the historical dialogue information, and the corresponding question.

Background technique

[0002] Visual dialogue is a challenging task at the intersection of language and vision: it must consider the historical information of multiple rounds of dialogue and the related information in the image to find the best candidate answer to the current question. Visual dialogue appears in many application scenarios, such as helping blind people understand their surroundings, interactive search, and indoor navigation. In the visual dialogue task, in order to capture the information related to the answer, the model needs to understand the question, capture the visual information and historical information related to the question, and capture the potential association...

Claims


Application Information

IPC(8): G06F16/332, G06F16/35, G06F40/284, G06K9/62
CPC: G06F16/3329, G06F40/284, G06F16/35, G06F18/2411, G06F18/253
Inventor 骆克, 张鹏
Owner TIANJIN UNIV