
Visual question and answer method based on multi-modal decomposition model

A multi-modal model technology, applied in character and pattern recognition, instruments, computer parts, etc.; it can solve the problems of low performance and low accuracy and achieve the effect of improving performance.

Inactive Publication Date: 2018-02-09
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the current problems of low performance and low accuracy, the present invention uses MFB to fuse the question features with the visual features of the image, uses MFH to obtain richer and more relevant visual features, and works with the attention model to predict the correlation between each spatial grid in the image and the question, which helps accurately predict the best-matching answer. Combined with this image attention mechanism, the model can effectively understand which image region is important for the question, significantly improving its performance.
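As a point of reference, below is a minimal sketch of question-guided attention over image spatial grids as described above. It is only an illustrative implementation under common VQA assumptions; the function names, projection matrices, and dimensions (e.g. 196 grids, 2048-d visual features) are hypothetical and not taken from the patent.

```python
# Minimal sketch (assumed, not the patent's exact formulation): score each
# spatial grid against the question, softmax the scores, and pool the grids.
import numpy as np

def grid_attention(grid_feats, question_feat, W_v, W_q, w_a):
    """grid_feats: (G, Dv) visual features, one row per spatial grid.
    question_feat: (Dq,) pooled question feature.
    Returns (attention weights over grids, attended visual feature)."""
    scores = np.tanh(grid_feats @ W_v + question_feat @ W_q) @ w_a  # (G,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the G grids
    attended = weights @ grid_feats               # (Dv,) weighted sum of grids
    return weights, attended

# Toy usage with random features and random projections.
rng = np.random.default_rng(0)
G, Dv, Dq, Dh = 196, 2048, 1024, 512
weights, attended = grid_attention(
    rng.normal(size=(G, Dv)), rng.normal(size=Dq),
    rng.normal(size=(Dv, Dh)), rng.normal(size=(Dq, Dh)), rng.normal(size=Dh))
```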




Embodiment Construction

[0035] Specific embodiments

[0036] It should be noted that, provided there is no conflict, the embodiments herein and the features of those embodiments may be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0037] Figure 1 is a flow chart of the system for visual question answering based on the multimodal decomposition model in the present invention. It mainly comprises multi-modal decomposition bilinear pooling (MFB), multi-modal decomposition high-order pooling (MFH), and a collaborative attention model.

[0038] Figure 2 is the MFB flow chart for visual question answering based on the multimodal decomposition model in the present invention. In multi-modal decomposition bilinear pooling (MFB), the two modalities each contribute one feature vector: the visual feature of the image and the text feature of the question. The formula for multi-modal decomposition bilinear pooling...
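The formula itself is elided above. For reference, a minimal reconstruction following the commonly published MFB formulation; the symbols $x$, $q$, the factor matrices $\tilde{U}$, $\tilde{V}$, and the factor size $k$ are assumptions, not taken from the patent text:

$$
z = \mathrm{SumPool}\!\left(\tilde{U}^{\top} x \circ \tilde{V}^{\top} q,\; k\right),
\qquad z \leftarrow \operatorname{sign}(z)\,\lvert z\rvert^{1/2},
\qquad z \leftarrow z / \lVert z \rVert_2 ,
$$

where $x$ is the image feature, $q$ is the question feature, $\circ$ denotes the element-wise (Hadamard) product, and $\mathrm{SumPool}(\cdot, k)$ sums every $k$ consecutive dimensions of its input, followed by power and $\ell_2$ normalization.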



Abstract

A visual question answering method based on a multi-modal decomposition model is provided. The method is characterized in that: the image is trained on the ImageNet dataset and image features are extracted, and the question text is encoded into feature vectors; a collaborative attention model is introduced into the basic network architecture to learn image- and question-related features and to characterize the fine-grained correlation among the multi-modal features; the multi-modal features then enter a multi-modal decomposition bilinear pooling (MFB) or multi-modal decomposition high-order pooling (MFH) module to generate a fused image-question feature z, and z is fed into the classifier to predict the best-matching answer. According to the method provided by the present invention, the collaborative attention model is used to predict the correlation between each spatial grid in the image and the question, so the best-matching answer can be predicted accurately; and, in combination with the image attention mechanism, the model can effectively understand which image region is important to the question, so that the performance of the model and the accuracy of the question answering are significantly improved.
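To make the fusion-and-classification step above concrete, here is a minimal sketch assuming a standard MFB-style formulation (factor matrices U and V, factor size k, sum pooling, power and l2 normalization, then a linear classifier). All names and dimensions are illustrative and not taken from the patent.

```python
# Illustrative sketch (assumed formulation): fuse an image feature and a
# question feature with MFB-style pooling, then score candidate answers.
import numpy as np

def mfb_fuse(x, q, U, V, k):
    """Fuse image feature x (Dv,) and question feature q (Dq,) into z (o,)."""
    joint = (U.T @ x) * (V.T @ q)             # element-wise product in k*o space
    z = joint.reshape(-1, k).sum(axis=1)      # sum-pool every k consecutive dims
    z = np.sign(z) * np.sqrt(np.abs(z))       # power normalization
    return z / (np.linalg.norm(z) + 1e-12)    # l2 normalization

def predict_answer(z, W_cls):
    """Linear classifier over candidate answers followed by a softmax."""
    logits = W_cls @ z
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Toy usage with random features and parameters.
rng = np.random.default_rng(0)
Dv, Dq, k, o, n_answers = 2048, 1024, 5, 1000, 3000
z = mfb_fuse(rng.normal(size=Dv), rng.normal(size=Dq),
             rng.normal(size=(Dv, k * o)), rng.normal(size=(Dq, k * o)), k)
answer_probs = predict_answer(z, rng.normal(size=(n_answers, o)))
```

In the published high-order variant (MFH), several such MFB units are cascaded and their pooled outputs concatenated; the sketch above covers only a single bilinear unit.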

Description

Technical field

[0001] The invention relates to the field of visual question answering, and in particular to a method for visual question answering based on a multimodal decomposition model.

Background technique

[0002] With the continuous development of machine vision, the automatic understanding of semantic representations in images by machines has been widely studied. Visual question answering is often used in image retrieval, intelligent transportation, visual education, artificial intelligence, and other fields. Specifically, in the field of image retrieval, key information can be decomposed and corresponding text descriptions obtained by understanding images and questions; in the field of visual education, specific features are decomposed and the correct answer is obtained through inference combined with the feature information in the text, a step forward in improving artificial intelligence. The current existing research only considers the visual features in the image, bu...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62
CPC: G06F18/2451; G06F18/253; G06F18/214
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH