Image title automatic generation method based on multi-modal attention

A multi-modal attention technology at the intersection of computer vision and natural language processing. It addresses the problems that the categories used for training are limited in number, that much of the semantic information appearing in captions is not contained in those categories, and that visual and semantic features lack a strict alignment relationship; the effects are to alleviate the visual-semantic alignment problem and to improve caption quality.

Active Publication Date: 2018-11-16
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0005] There are two problems in traditional neural-network-based methods: 1. The image classification dataset used to train the CNN contains a limited number of categories, and much of the semantic information that frequently appears in image captions (such as color and size) is not contained in them; 2. The visual features and the semantic features do not have a strict alignment relationship.




Example Embodiment

[0019] The invention provides a method for automatically generating image captions based on multi-modal attention. The specific embodiments discussed here only illustrate the implementation of the invention and do not limit its scope. The embodiments are described in detail below with reference to the drawings. The specific steps of the method are as follows:

[0020] (1) Image preprocessing

[0021] A selective search algorithm is used to extract image regions containing objects from the original image. These regions differ in size, which makes them unsuitable for subsequent feature extraction with the ResNet convolutional neural network. The present invention therefore scales each extracted region to a fixed size that meets the network's input requirements and, at the same time, normalizes the image pixel values.
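The patent does not include preprocessing code, so the following is a minimal sketch of the step just described, assuming OpenCV's contrib selective-search implementation (cv2.ximgproc), a 224×224 ResNet input size, ImageNet channel statistics for normalization, and an illustrative region budget top_k; none of these constants come from the patent.

```python
import cv2
import numpy as np

# ImageNet channel statistics (assumption; the patent does not specify them)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def extract_regions(image_bgr, top_k=36, size=224):
    """Selective search -> crop -> rescale to a fixed size -> normalize."""
    # Selective search from opencv-contrib (cv2.ximgproc)
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image_bgr)
    ss.switchToSelectiveSearchFast()
    rects = ss.process()  # each proposal is (x, y, w, h)

    crops = []
    for (x, y, w, h) in rects[:top_k]:
        patch = image_bgr[y:y + h, x:x + w]
        # Scale every region to one size so the CNN sees uniform inputs
        patch = cv2.resize(patch, (size, size), interpolation=cv2.INTER_LINEAR)
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        crops.append((patch - MEAN) / STD)  # per-channel normalization
    return np.stack(crops)  # (top_k, size, size, 3), channel-last
```

The resulting stack can be transposed to channel-first layout and fed through a ResNet to obtain one feature vector per region.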

[0022] (2...



Abstract

The invention belongs to the cross-disciplinary field of computer vision and natural language processing, and provides a method for automatically generating image captions based on multi-modal attention. It solves two problems of traditional neural-network-based methods: the alignment problem between visual features and language features, and the problem of sentence features being ignored during word prediction; it also improves the convergence speed and the quality of the generated captions. The method comprises the following steps: features are first extracted automatically from image regions using a convolutional neural network; sentence features are then extracted using an LSTM with visual attention; finally, the caption is produced by an LSTM with multi-modal attention (visual attention and hidden-variable attention). Experiments show that the proposed method achieves excellent results on standard datasets such as MS COCO.
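The abstract only names the components, so the following PyTorch sketch is an illustrative reconstruction of the final decoding stage: one LSTM step that attends both to the visual region features and to the hidden states of the sentence-level LSTM (the "hidden-variable attention"). The additive attention form, the module names, and all dimensions are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention over a set of feature vectors."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (batch, n, feat_dim); hidden: (batch, hidden_dim)
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.hidden_proj(hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)        # attention weights, (batch, n, 1)
        return (alpha * feats).sum(dim=1)  # weighted context vector

class MultiModalDecoderStep(nn.Module):
    """One caption-decoding step with visual + hidden-variable attention."""
    def __init__(self, feat_dim, sent_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        self.visual_attn = AdditiveAttention(feat_dim, hidden_dim, hidden_dim)
        self.semantic_attn = AdditiveAttention(sent_dim, hidden_dim, hidden_dim)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim + sent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_emb, visual_feats, sent_states, h, c):
        v = self.visual_attn(visual_feats, h)   # attend to image regions
        s = self.semantic_attn(sent_states, h)  # attend to sentence-LSTM states
        h, c = self.lstm(torch.cat([word_emb, v, s], dim=1), (h, c))
        return self.out(h), h, c                # next-word logits + new state
```

Combining the two context vectors at every step is what lets the decoder balance visual evidence against the sentence features that, per the abstract, traditional decoders ignore during word prediction.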

Description

Technical field

[0001] The invention belongs to the cross-disciplinary field of computer vision and natural language processing, and relates to a method for automatically generating image captions based on multi-modal attention.

Background technique

[0002] The essence of generating captions for images is to translate images into language. An efficient algorithm for the automatic generation of image captions can give systems (human or computer) that lack vision, or whose vision is poor, the ability to perceive the surrounding environment. In recent years, much novel work has fused advances in computer vision and natural language processing, with promising results. According to the way captions are generated, these works can be divided into three categories: template-matching-based methods, transfer-based methods, and neural-network-based methods.

[0003] The method based on template matching first uses multiple classifiers to identify the objects, attributes, and a...


Application Information

IPC(8): G06F17/27, G06N3/04
CPC: G06F40/258, G06N3/045
Inventors: 葛宏伟 (Ge Hongwei), 闫泽杭 (Yan Zehang)
Owner: DALIAN UNIV OF TECH