Image description automatic generation method based on relation constraint self-attention

An automatic image description (captioning) generation technology, applied to neural learning methods, biological neural network models, instruments, etc. It addresses problems such as the redundant noise of the self-attention mechanism, its lack of prior knowledge, and the resulting difficulty in further improving image description quality, and achieves good-quality results.

Pending Publication Date: 2021-09-21
BEIJING UNIV OF TECH
Cites: 0 | Cited by: 2

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problem that the image description effect is difficult to further improve because of the redundant noise and the lack of prior knowledge in the self-attention mechanism used for image description, the present invention discloses a relation-constrained self-attention model (Relation Constraint Self-Attention, RCSA). It introduces prior relationship information into self-attention to constrain the distribution of the attention weights, thereby improving the relation-learning ability of self-attention.
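A minimal sketch may help fix the core idea: the prior relation information can be injected as an additive bias on the self-attention logits, so that region pairs with weak or no prior relationship are suppressed after the softmax. The function name, tensor shapes and the log-domain bias below are illustrative assumptions, not the patent's actual formulation.

```python
import torch
import torch.nn.functional as F

def relation_constrained_attention(Q, K, V, relation_prior):
    """Illustrative sketch (not the patented formulation): constrain
    scaled dot-product self-attention with a prior relation matrix.

    Q, K, V:        (batch, n_regions, d) query/key/value features
    relation_prior: (batch, n_regions, n_regions) scores in [0, 1],
                    e.g. from a visual relationship detector (assumed).
    """
    d = Q.size(-1)
    # Standard scaled dot-product attention logits.
    logits = Q @ K.transpose(-2, -1) / d ** 0.5
    # Add the prior as a log-domain bias: pairs with (near-)zero prior
    # relation receive a large negative bias and are ignored after softmax.
    logits = logits + torch.log(relation_prior.clamp(min=1e-9))
    weights = F.softmax(logits, dim=-1)
    return weights @ V
```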


Examples


Embodiment Construction

[0026] The following takes the MS COCO image description data set as an example to illustrate the specific implementation steps of the present invention:

[0027] Step (1) Obtain the MS COCO image description dataset and preprocess it to obtain the training dataset:

[0028] Step (1.1) Obtain the MS COCO image description dataset, which contains image data I and its corresponding ground-truth standard description data. The MS COCO dataset can be downloaded from http://cocodataset.org/#download. It contains 164,062 images in total; the training, validation and test sets contain 82,783, 40,504 and 40,775 images respectively. Except for the test set, each image also has at least 5 corresponding standard descriptions as labels.

[0029] Step (1.2) Preprocess the ground-truth standard description data in MS COCO. Set the maximum length of an image description to 16 words, and replace words with a word frequency of less than 5 with "UNK" to reduce the inte...
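As a concrete illustration of step (1.2), the sketch below truncates each caption to 16 tokens and replaces words occurring fewer than 5 times with "UNK". The function name and the whitespace tokenization are assumptions for illustration only, not the patent's preprocessing code.

```python
from collections import Counter

def preprocess_captions(captions, max_len=16, min_freq=5):
    """Sketch of the caption preprocessing described in step (1.2):
    truncate each description to max_len tokens and replace words
    occurring fewer than min_freq times with the "UNK" token."""
    tokenized = [c.lower().strip().split() for c in captions]
    freq = Counter(w for toks in tokenized for w in toks)
    vocab = {w for w, n in freq.items() if n >= min_freq}
    processed = [
        [w if w in vocab else "UNK" for w in toks[:max_len]]
        for toks in tokenized
    ]
    return vocab, processed
```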



Abstract

The technical scheme adopted by the invention is an automatic image description generation method based on relation-constrained self-attention. It relates to the three fields of natural image processing, computer vision and natural language processing, and has the following characteristics: 1) A self-attention mechanism based on visual-semantic relation constraints (RCSA) is designed; the constraints guide self-attention to focus on regions related to description generation and to ignore irrelevant regions, thereby improving the accuracy of the generated image descriptions. 2) RCSA comprises two sub-modules, RCSA-E and RCSA-D, which act on the encoding stage and the decoding stage of the image description model respectively; RCSA-E uses visual relationships to make the self-attention weights sparser, while RCSA-D embeds prior semantic relationship information into the input high-level context features to enhance semantic expression in the decoding stage. 3) Extensive experiments under both offline and online evaluation protocols demonstrate the effectiveness of the proposed method.
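To make the division of labour between the two sub-modules concrete, the sketch below shows one plausible way RCSA-E (sparsifying encoder self-attention with a visual relation mask) and RCSA-D (fusing a prior semantic-relation feature into the decoder context) could be wired up. The class names, shapes and the additive fusion are assumptions, not the patent's definitions.

```python
import torch.nn as nn

class RCSAEncoderBlockSketch(nn.Module):
    """Encoder-side idea (RCSA-E), sketched: mask out attention between
    region pairs with no detected visual relationship, so the attention
    weights become sparser. Illustrative only."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, regions, no_relation_mask):
        # no_relation_mask: (batch * n_heads, N, N) boolean; True blocks a pair.
        # Keep the diagonal False so every region can still attend to itself.
        out, _ = self.attn(regions, regions, regions, attn_mask=no_relation_mask)
        return out

class RCSADecoderFusionSketch(nn.Module):
    """Decoder-side idea (RCSA-D), sketched: project a pooled prior
    semantic-relation embedding and add it to the high-level context
    feature before word prediction. Illustrative only."""
    def __init__(self, d_model, d_relation):
        super().__init__()
        self.proj = nn.Linear(d_relation, d_model)

    def forward(self, context, relation_embedding):
        # context: (batch, d_model); relation_embedding: (batch, d_relation)
        return context + self.proj(relation_embedding)
```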

Description

Technical field
[0001] The invention relates to the three fields of natural image processing, computer vision and natural language processing. Aiming at the automatic generation of natural image descriptions, it designs an automatic image description generation method based on relation-constrained self-attention.
Background technique
[0002] Image captioning, which aims to automatically generate natural descriptions for images, is an interdisciplinary task that combines computer vision and natural language processing. It requires a model not only to understand the objects, scenes and their interactions in an image, but also to generate natural language sequences. Research on image description depends on progress in computer vision and natural language processing technology; in turn, it helps promote the development of computer vision, natural language processing and other related fields, and also helps to promote the realization of artificial...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62, G06F40/30, G06N3/04, G06N3/08
CPC: G06F40/30, G06N3/08, G06N3/045, G06F18/214
Inventors: 冀俊忠, 王鸣展, 张晓丹
Owner: BEIJING UNIV OF TECH