
Image description text generation method based on generative adversarial network

A technology relating to image description and text generation, applied to biological neural network models, neural learning methods, instruments, etc. It addresses problems in existing methods such as inaccurate wording, low scores on objective evaluation metrics, and insignificant improvement.

Active Publication Date: 2021-05-18
SHANGHAI JIAO TONG UNIV
Cites: 7 · Cited by: 15

AI Technical Summary

Problems solved by technology

Existing image description generation methods rely solely on the encoder-decoder architecture and a global attention mechanism, and the text descriptions they generate still have many shortcomings: the wording is not accurate enough, scores on objective evaluation metrics are low, and improvements over prior methods are not significant.



Examples


Embodiment

[0045] This method is mainly implemented in PyTorch. As shown in Figure 1, the present invention provides a method for generating image description text based on a generative adversarial network, comprising the following steps:

[0046] 1) Use a target detection model as the encoder to extract image features. The encoder is the target detection model Faster R-CNN; passing the image data through the Faster R-CNN model yields a set of regional features, a set of bounding boxes, and a category Softmax probability distribution for each region.
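For illustration only, a minimal sketch of this encoder step using torchvision's off-the-shelf detection API as a stand-in for the patent's model (torchvision's COCO-trained ResNet-50 FPN weights, not the Visual Genome-trained ResNet-101 model the patent describes; the image tensor is a placeholder):

```python
# Minimal sketch of step 1, assuming torchvision's Faster R-CNN as a
# stand-in for the patent's Visual Genome-trained model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO weights, not Visual Genome
model.eval()

image = torch.rand(3, 480, 640)         # placeholder RGB image tensor in [0, 1]
with torch.no_grad():
    output = model([image])[0]          # one dict per input image

boxes = output["boxes"]                 # set of bounding boxes, shape (N, 4)
labels = output["labels"]               # predicted category per region
scores = output["scores"]               # post-Softmax confidence per region
# The patent additionally keeps a pooled feature vector per region from the
# detector's box head; extracting those requires hooking into model.roi_heads.
```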

[0047] The Faster R-CNN model is built on ResNet-101, a model pre-trained for classification on the ImageNet dataset. Faster R-CNN itself is trained on the Visual Genome dataset and, when classifying targets, uses 1600 category labels plus 1 background label, for a total of 1601 categories. For the non-maximum suppression algorithm over candidate regions, the area overlap rate (Intersect...
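As a sketch, the non-maximum suppression step over candidate boxes can be expressed with torchvision's nms; the 0.7 IoU threshold below is an assumption, since the patent's exact value is cut off in the text above:

```python
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 100., 100.],      # two heavily overlapping boxes
                      [5., 5., 105., 105.],
                      [200., 200., 300., 300.]])  # one distinct box
scores = torch.tensor([0.9, 0.8, 0.6])

keep = nms(boxes, scores, iou_threshold=0.7)     # assumed threshold
print(keep)  # tensor([0, 2]): the lower-scored overlapping box is suppressed
```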



Abstract

The invention relates to an image description text generation method based on a generative adversarial network. The method comprises the following steps: 1) constructing an encoder that extracts features from an image; 2) performing word embedding on the text and constructing a decoder that generates the image description text; 3) pre-training a generator composed of the encoder and the decoder by maximum likelihood estimation; 4) constructing a discriminator based on a convolutional neural network and training it; 5) jointly training the generator and the discriminator; and 6) inputting test image data, for which a description is to be generated, into the trained generator and outputting the generated description text. Compared with the prior art, the method improves the objective evaluation scores of the generated text and offers good interpretability and high diversity.
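A compact PyTorch skeleton of these six steps is sketched below. All module sizes and internals are illustrative placeholders, not the patent's configuration: the real encoder is Faster R-CNN, and the image features are omitted here for brevity.

```python
# Illustrative skeleton of the six-step pipeline in the abstract.
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN, T = 1000, 256, 512, 16     # placeholder sizes

class Generator(nn.Module):                      # steps 1-2: encoder + decoder
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)  # step 2: word embedding
        self.rnn = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)      # image features omitted here

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                       # per-step vocabulary logits

class Discriminator(nn.Module):                  # step 4: CNN over embeddings
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.conv = nn.Conv1d(EMBED, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, 1)

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)   # (B, EMBED, T)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return torch.sigmoid(self.fc(x))         # probability the text is real

gen, disc = Generator(), Discriminator()
tokens = torch.randint(0, VOCAB, (4, T))         # placeholder caption batch

# Step 3: pre-train the generator by maximum likelihood (teacher forcing).
logits = gen(tokens[:, :-1])
mle_loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB),
                                 tokens[:, 1:].reshape(-1))

# Steps 5-6 (sketch): alternate discriminator updates on real vs. generated
# captions with generator updates driven by the discriminator's scores, then
# run the trained generator on test images to emit descriptions.
```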

Description

technical field

[0001] The present invention relates to the fields of computer vision and natural language processing within artificial intelligence, and in particular to a method for generating image description text based on generative adversarial networks.

Background technique

[0002] As artificial intelligence technology has matured, computer vision, natural language processing, and related fields have developed rapidly. The image description task requires machines to automatically generate descriptive sentences for images, so an image description model needs both image understanding and natural language comprehension capabilities, which depend on how the model acquires and processes image representations and text representations.

[0003] The existing mainstream image description method includes the following steps:

[0004] 1) Use an encoder to extract image features;

[0005] 2) Use a decoder and an attention mechanism to decode the...
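As a sketch of the global attention step referenced in [0005], below is a minimal additive (Bahdanau-style) soft attention over region features. The scoring function, names, and sizes are illustrative assumptions; the source does not specify this exact formulation.

```python
# Minimal global soft-attention over region features; additive scoring is an
# assumption for illustration, not necessarily the method used in the patent.
import torch
import torch.nn.functional as F

N, D, H, A = 36, 2048, 512, 256      # regions, feature dim, decoder dim, attn dim
W_f = torch.nn.Linear(D, A)          # projects region features
W_h = torch.nn.Linear(H, A)          # projects the decoder hidden state
v = torch.nn.Linear(A, 1)            # scores each region

def global_attention(features, hidden):
    scores = v(torch.tanh(W_f(features) + W_h(hidden)))  # (N, 1)
    weights = F.softmax(scores, dim=0)                   # one weight per region
    return (weights * features).sum(dim=0)               # context vector, (D,)

context = global_attention(torch.rand(N, D), torch.rand(H))
print(context.shape)  # torch.Size([2048])
```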


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/583; G06F40/126; G06F40/194; G06F40/242; G06N3/04; G06N3/08
CPC: G06F16/5846; G06F40/126; G06F40/194; G06F40/242; G06N3/08; G06N3/044; G06N3/045
Inventor: 陆佳妮, 程帆
Owner: SHANGHAI JIAO TONG UNIV