Text generation image method based on cross-modal similarity and generative adversarial network

A technology for image generation and cross-modal similarity, applied in the field of text-to-image generation based on cross-modal similarity and generative adversarial networks. It addresses the problems that existing methods cannot generate high-quality images and lack local information, and achieves the effect of generating richer images.

Active Publication Date: 2019-11-22
TONGJI UNIV
Cites: 1 · Cited by: 32

AI Technical Summary

Problems solved by technology

However, these conditional generative adversarial networks are conditioned only on the overall text representation and lack detailed local information, so they cannot generate clear, high-quality images.
Therefore, existing GAN-based successes are limited to small-sample corpora, and generating complex images that contain many objects remains a challenge.



Examples


Embodiment Construction

[0048] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments. This embodiment is carried out on the premise of the technical solution of the present invention, and a detailed implementation and specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.

[0049] A text-to-image generation method based on cross-modal similarity and a generative adversarial network. The method is implemented by a computer system in the form of a computer program and, as shown in Figure 1, includes the following steps:

[0050] Step S1: Use matched and unmatched data to train a global consistency model, a local consistency model, and a relational consistency model, where the global consistency model, the local consistency model, and the relational consistency model are used to obtain the text and image global representation, local representatio...
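
The patent does not detail the internal architecture of these consistency models at this point, but the following is a minimal sketch, in PyTorch, of how one such model could be trained on matched and unmatched text-image pairs. The projection layers, feature dimensions, hinge margin, and names such as ConsistencyModel and matching_loss are illustrative assumptions rather than the patent's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConsistencyModel(nn.Module):
    """Projects text and image features into a joint space and scores them."""
    def __init__(self, text_dim=256, img_dim=2048, embed_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, embed_dim)  # text -> joint space
        self.img_proj = nn.Linear(img_dim, embed_dim)    # image -> joint space

    def forward(self, text_feat, img_feat):
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        v = F.normalize(self.img_proj(img_feat), dim=-1)
        return (t * v).sum(dim=-1)                       # cosine similarity

def matching_loss(model, text, img_matched, img_unmatched, margin=0.2):
    # Matched pairs should score higher than unmatched pairs by at least `margin`.
    pos = model(text, img_matched)
    neg = model(text, img_unmatched)
    return F.relu(margin - pos + neg).mean()

# Toy usage with random features standing in for real encoder outputs.
model = ConsistencyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
text = torch.randn(8, 256)
img_pos, img_neg = torch.randn(8, 2048), torch.randn(8, 2048)
loss = matching_loss(model, text, img_pos, img_neg)
loss.backward()
opt.step()
```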


PUM

No PUM

Abstract

The invention relates to a text-to-image generation method based on cross-modal similarity and a generative adversarial network. The method comprises the following steps: S1, training a global consistency model, a local consistency model and a relation consistency model using matched and unmatched data, where the three models are used to obtain the global representation, local representation and relation representation of a text and an image, respectively; S2, obtaining the global representation, local representation and relation representation of the to-be-processed text using the trained global consistency model, local consistency model and relation consistency model; S3, concatenating the global representation, local representation and relation representation of the to-be-processed text to obtain the text representation of the to-be-processed text; S4, converting the text representation of the to-be-processed text into a condition vector using an Fca condition enhancement module; and S5, inputting the condition vector into a generator to obtain a generated image. Compared with the prior art, the method has the advantages of considering local and relation information, among others.
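
The following is a minimal sketch, in PyTorch, of steps S2 through S5 as described in the abstract. It assumes that the Fca condition enhancement module is a conditioning-augmentation block that samples the condition vector from a Gaussian parameterized by the text representation (as in StackGAN-style models); the dimensions, the random placeholder representations standing in for the consistency-model outputs, and the toy generator are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionAugmentation(nn.Module):
    """Assumed Fca-style module: maps a text representation to a sampled condition vector."""
    def __init__(self, in_dim, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(in_dim, cond_dim * 2)

    def forward(self, text_repr):
        mu, logvar = self.fc(text_repr).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        return mu + eps * torch.exp(0.5 * logvar)  # S4: sampled condition vector

batch = 4
# S2: representations of the to-be-processed text (placeholders for the
# outputs of the trained global / local / relation consistency models).
global_repr = torch.randn(batch, 256)
local_repr = torch.randn(batch, 256)
relation_repr = torch.randn(batch, 256)

# S3: concatenate the three representations into one text representation.
text_repr = torch.cat([global_repr, local_repr, relation_repr], dim=-1)

# S4: convert the text representation into a condition vector.
fca = ConditionAugmentation(text_repr.size(-1), cond_dim=128)
cond = fca(text_repr)

# S5: a toy generator maps (noise, condition) to a 64x64 RGB image.
generator = nn.Sequential(nn.Linear(100 + 128, 64 * 64 * 3), nn.Tanh())
z = torch.randn(batch, 100)
image = generator(torch.cat([z, cond], dim=-1)).view(batch, 3, 64, 64)
```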

Description

Technical field

[0001] The invention relates to image retrieval and matching technology, and in particular to a text-to-image generation method based on cross-modal similarity and a generative adversarial network.

Background technique

[0002] In recent years, deep neural networks (DNNs) have achieved great success, especially neural network models trained for discriminative tasks. For example, convolutional neural networks (CNNs) show great promise in computer vision. However, discriminative models focus on representation learning and cannot capture the data distribution. Learning generative models that can explain complex data distributions is a long-standing problem in deep learning. As a sub-problem of this, text-to-image generation based on generative adversarial networks (GANs) has made a series of progress.

[0003] The text is fed into the generator and the discriminator as a condition, and these GAN-based deep learning models can produce rich and colorfu...
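
As background for paragraph [0003], the following is a rough sketch, in PyTorch, of a conditional GAN in which the text embedding is fed into both the generator and the discriminator as a condition. The fully connected layers, dimensions, and class names are illustrative assumptions and do not correspond to any specific prior-art model.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Generator conditioned on a text embedding (concatenated with noise)."""
    def __init__(self, noise_dim=100, text_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh())

    def forward(self, z, text_emb):
        return self.net(torch.cat([z, text_emb], dim=-1))

class CondDiscriminator(nn.Module):
    """Discriminator that scores an image together with its text condition."""
    def __init__(self, text_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels + text_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))

    def forward(self, img_flat, text_emb):
        return self.net(torch.cat([img_flat, text_emb], dim=-1))

# Toy usage: a fake image generated from noise + text, scored by the discriminator.
g, d = CondGenerator(), CondDiscriminator()
z, text_emb = torch.randn(2, 100), torch.randn(2, 128)
fake = g(z, text_emb)
score = d(fake, text_emb)
```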

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T11/00; G06K9/62; G06F17/27; G06N3/04
CPC: G06T11/001; G06N3/045; G06F18/22; Y02D10/00
Inventor: 赵生捷, 缪楠, 史清江, 张林
Owner: TONGJI UNIV