
Image statement conversion method based on improved generative adversarial network

A conversion method and generation technology, applied in biological neural network models, character and pattern recognition, instruments, etc., which can solve problems such as incoherent sentence expression

Inactive Publication Date: 2017-11-24
BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY
Cites: 7 | Cited by: 28

AI Technical Summary

Problems solved by technology

[0009] The technical problem addressed by the present invention is to overcome the deficiencies of the prior art and to provide an image sentence conversion method based on an improved generative adversarial network, so that a syntactic model with both generation and discrimination capabilities can be used to solve the problem of incoherent sentence expression in image-to-sentence conversion.



Embodiment Construction

[0033] The present invention will be described below in conjunction with the accompanying drawings and specific embodiments. Figure 1 depicts the image-to-sentence conversion process based on an improved generative adversarial network.

[0034] As shown in Figure 1, the present invention comprises the following steps:

[0035] (1) Input the image and use a region-based convolutional neural network to extract its features. With this method, the salient positions of the image are taken as blocks, and the meaning and vocabulary vector of each block are obtained from its feature vector. The features finally obtained in this step are the vocabulary vectors.
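The patent discloses no source code; the following is only a minimal sketch of how such a region-to-word-vector step could look, assuming PyTorch and torchvision. The detector choice (Faster R-CNN), the ResNet-18 region encoder, and all dimensions are illustrative assumptions, not the configuration claimed by the invention.

# Illustrative sketch only (not from the patent): salient regions of an image
# are detected, each region is encoded with a CNN, and the region feature is
# projected into a "word vector" space. Assumes PyTorch + torchvision;
# ImageNet normalization is omitted for brevity.
import torch
import torch.nn as nn
from torchvision.models import resnet18
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resized_crop

class RegionWordEncoder(nn.Module):
    def __init__(self, embed_dim=256, max_regions=8):
        super().__init__()
        self.detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
        cnn = resnet18(weights="DEFAULT")
        cnn.fc = nn.Identity()                      # keep the 512-d pooled feature
        self.region_cnn = cnn.eval()
        self.to_word_vec = nn.Linear(512, embed_dim)
        self.max_regions = max_regions

    @torch.no_grad()
    def forward(self, image):                       # image: (3, H, W), floats in [0, 1]
        boxes = self.detector([image])[0]["boxes"][: self.max_regions]
        crops = []
        for x1, y1, x2, y2 in boxes.round().int().tolist():
            crops.append(resized_crop(image, y1, x1,
                                      max(y2 - y1, 1), max(x2 - x1, 1), [224, 224]))
        if not crops:                               # no salient region found: fall back to the whole image
            crops = [resized_crop(image, 0, 0, image.shape[1], image.shape[2], [224, 224])]
        features = self.region_cnn(torch.stack(crops))      # (num_regions, 512)
        return self.to_word_vec(features)                   # (num_regions, embed_dim) word vectors

# Usage: word_vecs = RegionWordEncoder()(torch.rand(3, 480, 640))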

[0036] (2) Input the vocabulary vectors into the generator of the generative adversarial network. The generator is composed of a long short-term memory model, which contains memory cells. The vocabulary vectors are spliced according to the propagation rules, and a variety of spliced sentences are generated.
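As a concrete illustration of this generator step, a minimal sketch follows, again assuming PyTorch; the single LSTM cell, the greedy decoding loop, the vocabulary size, and the BOS/EOS token ids are illustrative assumptions standing in for the propagation rules, which the text does not specify.

# Illustrative sketch only: an LSTM (long short-term memory) generator that
# splices the region word vectors into one candidate sentence, token by token.
import torch
import torch.nn as nn

class SentenceGenerator(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.init_h = nn.Linear(embed_dim, hidden_dim)   # image context -> initial hidden state
        self.init_c = nn.Linear(embed_dim, hidden_dim)   # image context -> initial memory cell
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_vecs, max_len=20, bos_id=1, eos_id=2):
        context = word_vecs.mean(dim=0, keepdim=True)    # pool region word vectors: (1, embed_dim)
        h, c = self.init_h(context), self.init_c(context)
        token = torch.tensor([bos_id])
        sentence = []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(token), (h, c))  # memory cell carries the sentence state
            token = self.out(h).argmax(dim=-1)           # greedy choice; sampling is also possible
            if token.item() == eos_id:
                break
            sentence.append(token.item())
        return sentence                                  # list of word ids forming one spliced sentence

# Usage: candidate = SentenceGenerator()(word_vecs)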



Abstract

The present invention provides an image sentence conversion method based on an improved generative adversarial network. The objective of the invention is to obtain sentences that better accord with human description habits during image-to-sentence conversion. The method comprises: employing a region-based convolutional neural network to perform saliency detection on the image by region through image segmentation, and generating word vectors for each region; inputting the word vectors into the generator of the generative adversarial network, which splices the words into sentences; inputting the generated sentences into the discriminator of the generative adversarial network, which compares the distance between a text corpus and the generated sentences, continuously rejects sentences with large distances, and outputs the sentence with the minimum distance; and continuously training the model, determining its parameters, stopping training once the model tends to be stable, and then inputting a test image to test the model.
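To make the discriminator's distance-based selection concrete, a minimal sketch follows, assuming PyTorch; the LSTM sentence encoder, the Euclidean distance, and the nearest-corpus-sentence criterion are illustrative assumptions, since the abstract does not specify the distance measure.

# Illustrative sketch only: encode candidate sentences and a reference corpus,
# measure their distance, reject distant candidates, keep the closest one.
import torch
import torch.nn as nn

class SentenceDiscriminator(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):                     # token_ids: 1-D LongTensor of word ids
        _, (h, _) = self.encoder(self.embed(token_ids).unsqueeze(0))
        return h[-1, 0]                              # (hidden_dim,) sentence embedding

    def select(self, candidates, corpus):
        """Return the candidate sentence closest to the text corpus."""
        corpus_vecs = torch.stack([self.encode(s) for s in corpus])
        distances = []
        for cand in candidates:
            vec = self.encode(cand)
            # distance to the nearest corpus sentence; large values are rejected
            distances.append(torch.cdist(vec[None], corpus_vecs).min())
        best = int(torch.stack(distances).argmin())
        return candidates[best], distances[best]

# Usage: sentence, dist = SentenceDiscriminator().select(
#     candidates=[torch.tensor([5, 8, 2])], corpus=[torch.tensor([5, 9, 2])])

Training would then alternate between updating the generator to lower this distance and updating the discriminator on corpus versus generated sentences, stopping once the model stabilizes, as the abstract describes.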

Description

Technical field

[0001] The present invention generally relates to the technical fields of image recognition and syntax generation, and specifically relates to an image sentence conversion method based on an improved generative adversarial network.

Background technique

[0002] With the development of science and technology, the popularity of the Internet has brought people enormous information resources. Text was the main form of information in the early stages of the Internet's development. Compared with the singleness of text, multimedia information such as images and videos carries more knowledge and is a clearer information carrier that better matches human understanding. With the continuous improvement of computer storage space and computing efficiency, information of all kinds, such as images, audio, and video, has appeared on various websites and has grown at an alarming rate. Instagram and other application software share as many a...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06K9/00; G06N3/04
CPC: G06V30/40; G06N3/045
Inventor: 蔡强, 薛子育, 毛典辉, 李海生, 祝晓斌
Owner: BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY