
Image description generation method based on architectural short sentence constraint vector and dual visual attention mechanism

A technology combining image description with a dual visual attention mechanism, applied in the field of computer vision, to achieve strong representation ability

Active Publication Date: 2019-03-29
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

[0005] Aiming at the problem that existing methods lack image descriptions that effectively combine the relationship between targets and scenes, the present invention proposes an image description generation method based on architectural short sentence constraint vectors and a dual visual attention mechanism. The technical solution adopted by the present invention is as follows:



Examples


Embodiment 1

[0051] As shown in Figures 1 and 2, the image description generation method based on the architectural short sentence constraint vector and the dual visual attention mechanism includes the following steps:

[0052] S10. Each training image in the training set is paired with 5 reference sentences. The words in each sentence are one-hot encoded and then projected into the embedding space through an embedding matrix, becoming semantic word expression vectors W_t;
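A minimal sketch of step S10, assuming a toy vocabulary and a small embedding dimension; the patent excerpt specifies neither, and the embedding matrix values shown here are random stand-ins for what would be learned during training:

```python
import numpy as np

vocab = {"a": 0, "dog": 1, "runs": 2, "<eos>": 3}   # hypothetical vocabulary
vocab_size, embed_dim = len(vocab), 8               # assumed sizes

rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, embed_dim))        # embedding matrix (learned in practice)

def word_expression(word: str) -> np.ndarray:
    """One-hot encode `word`, then project it into the embedding space."""
    one_hot = np.zeros(vocab_size)
    one_hot[vocab[word]] = 1.0
    return one_hot @ E                              # W_t = one-hot row times E

print(word_expression("dog").shape)                 # (8,)
```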

[0053] S20. The word expression vector serves as the input to the recurrent neural network (RNN) at a given time step t. The recurrent-layer activation R_t at time step t is jointly determined by the word expression vector of the current time step and the recurrent-layer activation R_{t-1} of the previous time step. At each time step, the word input is concatenated with the visual features obtained by the dual visual attention mechanism to form the LSTM input at that step.
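A hedged sketch of one decoding step in S20: the word expression vector is spliced with the attended visual feature and fed to an LSTM cell whose hidden state plays the role of R_t. All dimensions and the attention output v_t are placeholders, not values from the patent:

```python
import torch
import torch.nn as nn

embed_dim, visual_dim, hidden_dim = 8, 16, 32        # assumed dimensions
cell = nn.LSTMCell(embed_dim + visual_dim, hidden_dim)

h = torch.zeros(1, hidden_dim)                       # R_{t-1}: previous recurrent state
c = torch.zeros(1, hidden_dim)                       # previous LSTM cell state

w_t = torch.randn(1, embed_dim)                      # word expression vector W_t
v_t = torch.randn(1, visual_dim)                     # visual feature from dual attention (stubbed)

x_t = torch.cat([w_t, v_t], dim=1)                   # splice word and visual features
h, c = cell(x_t, (h, c))                             # R_t depends on W_t and R_{t-1}
```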

[0054] S30. The image extracts ...

Embodiment 2

[0088] The pseudocode for generating the architectural short sentences of the present invention is as follows:

[0089] Input: visual target label set L = {l_1, l_2, ..., l_N}; visual target boxes B = {b_1, b_2, ..., b_N}, where each target box b_i has position coordinates b_i = {x_i1, y_i1, x_i2, y_i2}, i ∈ {1, 2, ..., N}, N = 10;

[0090] Output: architectural short sentence L_s = {l_s1, l_s2, ..., l_sN};

[0091]-[0092] (Pseudocode body rendered as images in the source; not reproduced here.)
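Since the pseudocode body is not reproduced, the following is only an illustrative Python sketch that matches the stated input/output signature; the ordering rule (sort targets by box center, left to right) is an assumption for illustration, not the patented procedure:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]              # (x1, y1, x2, y2)

def architectural_phrase(labels: List[str], boxes: List[Box]) -> List[str]:
    """Given target labels L and boxes B, return the architectural short
    sentence L_s as an ordered sequence of labels."""
    def center(b: Box) -> Tuple[float, float]:
        x1, y1, x2, y2 = b
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    order = sorted(range(len(labels)), key=lambda i: center(boxes[i]))
    return [labels[i] for i in order]

# Hypothetical example with N = 3 (the patent fixes N = 10)
L = ["person", "dog", "tree"]
B = [(120, 40, 200, 220), (10, 150, 90, 230), (250, 10, 320, 260)]
print(architectural_phrase(L, B))                    # ['dog', 'person', 'tree']
```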



Abstract

The invention discloses an image description generation method based on an architectural short sentence constraint vector and a dual visual attention mechanism. A semantic model that automatically describes the visual content of images is obtained by training on a large amount of labeled text. The semantic model is composed of three parts: a short sentence generation model, a dual visual attention mechanism, and a constrained language model. A text description can be generated automatically for any input test image. The invention effectively establishes the relationship between the words of the text description and the image, and performs well at semantically describing the salient objects or scenes of an image.
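A structural sketch of how the three components named above might be wired together. Every class name, interface, and the placeholder recurrence below are assumptions for illustration; the abstract does not define them:

```python
import numpy as np

class ShortSentenceModel:
    def constraint_vector(self, image):
        # would encode detected targets into the architectural short sentence constraint vector
        return np.zeros(16)

class DualVisualAttention:
    def visual_feature(self, image, hidden):
        # would weight image regions against the decoder's hidden state
        return np.zeros(16)

class ConstrainedLanguageModel:
    def generate(self, constraint, attend_fn, max_len=5):
        words, hidden = [], np.zeros(16)
        for _ in range(max_len):
            v = attend_fn(hidden)                      # visual feature from dual attention
            hidden = np.tanh(hidden + v + constraint)  # placeholder recurrence, not the real LSTM
            words.append("<word>")                     # a real decoder would emit a vocabulary token
        return " ".join(words)

def describe(image):
    phrase = ShortSentenceModel().constraint_vector(image)
    attn = DualVisualAttention()
    return ConstrainedLanguageModel().generate(phrase, lambda h: attn.visual_feature(image, h))
```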

Description

Technical Field

[0001] The present invention relates to the field of computer vision, and more specifically, to a deep neural network-based method for generating text descriptions for image understanding.

Background Technique

[0002] Obtaining text-level image descriptions has become an important research topic in computer vision, with many real-life application scenarios, such as early childhood education, image retrieval, and navigation for the blind. With the rapid development of computer vision and natural language processing technology, a large number of effective works on this topic have appeared, many of which treat it as a retrieval problem. The researchers project the features of text sentences and images into the same semantic space by learning an embedding layer. These methods generate image descriptions by retrieving similar descriptions from text sentence datasets, but they lack image descriptions that can effectively combine the rela...
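A minimal sketch of the retrieval-style approach described above: project image and sentence features into one semantic space and return the nearest stored caption. The projection matrices, feature dimensions, and captions are random placeholders; real systems learn the projections from paired image-text data:

```python
import numpy as np

rng = np.random.default_rng(1)
P_img = rng.normal(size=(512, 64))                   # image features -> shared space (assumed dims)
P_txt = rng.normal(size=(300, 64))                   # sentence features -> shared space

captions = ["a dog runs on grass", "a man rides a bike"]
caption_feats = rng.normal(size=(2, 300))            # stand-in sentence features

def retrieve_caption(image_feat: np.ndarray) -> str:
    q = image_feat @ P_img                           # project the query image
    db = caption_feats @ P_txt                       # project the stored sentences
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-8)
    return captions[int(np.argmax(sims))]            # most similar stored description

print(retrieve_caption(rng.normal(size=512)))
```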

Claims


Application Information

IPC(8): G06N3/04; G06N3/08; G06F16/50
CPC: G06N3/049; G06N3/084; G06N3/045
Inventor: 胡海峰, 杨梁
Owner: SUN YAT SEN UNIV