An Image Description Method Fused with Visual Context

A technology in the field of image description and visual context, applied to instruments, biological neural network models, and computation, which addresses problems that degrade performance at test time.

Active Publication Date: 2022-04-22
北京般芸聚合科技有限公司
Cites 7 · Cited by 1

AI Technical Summary

Problems solved by technology

Sentences that have not appeared during training therefore seriously degrade performance at test time.

Examples

Embodiment

[0058] Referring to figure 1, an image description method fusing visual context comprises the following steps:

[0059] 1) The images in the MS-COCO image description dataset are divided into a training set and a test set at a ratio of 7:3. The images in the training set are horizontally flipped and luminance-transformed, and finally normalized so that the pixel values of each image have a mean of 0 and a variance of 1. The images in the test set are only fixed to a size of 512×512 pixels, with no other processing applied;
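
A minimal sketch of this preprocessing step, assuming a PyTorch/torchvision pipeline; the brightness-jitter strength and the per-image standardization helper are illustrative assumptions rather than details taken from the patent:

```python
import torchvision.transforms as T

def per_image_standardize(img):
    # Normalize so that the pixel values of each image have mean 0 and
    # variance 1, as stated in step 1) of the embodiment.
    return (img - img.mean()) / (img.std() + 1e-8)

# Training-set transforms: horizontal flip, luminance (brightness) transform,
# then per-image normalization. The jitter strength of 0.4 is an assumption.
train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.4),
    T.ToTensor(),
    T.Lambda(per_image_standardize),
])

# Test-set transforms: only a fixed 512x512 resize, no other processing.
test_transform = T.Compose([
    T.Resize((512, 512)),
    T.ToTensor(),
])
```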

[0060] 2) The image description labels are preprocessed: the 5 sentences corresponding to each image in the MS-COCO image description dataset are used as image description labels, and the description of each image is set to a length of 16 words. Shorter sentences are padded with tokens, words that appear fewer than 5 times are filtered out and discarded, and a vocabulary containing 10,369 words is obtained, where the description label corresponding to the image is a fixed val...
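
A minimal sketch of this label preprocessing, assuming the captions are already tokenized; the special-token names (<pad>, <start>, <end>, <unk>) are assumptions, since the text only mentions padding tokens, the 16-word length, and the 5-occurrence threshold:

```python
from collections import Counter

MAX_LEN = 16      # fixed caption length from step 2)
MIN_COUNT = 5     # words appearing fewer than 5 times are discarded

def build_vocab(tokenized_captions):
    """tokenized_captions: list of word lists (5 captions per MS-COCO image)."""
    counts = Counter(w for caption in tokenized_captions for w in caption)
    kept = sorted(w for w, c in counts.items() if c >= MIN_COUNT)
    vocab = ["<pad>", "<start>", "<end>", "<unk>"] + kept
    return {w: i for i, w in enumerate(vocab)}

def encode(caption, word2idx):
    """Truncate to MAX_LEN words and pad shorter captions with <pad> tokens."""
    ids = [word2idx.get(w, word2idx["<unk>"]) for w in caption[:MAX_LEN]]
    ids += [word2idx["<pad>"]] * (MAX_LEN - len(ids))
    return ids
```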

Abstract

The invention discloses an image description method that fuses visual context, comprising the following steps: 1) image preprocessing; 2) image description label preprocessing; 3) feature extraction; 4) mean pooling; 5) convolution and mean-pooling sampling; 6) obtaining the detected image entities; 7) obtaining entity attributes; 8) convolution; 9) obtaining entity attribute features; 10) convolution; 11) convolution; 12) convolution; 13) obtaining the relationship between entities and attributes; 14) the relationship between entities and attributes; 15) LSTM training; 16) solving the exposure bias; 17) dimensionality reduction; 18) normalization; 19) obtaining the description sentence of the current image, i.e., the model; 20) descriptive statement; 21) testing and verifying the training effect and performance of the model. This method ensures the accuracy of image feature extraction, avoids visual errors, makes the generated descriptions more fluent and consistent with human grammatical rules, and achieves higher scores on the evaluation metrics.
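
As a rough illustration of the encoder-decoder structure implied by these steps (feature extraction, mean pooling, LSTM training, dimensionality reduction, normalization), the following is a minimal sketch of an LSTM caption decoder conditioned on a pooled visual-context vector; the layer sizes and the fusion scheme are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Minimal LSTM decoder conditioned on a mean-pooled visual feature
    (illustrative sketch; dimensions and fusion scheme are assumed)."""
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.visual_proj = nn.Linear(feat_dim, embed_dim)  # project visual context
        self.lstm = nn.LSTM(embed_dim * 2, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)        # reduce dimension to vocabulary logits

    def forward(self, image_feats, captions):
        # image_feats: (B, feat_dim) mean-pooled CNN features
        # captions:    (B, T) word indices of the ground-truth description
        ctx = self.visual_proj(image_feats)                        # (B, embed_dim)
        ctx = ctx.unsqueeze(1).expand(-1, captions.size(1), -1)    # (B, T, embed_dim)
        words = self.embed(captions)                               # (B, T, embed_dim)
        hidden, _ = self.lstm(torch.cat([words, ctx], dim=-1))     # (B, T, hidden_dim)
        return self.fc(hidden)                                     # (B, T, vocab_size) logits
```

Training such a decoder would typically minimize cross-entropy against the ground-truth captions, with a softmax over the vocabulary providing the normalization; the exposure-bias mitigation mentioned in step 16) (for example, scheduled sampling or reinforcement-learning-based training) is not reproduced in this sketch.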

Description

Technical field

[0001] The invention relates to the technical fields of computer vision and natural language processing, and in particular to an image description method that integrates visual context in a deep neural network together with a reinforcement learning method.

Background technique

[0002] Image description can be understood as generating a natural-language text that describes a given picture. Image description and visual question answering lie at the intersection of computer vision and natural language processing and are more challenging than object detection, image classification, and semantic segmentation, because they must extract image entities and attributes while inferring the relationships between those entities and attributes. Image description has broad application prospects in navigation for the blind, early childhood education, and image-text retrieval.

[0003] Image description requires an encoding network and a decoding network. The proposal of residual ...

Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V10/80, G06V10/774, G06V10/764, G06V10/82, G06K9/62, G06N3/04
CPC: G06N3/048, G06N3/044, G06N3/045, G06F18/24, G06F18/253, G06F18/214
Inventors: 张灿龙, 周东明, 李志欣
Owner: 北京般芸聚合科技有限公司