
Image description method based on adaptive local concept embedding

An image description method based on adaptive local concept embedding, applied to neural learning methods, still image data retrieval, metadata still image retrieval, etc. It addresses the problem that prior methods do not explicitly model the relationship between visual features and concepts.

Active Publication Date: 2020-10-02
南强智视(厦门)科技有限公司


Problems solved by technology

However, these methods focus only on task-specific context and visual features, and do not explicitly model the relationship between visual features and concepts.




Embodiment Construction

[0081] The technical solutions and beneficial effects of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0082] The purpose of the present invention is to address a shortcoming of traditional attention-based image description methods: they do not explicitly model the relationship between local regions and concepts. The invention proposes a scheme that adaptively generates visual regions through a context mechanism and thereby generates visual concepts, enhancing the connection from vision to language and improving description accuracy. It provides an image description method based on adaptive local concept embedding; the specific algorithm flow is shown in figure 1.

[0083] The present invention comprises the following steps:

[0084] 1) For each image in the image library, first use a convolutional neural network to extract the corresponding image features;

[0085] 2) Use a recurrent neural network to map the current input word and global...
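The patent text above only names the components (a CNN feature extractor and a recurrent decoder attending over regions). The following is a minimal NumPy sketch of the attention step such a decoder could use: region features from step 1 are weighted by the decoder's hidden state from step 2 to form a context vector, from which a concept embedding could then be derived. All function names, weight matrices, and dimensions here are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(region_feats, hidden, W_r, W_h, w):
    """Additive attention over candidate-region features.

    region_feats: (k, d) features of k candidate regions (step 1)
    hidden:       (h,)   current decoder hidden state (step 2)
    Returns the attention weights and the attended context vector.
    """
    scores = np.tanh(region_feats @ W_r + hidden @ W_h) @ w  # (k,)
    alpha = softmax(scores)        # attention weight per region
    context = alpha @ region_feats # (d,) weighted sum of regions
    return alpha, context

# Toy shapes and random parameters, purely for illustration.
rng = np.random.default_rng(0)
k, d, h, a = 5, 8, 6, 4
feats = rng.normal(size=(k, d))
hid = rng.normal(size=h)
W_r = rng.normal(size=(d, a))
W_h = rng.normal(size=(h, a))
w = rng.normal(size=a)

alpha, ctx = attend(feats, hid, W_r, W_h, w)
print(alpha)  # attention weights sum to 1 (up to float error)
```

In a full model the context vector would feed back into the recurrent decoder at each word step; here it is computed once to show the mechanism in isolation.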


Abstract

The invention discloses an image description method based on adaptive local concept embedding, belonging to the technical field of artificial intelligence. The method comprises the following steps: 1) extracting a plurality of candidate regions of the image to be described, together with their corresponding features, using a target detector; 2) inputting the features extracted in step 1 into a trained neural network, which outputs a description of the image. To address the shortcoming of traditional attention-based image description methods, which do not explicitly model the relationship between local regions and concepts, the invention proposes a scheme that adaptively generates visual regions through a context mechanism and thereby generates visual concepts, enhancing the connection from vision to language and improving the accuracy of the generated descriptions.
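The two-step structure in the abstract (detector-based region extraction, then a trained network that emits the description) can be outlined as follows. This is a stand-in skeleton with stubbed components and a toy vocabulary: `detect_regions` plays the role of the target detector of step 1, and `decode` the trained network of step 2; none of these names or parameters come from the patent.

```python
import numpy as np

VOCAB = ["<end>", "a", "dog", "runs"]  # toy vocabulary for illustration

def detect_regions(image, k=3, d=4):
    """Step 1 (stub): return k candidate-region feature vectors of size d.

    A real system would run a target detector on `image`; here we
    return fixed random features so the skeleton is self-contained.
    """
    rng = np.random.default_rng(0)
    return rng.normal(size=(k, d))

def decode(region_feats, W, max_len=5):
    """Step 2 (stub): greedily emit words from pooled region features."""
    pooled = region_feats.mean(axis=0)  # crude stand-in for attention
    words = []
    for _ in range(max_len):
        logits = W @ pooled
        idx = int(np.argmax(logits))
        if VOCAB[idx] == "<end>":
            break
        words.append(VOCAB[idx])
        pooled = np.roll(pooled, 1)  # toy state update between steps
    return " ".join(words)

feats = detect_regions(image=None)
W = np.random.default_rng(1).normal(size=(len(VOCAB), feats.shape[1]))
caption = decode(feats, W)
print(repr(caption))
```

The skeleton only shows data flow; the patent's contribution, adaptive concept embedding tied to the attended regions, would live inside the decoding step.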

Description

Technical field

[0001] The invention relates to automatic image description in the field of artificial intelligence, in particular to an image description model based on adaptive local concept embedding that describes the objective content of an image in natural language.

Background technique

[0002] Automatic image description (image captioning) is a machine intelligence task proposed by the artificial intelligence community in recent years. Its goal is, for a given image, to describe the image's objective content in natural language. With the development of computer vision technology, tasks such as object detection, recognition, and segmentation can no longer meet production needs, and there is an urgent need to describe image content automatically and objectively. Unlike tasks such as object detection and semantic segmentation, automatic image description requires an overall...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/58; G06F40/211; G06F40/216; G06F40/289; G06K9/62; G06N3/04; G06N3/08
CPC: G06F16/5866; G06F40/211; G06F40/216; G06F40/289; G06N3/08; G06N3/045; G06F18/214
Inventors: 王溢, 王振宁, 许金泉, 曾尔曼
Owner 南强智视(厦门)科技有限公司