
Image marking method based on multi-mode deep learning

A deep learning and image annotation technology, applied in the field of image processing, which can solve the problem that existing methods struggle to achieve satisfactory results when semantically annotating images.

Active Publication Date: 2015-12-23
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Due to the well-known semantic gap problem, it is difficult for existing technologies to achieve satisfactory results when semantically annotating images



Examples


Embodiment Construction

[0050] The invention will be described in further detail below in conjunction with the accompanying drawings.

[0051] As shown in Figure 1, the present invention provides an image labeling method based on multimodal deep learning. The method includes: first, using unlabeled images to train a deep neural network; second, using backpropagation to optimize each single modality; and finally, using an online-learning power gradient algorithm to optimize the weights between the different modalities.
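The excerpt does not spell out the power gradient update rule for the inter-modality weights, so the following is only a minimal sketch of an online, multiplicative (exponentiated-gradient-style) update, assuming each modality produces per-label scores that are fused by a convex weight vector; the function names, the loss, and the learning rate are illustrative.

```python
import numpy as np

def update_modality_weights(weights, modality_scores, true_labels, lr=0.1):
    """One online update of the fusion weights across modalities.

    weights         : (M,) current non-negative weights summing to 1
    modality_scores : (M, L) per-modality scores for L candidate labels
    true_labels     : (L,) binary ground-truth label vector

    NOTE: the patent excerpt does not give the exact "power gradient"
    rule; this is an exponentiated-gradient-style sketch only.
    """
    fused = weights @ modality_scores        # (L,) fused label scores
    residual = fused - true_labels           # squared-error residual
    grad = modality_scores @ residual        # (M,) gradient w.r.t. weights
    # Multiplicative step keeps weights positive; renormalise so the
    # fusion remains a convex combination of the modalities.
    new_w = weights * np.exp(-lr * grad)
    return new_w / new_w.sum()

# Toy usage: 3 modalities, 5 candidate labels, random streaming data.
rng = np.random.default_rng(0)
w = np.full(3, 1.0 / 3)
for _ in range(100):
    scores = rng.random((3, 5))
    labels = (rng.random(5) > 0.5).astype(float)
    w = update_modality_weights(w, scores, labels)
print("fusion weights:", w)
```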

[0052] The deep neural network in the present invention adopts a convolutional neural network, whose model structure is shown in Figure 2. The present invention evaluates the performance of the proposed multimodal deep learning-based image labeling algorithm through a series of experiments.
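Figure 2 is not reproduced in this excerpt, so the concrete architecture is unknown; the sketch below is only an illustrative small convolutional network for a single modality, with placeholder layer sizes, input resolution, and label count (PyTorch is assumed as the framework).

```python
import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    """Illustrative per-modality convolutional network.

    Layer sizes are placeholders; the patent's Figure 2 (not
    reproduced here) defines the actual structure.
    """
    def __init__(self, in_channels=3, num_labels=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_labels),           # one score per label
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy forward pass on a batch of 64x64 RGB images.
net = ModalityCNN()
scores = net(torch.randn(4, 3, 64, 64))
print(scores.shape)   # torch.Size([4, 20])
```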

[0053] Step 1: Introduce the datasets used to evaluate the performance of the algorithm.

[0054] The experiment adopts three public image datasets, includi...



Abstract

The invention discloses an image marking method based on multi-mode deep learning. The method comprises the following steps: first, a deep neural network is trained using unlabeled images; second, each single modality is optimized using backpropagation; finally, the weights among the different modalities are optimized using an online-learning power gradient algorithm. The method employs convolutional neural network technology to optimize the parameters of the deep neural network, raising the labeling accuracy. Experiments on public datasets show that the method can effectively improve image labeling performance.
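The abstract's first step, training the deep network from unlabeled images, is not detailed in this excerpt; the sketch below assumes an autoencoder-style reconstruction objective purely for illustration, with made-up layer sizes and random tensors standing in for the real image data.

```python
import torch
import torch.nn as nn

# Sketch of step 1 (unsupervised pre-training on unlabeled images),
# assuming a reconstruction objective; the patent excerpt does not
# specify the exact unsupervised criterion.
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2),      # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 2, stride=2),       # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

images = torch.randn(8, 3, 64, 64)     # unlabeled toy batch
for _ in range(5):                     # a few reconstruction steps
    recon = model(images)
    loss = criterion(recon, images)
    opt.zero_grad()
    loss.backward()
    opt.step()
# The trained encoder weights would then initialise each modality's
# network before the supervised backpropagation stage.
```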

Description

Technical Field

[0001] The invention relates to an image tagging method, in particular to an image tagging method based on multimodal deep learning, and belongs to the technical field of image processing.

Background Technique

[0002] In recent years, with the rapid increase in the number of images, there is an urgent need for efficient annotation of image content to enable effective retrieval and management of large-scale image collections.

[0003] From the perspective of pattern recognition, the image annotation problem is regarded as assigning a set of labels to an image according to its content, and the choice of features used to represent the image content greatly affects annotation performance. Due to the well-known semantic gap problem, it is difficult for existing technologies to achieve satisfactory results when semantically annotating images. In recent years, Hinton et al. proposed using deep neural networks to efficiently learn features from the trai...
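To make the multi-label formulation in paragraph [0003] concrete, here is a tiny illustration of turning per-label scores into a label set by thresholding; the labels, scores, and threshold are invented for the example.

```python
# Annotation as multi-label assignment: per-label scores (e.g. from
# the fused modalities) are thresholded into a label set.
labels = ["sky", "beach", "person", "car", "dog"]
scores = [0.91, 0.78, 0.12, 0.05, 0.64]

threshold = 0.5
annotation = [lab for lab, s in zip(labels, scores) if s >= threshold]
print(annotation)   # ['sky', 'beach', 'dog']
```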


Application Information

IPC(8): G06K9/62; G06N3/08
Inventor: 朱松豪, 孙成建, 师哲
Owner: NANJING UNIV OF POSTS & TELECOMM