
Image labeling method based on convolutional neural network and binary coding features

The method applies convolutional neural network and binary coding technology in the field of visual images. It addresses the problem that a single label cannot fully describe an image, and achieves low cost, high speed, and high efficiency.

Inactive Publication Date: 2019-11-29
SUZHOU UNIV

Problems solved by technology

In practice, an image is often associated with multiple labels, and a single label cannot fully describe the entire image.
Current convolutional neural network models are also built for single-label image classification: the loss function is usually a softmax, so only the label with the highest probability can be assigned to the image.
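The contrast above can be made concrete with a small numerical sketch (the logits below are synthetic, not from the patented model): a softmax output forces the label probabilities to sum to 1 and yields one label, while independent sigmoid outputs let several labels pass a threshold.

```python
import numpy as np

# Toy logits for one image over 5 candidate labels (synthetic values).
logits = np.array([2.0, 1.8, -1.0, 0.5, -2.0])

# Softmax: probabilities sum to 1, so only the single top label is assigned.
softmax = np.exp(logits) / np.exp(logits).sum()
single_label = int(np.argmax(softmax))

# Sigmoid: each label gets an independent probability; thresholding
# can assign several labels to the same image.
sigmoid = 1.0 / (1.0 + np.exp(-logits))
multi_labels = [i for i, p in enumerate(sigmoid) if p > 0.5]

print(single_label)   # -> 0 (only the top softmax label)
print(multi_labels)   # -> [0, 1, 3] (three labels pass the 0.5 threshold)
```

This is why the abstract replaces the softmax head with a sigmoid for multi-label annotation.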




Detailed Description of the Embodiments

[0034] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments, so that those skilled in the art can better understand the present invention and implement it, but the examples given are not intended to limit the present invention.

[0035] The present invention uses Inception V3 as the basic network structure of the model. The Inception network achieves very good classification performance while keeping computation and parameter counts under control. Rather than blindly increasing the number of layers, it introduces the Inception Module, whose structure is shown in Figure 1, a schematic diagram of the Inception network model. This modular design reduces the number of network parameters and shrinks the design space while increasing the width of the network. The Inception network model also introduces the i...
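The idea of the Inception Module described above can be sketched at the shape level: several parallel branches process the same feature map and are concatenated along the channel axis, widening the network at one layer instead of deepening it. The sketch below is a simplification, not the patented architecture: 1x1 pointwise convolutions (implemented as a per-pixel linear map) stand in for the real 1x1/3x3/5x5 branches, and all weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Pointwise (1x1) convolution: a per-pixel linear map over channels.
    x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)."""
    return np.einsum("hwc,cd->hwd", x, w)

# Input feature map: 8x8 spatial, 16 channels (synthetic).
x = rng.standard_normal((8, 8, 16))

# Simplified Inception-style module: parallel branches concatenated on the
# channel axis. Real Inception branches also use 3x3/5x5 convolutions and a
# pooling path; 1x1 maps stand in for them to keep the sketch short.
b1 = conv1x1(x, rng.standard_normal((16, 8)))   # 1x1 branch -> 8 channels
b2 = conv1x1(x, rng.standard_normal((16, 12)))  # stands in for the 3x3 branch
b3 = conv1x1(x, rng.standard_normal((16, 4)))   # stands in for the 5x5 branch
out = np.concatenate([b1, b2, b3], axis=-1)     # module widens the network

print(out.shape)  # -> (8, 8, 24): same spatial size, more channels
```

Note how the 1x1 convolutions keep the parameter count small: each branch costs only C_in x C_out weights per pointwise map, which is the design choice the paragraph credits for shrinking the network's design space.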



Abstract

The invention discloses an image annotation method based on a convolutional neural network and binary coding features. The method comprises the following steps: constructing an Inception V3 basic network model; truncating the Inception V3 basic network model at its final pooling layer; removing the Logits and softmax functions of the Inception V3 basic network model and using a sigmoid function as the activation function of the last layer to obtain a modified first basic network model; adding two fully connected layers on top of the first basic network model, again with a sigmoid as the activation function of the last layer, to obtain a multi-label classification network model; training the multi-label classification network model on the training set and optimizing its weights; annotating the feature vector set of the target image with the trained multi-label classification network model to obtain a multi-label probability output for the target image; and, combining this multi-label probability output, labeling the target image with the TagProp algorithm. The method realizes multi-label annotation of images at low cost and with high efficiency.
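The classification head described in the abstract (two fully connected layers on pooled features, sigmoid on the last layer, trained as a multi-label problem) can be sketched in numpy. Everything below is an illustrative stand-in: the feature dimension matches Inception V3's 2048-d pooled output, but the hidden size, ReLU activation on the first layer, label count, weights, and data are assumptions, and binary cross-entropy is the usual loss for sigmoid multi-label outputs rather than a detail stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Assumed sizes: 2048-d pooled Inception V3 features, 256 hidden units, 10 labels.
feat_dim, hidden, n_labels = 2048, 256, 10

# The two fully connected layers added on top of the truncated base model,
# with a sigmoid on the last layer (replacing Logits + softmax).
W1 = rng.standard_normal((feat_dim, hidden)) * 0.01
W2 = rng.standard_normal((hidden, n_labels)) * 0.01

def head(features):
    h = np.maximum(features @ W1, 0.0)  # FC + ReLU (assumed activation)
    return sigmoid(h @ W2)              # independent per-label probabilities

# Multi-label training uses binary cross-entropy, not categorical CE.
def bce(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

features = rng.standard_normal((4, feat_dim))              # 4 pooled feature vectors
targets = rng.integers(0, 2, (4, n_labels)).astype(float)  # multi-hot labels
probs = head(features)
print(probs.shape, bce(probs, targets))  # (4, 10) and a positive loss value
```

In a real implementation the weights would be learned jointly with (or on top of) the pretrained Inception V3 trunk rather than drawn at random.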

Description

Technical Field

[0001] The invention relates to the technical field of visual images, and in particular to an image labeling method based on convolutional neural network and binary coding features.

Background

[0002] To realize effective management and retrieval of large-scale image collections, efficient image annotation is becoming more and more important. The goal of image annotation is to assign a set of relevant descriptive labels to an image. Traditional image annotation algorithms spend a lot of time on manually extracted image features and may still not achieve good results, so deep learning has been applied to image annotation. Deep learning can obtain higher-level semantic features of images, narrowing the gap with high-level semantic concepts such as labels. Automatic image labeling algorithms based on deep learning do not require manual feature extraction, so the labeling algorithm is no longer subject to the choice of featur...
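The abstract's final step labels the target image with the TagProp algorithm on top of the network's probability output. A simplified TagProp-style step can be sketched as distance-weighted label propagation from nearest training neighbours; the features, tag matrix, neighbourhood size, and exponential weighting below are all synthetic stand-ins, not the patent's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# TagProp-style label propagation (simplified): a test image inherits tag
# probabilities from its nearest training neighbours, weighted by distance.
train_feats = rng.standard_normal((50, 32))             # 50 training images
train_tags = rng.integers(0, 2, (50, 6)).astype(float)  # 6 candidate tags
test_feat = rng.standard_normal(32)

# Distance-based weights over the K nearest neighbours.
K = 5
d = np.linalg.norm(train_feats - test_feat, axis=1)
nn = np.argsort(d)[:K]
w = np.exp(-d[nn])
w /= w.sum()

# Propagated tag probabilities: weighted average of neighbour tag vectors.
tag_probs = w @ train_tags[nn]
predicted = np.argsort(tag_probs)[::-1][:3]  # top-3 tags for the image
print(tag_probs.shape, predicted)
```

The full TagProp method additionally learns the distance weighting from data; the fixed exponential kernel here is only a placeholder for that learned weighting.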

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/583; G06K9/62; G06N3/04
CPC: G06F16/583; G06N3/045; G06F18/2193; G06F18/214; G06F18/2415; G06F18/2431
Inventors: 薛越, 王邦军, 吴新建, 张莉
Owner SUZHOU UNIV