
Cross-modal-based rapid multi-label image classification method and system

A cross-modal classification method, applied in still-image data clustering/classification, still-image data retrieval, metadata still-image retrieval, etc. It addresses the poor convergence efficiency of existing image classification models and the resulting degradation of their image recognition performance.

Pending Publication Date: 2021-01-08
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

[0003] Generally speaking, existing multi-label image classification methods first use a convolutional neural network (CNN) to obtain the feature vector of an image, then use a graph convolutional network (Graph Convolutional Network, GCN) to obtain co-occurrence word vectors for the labels, and finally fuse the image features with the label co-occurrence word vectors through a direct dot product. This direct fusion seriously harms the convergence efficiency of the image classification model, which in turn degrades its image recognition performance.




Embodiment Construction

[0056] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with one another as long as they do not conflict.

[0057] The present invention is realized on the basis of graph convolutional network (GCN) and multi-modal factorized bilinear pooling (MFB) components. It models the dependencies between labels, uses the GCN to learn their co-occurrence relationships, and then uses MFB to efficiently fuse the image features with the label co-occurrence word vectors, whi...
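The MFB fusion step can be sketched as below, following the standard multi-modal factorized bilinear pooling formulation (project both modalities, take their element-wise product, sum-pool over the factor dimension, then power- and L2-normalize). All dimensions and weights here are hypothetical placeholders, not the patent's actual parameters.

```python
import numpy as np

# Illustrative dimensions: image feature size, word-vector size,
# number of factors K, and fused output size O.
D_img, D_txt, K, O = 2048, 300, 5, 64

rng = np.random.default_rng(1)
x = rng.standard_normal(D_img)           # image feature (from the CNN)
y = rng.standard_normal(D_txt)           # label co-occurrence word vector (from the GCN)

U = rng.standard_normal((D_img, K * O))  # projection for the image modality
V = rng.standard_normal((D_txt, K * O))  # projection for the text modality

# Element-wise product of the two projected modalities,
# then sum pooling over the K factors.
joint = (x @ U) * (y @ V)                # shape (K*O,)
fused = joint.reshape(K, O).sum(axis=0)  # shape (O,)

# Power normalization followed by L2 normalization,
# as in the original MFB formulation.
fused = np.sign(fused) * np.sqrt(np.abs(fused))
fused = fused / np.linalg.norm(fused)
```

The low-rank factorization (rank K per output unit) is what lets MFB capture bilinear interactions between modalities far more cheaply than a full bilinear map, which is the efficiency argument made in the text.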



Abstract

The invention discloses a cross-modal-based rapid multi-label image classification method. The method mines the co-occurrence relations between different objects in an image and efficiently fuses image features with label co-occurrence relations to produce an end-to-end multi-label image classification model. Dependencies between objects are modeled by counting the co-occurrence probabilities between labels, and a multi-modal factorized bilinear pooling (MFB) component fuses the image features with the label co-occurrence relations, which increases the convergence rate of the model and improves image classification performance. The method comprises the following steps: first, generate the image features and the label co-occurrence word vectors with a convolutional neural network and a graph convolutional network, respectively; then, fuse the vectors of the two modalities with MFB; finally, generate an end-to-end classification model through a multi-label classification function. By efficiently fusing the image features with the label co-occurrence word vectors, the method greatly increases the convergence speed of the model.
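The label co-occurrence statistics described in the abstract can be sketched as follows: count, over a training set, how often each pair of labels appears together, and normalize to a conditional probability matrix that serves as the GCN's adjacency. The labels and annotations below are made up for illustration.

```python
import numpy as np

# Hypothetical label set and per-image annotations (label indices present).
labels = ["person", "dog", "ball"]
annotations = [
    {0, 1},       # person + dog
    {0, 1, 2},    # person + dog + ball
    {0},          # person only
    {1, 2},       # dog + ball
    {0, 2},       # person + ball
]

C = len(labels)
counts = np.zeros(C)     # N_i: number of images containing label i
cooc = np.zeros((C, C))  # M_ij: number of images containing both i and j

for ann in annotations:
    for i in ann:
        counts[i] += 1
        for j in ann:
            if i != j:
                cooc[i, j] += 1

# Conditional co-occurrence probability P(j | i) = M_ij / N_i.
# This matrix plays the role of the adjacency fed to the GCN.
P = cooc / counts[:, None]
```

For example, "dog" appears in 3 of the images and co-occurs with "person" in 2 of them, so `P[1, 0]` is 2/3.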

Description

Technical Field

[0001] The invention belongs to the technical field of pattern recognition and image classification, and more particularly relates to a fast cross-modal multi-label image classification method and system that exploits the dependencies between labels.

Background Technique

[0002] Multi-label image classification is now widely used in computer vision, including multi-target recognition, sentiment analysis, and medical diagnosis recognition. Since each image contains multiple objects, effectively learning the relationships between these objects and fusing those relationships with image features remains challenging.

[0003] Generally speaking, existing multi-label image classification methods first use a convolutional neural network to obtain the feature vector of an image, then use a graph convolutional network (Graph Convolutional Network, GCN) to obtain the co-occ...

Claims


Application Information

IPC(8): G06F16/55, G06F16/58, G06K9/62
CPC: G06F16/55, G06F16/5866, G06F18/2415, G06F18/25, G06F18/214
Inventor: 刘渝, 汪洋涛, 谢延昭, 李春花, 王冲, 牛中盈, 周可
Owner HUAZHONG UNIV OF SCI & TECH