A Zero-Shot Image Classification Method Based on Adversarial Autoencoder Model

An autoencoder and sample-image technique, applicable to biological neural network models, instruments, computer components, etc. It addresses the problems that generated visual features lack discriminative information and that the correspondence between visual features and category semantic features is ignored.

Active Publication Date: 2021-09-14
TIANJIN UNIV


Problems solved by technology

However, most adversarial networks focus only on generating distributions that approximate the real visual features, while ignoring the correspondence between visual features and category semantic features; the generated visual features therefore lack discriminative information to some extent.




Embodiment Construction

[0032] A zero-shot image classification method based on the adversarial autoencoder model of the present invention will be described in detail below with reference to the embodiments and the accompanying drawings.

[0033] The zero-shot image classification method based on the adversarial autoencoder model of the present invention considers, while using category semantic features to generate visual features, the reverse process of generating category semantic features from visual features. Therefore, on the basis of the adversarial network, an autoencoder is introduced to complete this two-way generation through its encoding and decoding processes, so as both to generate visual features and to associate them with category semantic features.

[0034] An autoencoder is a type of neural network trained to copy its input to its output. It consists of two parts: an encoder h = E(x) and a decoder x' = G(h)...
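The encoder/decoder pair described above can be sketched as follows. This is a minimal illustrative forward pass with randomly initialised weights standing in for trained parameters; the dimensions, weight matrices, and tanh nonlinearity are assumptions for illustration, not the patent's specific architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 5-dim "visual feature" x, a 3-dim latent code h.
d_x, d_h = 5, 3

# Random weights stand in for trained parameters (illustration only).
W_enc = rng.standard_normal((d_h, d_x))
W_dec = rng.standard_normal((d_x, d_h))

def encode(x):
    """Encoder h = E(x): map the input to a latent code."""
    return np.tanh(W_enc @ x)

def decode(h):
    """Decoder x' = G(h): reconstruct the input from the latent code."""
    return W_dec @ h

x = rng.standard_normal(d_x)
h = encode(x)
x_rec = decode(h)

# Training would minimise the reconstruction error ||x - x'||^2.
loss = np.sum((x - x_rec) ** 2)
print(h.shape, x_rec.shape)  # → (3,) (5,)
```

In the patent's method the latent code plays the role of the category semantic feature, so the decode direction generates visual features while the encode direction maps them back to semantics.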



Abstract

A zero-shot image classification method based on an adversarial autoencoder model. An adversarial autoencoder network is trained on the seen categories to select the network parameters w and v that best approximate the distribution of visual features and associate visual features with category semantic features. The category semantic feature a_t of an unseen category t is then input into the network, the decoder network G generates visual features, and the Euclidean distance between the generated visual features and the real visual features is computed. Finally, the category with the smallest distance is taken as the predicted category, thereby realizing the zero-shot classification task. The present invention better matches the characteristics of real data, aligns visual features with category semantic features, and achieves better classification results on zero-shot tasks.
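The final prediction step described in the abstract, assigning a test sample to the category whose generated visual feature is nearest in Euclidean distance, can be sketched as below. The feature vectors are hypothetical toy values; in the method they would come from the decoder G applied to each unseen category's semantic feature a_t.

```python
import numpy as np

# Hypothetical generated visual features, one per unseen category.
generated = np.array([
    [0.0, 0.0],   # category 0
    [1.0, 1.0],   # category 1
    [4.0, 0.0],   # category 2
])

def predict(x, prototypes):
    """Assign x to the category whose generated visual feature
    is nearest in Euclidean distance."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))

x_test = np.array([0.9, 1.2])
print(predict(x_test, generated))  # → 1
```

This is simply a nearest-neighbor rule in visual-feature space, with the generated features acting as category prototypes.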

Description

Technical field

[0001] The invention relates to a zero-shot classification method, and in particular to a zero-shot classification method based on an adversarial autoencoder model.

Background technique

[0002] Deep learning has greatly advanced computer vision tasks such as object classification, image retrieval, and action recognition. The performance of these tasks is usually evaluated after training with a large amount of labeled data. However, some tasks have only a small amount of training data, or even none at all, so traditional classification models perform poorly on them. To improve classification performance on categories with little or no data, zero-shot learning has attracted extensive attention. The task of zero-shot learning is to classify categories for which no training data are available. Humans have the ability to reason, which means that humans can successfully infer the category of unseen...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/241
Inventor: 冀中 (Ji Zhong), 王俊月 (Wang Junyue), 于云龙 (Yu Yunlong)
Owner TIANJIN UNIV