
Multimodal Manifold Embedding Method for Zero-Shot Learning

A multimodal zero-shot learning technique, applied in character and pattern recognition, computer components, instruments, etc. It addresses the problem that existing zero-shot learning methods cannot be applied at large scale, and achieves a simple, practical method with improved classification performance and fast speed.

Active Publication Date: 2019-09-13
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0006] Methods combining attribute features and text vector features: attribute features and text vector features can complement each other in zero-shot learning. To mine more semantic information, many current studies combine the two to obtain better classification performance. However, such methods also inherit the disadvantages of attribute-based methods and cannot be applied to large-scale zero-shot learning.




Detailed Description of the Embodiments

[0024] The multimodal manifold embedding method for zero-shot learning of the present invention will be described in detail below with reference to the embodiments and the accompanying drawings.

[0025] The multimodal manifold embedding method for zero-shot learning of the present invention is based on the traditional least-squares regression method, to which local manifold constraints are added so that the manifold structure among samples of the same modality is preserved before and after the mapping. Intra-class compactness and inter-class separation terms are further added to the objective function, so that each mapped sample lies close to samples of the same class and far from samples of different classes in the corresponding modality. The method proposed in the present invention is described below using the image modality and the text modality as the two specific modalities.
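The least-squares-plus-manifold idea above can be sketched as a graph-regularized regression. The sketch below is illustrative only: the function name, the parameters `alpha` and `lam`, and the simplified objective ||XW - Y||² + alpha·tr(WᵀXᵀLXW) + lam·||W||² are assumptions for exposition, and the patent's full objective additionally includes the intra-class compactness and inter-class separation terms, which are omitted here.

```python
import numpy as np

def manifold_embedding(X, Y, L, alpha=0.1, lam=0.01):
    """Least-squares projection W mapping image features X (n x d) onto
    text vectors Y (n x t), with a manifold smoothness term
    tr(W^T X^T L X W) that keeps neighboring same-class samples close
    after mapping.  Closed form (illustrative, not the patent's exact
    formula):  W = (X^T X + alpha * X^T L X + lam * I)^(-1) X^T Y
    """
    d = X.shape[1]
    A = X.T @ X + alpha * (X.T @ L @ X) + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ Y)
```

With `L = 0` and a vanishing `lam` this reduces to ordinary least-squares regression, which is the baseline the invention extends.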

[0026] The image feature matrix of the training samples is denoted X = [X_1, ..., X_n], wher...



Abstract

A multimodal manifold embedding method for zero-shot learning, including: inputting the image features of the training samples, the text vector features corresponding to the images, and the weight parameters; computing, for each class of training samples, the diagonal (degree) matrix, the edge weight matrix, and the corresponding Laplacian matrix; using the per-class Laplacian matrices to construct the Laplacian matrix over all classes; and computing the multimodal manifold embedding matrix. The present invention improves on current multimodal embedding methods by making full use of the manifold information among the data, thereby effectively utilizing the data and improving classification performance, and is an embedding method suitable for multimodal classification and retrieval. The method of the present invention belongs to the text-vector-based methods: it maps the features of different modalities into a common space, in which the similarity between different modalities can be computed.
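The per-class matrices listed in the abstract can be sketched as follows. This is a minimal sketch under assumptions: the heat-kernel edge weights, the bandwidth `sigma`, and the block-diagonal assembly are common choices in manifold learning, not details confirmed by the patent text.

```python
import numpy as np

def class_laplacians(X, labels, sigma=1.0):
    """For each class c: an edge weight matrix S (heat kernel over
    pairwise distances, an assumed choice), its diagonal degree matrix
    D, and the class Laplacian L_c = D - S.  The per-class Laplacians
    are then assembled into one Laplacian over all classes
    (block-diagonal, assuming rows are grouped by label)."""
    n = X.shape[0]
    L_all = np.zeros((n, n))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        Xc = X[idx]
        d2 = ((Xc[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        S = np.exp(-d2 / (2.0 * sigma ** 2))  # edge weight matrix
        np.fill_diagonal(S, 0.0)              # no self-edges
        D = np.diag(S.sum(axis=1))            # diagonal (degree) matrix
        L_all[np.ix_(idx, idx)] = D - S       # Laplacian of this class
    return L_all
```

Each row of the resulting Laplacian sums to zero and the matrix is symmetric, the standard sanity checks for a graph Laplacian.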

Description

technical field

[0001] The invention relates to a feature embedding method for zero-shot learning, and in particular to a multimodal manifold embedding method for zero-shot learning.

background technique

[0002] Driven by the needs of real-world applications, zero-shot learning has attracted much attention. Its common approach is to map the image modality and the text modality of the seen classes into a common embedding space, then map the image modality of an unseen class into that space and find its corresponding text modality, so as to determine the class it belongs to.

[0003] From the perspective of the embedding space, zero-shot learning methods can be divided into three categories: attribute-feature-based methods, text-vector-based methods, and methods that utilize both attribute features and text vectors.

[0004] Attribute-based methods: attribute-based methods have long been used in zero-shot learning. Such a method first establishes an attribute ...
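The common-space classification scheme described in the background (map an unseen image into the shared space, then find the nearest class text vector) can be sketched as below. The function names, the use of cosine similarity, and the assumption that the projection matrix `W` maps image features into the text space are illustrative choices, not details taken from the patent.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors (an assumed similarity
    measure; the patent only says similarity is computed in the space)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def zsl_predict(x, W, class_vecs):
    """Project an unseen image feature x into the common (text) space
    via W, then return the unseen class whose text vector is most
    similar to the projection."""
    z = W.T @ x
    names = list(class_vecs)
    sims = [cosine_sim(z, class_vecs[name]) for name in names]
    return names[int(np.argmax(sims))]
```

For example, with an identity projection and two class vectors along the axes, an image feature pointing mostly along the first axis is assigned to the first class.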


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62
CPC: G06F18/217
Inventors: 冀中, 于云龙
Owner: TIANJIN UNIV