Transformer-based unimodal label generation and multimodal emotion discrimination method

A unimodal and multimodal technology, applied in character and pattern recognition and computing, that solves the problem that manual labeling of unimodal labels is time-consuming and laborious, and achieves the effect of improving understanding and generalization ability.

Pending Publication Date: 2022-04-22
Applicant: HEFEI UNIV OF TECH

Problems solved by technology

There is a strong correlation between unimodal and multimodal emotion, yet manual labeling of unimodal labels is time-consuming and laborious. Using self-supervised methods to learn unimodal feature representations from multimodal features and the shared label is therefore of great significance for a deeper understanding of multimodal emotion expression.

Embodiment Construction

[0069] In this embodiment, the Transformer-based unimodal label generation and multimodal emotion discrimination method follows the overall flow shown in figure 1. The steps are: first, obtain a multimodal non-aligned data set and preprocess it to obtain the embedded expression features of each modality; next, establish the ITE network module and extract intra-modal features; then combine unimodal label prediction with fusion generation of the multimodal emotion decision label; establish the inter-modal BTE network module and the modal-enhanced MTE network module, and obtain inter-modal features and modal-enhanced features through the global self-attention STE network module, so as to obtain the deeply predicted multimodal emotion label; finally, carry out iterative training with the designed loss function. Specifically, it is characterized in that it proceeds in the f...
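The listing names the ITE, BTE, MTE, and STE modules but does not give their internals. A minimal PyTorch sketch of the described flow, assuming each module is a standard Transformer-encoder block and that cross-attention realizes the inter-modal and modal-enhanced steps; the `TE` and `MultimodalEmotionNet` classes, all layer sizes, and the specific query/key wiring are illustrative assumptions, not the patented design:

```python
# Sketch only: ITE/BTE/MTE/STE names come from the patent, but their
# internals here are assumed to be standard Transformer-encoder blocks.
import torch
import torch.nn as nn

class TE(nn.Module):
    """Generic Transformer-encoder block supporting self- or cross-attention."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(),
                                nn.Linear(dim * 4, dim))
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, q, kv=None):
        kv = q if kv is None else kv            # self- or cross-attention
        x = self.n1(q + self.attn(q, kv, kv)[0])
        return self.n2(x + self.ff(x))

class MultimodalEmotionNet(nn.Module):
    def __init__(self, dim=128, num_classes=3):
        super().__init__()
        # one intra-modal encoder (ITE) per modality: text, audio, video
        self.ite = nn.ModuleDict({m: TE(dim) for m in ("t", "a", "v")})
        self.bte = TE(dim)                      # inter-modal encoder (BTE)
        self.mte = TE(dim)                      # modal-enhanced encoder (MTE)
        self.ste = TE(dim)                      # global self-attention (STE)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, t, a, v):                 # (batch, seq, dim) each
        ht, ha, hv = self.ite["t"](t), self.ite["a"](a), self.ite["v"](v)
        # assumed wiring: text queries attend over audio/video features
        hb = self.bte(ht, torch.cat([ha, hv], dim=1))   # inter-modal features
        hm = self.mte(hb, ht)                   # modal-enhanced features
        hg = self.ste(torch.cat([ht, hb, hm], dim=1))   # global interaction
        return self.head(hg.mean(dim=1))        # deep multimodal emotion label

net = MultimodalEmotionNet()
logits = net(torch.randn(2, 10, 128), torch.randn(2, 20, 128),
             torch.randn(2, 15, 128))
```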


Abstract

The invention discloses a Transformer-based unimodal label generation and multimodal emotion discrimination method, comprising the steps: 1, obtaining a multimodal non-aligned data set, preprocessing it, and obtaining the embedded expression features of each modality; 2, establishing an ITE network module and extracting intra-modal features; 3, carrying out unimodal label prediction and fusion generation of the multimodal emotion decision label; 4, establishing an inter-modal BTE network module and a modal-enhanced MTE network module, and obtaining inter-modal features and modal-enhanced features through a global self-attention STE network module; and 5, obtaining the deeply predicted multimodal emotion label. Since current multimodal data sets carry only a single multimodal label, the method performs decision fusion through a self-supervised weighted voting mechanism to generate unimodal labels, and, by using several cross-modal TE modules, lets the data of different modalities interact fully, so that the precision of multimodal emotion discrimination can be improved.
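The abstract states that unimodal labels are generated by decision fusion with a self-supervised weighted voting mechanism, but does not give the voting rule. A hedged sketch of one plausible mechanism, where each modality's vote is weighted by its confidence boosted by agreement with the shared multimodal label; the `weighted_vote_labels` function and its median threshold are illustrative assumptions, not the patented rule:

```python
# Assumed weighting: confidence scaled by agreement with the shared label.
# The patent only says "self-supervised weighted voting"; details are ours.
import torch

def weighted_vote_labels(logits_per_mod, shared_label):
    """logits_per_mod: dict of (batch, classes) logits, one entry per modality.
    shared_label: (batch,) multimodal ground-truth label.
    Returns a dict of pseudo unimodal labels per modality."""
    pseudo = {}
    for mod, logits in logits_per_mod.items():
        probs = logits.softmax(dim=-1)
        conf, pred = probs.max(dim=-1)
        # self-supervision signal: boost the weight when the modality's
        # own prediction agrees with the shared multimodal label
        weight = conf * (1.0 + (pred == shared_label).float())
        # keep the modality's prediction where its weighted vote is strong,
        # otherwise fall back to the shared multimodal label
        pseudo[mod] = torch.where(weight > weight.median(),
                                  pred, shared_label)
    return pseudo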

Description

technical field

[0001] The present invention relates to the time-series one-dimensional convolutional neural network Conv1D, BiLSTM, the Transformer self-attention mechanism, and a multimodal interactive attention mechanism. It involves different modality fusion strategies and realizes multimodal (voice, text, video) emotion evaluation; using a self-supervised weighted voting mechanism, it realizes the prediction of unimodal labels and the final multimodal emotion discrimination. It belongs to the field of multimodal, multi-task affective computing.

Background technique

[0002] With the advent of the big data era, data content has become complicated and data forms extremely rich. Human cognition of an event is a response that combines the perception of multiple modalities; it is difficult to fully interpret information using only a single modality, especially when judging human emotion. For example, the frowning person said to the escort rob...
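The technical field names Conv1D and BiLSTM as the sequence-embedding components but gives no architecture. A minimal sketch of a per-modality embedder under that reading, assuming a single Conv1d layer followed by a bidirectional LSTM; the `ModalityEmbedder` class, layer sizes, and the 74-dimensional input are illustrative:

```python
# Sketch of the Conv1D + BiLSTM embedding step named in the technical field;
# depth and widths are assumptions, not taken from the patent.
import torch
import torch.nn as nn

class ModalityEmbedder(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # Conv1d expects (batch, channels, time)
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, x):                       # x: (batch, time, in_dim)
        h = self.conv(x.transpose(1, 2)).relu().transpose(1, 2)
        out, _ = self.bilstm(h)                 # (batch, time, 2 * hidden)
        return out

emb = ModalityEmbedder(in_dim=74)               # e.g. an audio feature stream
feats = emb(torch.randn(2, 50, 74))             # -> (2, 50, 128)
```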

Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62 G06V20/40 G06V10/80
CPC: G06F18/254 G06F18/25 G06F18/253 Y02D10/00
Inventors: 师飘, 胡敏, 时雪峰, 李泽中, 任福继
Owner: HEFEI UNIV OF TECH