
Facial-view and sound multi-view emotion discrimination method and system

A discrimination method and multi-view technology, applied in the field of multi-view learning, which addresses the problem that emotion discrimination based on single-view data is not sufficiently accurate

Active Publication Date: 2020-12-18
GUANGDONG UNIV OF TECH

Problems solved by technology

[0005] In order to overcome the technical defect that existing emotion discrimination technology relies only on single-view data and therefore does not achieve sufficiently high discrimination accuracy, the present invention provides a facial-view and sound multi-view emotion discrimination method and system.


Examples


Embodiment 1

[0059] As shown in Figure 1, a facial-view and sound multi-view emotion discrimination method includes the following steps:

[0060] S1: Obtain initial video data, and preprocess the initial video data to obtain view data and audio data;

[0061] S2: Extract the original features of the view data and the audio data, respectively;

[0062] S3: Use a self-encoding (autoencoder) network to perform secondary feature extraction on the original features of the view data and the audio data, respectively, to obtain the latent features of the view data and the latent features of the audio data;

[0063] S4: Fuse the latent features of the view data and the latent features of the audio data to obtain a complete latent representation;

[0064] S5: Classify the complete latent representation into a plurality of emotion categories with different probabilities, and output the emotion category with the highest probability as the emotion discrimination result (an illustrative code sketch of these steps is given below).
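The following minimal sketch shows how steps S3 to S5 might be composed, using PyTorch for concreteness. The class names (ViewEncoder, AudioEncoder, MultiViewEmotionClassifier), the layer sizes, and the concatenation-based fusion are illustrative assumptions, not the architecture claimed by the patent.

```python
# Minimal sketch of the multi-view pipeline in steps S3-S5 (assumed structure).
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Self-encoding (autoencoder) branch for facial-view features (step S3)."""
    def __init__(self, in_dim=512, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # latent features of the view data
        return z, self.decoder(z)      # reconstruction, used during training

class AudioEncoder(nn.Module):
    """Self-encoding branch for audio features (step S3)."""
    def __init__(self, in_dim=128, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 96), nn.ReLU(), nn.Linear(96, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 96), nn.ReLU(), nn.Linear(96, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class MultiViewEmotionClassifier(nn.Module):
    """Fuses the two latent views (step S4) and outputs emotion probabilities (step S5)."""
    def __init__(self, latent_dim=64, num_emotions=7):
        super().__init__()
        self.view_branch = ViewEncoder(latent_dim=latent_dim)
        self.audio_branch = AudioEncoder(latent_dim=latent_dim)
        self.classifier = nn.Linear(2 * latent_dim, num_emotions)

    def forward(self, view_feat, audio_feat):
        z_view, _ = self.view_branch(view_feat)
        z_audio, _ = self.audio_branch(audio_feat)
        complete = torch.cat([z_view, z_audio], dim=-1)   # complete latent representation (S4)
        probs = torch.softmax(self.classifier(complete), dim=-1)
        return probs, probs.argmax(dim=-1)                # highest-probability emotion (S5)

# Example usage with random feature batches of assumed dimensionality.
model = MultiViewEmotionClassifier()
probs, prediction = model(torch.randn(8, 512), torch.randn(8, 128))
```

Concatenation is only one possible way to form the complete latent representation; the text visible here does not specify the fusion operator, so any weighting or projection scheme could be substituted at that point.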

[0065] In the specific implementation ...
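Assuming the self-encoding branches are trained with a reconstruction loss alongside a cross-entropy classification loss (a common choice for autoencoder-based multi-view models; the visible text does not state the actual objective or its weighting), one training step for the sketch above could look as follows. The helper training_step and the recon_weight parameter are hypothetical.

```python
# Illustrative joint training step (assumed objective, not the patent's stated one).
import torch
import torch.nn.functional as F

def training_step(model, view_feat, audio_feat, labels, optimizer, recon_weight=0.1):
    """One update: classification loss plus view/audio reconstruction losses."""
    optimizer.zero_grad()
    z_view, recon_view = model.view_branch(view_feat)
    z_audio, recon_audio = model.audio_branch(audio_feat)
    logits = model.classifier(torch.cat([z_view, z_audio], dim=-1))
    loss = (F.cross_entropy(logits, labels)
            + recon_weight * (F.mse_loss(recon_view, view_feat)
                              + F.mse_loss(recon_audio, audio_feat)))
    loss.backward()
    optimizer.step()
    return loss.item()
```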


Abstract

The invention provides a facial-view and sound multi-view emotion discrimination method, which comprises the following steps: S1, obtaining initial video data and preprocessing it to obtain view data and audio data; S2, extracting original features from the view data and the audio data, respectively; S3, performing secondary feature extraction to obtain latent features of the view data and of the audio data; S4, fusing the latent features to obtain a complete latent representation; and S5, classifying the complete latent representation into a plurality of emotion categories with different probabilities, and outputting the emotion category with the highest probability as the emotion discrimination result. The provided facial-view and sound multi-view emotion discrimination system comprises a data preprocessing module, a feature extraction module, a degradation network module and a classification module. The method and system solve the problem that existing emotion discrimination technology, which relies only on single-view data, does not achieve sufficiently high discrimination accuracy.
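The data preprocessing module of the system corresponds to step S1, splitting the initial video into view data (frames) and audio data. Below is a minimal sketch assuming OpenCV and an external ffmpeg binary are available; the function preprocess_video, the mono 16 kHz WAV output, and the in-memory frame list are illustrative choices, not details from the patent.

```python
# Illustrative preprocessing of step S1: separate frames and audio from a video file.
import subprocess
import cv2

def preprocess_video(video_path, audio_out="audio.wav", sample_rate=16000):
    """Split an initial video into view data (frames) and audio data (a WAV file)."""
    # View data: read every frame with OpenCV.
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    # Audio data: strip the video stream and save mono 16-bit PCM with ffmpeg.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1",
         "-acodec", "pcm_s16le", "-ar", str(sample_rate), audio_out],
        check=True,
    )
    return frames, audio_out
```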

Description

Technical field
[0001] The present invention relates to the technical field of multi-view learning, and more specifically, to a facial-view and sound multi-view emotion discrimination method and system.
Background technique
[0002] As core technologies in computer software and hardware have gradually matured, the Internet industry has developed rapidly and the era of big data has arrived. This has brought exponential growth in data, making modern data increasingly complex and highly heterogeneous. It is very common in practice for the same thing to be described by diverse features (differing in acquisition means, processing methods, attributes, etc.), and these features are regarded as multi-view data of the same category of objects. Different views in multi-view data are different reflections and descriptions of the same object, so different views are correlated to some extent. For example, a docto...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/168, G06V40/172, G06N3/047, G06N3/048, G06N3/044, G06F18/2415, G06F18/241, Y02D10/00
Inventor: 段意强, 袁浩亮, 符政鑫, 吕应龙, 汤瑞欣, 许斯滨
Owner: GUANGDONG UNIV OF TECH