
A Self-learning Emotional Interaction Method Based on Multimodal Recognition

An interaction method based on multimodal technology, applied in the field of human-computer interaction. It solves problems such as insufficient interactive ability and achieves the effect of improving the interactive experience with human-like self-learning and self-adaptive ability.

Active Publication Date: 2021-10-26
SOUTH CHINA UNIV OF TECH
Cites: 16 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to overcome the defect of insufficient interaction ability by providing a self-learning emotional interaction method based on multimodal recognition, which comprehensively considers fused multimodal features and combines them with the historical emotional state and a dialogue memory network to complete interactive tasks.



Examples


Embodiment

[0068] This embodiment discloses a self-learning emotional interaction method based on multimodal recognition, as shown in Figure 1, comprising the following steps:

[0069] S1. Use the microphone array and the camera as non-contact channels to collect voice, face and gesture information respectively, as shown in the left half of Figure 2. The technologies used are face recognition, speech recognition and gesture recognition: face recognition converts face image signals into face image information, speech recognition extracts voice information from voice signals, and gesture recognition converts gesture image signals into gesture information.
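
As a minimal sketch of step S1, the snippet below fans one capture cycle out to the three non-contact channels. The recognizer functions are hypothetical stand-ins: the patent names the techniques (face, speech and gesture recognition) but not a concrete implementation.

```python
# Sketch of S1: one capture cycle routed to the three non-contact channels.
# The recognize_* functions are hypothetical placeholders, not from the patent.
import numpy as np

def recognize_face(frame: np.ndarray) -> dict:
    # Placeholder: a real system would run a face detector/encoder here.
    return {"modality": "face", "shape": frame.shape}

def recognize_speech(audio: np.ndarray) -> dict:
    # Placeholder: a real system would run ASR over the microphone-array audio.
    return {"modality": "speech", "samples": len(audio)}

def recognize_gesture(frame: np.ndarray) -> dict:
    # Placeholder: a real system would run hand/pose estimation here.
    return {"modality": "gesture", "shape": frame.shape}

def collect_multimodal(frame: np.ndarray, audio: np.ndarray) -> dict:
    """One S1 cycle: camera frames feed face and gesture recognition,
    the microphone array feeds speech recognition."""
    return {
        "face": recognize_face(frame),
        "speech": recognize_speech(audio),
        "gesture": recognize_gesture(frame),
    }

if __name__ == "__main__":
    dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # camera image
    dummy_audio = np.zeros(16000, dtype=np.float32)        # 1 s at 16 kHz
    print(collect_multimodal(dummy_frame, dummy_audio))
```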

[0070] S2. Process the face image information, voice information and gesture information through a multi-layer convolutional neural network, as shown in the right part of Figure 2. Through emotion analysis technology, with auxiliary processing by NLP, the speech em...
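
A minimal sketch of what such a multi-layer CNN front end could look like, assuming "multi-layer convolutional neural network" means a small Conv2d/ReLU/pooling stack that produces a fixed-length feature vector per modality; the layer sizes are illustrative and not taken from the patent.

```python
# Sketch of an S2-style per-modality CNN feature extractor (PyTorch).
# All layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    def __init__(self, in_channels: int = 3, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # (batch, 32)
        return self.proj(h)              # (batch, feat_dim)

if __name__ == "__main__":
    face_batch = torch.randn(4, 3, 64, 64)  # dummy face crops
    print(ModalityCNN()(face_batch).shape)  # torch.Size([4, 128])
```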



Abstract

The invention discloses a self-learning emotional interaction method based on multimodal recognition. The steps are as follows: non-contact channels collect voice, face and gesture signals respectively; feature extraction is performed on the signals to obtain preliminary features; the features are input to a bidirectional LSTM layer to obtain single-modal private information and multimodal interactive information, from which fused features are derived; based on a classification learning algorithm, the fused multimodal features are combined with the historical emotional state curve to predict the user's emotion and select an interaction mode; an interactive response is then given according to the dialogue memory network; finally, the emotional state curve and the dialogue memory network are optimized through feedback on the interaction effect. The invention allows the operator to input information through multiple channels via a non-contact human-computer interaction interface, comprehensively considers the fused features of multiple modalities, combines the historical emotional state with the dialogue memory network, and completes the interactive task.
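
The fusion-and-prediction stage described above could be sketched as follows. The use of one bidirectional LSTM per modality, concatenation as the "fusion", and all dimensions (including the historical emotional-state vector) are assumptions; the text available here does not fix them.

```python
# Sketch of the abstract's fusion stage: bi-LSTM per modality, concatenated
# with a historical emotional-state summary, then an emotion classifier.
# Dimensions and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn

class EmotionPredictor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, history_dim=8, n_emotions=6):
        super().__init__()
        # One bidirectional LSTM per channel (speech, face, gesture).
        self.lstms = nn.ModuleList(
            nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            for _ in range(3)
        )
        fused = 3 * 2 * hidden  # 3 modalities x (forward + backward) states
        self.classifier = nn.Linear(fused + history_dim, n_emotions)

    def forward(self, seqs, history):
        # seqs: list of 3 tensors, each (batch, time, feat_dim)
        # history: (batch, history_dim) summary of the emotional state curve
        parts = []
        for seq, lstm in zip(seqs, self.lstms):
            out, _ = lstm(seq)
            parts.append(out[:, -1, :])  # last-step bi-LSTM output
        fused = torch.cat(parts + [history], dim=1)
        return self.classifier(fused)    # emotion logits

if __name__ == "__main__":
    seqs = [torch.randn(2, 10, 128) for _ in range(3)]
    history = torch.randn(2, 8)
    print(EmotionPredictor()(seqs, history).shape)  # torch.Size([2, 6])
```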

Description

Technical field

[0001] The invention relates to the technical field of human-computer interaction, and in particular to a self-learning emotional interaction method based on multimodal recognition.

Background technique

[0002] Intelligent human-computer interaction is an important direction for the development of artificial intelligence. With the development of the mobile Internet, higher requirements are placed on the human-likeness and naturalness of human-computer interaction.

[0003] Current interaction technology is relatively simple, mostly pure text or voice interaction. Some so-called multimodal interaction methods only perform simple addition operations on multimodal features, processing multiple streams of single-modal information separately. Ignoring the interaction information between the modalities causes ambiguity between them, making it impossible to complete an interaction task fully and unambiguously. At the same time, most of the interaction ...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F3/01, G06F3/16, G06K9/00, G06K9/62, G06N3/04
CPC: G06F3/011, G06F3/017, G06F3/167, G06V40/113, G06V40/168, G06N3/044, G06N3/045, G06F18/253
Inventor: 刘卓, 邓晓燕, 潘文豪, 潘粤成, 蔡典仑
Owner: SOUTH CHINA UNIV OF TECH