Self-learning emotion interaction method based on multi-modal recognition
An interaction method based on multi-modal technology, applied in the field of human-computer interaction; it addresses problems such as insufficient interaction capability.
Embodiment
[0068] This embodiment discloses a self-learning emotional interaction method based on multimodal recognition. As shown in Figure 1, it includes the following steps:
[0069] S1. Use the non-contact channels of a microphone array and a camera to collect voice, face, and gesture information respectively, as shown in the left half of Figure 2. The technologies used are face recognition, speech recognition, and gesture recognition: face recognition converts face image signals into face image information, speech recognition extracts voice information from voice signals, and gesture recognition converts gesture image signals into gesture information.
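The collection step above can be sketched as three per-channel recognizers feeding one gathering function. All function names and return shapes here are illustrative stubs, not the patent's actual models:

```python
# Minimal sketch of step S1: each non-contact channel feeds its own
# recognizer, which turns a raw signal into modality information.
# Every recognizer below is a hypothetical stub for illustration only.

def speech_recognition(audio_samples):
    """Stub: extract voice information from a microphone-array signal."""
    return {"modality": "voice", "length": len(audio_samples)}

def face_recognition(image_pixels):
    """Stub: convert a face image signal into face image information."""
    return {"modality": "face", "length": len(image_pixels)}

def gesture_recognition(image_pixels):
    """Stub: convert a gesture image signal into gesture information."""
    return {"modality": "gesture", "length": len(image_pixels)}

def collect(mic_signal, face_signal, gesture_signal):
    """S1: gather all three modalities in one pass."""
    return {
        "voice": speech_recognition(mic_signal),
        "face": face_recognition(face_signal),
        "gesture": gesture_recognition(gesture_signal),
    }

sample = collect([0.1, 0.2], [255, 0, 127], [10, 20, 30, 40])
print(sorted(sample))  # → ['face', 'gesture', 'voice']
```

The point of the sketch is the channel separation: each modality has its own recognizer, and downstream steps consume the combined dictionary rather than raw signals.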
[0070] S2. The face image information, voice information, and gesture information are processed by a multi-layer convolutional neural network, as shown in the right half of Figure 2. Through emotion analysis technology, with auxiliary processing by NLP, the speec...
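One way to picture the multi-layer convolutional processing in S2 is a small per-modality 1-D convolutional branch followed by feature fusion and a softmax over emotion classes. This is a toy NumPy sketch with random, untrained weights; the layer sizes, kernel widths, and emotion labels are assumptions for illustration, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def relu(x):
    return np.maximum(x, 0.0)

def conv1d(x, kernel, bias):
    """Valid cross-correlation of a 1-D feature signal with one kernel."""
    k = kernel.size
    return np.array([x[i:i + k] @ kernel for i in range(x.size - k + 1)]) + bias

def branch(signal, kernels, biases):
    """One modality branch: several conv kernels + ReLU,
    then global average pooling of each feature map."""
    return np.array([relu(conv1d(signal, k, b)).mean()
                     for k, b in zip(kernels, biases)])

def fuse_and_classify(branch_feats, W, b):
    """Concatenate all modality features and apply a softmax classifier."""
    z = np.concatenate(branch_feats) @ W + b
    e = np.exp(z - z.max())
    return e / e.sum()

# Random (untrained) parameters: 4 kernels of width 5 per modality.
n_kernels, width = 4, 5
modalities = ("voice", "face", "gesture")
params = {m: (rng.standard_normal((n_kernels, width)),
              rng.standard_normal(n_kernels)) for m in modalities}
W = rng.standard_normal((len(modalities) * n_kernels, len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))

# Dummy 64-sample feature signals standing in for the S1 outputs.
signals = {m: rng.standard_normal(64) for m in modalities}
feats = [branch(signals[m], *params[m]) for m in modalities]
probs = fuse_and_classify(feats, W, b)
print(dict(zip(EMOTIONS, probs.round(3))))
```

With trained weights, the fused feature vector would carry complementary emotion cues from all three channels, which is the motivation for fusing modalities before classification rather than voting on per-channel predictions.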