
Multi-channel information emotional expression mapping method for facial expression robot

A facial-expression mapping method, applied in instruments, artificial life, computational models, etc. It addresses the lack of a universal emotion-expression modeling standard, the lack of a general emotion-expression modeling method for expression robots, and the absence of multi-channel emotional expression in expression robots, achieving a method that is simple, convenient to use and low in cost.

Inactive Publication Date: 2016-11-09
ANHUI SEMXUM INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

[0003] Unlike traditional industrial robots, which operate in fixed positions with fixed procedures in fixed scenarios, facial expression robots place higher demands on interactivity, intelligence and autonomy. Their research draws on mechanical design, automatic control, computer intelligence, psychology, cognitive science and other fields, and is typically interdisciplinary. Some research institutions and companies have designed facial expression robots with a degree of emotional expression, but because the technologies involved in emotional expression are diverse and complex, most of this work remains at the stage of laboratory exploration, and multi-channel emotional expression for facial expression robots is lacking. Since the emotional expression of facial expression robots spans multiple disciplines, how to apply knowledge from many fields comprehensively has become a key issue, and there is no general modeling method for their emotional expression. Moreover, the human face not only has many moving organs; slight and instantaneous changes in the range of organ movement may express different emotions. In the domestic research field of expression robots, most work still focuses on the mechanical design of the head, while the application of intelligent technologies such as visual expression analysis and speech recognition to expression robots is clearly insufficient; in particular, a unified, universal standard for emotion-expression modeling is lacking.


Examples


Embodiment

[0015] This embodiment proposes a multi-channel information emotion expression mapping method for a facial expression robot, including the following steps:

[0016] S1: Pre-built expression library, voice library, gesture library, expression output library, voice output library and gesture output library;
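The six libraries in step S1 can be pictured as simple lookup tables: three recognition libraries that map observed features of the interlocutor to an emotion tag, and three output libraries that map an emotion tag to the robot's response on each channel. The patent does not specify their contents or data format; the dictionaries below are a minimal illustrative sketch, and every key and value in them is a placeholder.

```python
# Minimal sketch of the six pre-built libraries from step S1.
# All keys and values are illustrative placeholders, not from the patent.

# Recognition libraries: observed feature -> emotion tag.
voice_library = {"rising_pitch": "excited", "flat_tone": "neutral"}
expression_library = {"raised_brows": "surprised", "smile": "happy"}
gesture_library = {"wave": "friendly", "arms_crossed": "defensive"}

# Output libraries: emotion tag -> the robot's response on that channel.
voice_output_library = {"happy": "speech/greeting_cheerful.wav"}
expression_output_library = {"happy": ["raise_mouth_corners", "widen_eyes"]}
gesture_output_library = {"happy": "nod"}
```

In practice each library would hold feature models (e.g. acoustic or visual templates) rather than string keys, but the mapping structure is the same.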

[0017] S2: Collect the interlocutor's voice and identify the sound expression by comparing it with the voice library; collect the interlocutor's facial expression and identify the emotional expression by comparing it with the expression library; collect the interlocutor's gesture and identify the gesture expression by comparing it with the gesture library; then fuse the sound expression, emotional expression and gesture expression to obtain a compound expression instruction;
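Step S2 recognizes each channel independently and then fuses the three results into one compound expression instruction. The patent does not state how the fusion is performed; the sketch below abstracts "comparing with the library" as a dictionary lookup and uses a majority vote over the three channels as an assumed fusion rule, purely for illustration.

```python
from collections import Counter

def recognize(observation, library, default="neutral"):
    # The patent's "comparing with the library" is abstracted as a lookup;
    # unknown observations fall back to a neutral tag.
    return library.get(observation, default)

def fuse(sound_expr, emotional_expr, gesture_expr):
    # Assumed fusion rule (not specified in the patent): majority vote
    # over the three channels, keeping the per-channel results alongside.
    votes = Counter([sound_expr, emotional_expr, gesture_expr])
    label, _count = votes.most_common(1)[0]
    return {"emotion": label,
            "channels": {"sound": sound_expr,
                         "emotion": emotional_expr,
                         "gesture": gesture_expr}}

# Example: two channels agree on "happy", so the compound instruction is "happy".
sound = recognize("rising_pitch", {"rising_pitch": "happy"})
face = recognize("smile", {"smile": "happy"})
gesture = recognize("arms_crossed", {"arms_crossed": "defensive"})
compound = fuse(sound, face, gesture)
print(compound["emotion"])  # happy
```

A weighted or confidence-based fusion would be equally consistent with the claim; the majority vote is only the simplest choice.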

[0018] S3: According to the compound expression instruction, the facial expression robot selects voice stream data from the voice output library for output, and selects an expression action instruction from the expression output library to perform the facial expression.
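Step S3 is the output-side half of the mapping: the emotion tag in the compound instruction indexes each output library to select the concrete response per channel. A minimal sketch, assuming the same dictionary-shaped libraries as above (all file names and action names are hypothetical):

```python
def express(compound_instruction, voice_output_library,
            expression_output_library, gesture_output_library):
    # Map the compound expression instruction to concrete per-channel outputs
    # by indexing each output library with the fused emotion tag.
    emotion = compound_instruction["emotion"]
    return {
        "voice_stream": voice_output_library.get(emotion),
        "expression_actions": expression_output_library.get(emotion, []),
        "gesture": gesture_output_library.get(emotion),
    }

# Hypothetical output libraries; file names and action names are placeholders.
voice_out = {"happy": "speech/greeting_cheerful.wav"}
expr_out = {"happy": ["raise_mouth_corners", "widen_eyes"]}
gesture_out = {"happy": "nod"}

response = express({"emotion": "happy"}, voice_out, expr_out, gesture_out)
print(response["gesture"])  # nod
```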



Abstract

The invention discloses a multi-channel information emotional expression mapping method for a facial expression robot. The method comprises the following steps of S1: pre-establishing an expression library, a voice library, a gesture library, an expression output library, a voice output library and a gesture output library; S2: acquiring a voice of an interlocutor and identifying a sound expression by comparing the voice with the voice library; acquiring an expression of the interlocutor and identifying an emotional expression by comparing the expression with the expression library; acquiring a gesture of the interlocutor and identifying a gesture expression by comparing the gesture with the gesture library; fusing the sound expression, the emotional expression and the gesture expression to obtain a combined expression instruction; and S3: selecting voice stream data from the voice output library by the facial expression robot according to the combined expression instruction to perform output, and selecting an expression action instruction from the expression output library by the facial expression robot according to the combined expression instruction to perform facial expression. According to the method, multi-channel information emotional expression of the facial expression robot can be realized; and the method is simple, convenient to use and low in cost.

Description

Technical field

[0001] The invention relates to the technical field of facial expression robots, and in particular to a multi-channel information emotion expression mapping method for facial expression robots.

Background technique

[0002] Facial expression robots have positive significance for realizing natural human-machine interaction and reducing the emotional distance between humans and robots.

[0003] Unlike traditional industrial robots, which operate at fixed workstations with fixed procedures in fixed scenarios, facial expression robots place higher demands on interactivity, intelligence and autonomy. Their research draws on mechanical design, automatic control, computer intelligence, psychology, cognitive science and other fields, and is typically multi-disciplinary. Some research institutions and companies have designed facial expression robots with a degree of emotional expression, but because the technologies involve...


Application Information

IPC(8): G06N3/00
CPC: G06N3/008
Inventor 虞焰兴
Owner ANHUI SEMXUM INFORMATION TECH CO LTD