A Real-time Driver Emotion Recognition Method Fused with Facial Expression and Speech

A driver emotion recognition technology that fuses facial expression and speech, applied in speech recognition, character and pattern recognition, speech analysis, etc. It achieves high-precision, real-time recognition of a driver's negative emotions with high accuracy.

Active Publication Date: 2019-03-05
JIANGSU UNIV

AI Technical Summary

Problems solved by technology

To solve the problem of high-precision, real-time recognition of a driver's emotion, the present invention introduces Kinect, a high-speed 3D camera device, to extract RGB image information, depth image information, and voice information, and proposes a feasible driver emotion recognition method based on these features, greatly improving recognition accuracy and speed.
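As a rough illustration of the three streams the method relies on, the sketch below defines a hypothetical container for one synchronized Kinect sample. The class and field names (`DriverSample`, `is_complete`) are my own illustrative assumptions, not part of the patent; only the modalities themselves (RGB image, depth image, acoustic signal, recognized speech content) come from the source.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical container for one synchronized Kinect sample; the fields
# mirror the streams the patent describes: RGB image, Depth image, and
# the speech signal (acoustic samples plus recognized speech content).
@dataclass
class DriverSample:
    rgb: List[List[int]]      # H x W RGB frame (simplified to intensities)
    depth: List[List[int]]    # H x W depth frame, e.g. in millimetres
    audio: List[float]        # raw acoustic samples
    transcript: str = ""      # speech content from speech recognition

def is_complete(sample: DriverSample) -> bool:
    """A sample is usable only when all three modalities are present."""
    return bool(sample.rgb) and bool(sample.depth) and bool(sample.audio)

sample = DriverSample(rgb=[[0]], depth=[[0]], audio=[0.0], transcript="I am fine")
print(is_complete(sample))  # True
```

The completeness check matters because the later fusion step concatenates features from all modalities, so a frame missing any stream cannot produce a full feature vector.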




Embodiment Construction

[0029] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments.

[0030] As shown in Figure 1, the method flow chart of the present invention: first, the face is tracked in real time by the Kinect SDK to obtain the driver's facial images (RGB image and Depth image) and speech signal (comprising the acoustic signal and the speech content). The facial images (RGB image and Depth image) and acoustic signal are then preprocessed, and a feature extraction model based on unsupervised feature learning and sparse coding is trained according to a given objective function. Once the model is obtained, the preprocessed information is input into it to obtain emotional features based on the facial images and acoustic signal. In parallel, words are extracted from the speech content, a dictionary is created from the frequent words found by the Apriori algorithm, and text-based emotional features are obtained through the dictionary; finally, the image- and acoustic-based emotional features are concatenated with the text-based features to form the feature vector, which is input to a support vector machine (SVM) to train the classifier used to recognize the driver's emotion.
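The text branch of the pipeline (frequent words via Apriori, then a dictionary, then text features) can be sketched minimally. For single words, the first Apriori pass reduces to keeping words whose document support meets a threshold; the function names and the binary bag-of-words encoding below are my own illustrative assumptions, not the patent's exact formulation.

```python
from collections import Counter
from typing import Dict, List

def frequent_word_dictionary(utterances: List[str], min_support: int) -> Dict[str, int]:
    """First Apriori pass: keep words appearing in at least `min_support`
    utterances, and give each a stable index in the feature vector."""
    df = Counter()  # document frequency: each utterance counts a word once
    for u in utterances:
        df.update(set(u.lower().split()))
    frequent = sorted(w for w, c in df.items() if c >= min_support)
    return {w: i for i, w in enumerate(frequent)}

def text_features(utterance: str, dictionary: Dict[str, int]) -> List[int]:
    """Binary bag-of-words vector over the frequent-word dictionary."""
    vec = [0] * len(dictionary)
    for w in set(utterance.lower().split()):
        if w in dictionary:
            vec[dictionary[w]] = 1
    return vec

corpus = ["this traffic is terrible", "terrible terrible day",
          "what a calm day", "calm traffic today"]
d = frequent_word_dictionary(corpus, min_support=2)
print(sorted(d))                        # ['calm', 'day', 'terrible', 'traffic']
print(text_features("a terrible day", d))  # [0, 1, 1, 0]
```

The resulting vector is what gets concatenated with the image- and acoustic-based emotional features before classification.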



Abstract

The invention discloses a method for real-time recognition of a driver's emotion by combining facial expression and voice. First, the face is tracked in real time through the Kinect SDK to obtain the driver's facial image and voice signal. The facial image and acoustic signal are then preprocessed, and a feature extraction model based on unsupervised feature learning and sparse coding is trained according to a given objective function. Once the model is obtained, the preprocessed information is input into it to obtain emotional features based on the facial image and sound signal. Words are extracted from the spoken content, a dictionary is created from the frequent words found by the Apriori algorithm, and text-based emotional features are obtained through the dictionary. Finally, the emotional features based on the facial image and sound signal are concatenated with the text-based emotional features to obtain the feature vector, which is input to a support vector machine (SVM) to train the SVM classifier and obtain the SVM model. The final SVM model is used to recognize the driver's emotion and has high robustness.

Description

Technical field

[0001] The invention relates to a method for real-time recognition of a driver's emotion, in particular to a real-time driver emotion recognition method that integrates facial expressions and voice.

Background technique

[0002] In recent years, with the rapid increase in the number of private cars, the number of annual traffic accidents has also risen sharply, causing huge losses to people's lives and property. There are many causes of traffic accidents, falling mainly into two categories: active factors and passive factors. Passive factors mainly refer to abnormalities in the car's own parts and some uncontrollable external factors; active factors refer to the driver's fatigue, abnormal behavior, and emotional abnormalities. Detection equipment for abnormalities of the car itself already exists, and many scholars have studied driver fatigue and abnormal driver behavior, with great progress made. None reported....


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62, G10L25/63, G10L15/02, G10L15/26, G06F17/27
CPC: G10L15/02, G10L15/26, G10L25/63, G06F40/205, G06F40/242, G06V40/172, G06V40/168, G06V40/174, G06F18/2136, G06F18/2411
Inventors: 毛启容, 刘鹏, 刘峰, 陈龙, 詹永照
Owner: JIANGSU UNIV