Driver emotion real time identification method fusing facial expressions and voices

A facial-expression and voice recognition technology, applied in speech recognition, character and pattern recognition, and speech analysis, achieving high-accuracy, real-time recognition of a driver's negative emotions

Active Publication Date: 2016-07-13
JIANGSU UNIV
Cites: 1 | Cited by: 29


Problems solved by technology

To solve the problem of high-precision, real-time recognition of the driver's emotion, the present invention introduces Kinect, a high-speed 3D camera device, to extract RGB image information, depth information, and voice signals.




Embodiment Construction

[0029] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments.

[0030] As shown in figure 1, the flow of the method of the present invention is as follows. First, the driver's face is tracked in real time with the Kinect SDK to obtain the driver's facial images (an RGB image and a depth image) and voice signal (comprising the acoustic signal and the speech content). The facial images and the acoustic signal are then preprocessed, and a feature extraction model based on unsupervised feature learning and sparse coding is trained according to the given objective function. Once the model is obtained, the preprocessed information is fed into the feature extraction model to obtain emotional features based on the facial images and the sound signal. In parallel, words are extracted from the speech content, a dictionary is created from the frequent words found by the Apriori algorithm, and text-based emotional features are obtained through the dictionary.
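The sparse-coding feature-extraction step above can be sketched as follows. This is a minimal illustration, not the patent's exact objective function: it uses scikit-learn's DictionaryLearning with an L1 sparsity penalty, and random data stands in for the preprocessed Kinect RGB/depth/acoustic inputs.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
# Assume each row is one preprocessed sample (e.g. a flattened face patch
# or a frame of acoustic features); 64-dimensional here for illustration.
X = rng.rand(200, 64)

# Learn an overcomplete dictionary under an L1 sparsity penalty; the
# resulting sparse codes play the role of the learned emotional features.
dico = DictionaryLearning(
    n_components=96,                  # overcomplete: 96 atoms for 64-dim input
    alpha=1.0,                        # weight of the sparsity term
    transform_algorithm="lasso_lars",
    max_iter=10,
    random_state=0,
)
codes = dico.fit_transform(X)         # one sparse code per sample

print(codes.shape)                    # (200, 96)
print(np.mean(codes == 0))            # fraction of exactly-zero coefficients
```

In a real pipeline the dictionary would be trained once offline and new Kinect frames would be encoded with `dico.transform` at recognition time.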



Abstract

The invention discloses a real-time driver emotion identification method fusing facial expressions and voice. The method comprises: first, tracking the face in real time through the Kinect SDK to obtain the driver's facial expression and voice signals; preprocessing the facial expression and voice signals; training a feature extraction model based on unsupervised feature learning and sparse coding according to a given objective function, and, after obtaining the model, inputting the preprocessed information into it to obtain emotion features based on the facial expression and voice signals; extracting words from the speech content, creating a dictionary from the frequent words obtained through the Apriori algorithm, and obtaining text-based emotion features through the dictionary; and finally cascading the facial-expression and voice emotion features with the text-based emotion features, inputting the resulting feature vectors into a support vector machine (SVM), and training an SVM classifier to obtain an SVM model. The resulting SVM model can identify driver emotions with high robustness.
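The dictionary-building, cascading, and SVM steps of the abstract can be sketched with synthetic data as follows. The facial and acoustic features are random stand-ins, and the frequent-word step is reduced to simple support counting, since Apriori applied to single words degenerates to counting in how many utterances each word appears.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(42)

utterances = [
    "i am so angry at this traffic",
    "this traffic is terrible i hate it",
    "what a calm and pleasant drive",
    "such a pleasant calm morning drive",
] * 10
labels = np.array([1, 1, 0, 0] * 10)   # 1 = negative emotion, 0 = neutral

# 1. Dictionary of frequent words (minimum support = 0.25 of utterances).
min_support = 0.25
counts = {}
for u in utterances:
    for w in set(u.split()):
        counts[w] = counts.get(w, 0) + 1
dictionary = sorted(w for w, c in counts.items()
                    if c / len(utterances) >= min_support)

# 2. Text-based emotion features: bag-of-words over the dictionary.
text_feats = np.array([[u.split().count(w) for w in dictionary]
                       for u in utterances], dtype=float)

# 3. Stand-ins for the facial-image and acoustic emotional features.
face_feats = rng.rand(len(utterances), 16)
voice_feats = rng.rand(len(utterances), 8)

# 4. Cascade (concatenate) all features and train the SVM classifier.
X = np.hstack([face_feats, voice_feats, text_feats])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))            # training accuracy on this toy data
```

The example only checks that the cascaded feature vector trains end to end; real use would evaluate the SVM on held-out driving sessions rather than on the training set.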

Description

Technical field

[0001] The invention relates to a method for real-time recognition of a driver's emotion, and in particular to a real-time driver emotion recognition method that integrates facial expressions and voice.

Background technique

[0002] In recent years, with the rapid increase in the number of private cars, the number of annual traffic accidents has also risen sharply, causing huge losses of life and property. Traffic accidents have many causes, falling mainly into two categories: active factors and passive factors. Passive factors mainly refer to abnormalities in the car's own parts and some uncontrollable external factors; active factors refer to driver fatigue, abnormal behavior, and emotional abnormalities. At present, detection equipment for abnormalities of the car itself already exists, and many scholars have studied driver fatigue and abnormal driver behavior, with great progress made. None reported…


Application Information

IPC (8): G06K9/00, G06K9/62, G10L25/63, G10L15/02, G10L15/26, G06F17/27
CPC: G10L15/02, G10L15/26, G10L25/63, G06F40/205, G06F40/242, G06V40/172, G06V40/168, G06V40/174, G06F18/2136, G06F18/2411
Inventors: 毛启容 (Mao Qirong), 刘鹏 (Liu Peng), 刘峰 (Liu Feng), 陈龙 (Chen Long), 詹永照 (Zhan Yongzhao)
Owner: JIANGSU UNIV