[0063] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the following describes the present invention in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative work, shall fall within the protection scope of the present invention.
[0064] The present invention provides an emotion measurement system based on eye movement technology, as shown in Figure 1, which includes: a data acquisition module, an A/D converter, an FPGA controller, a DSP controller, a data preprocessing module, a data analysis module, and an emotional state confirmation module.
[0065] The FPGA controller adopts an XC3S500E FPGA chip, and the DSP controller adopts a TMS320C6416 chip.
[0066] The data acquisition module is used to collect the subject's eye movement information and facial images under lighting conditions that satisfy facial expression image recognition, and to send them to the FPGA controller via the A/D converter;
[0067] The data acquisition module includes:
[0068] An eye tracker, which is arranged near the subject's eyes to collect the subject's eye movement information;
[0069] An eye movement information collection unit, which is connected to the eye tracker and is used to receive the subject's eye movement information collected by the eye tracker;
[0070] A camera, which is arranged near the subject's face to collect facial video of the subject;
[0071] A facial image acquisition unit, which is connected to the camera and is used to receive the subject's facial video collected by the camera and to obtain facial images of the subject from the facial video; a minimal acquisition sketch is given below.
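The following is a minimal sketch of how the facial image acquisition unit might extract still facial images from the camera's video stream. It assumes OpenCV (cv2) for video capture; the device index and the frame-sampling interval are illustrative assumptions rather than values specified by this embodiment.

import cv2

def acquire_facial_images(device_index=0, every_nth_frame=30):
    """Yield still facial images extracted from the camera's facial video."""
    capture = cv2.VideoCapture(device_index)  # hypothetical camera index
    frame_count = 0
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # stream ended or camera unavailable
                break
            if frame_count % every_nth_frame == 0:
                yield frame  # one facial image taken from the facial video
            frame_count += 1
    finally:
        capture.release()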
[0072] The data preprocessing module is used to preprocess the eye movement information and facial images received by the FPGA controller;
[0073] The data preprocessing module includes:
[0074] A facial expression feature vector extraction unit, which is used to extract facial expression feature vectors after reading the facial images;
[0075] The specific steps of the facial expression feature vector extraction unit are:
[0076] Use the top of the head as a reference point to estimate the approximate positions of the facial features, and evenly place marker points on the contour of each facial feature;
[0077] The face is divided into two symmetrical parts by a central axis fitted through the midpoint of the eyebrows, the midpoint between the two pupils, and the middle of the mouth. Under conditions of no zoom, no translation, and no rotation, the marker points symmetrical about the central axis are adjusted to the same horizontal line, and the facial expression shape model is established;
[0078] The facial expression shape model is divided into different regions according to the left eye/left eyebrow, the right eye/right eyebrow, and the mouth, and these regions are defined as feature candidate regions;
[0079] For each feature candidate region, the feature vector is extracted by the difference image method: a difference operation is performed between each image in the sequence obtained in the previous step and the neutral expression image in the database, and the facial expression feature vector is extracted from the image in the sequence whose average difference value within each feature candidate region is largest, as in the sketch below;
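A minimal sketch of the difference image step described above, assuming grayscale NumPy arrays and hypothetical feature-candidate-region coordinates; the neutral expression image is taken from the database mentioned in the text.

import numpy as np

def extract_expression_frame(sequence, neutral, regions):
    """sequence: list of grayscale frames; neutral: neutral expression image;
    regions: list of (top, bottom, left, right) feature candidate regions."""
    best_score, best_frame = -1.0, None
    for frame in sequence:
        # difference operation against the neutral expression image
        diff = np.abs(frame.astype(np.float64) - neutral.astype(np.float64))
        # average difference value over the feature candidate regions
        score = np.mean([diff[t:b, l:r].mean() for (t, b, l, r) in regions])
        if score > best_score:
            best_score, best_frame = score, frame
    return best_frame  # expression feature vectors are extracted from this frame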
[0080] An eye movement feature vector extraction unit, which is used to classify and process the collected eye movement information to extract eye movement feature vectors;
[0081] The specific steps of the eye movement feature vector extraction unit are:
[0082] Classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, number of fixations, pupil diameter, and blink count to obtain eight types of eye movement feature vectors;
[0083] Draw the eye movement information into an eye trajectory diagram according to the eye movement trajectory, and store it;
[0084] Generate an eye movement heat map from the fixation time and number of fixations obtained by the classification, and store it; a minimal heat-map sketch is given below.
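A minimal sketch of the heat-map step, assuming fixations are available as (x, y, duration) tuples; the screen size and bin counts are illustrative assumptions.

import numpy as np

def fixation_heatmap(fixations, screen=(1920, 1080), bins=(96, 54)):
    """fixations: iterable of (x, y, duration_ms) tuples."""
    xs = [f[0] for f in fixations]
    ys = [f[1] for f in fixations]
    durations = [f[2] for f in fixations]  # longer fixations weigh more
    heat, _, _ = np.histogram2d(
        xs, ys, bins=bins,
        range=[[0, screen[0]], [0, screen[1]]],
        weights=durations)
    return heat  # stored alongside the eye trajectory diagram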
[0085] The data analysis module is used to analyze and process the pre-processed eye movement information and facial images through the cluster-based SVM multi-classification method;
[0086] The working principle of the cluster-based SVM multi-classification method is:
[0087] Suppose the input samples are x_k ∈ R^n (k = 1, 2, …, l). The samples are mapped into a high-dimensional kernel space H by some nonlinear mapping Φ, giving Φ(x_1), Φ(x_2), …, Φ(x_l). The dot product of the input space in the high-dimensional kernel space can be expressed by a Mercer kernel as:
[0088] K(x_i, x_j) = Φ(x_i)·Φ(x_j);
[0089] The kernel function matrix K_{i,j} = K(x_i, x_j) is composed of all the samples, and the Euclidean distance in the high-dimensional space is expressed as:
[0090] d²(x_i, x_j) = ||Φ(x_i) − Φ(x_j)||² = Φ(x_i)·Φ(x_i) − 2Φ(x_i)·Φ(x_j) + Φ(x_j)·Φ(x_j);
[0091] In general, the explicit expression of the nonlinear mapping Φ is unknown, so the above two formulas can be rewritten as:
[0092] d²(x_i, x_j) = K(x_i, x_i) − 2K(x_i, x_j) + K(x_j, x_j);
[0093] The above formula is then used as the measurement function of cluster similarity. The clustering criterion is to minimize the following objective function:
[0094] J = Σ_{i=1}^{C} Σ_{x_k∈C_i} ||Φ(x_k) − m_i||², where m_i = (1/N_i) Σ_{x_j∈C_i} Φ(x_j) is the center of class C_i;
[0095] where C is the total number of clusters and N_i is the number of samples in class C_i. The modulus of the class center is:
[0096] ||m_i||² = (1/N_i²) Σ_{x_j∈C_i} Σ_{x_k∈C_i} K(x_j, x_k);
[0097] According to the clustering criterion and the judgment function of sample similarity, the clustering algorithm is established (a kernel-distance sketch is given below);
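A minimal sketch of the kernel-space distance that serves as the cluster similarity measure, following d²(x_i, x_j) = K(x_i, x_i) − 2K(x_i, x_j) + K(x_j, x_j). The RBF kernel and its gamma value are assumptions; this embodiment does not fix a particular Mercer kernel.

import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Mercer kernel K(a, b) = exp(-gamma * ||a - b||^2); gamma is illustrative
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_distance_sq(x_i, x_j, kernel=rbf_kernel):
    """Squared Euclidean distance between Φ(x_i) and Φ(x_j) in kernel space H."""
    return kernel(x_i, x_i) - 2.0 * kernel(x_i, x_j) + kernel(x_j, x_j)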
[0098] For an N-class (N > 2) classification problem, the algorithm for constructing the SVM multi-class classification tree based on kernel clustering is as follows:
[0099] (1) Suppose the number of clustering categories is C=2, and the overlap coefficient is δ∈[0,1];
[0100] (2) Take all l original samples (x_i, y_i), i = 1, 2, …, l, y_i ∈ [1, N], and divide them into two fuzzy subclasses S_1 and S_2 according to the kernel clustering algorithm (C = 2). For original category i, the sample set can be defined as:
[0101] N_i = {x_k | y_k = i}, i = 1, 2, …, N;
[0102] The probability that original category N_i is assigned to subclass C_j is:
[0103] P(i, j) = |N_i ∩ S_j| / |N_i|, j = 1, 2;
[0104] Then, for subclasses C_1 and C_2, the assignment scheme for original category i is:
[0105] If |P(i,1) − P(i,2)| ≥ δ, then: if P(i,1) > P(i,2), category i is assigned to class C_1; otherwise, category i is assigned to class C_2;
[0106] If |P(i,1) − P(i,2)| < δ, category i is assigned to both class C_1 and class C_2;
[0107] Repeat the above partitioning steps for each subclass until all subclasses contain only one original category;
[0108] According to the above pre-classification results, classification subtasks are defined: each classification subtask corresponds to a node in the decision tree, and each node is an SVM binary classifier, as in the construction sketch below.
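The following is a minimal sketch of the tree construction under stated assumptions: scikit-learn is available, and plain KMeans on the raw feature vectors stands in for the kernel clustering algorithm described above (a simplification, not this embodiment's exact procedure). The overlap coefficient delta and the RBF kernel are illustrative choices.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_svm_tree(X, y, delta=0.3):
    """Recursively pre-split categories by 2-way clustering; one SVM per node."""
    categories = np.unique(y)
    if len(categories) == 1:
        return {"leaf": categories[0]}              # one original category left
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    left, right = set(), set()
    for c in categories:
        p1 = float(np.mean(labels[y == c] == 0))    # P(c, 1)
        p2 = 1.0 - p1                               # P(c, 2)
        if abs(p1 - p2) >= delta:                   # clear-cut assignment
            (left if p1 > p2 else right).add(c)
        else:                                       # ambiguous: keep in both
            left.add(c)
            right.add(c)
    if len(left) == len(categories) or len(right) == len(categories):
        half = len(categories) // 2                 # degenerate split: halve
        left, right = set(categories[:half]), set(categories[half:])
    target = np.isin(y, list(left)).astype(int)     # this node's binary subtask
    node_svm = SVC(kernel="rbf").fit(X, target)
    mask_l, mask_r = np.isin(y, list(left)), np.isin(y, list(right))
    return {"svm": node_svm,
            "left": build_svm_tree(X[mask_l], y[mask_l], delta),
            "right": build_svm_tree(X[mask_r], y[mask_r], delta)}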
[0109] The emotional state confirmation module is used to fuse the analyzed and processed eye movement information and facial images, and to compare them with the emotional information in a pre-established emotional information database to identify the subject's emotional state and obtain the subject's final emotional state.
[0110] The present invention also provides the emotion measurement method based on eye movement technology shown in Figure 1, which includes the following steps:
[0111] S1. Collect the subject's eye movement information and facial images under lighting conditions that satisfy facial expression image recognition, and send them to the FPGA controller via the A/D converter;
[0112] S2. The FPGA controller sends the eye movement information and facial images to the DSP controller, which preprocesses the received eye movement information and facial images;
[0113] S3. Analyze and process the pre-processed eye movement information and facial images through the SVM multi-classification method based on clustering;
[0114] S4. Fuse the analyzed and processed eye movement information and facial images, then compare them with the emotional information in the pre-established emotional information database to identify the subject's emotional state and obtain the subject's final emotional state (an end-to-end sketch follows).
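A minimal end-to-end sketch of steps S1–S4, reducing the database comparison of S4 to a nearest-neighbour match of fused feature vectors; the helper names and the structure of the emotional information database are hypothetical.

import numpy as np

def measure_emotion(eye_features, face_features, emotion_db):
    """emotion_db: dict mapping an emotion label to a reference fused vector."""
    fused = np.concatenate([eye_features, face_features])  # S4: fusion
    best_label, best_dist = None, np.inf
    for label, reference in emotion_db.items():            # S4: comparison
        dist = np.linalg.norm(fused - reference)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label  # the subject's final emotional state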
[0115] To sum up, the present invention provides an emotion measurement system and method based on eye movement technology. The data acquisition module collects the subject's eye movement information and facial images; the data preprocessing module preprocesses the eye movement information and facial images; the data analysis module analyzes and processes the preprocessed eye movement information and facial images through the cluster-based SVM multi-classification method; and the emotional state confirmation module fuses the analyzed and processed eye movement information and facial images and compares them with a pre-established emotional information database to obtain the subject's final emotional state. The invention thereby realizes comprehensive, systematic, and quantitative recognition of the subject's emotional state, achieves accurate and efficient emotion recognition, helps interpret a person's state of mind, and has broad application prospects.
[0116] Finally, it should be noted that the above descriptions are only preferred embodiments of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.