
70 results about "Microexpression" patented technology

A microexpression is the innate result of a voluntary and an involuntary emotional response occurring simultaneously and conflicting with one another. This occurs when the amygdala (the emotion center of the brain) responds appropriately to a stimulus and the individual wishes to conceal the resulting emotion: the individual very briefly displays their true emotion, followed by a false emotional reaction. Human emotions are unconscious bio-psycho-social reactions originating in the amygdala; they typically last 0.5–4.0 seconds, whereas a microexpression typically lasts less than half a second. Unlike regular facial expressions, microexpression reactions are very difficult or virtually impossible to hide. Microexpressions cannot be controlled, as they happen in a fraction of a second, but it is possible to capture someone's expressions with a high-speed camera and replay them at much slower speeds. Microexpressions express the seven universal emotions: disgust, anger, fear, sadness, happiness, contempt, and surprise. In the 1990s, Paul Ekman expanded his list of emotions to include a range of positive and negative emotions not all of which are encoded in facial muscles: amusement, embarrassment, anxiety, guilt, pride, relief, contentment, pleasure, and shame.
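The slow-motion replay mentioned above is just arithmetic on frame rates; a minimal sketch (the 240 fps and 30 fps figures are illustrative, not from the text):

```python
def playback_duration(event_s: float, capture_fps: float, playback_fps: float) -> float:
    """How long a captured event stays on screen when frames recorded at
    capture_fps are replayed at playback_fps (slow motion when playback_fps
    is lower than capture_fps)."""
    frames = event_s * capture_fps   # frames that recorded the event
    return frames / playback_fps     # seconds those frames take to replay

# A 0.4 s microexpression captured at 240 fps and replayed at 30 fps
# stays visible for 3.2 seconds -- long enough to observe.
print(playback_duration(0.4, 240, 30))  # 3.2
```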

Microexpression-based credit authorization method and device, terminal and readable storage medium

The invention provides a microexpression-based credit authorization method. The method includes the steps of: acquiring a credit microexpression sample set, and constructing a microexpression fraud identification model according to the credit microexpression sample set; when receiving a credit authorization instruction, acquiring an original video stream of the applicant's credit Q&A (questions and answers), wherein the original video stream includes the applicant's microexpressions during the credit Q&A process; inputting the original video stream into the microexpression fraud identification model to perform microexpression identification and obtain a microexpression identification result; and generating corresponding credit decision-making suggestion information according to the microexpression identification result. The invention also provides a microexpression-based credit authorization device, equipment and a readable storage medium. The method uses the microexpression fraud identification model to analyze a credit applicant's microexpressions, determine the applicant's real feelings and thereby whether the applicant is lying, so as to detect fraud. This reduces the workload of manual authorization and is conducive to improving the efficiency and accuracy of credit authorization.
Owner:ONE CONNECT SMART TECH CO LTD SHENZHEN

Multi-view microexpression recognition method and device, storage medium and computer device

Pending · CN109165608A · Realize multi-pose micro-expression recognition · Fast multi-pose micro-expression recognition · Acquiring/recognising facial features · Computer device · Microexpression
The invention provides a multi-view micro-expression recognition method and device, a storage medium and a computer device. The method comprises the following steps: obtaining facial expression data of a target expression of a user, wherein the facial expression data includes facial expression images of the target expression captured at multiple angles; inputting the expression data into a plurality of preset microexpression recognition models to obtain a set of expression classification probabilities from each microexpression recognition model, wherein each set comprises the matching probabilities between the expression data and different expression classifications; calculating the average value of the matching probabilities of the same expression classification across the probability sets; and determining the expression classification corresponding to the target expression according to the averaged matching probabilities. This method can quickly and accurately realize multi-angle, multi-pose micro-expression recognition.
Owner:ONE CONNECT SMART TECH CO LTD SHENZHEN
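The averaging step described above can be sketched as follows; the three view-specific probability sets and the expression labels are made up for illustration:

```python
def fuse_model_outputs(per_model_probs):
    """Average the matching probability of each expression class across
    several single-view recognition models, then pick the best class.

    per_model_probs: list of dicts mapping expression label -> probability,
    one dict per (hypothetical) view-specific model.
    """
    labels = per_model_probs[0].keys()
    averaged = {lbl: sum(p[lbl] for p in per_model_probs) / len(per_model_probs)
                for lbl in labels}
    best = max(averaged, key=averaged.get)
    return best, averaged

# Three angle-specific models scoring the same target expression:
outputs = [
    {"happy": 0.7, "neutral": 0.2, "surprise": 0.1},
    {"happy": 0.6, "neutral": 0.3, "surprise": 0.1},
    {"happy": 0.8, "neutral": 0.1, "surprise": 0.1},
]
label, scores = fuse_model_outputs(outputs)
print(label)  # "happy" has the highest average probability (0.7)
```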

Real-time video emotion analysis method and system based on deep learning

The invention discloses a real-time video emotion analysis method and system based on deep learning. The analysis method comprises the following steps: S1, obtaining a training data set; S2, recognizing microexpressions in the training data set through an algorithm based on a deep neural network, performing screening, and outputting predicted values for 8 kinds of expressions, namely gentle, happy, amazed, sad, angry, disgusted, fearful and despising expressions; S3, predicting the heart rate of the filmed subject through a heart rate algorithm to obtain corresponding heart rate values; and S4, comparing the heart rate values obtained in step S3 with the expression predictions obtained in step S2, and outputting the expressions consistent with those heart rate values. The method applies human face recognition in machine vision and an image classification algorithm to the detection of microexpressions and heart rate, realizes recognition of microexpressions through a deep learning algorithm, and can be applied to the clinical, juridical and security fields.
Owner:NANJING YUNSI CHUANGZHI INFORMATION TECH CO LTD
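One plausible reading of the comparison in step S4 is a consistency filter that keeps only the expression predictions compatible with the measured heart rate; the bpm ranges below are illustrative guesses, not values from the patent:

```python
# Map each of the 8 expression classes to a heart-rate range (bpm) one
# might expect for that emotion; these ranges are illustrative assumptions.
EXPECTED_BPM = {
    "gentle": (55, 75), "happy": (65, 95), "amazed": (75, 110),
    "sad": (55, 80), "angry": (80, 120), "disgusted": (65, 95),
    "fear": (85, 130), "despised": (60, 90),
}

def consistent_expressions(predicted, bpm):
    """Keep only the expression predictions whose expected heart-rate
    range contains the measured heart rate."""
    return [e for e in predicted
            if EXPECTED_BPM[e][0] <= bpm <= EXPECTED_BPM[e][1]]

# At 90 bpm, "sad" is ruled out while "happy" and "fear" remain plausible:
print(consistent_expressions(["happy", "fear", "sad"], 90))
```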

Microexpression recognition method based on the principal direction of optical flow

The invention provides a micro-expression recognition method based on the principal direction of optical flow, which comprises the following steps: interpolating a micro-expression sequence formed by framing a micro-expression video; locating the eyes and nose in each frame, and taking the midpoint between the eyes as the origin of the coordinate axes; calculating the distances dem and dmn from the origin of each frame to the center of each eye and to the nose; constructing a normalized mesh according to dem and dmn; calculating the optical flow direction of each pixel region in all adjacent-frame grids and clustering to obtain the principal direction of the optical flow; forming a rectangular characteristic region along the principal direction, assigning weight coefficients to the pixel regions in the principal direction to obtain the optical flow field optimized over adjacent-frame grids, and smoothing the curves formed by connecting the plotted data points to obtain the global optical flow field. A trained classifier takes each optical flow characteristic region as input and compares the sum of similarities between the traversed microexpression sequence and the trained microexpression sequences with a threshold S to obtain a classification result.
Owner:UNIV OF SHANGHAI FOR SCI & TECH
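The direction-clustering step can be approximated by quantizing flow vectors into angular bins and taking the most populated bin; this is a simplified stand-in for the patent's clustering, with a hypothetical 8-bin layout:

```python
import math
from collections import Counter

def principal_flow_direction(flow_vectors, n_bins=8):
    """Quantize per-pixel optical-flow vectors (dx, dy) into n_bins
    direction bins and return the centre angle (radians) of the most
    populated bin -- a crude stand-in for the clustering step."""
    bin_width = 2 * math.pi / n_bins
    bins = Counter()
    for dx, dy in flow_vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        bins[int(angle // bin_width) % n_bins] += 1
    dominant = bins.most_common(1)[0][0]
    return (dominant + 0.5) * bin_width  # centre of the winning bin

# Mostly rightward motion with a little noise:
flow = [(1.0, 0.05), (0.9, -0.1), (1.1, 0.0), (0.0, 1.0)]
print(principal_flow_direction(flow))  # centre of the dominant rightward bin
```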

Double-recording method and device for electronic contract signing, computer equipment and storage medium

The invention discloses a double-recording method and device for electronic contract signing, computer equipment and a storage medium. The method comprises the following steps: receiving an electronic contract signing request sent by a client; performing identity verification on the contract signer; if the identity of the contract signer is legal, sending a contract signing execution instruction to the client; receiving audio data and video data sent by the client, performing voiceprint emotion recognition on the contract signer according to the audio data, and performing micro-expression emotion recognition on the contract signer according to the video data to obtain the voiceprint emotion score and the micro-expression emotion score of the contract signer for each preset emotional state; performing normalization calculation on the voiceprint emotion score and the micro-expression emotion score in each emotional state by using a preset normalization processing model to obtain a cooperation tendency score; and sending the cooperation tendency score to the client. According to this technical scheme, efficient contract signing is guaranteed, and the intelligence level of double recording is effectively improved.
Owner:PING AN TECH (SHENZHEN) CO LTD
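A minimal sketch of the normalization-and-fusion step, assuming min-max normalization, a 50/50 modality weighting, and a hypothetical set of "cooperative" emotional states (none of which are specified in the abstract):

```python
def cooperation_tendency(voiceprint_scores, microexpr_scores, weight=0.5):
    """Min-max normalize each modality's per-emotion scores to [0, 1] and
    blend them into a single cooperation-tendency score. The 50/50 weight
    and the 'cooperative' emotion set are illustrative assumptions."""
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    v, m = normalize(voiceprint_scores), normalize(microexpr_scores)
    fused = {k: weight * v[k] + (1 - weight) * m[k] for k in v}
    positive = {"calm", "pleased"}  # hypothetical cooperative states
    return sum(fused[k] for k in positive) / len(positive)

voice = {"calm": 0.8, "pleased": 0.6, "anxious": 0.2}
micro = {"calm": 0.7, "pleased": 0.5, "anxious": 0.1}
print(round(cooperation_tendency(voice, micro), 3))  # 0.833
```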

Lie detection method based on micro-expressions in an interview

Pending · CN110889332A · High technical precision · Good cheating prediction · Office automation · Neural architectures · Microexpression · Algorithm
The invention relates to a lie detection method based on micro-expressions in an interview, which comprises the following steps: firstly, training a model on five expressions, namely eyebrow wrinkling, eyebrow lifting, mouth closing, pouting and head tilting, and labeling each type of expression data; secondly, inputting an image of the facial micro-expression into a pre-trained SSD network with VGG16 as its backbone, passing the image through the convolutional neural network to extract features and generate feature maps; performing a convolution operation on each feature map to evaluate default bounding boxes, predicting an offset and a classification probability for each bounding box; combining the bounding boxes obtained from the different feature maps and executing non-maximum suppression to filter out overlapping or incorrect boxes, generating the final bounding box set; and finally, classifying the detection results with a classifier. The method uses high-level and low-level visual features at the same time; compared with human beings, it is markedly better at predicting cheating, and compared with naked-eye judgment it is faster and technically more accurate.
Owner:NANJING ARTIFICIAL INTELLIGENCE INNOVATION RES INST CHINESE ACAD OF SCI +1
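The non-maximum suppression step named above is a standard algorithm; a self-contained sketch with an illustrative 0.5 overlap threshold:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop
    every remaining box overlapping it by more than `thresh`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate of box 0 is suppressed
```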

Listening evaluation method based on listener micro-expressions, device, computer device and storage medium

The invention discloses a listening evaluation method based on listener micro-expressions, a device, computer equipment and a storage medium. The listening evaluation method comprises the following steps: for every listener at the lecture, because the microexpressions extracted from the collected facial images of the listener reflect the listener's true inner state, the preset emotional state corresponding to each micro-expression is taken as the listener's true emotional state; next, the first evaluation score corresponding to each emotional state of the listener is determined, and the first overall rating score of the listener is calculated; finally, the activity evaluation score of the listening activity is determined according to the first overall rating scores of all the listeners. This scoring method better reflects the audience's true evaluation of the listening activity and is not affected by the audience's subjectivity or other factors, giving a better understanding of the effect of the listening activity and undoubtedly improving the accuracy of the statistics on the audience's evaluation.
Owner:PING AN TECH (SHENZHEN) CO LTD
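The two-level averaging described above can be sketched as follows; the per-emotion evaluation scores are hypothetical placeholders for the patent's preset mapping:

```python
# Hypothetical per-emotion evaluation scores; the real mapping would be a
# preset in the patented system.
EMOTION_SCORE = {"happy": 5, "calm": 4, "bored": 2, "annoyed": 1}

def listener_score(emotions):
    """First overall rating of one listener: average of the evaluation
    scores of the emotional states observed in their microexpressions."""
    return sum(EMOTION_SCORE[e] for e in emotions) / len(emotions)

def activity_score(all_listeners):
    """Activity evaluation score: average over all listeners' ratings."""
    totals = [listener_score(e) for e in all_listeners]
    return sum(totals) / len(totals)

audience = [["happy", "calm"], ["bored", "calm"], ["happy", "happy"]]
print(activity_score(audience))  # (4.5 + 3.0 + 5.0) / 3 = 4.1666...
```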

Face microexpression recognition method based on video magnification and deep learning

Inactive · CN109034143A · Improve accuracy · Increase the range of facial expressions · Acquiring/recognising eyes · Neural architectures · Data set · Microexpression
The invention provides a method for recognizing facial micro-expressions based on video amplification and deep learning. The method comprises the following steps: using a video amplification technique based on interference cancellation to amplify the motion amplitude of the micro-expression video data; dividing the enlarged video data into video frame images, and extracting all image sequences belonging to micro-expressions according to the micro-expression tags in the data set to form a new data set; carrying out facial clipping preprocessing on the processed video, uniformly clipping all video image sequences into 110*110 gray-scale images; and feeding the preprocessed new data into the convolutional neural network model for training, so as to extract the micro-expression feature data and accomplish the micro-expression recognition task. The technical proposal enlarges the amplitude of the expression actions through the interference-eliminating video amplification of the complete data set, and simultaneously introduces a neural network model for training, thereby effectively improving the accuracy of micro-expression recognition on the basis of a full classification of the emotion labels.
Owner:YUNNAN UNIV
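The preprocessing into 110*110 gray-scale images can be illustrated with a plain-Python luma conversion and nearest-neighbour resize; the BT.601 weights are a common choice, not necessarily the one used in the patent:

```python
def to_gray(rgb_image):
    """ITU-R BT.601 luma conversion for an image given as rows of
    (R, G, B) tuples."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def resize_nearest(image, size=110):
    """Nearest-neighbour resize of a 2-D list to size x size -- a minimal
    stand-in for the uniform 110*110 clipping step in the abstract."""
    h, w = len(image), len(image[0])
    return [[image[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

frame = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
gray = resize_nearest(to_gray(frame), size=110)
print(len(gray), len(gray[0]))  # 110 110
```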

A microexpression recognition method based on sparse projection learning

The invention discloses a micro-expression recognition method based on sparse projection learning. The method comprises the steps of: step 1, collecting micro-expression samples, extracting the LBP features P, Q, R of the three orthogonal planes of the micro-expression, defining C, D, E as the feature optimization variables of the three orthogonal planes XY, XT, YT respectively, and constructing an optimization model; step 2, setting the initial and maximum values of the iteration counting variables t and n, and initializing the regularization parameters kappa and kappa_max and the scale parameter rho; step 3, initializing the expression (shown in the description), calculating C, and updating T1 and kappa; if the expression (shown in the description) converges or n > n_max, proceeding to step 4; step 4, initializing the expression (shown in the description), calculating D, and updating T2 and kappa; if the expression (shown in the description) converges or n > n_max, proceeding to step 5; step 5, initializing the expression (shown in the description), calculating E, and updating T3 and kappa; if the expression (shown in the description) converges or n > n_max, proceeding to step 6; step 6, making t = t + 1; if t <= t_max, returning to step 3, otherwise outputting C, D, E; step 7, optimizing the LBP features of the three orthogonal planes with the optimization variables C, D and E to obtain a new fusion feature Ftest, and predicting the emotion category of the test sample by applying the trained SVM classifier to the fusion feature Ftest.
Owner:JIANGSU UNIV

Examination behavior detection method based on improved Openpose model and facial micro-expressions

The invention discloses an examination behavior detection method based on an improved Openpose model and facial micro-expressions. The method comprises the steps of: arranging a camera in front of a desk, and detecting the examination behavior of students in real time; recognizing facial information and upper-body skeleton information through an artificial intelligence model, taking whether key points can be recognized and the distances between key points as the main judgment conditions and changes of micro-expressions as the auxiliary judgment conditions; and if a certain student does not meet the conditions for a period of time, judging that the student's examination behavior is abnormal. Besides, from the video stream of one class, the stages at which students' abnormal behaviors are likely are found and analyzed, supporting innovation and reform of teaching. Interference factors are reduced by means of machine vision recognition, the equipment is simplified, and the network model is further optimized by means of a residual network, weight trimming and the like. Compared with the traditional mode, self-service examination behavior detection and feedback are achieved; the detection efficiency is high, the accuracy can reach 95%, and the method can be applied to general examination monitoring.
Owner:NANTONG UNIVERSITY
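The main judgment condition (key points recognizable, distances within bounds, sustained over time) can be sketched as follows; the key-point names, distance threshold and frame count are all illustrative assumptions, not values from the patent:

```python
def is_abnormal(frames, required=("nose", "left_wrist", "right_wrist"),
                max_wrist_gap=200.0, min_bad_frames=30):
    """Flag behaviour as abnormal when, for at least min_bad_frames
    consecutive frames, a required key point is missing or the wrists
    are too far apart. frames: list of dicts mapping key-point name
    to (x, y) coordinates or None when undetected."""
    bad = 0
    for kp in frames:
        missing = any(kp.get(name) is None for name in required)
        spread = False
        if not missing:
            (lx, ly), (rx, ry) = kp["left_wrist"], kp["right_wrist"]
            spread = ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5 > max_wrist_gap
        bad = bad + 1 if (missing or spread) else 0
        if bad >= min_bad_frames:
            return True
    return False

calm = [{"nose": (0, 0), "left_wrist": (-30, 50), "right_wrist": (30, 50)}] * 100
print(is_abnormal(calm))  # False: key points present and close together
```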

Multi-modal emotion recognition method and system fusing voice and micro-expressions

The invention discloses a multi-modal emotion recognition method and system fusing voice and micro-expressions, and relates to the technical field of situation recognition. The method comprises the steps of: establishing a voice emotion database and an emotion association function; acquiring voice information and face image information of the same target object at the same time, and extracting emotion representation vocabulary and micro-expression data; obtaining the emotion association function and the emotion fluctuation value corresponding to the micro-expression according to the matching result; establishing an emotion recognition network and decomposing it step by step to obtain a plurality of emotion recognition lines; obtaining the corresponding emotion fluctuation values and establishing an emotion recognition curve; and, after the emotion fluctuation degree is calculated, selecting the qualified emotion recognition lines according to a preset fluctuation degree. The invention enhances the authenticity of the target object's real-time emotion as represented by the voice and face image information, reduces the probability that the same expression reflects different situations, improves the accuracy of the emotion recognition result, and reduces its error.
Owner:JIANGXI UNIV OF SCI & TECH
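One reasonable reading of the fluctuation-degree selection is variance thresholding over each emotion recognition line; the formula and the threshold below are assumptions, not the patent's:

```python
def fluctuation_degree(curve):
    """Variance of an emotion-recognition curve, used here as its
    fluctuation degree (one plausible reading of the abstract)."""
    mean = sum(curve) / len(curve)
    return sum((v - mean) ** 2 for v in curve) / len(curve)

def select_lines(lines, max_fluctuation=0.05):
    """Keep only the emotion-recognition lines whose fluctuation degree
    stays within the preset limit."""
    return {name: c for name, c in lines.items()
            if fluctuation_degree(c) <= max_fluctuation}

lines = {
    "steady":  [0.50, 0.52, 0.49, 0.51],
    "erratic": [0.10, 0.90, 0.20, 0.80],
}
print(list(select_lines(lines)))  # only "steady" survives the threshold
```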

Classroom behavior detection method based on improved Openpose model and facial micro-expressions

The invention discloses a classroom behavior detection method based on an improved Openpose model and facial micro-expressions. The method comprises the steps of: arranging a camera in front of a desk, and detecting the classroom behavior of students in real time; recognizing facial information and upper-body skeleton information through an artificial intelligence model, taking whether key points can be recognized and the distances between key points as the main judgment conditions and changes of micro-expressions as the auxiliary judgment conditions; and if a certain student does not meet the conditions for a period of time, judging that the student's classroom behavior is abnormal. Besides, from the video stream of one class, the stages at which students' abnormal behaviors are likely are found and analyzed, supporting innovation and reform of teaching. Interference factors are reduced through machine vision recognition and the equipment is simplified; the invention further provides a corresponding data analysis and processing system, and the network model is further optimized through methods such as a residual network and weight trimming. Self-service classroom behavior detection and feedback are realized, the detection efficiency is high, and the accuracy can reach 95%.
Owner:NANTONG UNIVERSITY

Depression tendency recognition method based on multi-modal characteristics of limbs and micro-expressions

The invention discloses a depression tendency recognition method based on multi-modal characteristics of limbs and micro-expressions. The method comprises the following steps: S1, detecting human motion with a non-contact Kinect measurement sensor, and generating a motion text description; S2, capturing face image frames with the non-contact Kinect sensor, performing Gabor wavelet and linear discriminant analysis on the face region of interest for feature extraction and dimensionality reduction, and then realizing facial expression classification with a three-layer neural network to generate an expression text description; S3, fusing the extracted text descriptions through a fusion neural network with a self-organizing mapping layer to generate information with emotion features; and S4, using a Softmax classifier to classify the feature information generated in S3 into emotion categories, wherein the classification result is used to evaluate whether the patient has a depression tendency. Static and dynamic body movements are both considered, achieving higher efficiency; body movement is helpful for identifying the emotions of a depression patient.
Owner:SOUTH CHINA UNIV OF TECH

Virtual reality sleep promoting method and device

The embodiment of the invention provides a virtual reality sleep promoting method and device. The method comprises the steps of: obtaining a preset hypnotic voice, outputting the hypnotic voice through virtual reality, and obtaining a micro-expression of the user after the hypnotic voice is output through a camera device; analyzing the micro-expression of the user, and judging whether it is a positive-emotion or a negative-emotion micro-expression according to the analysis result; when the micro-expression of the user is a negative-emotion micro-expression, obtaining a voice similar to the hypnotic voice, outputting the similar voice through virtual reality, and obtaining the user's feedback micro-expression through the camera device; and comparing the negative-emotion micro-expression with the feedback micro-expression to obtain the user's micro-expression difference, and adjusting the content of the hypnotic voice accordingly. By adopting this method, the hypnotic voice can be adjusted in a targeted way for different users; and as the user falls asleep, the hypnotic voice is adjusted through the change of the user's micro-expressions, so the user's falling-asleep experience is not affected.
Owner:ZHEJIANG BUSINESS TECH INST
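The feedback loop (play a similar voice, compare the micro-expressions before and after, adjust the content) reduces to a simple decision rule; the score convention and action names below are invented for illustration:

```python
def adjust_hypnosis(comfort_before, comfort_after, threshold=0.0):
    """Decide how to adapt the hypnotic audio from the change in a
    (hypothetical) comfort score derived from the user's micro-expressions:
    higher means calmer, so an increase suggests the similar voice helped."""
    delta = comfort_after - comfort_before
    if delta > threshold:
        return "keep_similar_voice"  # the change helped; continue with it
    return "try_next_variant"        # no improvement; pick another voice

print(adjust_hypnosis(comfort_before=0.3, comfort_after=0.6))  # keep_similar_voice
```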