146 results about "Facial motion" patented technology

Method for tracking gestures and actions of human face

The invention discloses a method for tracking poses and actions of a human face, comprising the following steps. In step S1, frames are extracted one by one from a video stream; face detection is performed on the first frame of the input video, or whenever tracking fails, to obtain a face bounding box. In step S2, during normal tracking, after convergent iteration on the previous frame, the more salient feature points of the texture of the previous frame's face region are matched against corresponding feature points found in the current frame, yielding feature-point matching results. In step S3, the shape of an active appearance model (AAM) is initialized from the face bounding box or from the feature-point matching results, giving an initial value for the face shape in the current frame. In step S4, the active appearance model is fitted with an inverse compositional algorithm to obtain the three-dimensional face pose and facial action parameters. With this method, online tracking can be completed fully automatically and in real time under ordinary illumination.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
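The detect-or-track control flow of steps S1-S4 can be sketched as below. `detect_face`, `match_feature_points`, and `fit_aam` are hypothetical stand-ins for the face detector, texture feature matcher, and inverse-compositional AAM fit; they are not the patent's actual implementations.

```python
def detect_face(frame):
    # Stand-in: return a face bounding box (x, y, w, h), or None if no face.
    return (10, 10, 50, 50)

def match_feature_points(prev_frame, frame):
    # Stand-in: match salient texture points of the previous frame's face
    # region against the current frame; returns matched (x, y) pairs.
    return [(12, 20), (30, 22)]

def fit_aam(frame, init_shape):
    # Stand-in: inverse-compositional AAM fit returning 3D pose and
    # facial action parameters.
    return {"pose_3d": (0.0, 0.0, 0.0), "action_params": [0.0] * 6}

def track(frames):
    results, prev, tracking = [], None, False
    for frame in frames:
        if not tracking:                 # S1: detect on first frame or after failure
            box = detect_face(frame)
            if box is None:
                results.append(None)
                continue
            init = box                   # S3: initialise AAM shape from the box
            tracking = True
        else:                            # S2: feature-point matching during tracking
            init = match_feature_points(prev, frame)  # S3: initialise from matches
        results.append(fit_aam(frame, init))          # S4: fit AAM per frame
        prev = frame
    return results
```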

Facial expression recognition method based on facial motion unit combination features

The invention discloses a facial expression recognition method based on facial action unit combination features. It addresses the low single-action-unit recognition rate of existing facial expression recognition methods based on facial action units. The technical scheme comprises: building a large-scale facial expression database; clustering the training samples of each facial expression category with the affinity propagation (AP) clustering algorithm; identifying the action unit (AU) combinations in each sub-category; determining the number of sub-categories under the same expression from the main AU combinations; generating the training-sample class count by combining the sub-categories of all expressions; and carrying out classification training with the support vector machine (SVM) method. The method improves the recognition rate of single facial action units: the average recognition rate of a single AU rises from 87.5% in the prior art to 90.1%, an improvement of 2.6 percentage points.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
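The label-construction step above (one SVM class per expression sub-cluster) can be sketched as follows. `affinity_propagation` is a hypothetical stand-in for the AP clustering call, and the dominant-AU-combination analysis inside each cluster is only noted in a comment.

```python
def affinity_propagation(samples):
    # Stand-in for AP clustering: split each sample list into two clusters
    # (real AP chooses the cluster count from pairwise similarities).
    if len(samples) <= 1:
        return [samples]
    mid = len(samples) // 2
    return [samples[:mid], samples[mid:]]

def build_subcategory_labels(expressions):
    """expressions: {expression_name: [training samples]}."""
    labeled, label = [], 0
    for expr, samples in expressions.items():
        for cluster in affinity_propagation(samples):
            # In the patent, each cluster's main AU combination defines the
            # sub-category; here we only assign the new class index.
            for s in cluster:
                labeled.append((s, label, expr))
            label += 1
    return labeled, label  # labeled samples, total class count for the SVM
```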

Face recognition method and system

CN110738102A (Active) · Advantages: overcomes the defect of poor facial expression recognition effect; accurate identification · Classifications: acquiring/recognising facial features; feature extraction; medicine
The invention discloses a facial expression recognition method and system. The method comprises the steps of: obtaining a to-be-recognized face image containing a plurality of facial action units, where dependence relationships exist both between the facial action units and expressions and among the facial action units themselves; using the backbone network of a neural network to obtain a first feature representing the global feature of the face image; extracting a second feature representing local features of the facial action units on the basis of the first feature, according to a preset relationship between facial action units and expressions; fusing the first and second features to obtain a third feature according to the dependency relationships between the facial action units; and splicing the third feature with the first feature to obtain a fourth feature, from which the facial expression is predicted. By introducing the expression-action-unit and action-unit relationships and combining expression and action-unit knowledge interaction to assist feature extraction, the embodiments achieve more accurate recognition of facial expressions.
Owner:暗物智能科技(广州)有限公司
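The four-stage feature flow (global, AU-local, dependency-fused, concatenated) can be sketched with plain lists standing in for network tensors; `backbone`, `extract_au_features`, and `fuse_with_dependencies` are illustrative stand-ins, not the patent's networks.

```python
def backbone(image):
    # First feature: global representation from the backbone network.
    return [float(v) for v in image]

def extract_au_features(f1):
    # Second feature: per-action-unit local features derived from the
    # global feature via the preset AU-expression relationship.
    return [v * 0.5 for v in f1]

def fuse_with_dependencies(f1, f2):
    # Third feature: fusion guided by AU-AU dependency relationships
    # (elementwise sum stands in for the learned fusion).
    return [a + b for a, b in zip(f1, f2)]

def expression_feature(image):
    f1 = backbone(image)
    f2 = extract_au_features(f1)
    f3 = fuse_with_dependencies(f1, f2)
    f4 = f3 + f1  # fourth feature: concatenation of third and first
    return f4      # a classifier head would map f4 to an expression label
```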

Multi-domain fusion micro-expression detection method based on motion unit

The invention relates to a multi-domain fusion micro-expression detection method based on motion units, comprising the steps of: (1) preprocessing a micro-expression video: obtaining a video frame sequence, performing face detection and positioning, and performing face alignment; (2) performing motion unit detection on the video frame sequence to obtain its motion unit information; (3) according to the motion unit information, finding, via a semi-decision algorithm, the facial motion unit sub-block containing the maximum micro-expression motion unit information amount (ME) as the micro-expression detection area, and meanwhile extracting several peak frames of the ME as reference climax frames for micro-expression detection by setting a dynamic threshold; and (4) realizing micro-expression detection through a multi-domain fusion method. The method reduces the influence of redundant information on micro-expression detection, reduces the amount of computation, and gives micro-expression detection stronger overall discrimination capability, with high computation speed and high detection precision.
Owner:SHANDONG UNIV
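Step (3) above can be sketched as below: pick the sub-block with the largest total ME, then keep frames whose ME exceeds a dynamic threshold. The threshold formula here (mean plus a fraction of the range above the mean) is an illustrative heuristic, not the patent's exact rule.

```python
def select_detection_area(block_me):
    # block_me: {sub_block_name: [per-frame ME values]}.
    # The sub-block with the largest total ME becomes the detection area.
    return max(block_me, key=lambda b: sum(block_me[b]))

def peak_frames(me_series, k=0.5):
    # Dynamic threshold: mean + k * (max - mean); frames at or above it
    # are kept as reference climax frames.
    mean = sum(me_series) / len(me_series)
    thr = mean + k * (max(me_series) - mean)
    return [i for i, v in enumerate(me_series) if v >= thr]
```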

Artificial-intelligence-based eyelid movement function evaluation system

The invention discloses an artificial-intelligence-based abnormal eyelid movement evaluation system. The system comprises an examined-object acquisition module, an eye and specific-part positioning module, a TSN model, a probability output module, and an evaluation module. The acquisition module obtains, from an input facial movement video, a facial video containing only the examined object; the positioning module locates the eyes and specific parts in that video and obtains an eye movement video and a specific-part linkage video of the examined object only; the TSN model processes these videos and outputs movement signals of the eyes and specific parts; the probability output module outputs, for each frame, the probability that the computer judges the eyelid movement to be abnormal; and the evaluation module derives the abnormality level from these probability signals and an abnormal-eyelid-movement probability judging mechanism. The system thereby achieves broad acceptability, convenience, accuracy, objectivity, and repeatability, with high clinical applicability.
Owner:THE PEOPLES HOSPITAL OF GUANGXI ZHUANG AUTONOMOUS REGION
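The evaluation module's probability-to-level mapping can be sketched as below. The thresholds and the mean-probability score are illustrative assumptions; the patent does not disclose its actual grading mechanism here.

```python
def grade_eyelid_movement(frame_probs, thresholds=(0.3, 0.6, 0.85)):
    # frame_probs: per-frame probabilities of abnormal eyelid movement
    # output by the TSN model / probability module.
    score = sum(frame_probs) / len(frame_probs)   # mean per-frame probability
    level = sum(score >= t for t in thresholds)   # 0 = normal ... 3 = severe
    return level, score
```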

Musculus facialis three-dimensional motion measuring device on basis of motion capture

The invention relates to a facial muscle (musculus facialis) three-dimensional motion measuring device based on motion capture. The device comprises a fixed head support and a motion capture device; the head support is fixed to the skull of the measured user through several bone fulcrums. The motion capture device comprises motion capture cameras arranged in front of the user's face to capture the relative positions between the facial motion observation points and the fixed head support, from which the device calculates facial muscle static parameters and the dynamic parameters of the observation points. The fixed head support serves as an 'absolute' reference frame for reflecting the static state of the facial muscles and the dynamic state of the observation points; because it is fixed through the bone fulcrums of the skull, it moves stably with the skull without being affected by facial expressions. The static parameters of the facial muscles and the dynamic parameters of the facial motion observation points can thus be measured quickly, conveniently, and accurately, realizing objective evaluation of facial muscle motor function.
Owner:PEKING UNION MEDICAL COLLEGE HOSPITAL CHINESE ACAD OF MEDICAL SCI
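The head-support-as-reference idea can be sketched as below: express each observation point in the support's coordinate frame, then derive a simple dynamic parameter (peak displacement from rest). The specific parameter choice is an illustrative assumption.

```python
def to_head_frame(point, support_origin):
    # Express a captured 3D point relative to the fixed head support,
    # removing whole-head motion.
    return tuple(p - o for p, o in zip(point, support_origin))

def displacement_amplitude(track, support_origin):
    # track: the same observation point captured over successive frames.
    rel = [to_head_frame(p, support_origin) for p in track]
    rest = rel[0]  # first frame taken as the static (rest) position
    dists = [sum((a - b) ** 2 for a, b in zip(r, rest)) ** 0.5 for r in rel]
    return max(dists)  # peak excursion of the point during the movement
```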

Facial action unit recognition method and device based on joint learning and optical flow estimation

The invention discloses a facial action unit recognition method and device based on joint learning and optical flow estimation. The method comprises the following steps: extracting the original image pairs needed for model training from video data to form a training set; preprocessing the original image pairs to obtain augmented image pairs; constructing convolutional neural network module I to extract multi-scale regional features of the augmented image pairs; constructing module II to extract their static global features; constructing module III to extract their optical flow features; and finally constructing module IV, which fuses the static global features with the optical flow features and performs facial action unit recognition. An end-to-end deep learning framework jointly learns action unit recognition and optical flow estimation, exploiting the relevance between the two tasks to promote action unit recognition; the motion of facial muscles in two-dimensional images can be recognized effectively, and a unified facial action unit recognition system is constructed.
Owner:CHINA UNIV OF MINING & TECH
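The fusion-and-classify role of module IV can be sketched as below, with a weighted elementwise combination standing in for the convolutional fusion and a per-AU threshold standing in for the classifier head; both are illustrative assumptions.

```python
def fuse(static_feat, flow_feat, alpha=0.5):
    # Stand-in for module IV's fusion of static global features (module II)
    # with optical flow features (module III).
    return [alpha * s + (1 - alpha) * f for s, f in zip(static_feat, flow_feat)]

def recognize_aus(fused, threshold=0.5):
    # One binary activation decision per facial action unit.
    return [v >= threshold for v in fused]
```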