
690 results for "Facial affect" patented technology

Method and system for measuring emotional and attentional response to dynamic digital media content

The present invention is a method and system for automatically measuring people's responses to dynamic digital media, based on changes in their facial expressions and their attention to specific content. First, the method detects and tracks faces in the audience. It then localizes each face and its facial features, applies emotion-sensitive feature filters to extract emotion-sensitive features, and determines the facial muscle actions of the face from those features. The changes in facial muscle actions are then converted into changes in affective state, called an emotion trajectory. In parallel, the method estimates eye gaze from extracted eye images and the three-dimensional facial pose from localized facial images. The gaze direction of the person is estimated from the estimated eye gaze and the three-dimensional facial pose, and the gaze target on the media display is then estimated from the gaze direction and the position of the person. Finally, the response of the person to the dynamic digital media content is determined by analyzing the emotion trajectory in relation to the time and screen positions of the specific digital media sub-content that the person is watching.
Owner:MOTOROLA SOLUTIONS INC
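
The two measurement streams described above, an emotion trajectory derived from facial muscle actions and a gaze direction fused from eye gaze and head pose, can be sketched roughly as follows. This is a minimal illustration under assumed inputs; the function names, the linear affect mapping, and the rotation-matrix pose representation are placeholders, not details from the patent.

```python
import numpy as np

def emotion_trajectory(action_intensities: np.ndarray, affect_map: np.ndarray) -> np.ndarray:
    """Map per-frame facial muscle action intensities (T x K) to affect
    coordinates (T x 2), e.g. valence/arousal, via an assumed linear map."""
    return action_intensities @ affect_map  # (T, K) @ (K, 2) -> (T, 2)

def gaze_direction(eye_gaze: np.ndarray, head_rotation: np.ndarray) -> np.ndarray:
    """Fuse an eye-local gaze vector with the 3-D facial pose (here a 3x3
    rotation matrix) to obtain a world-frame gaze direction."""
    v = head_rotation @ eye_gaze
    return v / np.linalg.norm(v)
```

Intersecting the resulting gaze ray with the display plane, given the person's position, would then yield the on-screen gaze target the abstract refers to.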

Mood based virtual photo album

A method and system for providing a mood based virtual photo album, which presents photos based upon a sensed mood of the viewer. The method may include the steps of capturing a first image of a facial expression of a viewer with a camera; providing the image to a pattern recognition module of a processor; determining a mood of the viewer by comparing the facial expression with a plurality of previously stored images of facial expressions, each having an associated emotional identifier that indicates its mood; retrieving a set of photos from storage based on the emotional identifier associated with the determined mood; and transmitting the set of photos in the form of an electronic photo album. The system includes a camera; a user interface for transmitting a first image of a facial expression of a viewer captured by the camera; and a processor for receiving the transmitted image, including a pattern recognition module that compares the received image with a plurality of stored images of facial expressions to determine the mood of the viewer. A retrieval unit retrieves a set of electronic photos corresponding to the mood of the viewer and transmits them for display as a virtual photo album.
Owner:KONINKLIJKE PHILIPS ELECTRONICS NV
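
As a rough sketch of the matching-and-retrieval step, assuming the expressions have already been reduced to feature vectors (the patent does not specify the pattern recognition technique), nearest-neighbor matching against the labeled gallery is one plausible reading:

```python
import numpy as np

def classify_mood(query, gallery):
    """gallery: list of (feature_vector, emotional_identifier) pairs built
    from the previously stored expression images. Returns the identifier
    of the closest stored expression (1-nearest-neighbor matching)."""
    distances = [np.linalg.norm(query - features) for features, _ in gallery]
    return gallery[int(np.argmin(distances))][1]

def retrieve_album(mood, photo_index):
    """photo_index: dict mapping an emotional identifier to photo paths."""
    return photo_index.get(mood, [])
```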

Expression cloning method and device capable of realizing real-time interaction with virtual character

The invention discloses an expression cloning method and device capable of real-time interaction with a virtual character, belonging to fields such as computer graphics and virtual reality. The method includes the following steps: (1) modeling and skeleton binding are performed on the virtual character; (2) the basic expression base of the virtual character is established; (3) expression input training is carried out: the maximum displacement of facial feature points under each basic expression is recorded; (4) expression tracking is carried out: the facial expression changes of a real person are recorded through motion capture equipment, and the weights of the basic expressions are obtained through calculation; (5) expression mapping is carried out: the obtained weights of the basic expressions are transferred to the virtual character in real time, rotation interpolation is performed on the corresponding skeletons, and the expression of the virtual character is rendered and output in real time. With this method, the expression of the virtual character can be synthesized rapidly, stably and vividly, so that the virtual character can interact expressively with the real person stably and in real time.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
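
Steps (3) and (4) leave the weight "calculation" unspecified; one plausible reading, sketched below with illustrative names, is to express the live feature-point displacements in the basis of the per-expression maximum displacements recorded during training, then clip the resulting weights to [0, 1]:

```python
import numpy as np

def expression_weights(displacements: np.ndarray,
                       max_displacements: np.ndarray) -> np.ndarray:
    """displacements: (K,) live feature-point displacement magnitudes.
    max_displacements: (B, K) maxima recorded per basic expression during
    the training step. Solves displacements ~ sum_b w_b * max_disp_b by
    least squares, then clips each weight to the valid [0, 1] range."""
    w, *_ = np.linalg.lstsq(max_displacements.T, displacements, rcond=None)
    return np.clip(w, 0.0, 1.0)
```

The clipped weights can then be transferred to the character's blendshapes each frame, with spherical interpolation of the bound skeleton rotations for the mapping step.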

A classroom teaching effect evaluation system based on facial expression recognition

The invention discloses a classroom teaching effect evaluation system based on facial expression recognition. Videos of the classroom are analyzed in real time; the facial expressions of students in class are extracted, and index information on the students' degree of understanding, degree of activity, degree of doubt, and activity time is counted and analyzed, so that teachers can know the psychological states of the students and their mastery of the knowledge points, adopt corresponding teaching regulation and control measures, and improve the teaching quality of the class. The classroom expression state of the teacher is automatically analyzed and the teacher's classroom expression index is counted, providing a classroom emotion basis for teaching managers to evaluate the teacher. The classroom expressions of teachers and students are recorded, and their classroom expression indexes are obtained through statistical analysis; these indexes serve as comprehensive and objective reference indexes for classroom teaching evaluation. The classroom teaching effect of the target course in each school is also counted as a reference for course reform. Through the technical scheme of the invention, teachers and teaching managers can better realize a student-centered teaching concept.
Owner:TAIZHOU UNIV
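
A minimal sketch of the statistics step, with assumed label names and a simple frequency-based definition of each index (the patent does not disclose the formulas):

```python
from collections import Counter

def classroom_indices(frame_labels, fps=25):
    """frame_labels: per-frame expression labels for one student over a
    lesson, e.g. 'understanding', 'active', 'doubtful', 'neutral'.
    Returns the per-student indices as label frequencies; activity time
    is recovered from the frame count at the assumed frame rate."""
    counts = Counter(frame_labels)
    total = max(len(frame_labels), 1)
    return {
        "understanding_degree": counts["understanding"] / total,
        "activity_degree": counts["active"] / total,
        "doubt_degree": counts["doubtful"] / total,
        "activity_time_seconds": counts["active"] / fps,
    }
```

Class-level and teacher-level expression indexes could be aggregated the same way, averaging the per-person indices over a lesson or a course.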

Synthetic video generation method based on three-dimensional face reconstruction and video key frame optimization

The invention discloses a synthetic video generation method based on three-dimensional face reconstruction and video key frame optimization. The method comprises the following steps: fitting the parameters of a three-dimensional face deformation model to an input face image with a convolutional neural network; training a voice-to-expression and head-posture mapping network on the parameters of the target video and the face model, and obtaining facial expression and head posture parameters from the input audio with the trained network; synthesizing a human face and rendering it to generate vivid face video frames; training a rendering network based on a generative adversarial network on the parameterized face images and the face images in the video frames, the rendering network generating a background for each face frame; and performing face background rendering and video synthesis based on video key frame optimization. The background transitions of the output synthesized face video are natural and vivid, which greatly enhances the usability and practicability of the synthesized face video.
Owner:GUANGDONG UNIV OF TECH
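
The data flow of this pipeline can be sketched as below. Every callable is a hypothetical stand-in for one of the trained components named in the abstract; nothing here reflects the patent's actual interfaces.

```python
from typing import Any, Callable, List

def synthesize_video(
    face_image: Any,
    audio: Any,
    key_frames: List[Any],
    fit_3dmm: Callable,           # CNN fitting the 3-D face deformation model
    audio_to_params: Callable,    # voice-to-expression/head-posture network
    render_face: Callable,        # parametric face rasterizer
    render_background: Callable,  # GAN-based rendering network
) -> List[Any]:
    """Sketch of the data flow only; each callable is a trained component."""
    shape, texture, camera = fit_3dmm(face_image)
    frames = []
    for expression, pose in audio_to_params(audio):
        face = render_face(shape, texture, expression, pose, camera)
        # Background is generated per frame, guided by optimized key frames.
        frames.append(render_background(face, key_frames))
    return frames
```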

Fine-grained visualization system and method for emotional electroencephalography (EEG)

Active patent CN110169770A. Benefits: solves the fine-grained visualization problem; rich and detailed expression. Classifications: sensors; psychotechnic devices. Keywords: EEG data; brain-computer interfacing.
The invention discloses a fine-grained visualization system and method for emotional electroencephalography (EEG), solving the technical problem of how to display fine-grained information in emotional EEG. The system connects, in sequence, a data acquisition module, a data preprocessing module, a feature extraction module and a network training control module; an expression atlas provides target images; the network training control module and an affective computing generative adversarial network (AC-GAN) module complete the training of the AC-GAN; and a network forward execution module completes the generation of fine-grained expressions. The method comprises the steps of collecting emotional EEG data, preprocessing the EEG data, extracting EEG features, constructing the AC-GAN, preparing the expression atlas, training the AC-GAN, and obtaining a fine-grained facial expression generation result. The emotional EEG is visualized directly as facial expressions whose fine-grained information can be directly recognized, and the visualization system is used for interaction enhancement and experience optimization of rehabilitation equipment, emotional robots, VR devices and other systems with a brain-computer interface.
Owner:XIDIAN UNIV
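
A minimal PyTorch-style sketch of the conditioning idea behind the AC-GAN stage: the generator consumes noise concatenated with extracted EEG features and emits an expression image. The layer sizes and the flat MLP body are assumptions made for brevity; the patent does not disclose the architecture.

```python
import torch
import torch.nn as nn

class EEGConditionedGenerator(nn.Module):
    """Generator conditioned on EEG features (assumed dimensions)."""

    def __init__(self, noise_dim: int = 100, eeg_dim: int = 64,
                 img_pixels: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + eeg_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor, eeg_feat: torch.Tensor) -> torch.Tensor:
        # Concatenating noise with EEG features ties the generated
        # expression to the decoded emotional state.
        return self.net(torch.cat([z, eeg_feat], dim=1))
```

In an AC-GAN setup, a discriminator would additionally predict the emotion class of each generated face, pushing the generator toward expressions that match the EEG-decoded emotion.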