4181 results about "Acquiring/recognising facial features" patented technology

Generating, recording, simulating, displaying and sharing user related real world activities, actions, events, participations, transactions, status, experience, expressions, scenes, sharing, interactions with entities and associated plurality types of data in virtual world

Systems and methods for virtual-world simulation of the real world, emulation of real-life activities in a virtual world, a real-life simulator, or generation of a virtual world based on a real environment: host, at a server, a virtual-world geography or environment that corresponds to the real-world geography or environment, so that as a user continuously moves about or navigates within a range of coordinates in the real world, the user also continuously moves about within a corresponding range of coordinates in the real-world map or virtual world; generate and access, by the server, a first avatar or representation that is associated with a first user or entity in the virtual world; monitor, track and store, by the server, plural types of data associated with the user's real life or real-life activities, actions, transactions, participated or ongoing events, current or past locations, checked-in places, participations, expressions, reactions, relations, connections, status, behaviours, sharing, communications, collaborations and interactions with various types of entities in the real world; receive, by the server, first data associated with a mobile device of the first user related to a first activity at first geo-location coordinates or a first place; determine, by the server, one or more real-world activities of the first user based on the first data; generate, record, simulate and update, by the server, the virtual world based on said stored data, including updating the first avatar associated with the first user or entity; cause, by the server, the first avatar to engage in one or more virtual activities in the virtual world that are the same as, or sufficiently or substantially similar to, the determined one or more real-world activities, by generating, recording, simulating, updating and displaying, by a simulation engine, a simulation or graphical user interface that presents to the user a simulation of said real-life activities; and display in the virtual world, by the server, contextual contents, media, data and metadata of one or more types related to or associated with said real-world activity, interacted entity, location, place or GPS coordinates, generated, provided, shared or identified by the user or drawn from one or more sources including the server, providers, the user's contacts, other users of the network and external sources, databases, servers, networks, devices, websites and applications, wherein the virtual-world geography corresponds to the real-world geography. In an embodiment, the server receives from a user privacy settings instructing it to limit viewing or sharing of said generated simulation of the user's real-world life or activities to selected contacts, followers, all users, or users matching one or more criteria or filters, or to keep it private.
Owner:RATHOD YOH
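
To make the geography correspondence concrete, here is a minimal sketch (Python 3.10+; all names such as Avatar, to_virtual and update_avatar are hypothetical, not from the patent) of a server-side mapping in which an avatar's virtual coordinates track the user's real-world GPS coordinates and its activity log mirrors a determined real-world activity, subject to a privacy setting:

```python
# Minimal sketch (hypothetical names) of the real-to-virtual mapping the abstract
# describes: the server keeps a virtual-world coordinate range that corresponds to
# a real-world coordinate range, so an avatar moves as its user moves.
from dataclasses import dataclass, field

@dataclass
class Avatar:
    user_id: str
    virtual_x: float = 0.0
    virtual_y: float = 0.0
    activities: list = field(default_factory=list)

def to_virtual(lat: float, lon: float, scale: float = 1000.0) -> tuple[float, float]:
    # Map GPS latitude/longitude into virtual-world units; a linear scale is only
    # an assumption for illustration, not the patented mapping.
    return lon * scale, lat * scale

def update_avatar(avatar: Avatar, lat: float, lon: float, activity: str,
                  visible_to: set[str] | None = None) -> None:
    # Move the avatar to the virtual coordinates that correspond to the user's
    # real-world position, and record a matching virtual activity with a
    # privacy setting controlling who may view it.
    avatar.virtual_x, avatar.virtual_y = to_virtual(lat, lon)
    avatar.activities.append({"activity": activity,
                              "visible_to": visible_to or {"contacts"}})

if __name__ == "__main__":
    alice = Avatar(user_id="alice")
    update_avatar(alice, lat=40.7128, lon=-74.0060, activity="checked in at a cafe")
    print(alice.virtual_x, alice.virtual_y, alice.activities[-1]["activity"])
```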

Method and system for measuring emotional and attentional response to dynamic digital media content

The present invention is a method and system for automatically measuring people's responses to dynamic digital media, based on changes in their facial expressions and their attention to specific content. First, the method detects and tracks faces in the audience. It then localizes each face and its facial features, extracts emotion-sensitive features of the face by applying emotion-sensitive feature filters, and determines the facial muscle actions of the face based on the extracted emotion-sensitive features. The changes in facial muscle actions are then converted into changes in affective state, called an emotion trajectory. In parallel, the method estimates eye gaze from the extracted eye images and the three-dimensional facial pose of the face from the localized facial images. The gaze direction of the person is estimated from the estimated eye gaze and the three-dimensional facial pose, and the gaze target on the media display is then estimated from the gaze direction and the position of the person. Finally, the person's response to the dynamic digital media content is determined by analyzing the emotion trajectory in relation to the time and screen positions of the specific digital media sub-content the person is watching.
Owner:MOTOROLA SOLUTIONS INC
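
The core of the measurement is relating the emotion trajectory to what the viewer is actually looking at. The following is a hedged sketch of that final step only; EmotionFrame and response_to_content are illustrative names, and a simple average stands in for whatever analysis the patent actually performs:

```python
# Sketch: combine a per-frame emotion estimate with whether the viewer's gaze
# falls on a given screen region while a piece of sub-content is shown.
from dataclasses import dataclass

@dataclass
class EmotionFrame:
    t: float          # timestamp in seconds
    valence: float    # affective state derived from facial muscle actions
    gaze_x: float     # estimated gaze target on the display (pixels)
    gaze_y: float

def response_to_content(frames: list[EmotionFrame],
                        region: tuple[float, float, float, float],
                        t_start: float, t_end: float) -> float:
    # Average the emotion trajectory over frames where the gaze target lies
    # inside the sub-content's screen region during its display interval.
    x0, y0, x1, y1 = region
    hits = [f.valence for f in frames
            if t_start <= f.t <= t_end
            and x0 <= f.gaze_x <= x1 and y0 <= f.gaze_y <= y1]
    return sum(hits) / len(hits) if hits else 0.0
```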

Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus

A facial expression recognition system, and a learning method for the system, are provided. The system uses a face detection apparatus that realizes efficient learning and high-speed detection processing based on ensemble learning when detecting an area representing a detection target, is robust against shifts of the face position in images, and is capable of highly accurate expression recognition. When learning the data to be used by the face detection apparatus with AdaBoost, processing that selects high-performance weak hypotheses from all weak hypotheses, generates new weak hypotheses from these high-performance weak hypotheses on the basis of statistical characteristics, and selects the one weak hypothesis with the highest discrimination performance is repeated to sequentially generate weak hypotheses, and a final hypothesis is thus acquired. In detection, using an abort threshold value learned in advance, whether the provided data can obviously be judged as a non-face is determined every time a weak hypothesis outputs its discrimination result; if it can, processing is aborted. A predetermined Gabor filter is selected from the detected face image by an AdaBoost technique, and a support vector machine is learned only on the feature quantity extracted by the selected filter, thus performing expression recognition.
Owner:SAN DIEGO UNIV OF CALIFORNIA +1
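
The abort-threshold evaluation described above can be illustrated with a short sketch. Assuming the weak hypotheses, their weights and the per-stage abort thresholds have already been learned, classification of a candidate window might proceed roughly as follows (all names are placeholders):

```python
# Sketch of AdaBoost-style evaluation with early abort: weak hypotheses are
# applied in order, and classification stops as soon as the running score
# falls below a learned abort threshold.
from typing import Callable, Sequence

def classify_with_abort(x,
                        weak_hyps: Sequence[Callable[[object], float]],
                        alphas: Sequence[float],
                        abort_thresholds: Sequence[float]) -> bool:
    score = 0.0
    for h, alpha, abort in zip(weak_hyps, alphas, abort_thresholds):
        score += alpha * h(x)      # weighted vote of one weak hypothesis
        if score < abort:          # obviously a non-face: abort processing
            return False
    return score >= 0.0            # sign of the final hypothesis decides face / non-face
```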

Classroom behavior monitoring system and method based on face and voice recognition

The invention discloses a classroom behavior monitoring system and method based on face and voice recognition. The method comprises the following steps: a camera acquires video of the students and teachers in the classroom; voice recording equipment acquires the voice of the students and teachers in the classroom; a main control processor preprocesses the received video of the students and teachers and extracts their facial expression features and behavior features; the main control processor processes the received voice of the students and extracts the students' voice features; and the main control processor processes the received voice of the teachers, extracts the teachers' voice features, calculates a teaching-effect score for each teacher, evaluates the teacher's teaching effect according to the score, and provides guidance suggestions. By observing the classroom behaviors of teachers and students, the disclosed system and method increase the accuracy and objectivity of the evaluation, help improve teaching methods, and raise teaching quality.
Owner:SHANDONG NORMAL UNIV
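
As a purely illustrative sketch of the scoring step, the extracted expression, behavior and voice features could be combined into a single teaching-effect score by a weighted sum; the feature names and weights below are assumptions, not values from the patent:

```python
# Hypothetical scoring function: fuse expression, behavior and voice features
# (each normalized to 0..1) into a teaching-effect score on a 0-100 scale.
def teaching_effect_score(student_attention: float,      # from facial expression features
                          student_engagement: float,     # from behavior features
                          student_voice_activity: float, # from student voice features
                          teacher_voice_clarity: float   # from teacher voice features
                          ) -> float:
    weights = {"attention": 0.35, "engagement": 0.30,
               "student_voice": 0.15, "teacher_voice": 0.20}
    score = (weights["attention"] * student_attention
             + weights["engagement"] * student_engagement
             + weights["student_voice"] * student_voice_activity
             + weights["teacher_voice"] * teacher_voice_clarity)
    return round(100 * score, 1)

# e.g. teaching_effect_score(0.8, 0.7, 0.5, 0.9) -> 74.5
```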

Animation image driving method and device based on artificial intelligence

The embodiment of the invention discloses an animation image driving method based on artificial intelligence. The method comprises the following steps: collecting media data of the facial expression changes that occur while a speaker is speaking, and determining a first expression base of a first animation image corresponding to the speaker, where the first expression base reflects the different expressions of the first animation image. After target text information used for driving a second animation image is determined, acoustic features and target expression parameters corresponding to the target text information are determined according to the target text information, the collected media data and the first expression base. The acoustic features and target expression parameters are then used to drive a second animation image that has a second expression base. In this way, the second animation image can reproduce, through acoustic-feature simulation, the sound of the speaker speaking the target text information, and can make the facial expressions the speaker would naturally make while speaking, bringing the user a vivid sense of substitution and immersion and improving the user's interaction experience with the animation image.
Owner:TENCENT TECH (SHENZHEN) CO LTD
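
An expression base can be thought of as a set of basis deformations whose weighted sum produces a facial expression, so parameters estimated against the first image's base must be re-expressed in the second image's base before they can drive it. The sketch below shows one plausible least-squares retargeting; NumPy, the shapes and the function name are assumptions for illustration, not the method claimed in the patent:

```python
# Sketch: retarget expression parameters from one expression base to another.
import numpy as np

def drive_second_avatar(target_expression_params: np.ndarray,  # (E1,)
                        first_base: np.ndarray,                 # (E1, V) basis deformations
                        second_base: np.ndarray,                # (E2, V) basis deformations
                        neutral_mesh: np.ndarray) -> np.ndarray:  # (V,) flattened vertices
    # Deformation the parameters produce in the first image's base.
    deformation = target_expression_params @ first_base
    # Re-express that deformation in the second image's base via least squares.
    second_params, *_ = np.linalg.lstsq(second_base.T, deformation, rcond=None)
    # Apply the retargeted parameters to the second image's neutral mesh.
    return neutral_mesh + second_params @ second_base
```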