614 results about "Head posture" patented technology

Equipment and method for detecting head posture

The invention provides equipment and a method for detecting a head posture. The equipment comprises a multi-view image acquisition unit, a frontal face image estimation unit, a head posture estimation unit, and a coordinate conversion unit. The multi-view image acquisition unit acquires view images of an object shot from different angles. The frontal face image estimation unit detects, among the acquired view images, the view image in which the face has the minimum yaw angle. The head posture estimation unit obtains the three-dimensional coordinates of predetermined facial feature points from a three-dimensional face model, detects those feature points and their two-dimensional coordinates in the selected view image, and calculates a first head posture relative to the image capturing device that shot the selected view image from the two-dimensional and three-dimensional coordinates. The coordinate conversion unit converts the first head posture into a second head posture expressed in the world coordinate system, using the world coordinates of the image capturing device.
Owner:SAMSUNG ELECTRONICS CO LTD +1
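In practice, the first head posture is typically computed from such 2D-3D feature correspondences with a PnP solver (e.g. OpenCV's `solvePnP`). The view-selection and camera-to-world conversion steps described above can be sketched as follows; the function names and the z-axis yaw convention are illustrative assumptions, not from the patent:

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis by `yaw` radians (illustrative convention)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def select_min_yaw_view(yaw_angles):
    """Return the index of the view whose face yaw is closest to frontal."""
    return int(np.argmin(np.abs(yaw_angles)))

def to_world_pose(R_head_cam, t_head_cam, R_cam_world, t_cam_world):
    """Convert a head pose expressed in camera coordinates into world
    coordinates, given the camera's known pose in the world frame."""
    R_head_world = R_cam_world @ R_head_cam
    t_head_world = R_cam_world @ t_head_cam + t_cam_world
    return R_head_world, t_head_world
```

For example, with per-view yaw estimates of -30, 5 and 40 degrees, the second view is selected as the most frontal; a head pose that is the identity in that camera's frame then maps to the camera's own pose in the world frame.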

Robust continuous emotion tracking method based on deep learning

The invention relates to a robust continuous emotion tracking method based on deep learning. The method comprises the following steps: (1) construct training samples and train a normalization model and a continuous emotion tracking model; (2) acquire and preprocess an expression image, feed the preprocessed image to the trained normalization model, and obtain an expression image with standard illumination and a standard head posture; (3) feed the normalized image to the continuous emotion tracking model, which automatically extracts expression-related features and generates a tracking result for the current frame from the temporal information; repeat steps (2) and (3) until the whole continuous emotion tracking process is complete. The deep-learning-based emotion recognition model enables continuous emotion tracking and prediction, is robust to illumination and posture changes, and fully exploits the temporal information of expressions to track the current user's emotion more stably on the basis of historical emotion features.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
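The temporal fusion in step (3) — combining the current frame's prediction with historical emotion features — can be illustrated with a minimal exponential-moving-average tracker over hypothetical per-frame emotion scores. This is only a stand-in sketch; the patent's actual model is a trained deep network that learns this fusion:

```python
def track_emotions(frame_scores, alpha=0.3):
    """Fuse each frame's raw emotion score with the running history.
    An exponential moving average stands in for the recurrent tracking
    model: alpha weights the current frame, (1 - alpha) the history."""
    state, tracked = None, []
    for score in frame_scores:
        state = score if state is None else alpha * score + (1 - alpha) * state
        tracked.append(state)
    return tracked
```

A single noisy frame therefore shifts the tracked emotion only gradually, which is the stabilizing effect the method attributes to using historical features.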

Gesture recognition system and method adopting action segmentation

The invention provides a gesture recognition system and method adopting action segmentation, relating to the fields of machine vision and human-computer interaction. The method first detects head movements and calculates head posture changes; it then sends a segmentation signal according to the posture estimation information and determines the start and end points of gesture segmentation. If the signal indicates active (user-initiated) gesture segmentation, gesture video frame sequences are captured within the time interval of gesture execution, and the gesture frames are preprocessed and features are extracted. If the signal indicates automatic action segmentation, video frame sequences are acquired in real time, segmentation points are found automatically by analyzing the motion change pattern of adjacent gestures, view-invariant features are extracted from the segmented elementary gesture sequences, and a classification result is obtained with a gesture recognition algorithm that eliminates spatial and temporal disparities. The method greatly reduces the redundant information of continuous gestures and the computational cost of the recognition algorithm, and improves gesture recognition accuracy and real-time performance.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
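One simple way to find segmentation points automatically from the motion change pattern of adjacent gestures is to mark the frames where per-frame motion magnitude drops below a rest threshold. The sketch below assumes a precomputed motion-magnitude sequence (e.g. from optical flow or joint displacement); the threshold and function name are illustrative:

```python
def segment_boundaries(motion, threshold=0.2):
    """Return frame indices where motion magnitude falls below `threshold`
    after a high-motion stretch -- candidate boundaries between adjacent
    gestures in a continuous sequence."""
    return [i for i in range(1, len(motion))
            if motion[i] < threshold <= motion[i - 1]]
```

Frames between consecutive boundaries then form the elementary gesture sequences passed on to feature extraction and classification.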

Intelligent display system automatically tracking head posture

The invention relates to an intelligent display system that automatically tracks the head posture. The system captures the head posture of a human body in real time so that the display follows the head and the two are constantly kept in an optimal relative position, which relieves eye fatigue and helps prevent myopia. The system comprises an image acquisition module, a vision algorithm processing module and a display control module. The image acquisition module acquires images of the head in real time. The vision algorithm processing module first preprocesses the image so that the head is kept vertical in the image, then extracts facial feature points with an ASM (active shape model) algorithm, and finally obtains the spatial posture of the head from the elevation and roll angles by the principle of triangulation. The head posture information is transmitted to a servo control module, where a single-chip microcomputer computes a PWM (pulse width modulation) signal that controls the servos, so that the display follows the head posture in the two degrees of freedom of elevation and roll.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
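The final step — turning a head angle into a servo command — can be sketched as a linear mapping from angle to pulse width, using the common hobby-servo convention of 1000-2000 microsecond pulses across the travel range. The angle limits and pulse range here are assumptions for illustration, not values from the patent:

```python
def angle_to_pulse_us(angle_deg, min_angle=-90.0, max_angle=90.0,
                      min_pulse=1000.0, max_pulse=2000.0):
    """Map a head angle (degrees) to a servo PWM pulse width (microseconds),
    clamping the angle to the servo's travel range."""
    angle = max(min_angle, min(max_angle, angle_deg))
    frac = (angle - min_angle) / (max_angle - min_angle)
    return min_pulse + frac * (max_pulse - min_pulse)
```

In the system described above, one such mapping would run per axis (elevation and roll), with the microcontroller emitting the resulting pulse widths to the two servos.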

Synthetic video generation method based on three-dimensional face reconstruction and video key frame optimization

The invention discloses a synthetic video generation method based on three-dimensional face reconstruction and video key-frame optimization. The method comprises the following steps: fitting the parameters of a three-dimensional morphable face model to an input face image with a convolutional neural network; training a voice-to-expression-and-head-posture mapping network with the parameters of the target video and the face model, and obtaining facial expression and head posture parameters from the input audio with the trained network; synthesizing a face and rendering it to generate realistic face video frames; training a rendering network based on a generative adversarial network with the parameterized face image and the face image in the video frame, the rendering network generating a background for each face frame; and performing face background rendering and video synthesis based on video key-frame optimization. The background transitions of the output synthesized face video are natural and realistic, which greatly enhances the usability and practicality of the synthesized video.
Owner:GUANGDONG UNIV OF TECH
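One plausible reading of the key-frame optimization step is selecting, for each synthesized pose, the stored video frame whose head-pose parameters are nearest, so the GAN-rendered background matches the synthesized head. The sketch below implements that nearest-pose lookup; the L2 distance and function name are assumptions for illustration, not the patent's stated criterion:

```python
import numpy as np

def select_key_frame(pose_params, candidate_poses):
    """Return the index of the candidate frame whose head-pose parameter
    vector is closest (L2 distance) to the synthesized pose."""
    diffs = np.asarray(candidate_poses) - np.asarray(pose_params)
    return int(np.argmin(np.linalg.norm(diffs, axis=1)))
```

The selected frame would then supply the background that the rendering network composites behind the synthesized face.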