72 results about "Active appearance model" patented technology

An active appearance model (AAM) is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image. The model is built during a training phase, in which a set of images, together with the coordinates of landmarks that appear in all of the images, is provided to the training supervisor.
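The training phase described above amounts to a principal component analysis over the aligned landmark coordinates. Below is a minimal numpy sketch of that idea; the function names and the two-mode truncation are illustrative choices, not part of any specific AAM implementation:

```python
import numpy as np

def build_shape_model(landmark_sets, n_modes=2):
    """Build a PCA shape model from aligned landmark sets.

    landmark_sets: (n_images, n_landmarks*2) array of flattened (x, y) coords.
    Returns the mean shape and the top principal modes of variation.
    """
    X = np.asarray(landmark_sets, dtype=float)
    mean_shape = X.mean(axis=0)
    # SVD of the centered data yields the principal shape modes.
    _, _, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    return mean_shape, Vt[:n_modes]

def synthesize(mean_shape, modes, params):
    """Generate a new shape as the mean plus a linear combination of modes."""
    return mean_shape + np.asarray(params, dtype=float) @ modes
```

A full AAM additionally builds an analogous PCA model of the shape-normalized texture; the shape part above is the half that the landmark coordinates drive.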

Three-dimensional facial reconstruction method

Inactive · CN101751689A · Geometry reconstruction speed reduced · Implement automatic rebuild · 3D-image rendering · 3D modelling · AdaBoost · Face model
The invention relates to a three-dimensional facial reconstruction method that automatically reconstructs a three-dimensional face model from a single frontal face image, and proposes two schemes. In the first scheme, a deformable face model is generated offline; AdaBoost is used to automatically detect the face position in the input image; an active appearance model is used to automatically locate key points on the face; the geometry of the three-dimensional face is reconstructed from the shape components of the deformable face model and the located key points; with a shape-free texture as the target image, the texture components of the deformable model are used to fit the face texture, yielding a complete face texture; and after texture mapping, the reconstructed result is obtained. The second scheme differs in that, after the three-dimensional geometry is reconstructed, no texture fitting is performed; instead, the input image is used directly as the texture image. The first scheme is suitable for fields such as film and television production and three-dimensional face recognition, while the second scheme reconstructs faster.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI +1
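The geometry step of the first scheme — solving for the deformable model's shape coefficients from the located key points — can be illustrated as a regularized least-squares fit. This is a hypothetical sketch that assumes an orthographic projection (the abstract does not specify a camera model); `fit_shape_coefficients` and the regularizer value are illustrative:

```python
import numpy as np

def fit_shape_coefficients(mean3d, basis3d, landmarks2d, reg=1e-3):
    """Fit deformable-model coefficients to 2D landmarks by least squares.

    mean3d:      (n_pts, 3) mean face geometry.
    basis3d:     (n_modes, n_pts, 3) shape components of the deformable model.
    landmarks2d: (n_pts, 2) key points located in the image.
    Returns the reconstructed 3D shape and the fitted coefficients.
    """
    # Residual between observed landmarks and the projected mean shape.
    r = (landmarks2d - mean3d[:, :2]).ravel()
    # Under orthographic projection each mode contributes its x/y coordinates.
    A = np.stack([b[:, :2].ravel() for b in basis3d], axis=1)
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
    shape = mean3d + np.tensordot(coeffs, basis3d, axes=1)
    return shape, coeffs
```

The regularizer keeps the solve stable when the landmark set under-constrains some modes.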

Single-photo-based human face animating method

The invention discloses a human face animation method based on a single photo, belonging to the fields of graphics, image processing and computer vision. The method automatically reconstructs a three-dimensional model of a human face from a single frontal photo and then drives the reconstructed model to produce personalized facial animation. The method uses a three-dimensional face reconstruction unit and a face animation unit. The reconstruction unit performs the following steps: generating a shape-variation model offline; automatically locating key points on the face using an active appearance model; adding eye and tooth meshes to form a complete face model; and obtaining the reconstruction result through texture mapping. The face animation unit performs the following steps: producing animation data for sparsely spaced key points; mapping the animation data onto the target face model using radial basis functions; interpolating motion data via spherical parametrization; and generating eye motion. The method is highly automatic, robust and realistic, and is suitable for fields such as film and television production and three-dimensional games.
Owner:BEIJING SHENGKAI ZHILIAN TECH CO LTD
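The step that maps sparse key-point animation data onto the target face model with radial basis functions is a scattered-data interpolation problem. Below is a minimal sketch using a Gaussian kernel; the kernel choice, its width `eps`, and the small regularizer are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def rbf_warp(src_pts, displacements, query_pts, eps=1.0):
    """Propagate sparse key-point motion to dense mesh vertices via RBFs.

    src_pts:       (n, d) animated key points.
    displacements: (n, d) their motion vectors.
    query_pts:     (m, d) mesh vertices to move.
    """
    def kernel(a, b):
        # Gaussian RBF on pairwise squared distances.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * eps ** 2))

    K = kernel(src_pts, src_pts)
    # Tiny ridge term keeps the interpolation solve well conditioned.
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(src_pts)), displacements)
    return query_pts + kernel(query_pts, src_pts) @ weights
```

Evaluated at the key points themselves, the warp reproduces their displacements; other vertices move by a smooth blend.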

Method for tracking gestures and actions of human face

The invention discloses a method for tracking the poses and actions of a human face, comprising the following steps. In step S1, images are extracted frame by frame from a video stream; face detection is performed on the first frame of the input video, or whenever tracking fails, to obtain a face bounding box. In step S2, during normal tracking, salient feature points of the facial texture in the previous frame (after its iteration has converged) are matched to corresponding feature points found in the current frame, yielding feature-point matching results. In step S3, the shape of an active appearance model is initialized from the face bounding box or the feature-point matching results, giving an initial estimate of the face shape in the current frame. In step S4, the active appearance model is fitted with an inverse compositional algorithm to obtain the three-dimensional pose and facial action parameters. With this method, online tracking can be completed fully automatically and in real time under ordinary illumination.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
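The frame-to-frame feature matching of step S2 can be sketched with normalized cross-correlation over a small search window. This is a generic matcher, not the patent's own; the window and search radii are illustrative, and the point is assumed to lie well inside the frame:

```python
import numpy as np

def match_patch(prev_frame, pt, cur_frame, search=5, half=3):
    """Find the best match in cur_frame for a patch around pt in prev_frame.

    Scores candidate positions by normalized cross-correlation (NCC).
    Returns the (row, col) of the best-matching location.
    """
    y, x = pt
    tpl = prev_frame[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-8)
    best, best_pt = -np.inf, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            win = cur_frame[yy - half:yy + half + 1,
                            xx - half:xx + half + 1].astype(float)
            if win.shape != tpl.shape:  # skip windows clipped by the border
                continue
            win = (win - win.mean()) / (win.std() + 1e-8)
            score = (tpl * win).mean()
            if score > best:
                best, best_pt = score, (yy, xx)
    return best_pt
```

The matched positions then seed the AAM shape initialization of step S3.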

Expression identification method fusing depth image and multi-channel features

The invention discloses an expression recognition method that fuses a depth image with multi-channel features. The method comprises: performing face region detection and preprocessing on an input facial expression image; selecting multi-channel features of the image, where, for texture features, the depth-image entropy, grayscale-image entropy and color-image salient features are extracted as facial expression texture information and texture features are extracted from this information using a grayscale-histogram method, and, for geometric features, facial expression feature points are extracted from the color image using an active appearance model; and fusing the texture and geometric features by selecting different kernel functions for different features, performing kernel-function fusion, and passing the fused result to a multi-class support vector machine classifier for expression classification. Compared with the prior art, the method effectively overcomes the influence of factors such as varying illumination, head pose and complex backgrounds, increases the expression recognition rate, and offers good real-time performance and robustness.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
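The kernel-fusion step — combining different kernels for the texture and geometric channels — can be sketched as a weighted sum of per-channel Gram matrices. The choice of kernels and weights below is illustrative; the fused matrix would then be handed to a kernel classifier such as scikit-learn's `SVC(kernel="precomputed")`:

```python
import numpy as np

def rbf_kernel(gamma):
    """Return a Gram-matrix function for a Gaussian (RBF) kernel."""
    def k(X):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k

def linear_kernel(X):
    """Gram matrix for a linear kernel."""
    return X @ X.T

def fuse_kernels(feature_sets, kernels, weights):
    """Combine per-channel kernels into one fused Gram matrix.

    feature_sets: list of (n_samples, d_i) arrays, one per feature channel.
    kernels:      list of callables mapping X to an (n, n) Gram matrix.
    weights:      per-channel mixing weights.
    """
    return sum(w * k(X) for X, k, w in zip(feature_sets, kernels, weights))
```

Because each summand is a valid kernel matrix and the weights are non-negative, the fused matrix remains a valid kernel for the downstream SVM.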

Method for locating central points of eyes in natural lighting front face image

The invention discloses a method for locating the central points of the eyes in a naturally lit frontal face image. The method comprises the following steps: an automatic face detection algorithm detects the face region of the input frontal image; an active appearance model automatically locates key points on the face within the rectangular face region to delimit preliminary eye areas; on the basis of this preliminary localization, a local eye appearance model further refines the eye regions; illumination processing is applied to the refined eye regions to remove lighting effects on the local eye areas, a boundary operator detects edge features, and the inner and outer eye corner points are located precisely from those features; finally, taking the line connecting the inner and outer corner points of each eye as the starting point, the point of maximum response is computed by a circular-integral gradient method, and that point is the center of the corresponding eye. The method locates the eyes accurately and has definite robustness to illumination and eyelid occlusion.
Owner:WUHAN INSTITUTE OF TECHNOLOGY
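The final gradient-based step is in the spirit of means-of-gradients eye localization (cf. Timm and Barth): the eye center is the point whose displacement vectors to strong edges best align with the image gradients there. The sketch below is a generic version of that idea, not the patent's exact circular-integral formulation, and the edge-selection threshold is an assumption:

```python
import numpy as np

def eye_center(gray):
    """Locate an eye centre as the point best aligned with edge gradients.

    Exhaustively scores every pixel; fine for a small eye region crop.
    Returns the (row, col) of the best-scoring candidate centre.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > mag.mean()                     # keep strong edges only
    ys, xs = np.nonzero(mask)
    gxn, gyn = gx[mask] / mag[mask], gy[mask] / mag[mask]
    h, w = gray.shape
    best, best_c = -np.inf, (0, 0)
    for cy in range(h):
        for cx in range(w):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy) + 1e-8
            # Alignment of unit displacement vectors with unit gradients.
            dot = (dx * gxn + dy * gyn) / norm
            score = (np.maximum(dot, 0) ** 2).mean()
            if score > best:
                best, best_c = score, (cy, cx)
    return best_c
```

For a dark pupil on a brighter sclera the gradients point radially outward, so the true center maximizes the alignment score.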

Three-dimensional face calibrating method capable of resisting posture and facial expression changes

The invention discloses a three-dimensional face calibrating method that resists posture and facial expression changes, comprising an active appearance model establishing stage and a face calibrating stage. In the model establishing stage, three-dimensional faces are acquired with three-dimensional image acquisition equipment, important facial markers are marked manually, and an active appearance model based on depth images is established from the mesh shape and appearance information of the face. In the calibrating stage, the face is first roughly calibrated using an average nose model, and the test face is then matched against the depth-image-based active appearance model for fine calibration. This rough-to-fine approach withstands posture and expression changes, so faces can be calibrated accurately under natural conditions; converting the three-dimensional face to a depth image improves calibration efficiency. The method is therefore significant for advancing the practical application of three-dimensional faces to identity authentication.
Owner:ZHEJIANG UNIV

Single-sample human face recognition method compatible for human face aging recognition

The invention provides a single-sample face recognition method compatible with face aging, comprising: performing aging simulation on the pre-stored image model of a face sample to reconstruct that image model; performing global feature matching between the image model of the face to be recognized and the image model of the face sample, and returning a mismatch if the matching fails; and performing local feature matching between the two image models, again returning a mismatch if the matching fails. The image model of the face to be recognized is an active appearance model of the input face image, and the image model of the face sample is an active appearance model of the stored sample image. By combining the AAM technique with the IBSDT technique, the scheme achieves and improves recognition under the influence of face aging. Combining AAM with triangulation matching greatly improves the reliability of global feature matching, and combining the LBP technique with the SURF technique improves the reliability of local feature matching and the robustness to illumination. A high recognition rate is thus achieved with a single stored face image as the sample.
Owner:BEIJING TCHZT INFO TECH CO LTD
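The LBP technique named for local feature matching is a standard texture descriptor: each pixel is encoded by comparing it with its eight neighbours, and the codes are histogrammed. A basic, unoptimised sketch (the 3x3 neighbourhood and bin count are the common defaults, assumed here):

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """8-neighbour local binary pattern histogram of a grayscale image.

    Each interior pixel gets an 8-bit code: bit b is set when the b-th
    neighbour is >= the centre pixel. Returns the normalized code histogram.
    """
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                       # interior (centre) pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Because the codes depend only on intensity orderings, the histogram is largely invariant to monotonic illumination changes, which is the robustness the abstract claims.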

Face characteristic point automation calibration method based on conditional appearance model

Inactive · CN102663351A · To achieve the purpose of precise calibration · Improve balance · Character and pattern recognition · Kernel ridge regression · Model parameters
The invention, which belongs to the computer vision field, discloses an automatic facial feature point calibration method based on a conditional appearance model. Assuming the frontal-face calibration is known, the method first establishes correspondences between discrete feature points of the frontal face and those of a profile face; an initial calibration of the profile face is obtained through a mapping, learned by a regression algorithm, between the discrete feature points and the structural calibration points. A conditional model between the profile and frontal calibration points is then established, and the model parameters are iteratively optimized with an inverse compositional algorithm to obtain the final calibration result. The spatial mapping between discrete feature points and structural calibration points is established by kernel ridge regression (KRR) to obtain the initial calibration of the facial features, which reduces subsequent iterations and improves calibration precision. The conditional appearance model and the inverse compositional iteration avoid searching over appearance deformation and improve search efficiency. Compared with a traditional active appearance model (AAM), the proposed method yields more accurate calibration results.
Owner:JIANGNAN UNIV