397 results about "Expression Feature" patented technology

Describes the expression pattern of a gene.

Deep convolutional wavelet neural network expression recognition method based on auxiliary tasks

The invention discloses a deep convolutional wavelet neural network expression recognition method based on auxiliary tasks, addressing the problem that existing feature-selection operators cannot efficiently learn expression features or extract enough discriminative image information for classification. The method comprises: building a deep convolutional wavelet neural network; building a facial expression set and a corresponding set of expression-sensitive region images; feeding facial expression images into the network; training the deep convolutional wavelet neural network; back-propagating the network error; updating each convolution kernel and bias vector of the network; feeding expression-sensitive region images into the trained network; learning the weighting ratio of the auxiliary task; obtaining the network's global classification labels; and computing the recognition accuracy from those global labels. The method balances the abstract and detail information of expression images, strengthens the influence of expression-sensitive regions during feature learning, clearly improves recognition accuracy, and can be applied to expression recognition on facial expression images.
Owner:XIDIAN UNIV
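
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the auxiliary-task training scheme: a shared backbone (a plain CNN standing in for the convolutional wavelet layers), a main head for whole-face images, an auxiliary head for expression-sensitive regions, and a weighted sum of the two losses. All layer sizes are invented, and the weighting ratio, which the patent learns, is fixed here.

```python
import torch
import torch.nn as nn

class AuxExpressionNet(nn.Module):
    """Main task: whole-face image; auxiliary task: expression-sensitive region."""
    def __init__(self, num_classes=7):
        super().__init__()
        # Plain CNN standing in for the convolutional wavelet layers.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.main_head = nn.Linear(32 * 16, num_classes)
        self.aux_head = nn.Linear(32 * 16, num_classes)

    def forward(self, face, region):
        return self.main_head(self.backbone(face)), self.aux_head(self.backbone(region))

net = AuxExpressionNet()
face = torch.randn(8, 1, 64, 64)     # whole-face expression images
region = torch.randn(8, 1, 64, 64)   # expression-sensitive region images
labels = torch.randint(0, 7, (8,))
main_logits, aux_logits = net(face, region)
aux_weight = 0.5  # fixed here; the patent learns this weighting ratio
loss = nn.functional.cross_entropy(main_logits, labels) \
       + aux_weight * nn.functional.cross_entropy(aux_logits, labels)
loss.backward()  # errors propagate back; an optimizer then updates kernels and biases
```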

Model training method, method for synthesizing speaking expression and related device

The embodiment of the invention discloses a model training method for synthesizing speaking expressions. Expression features, acoustic features and text features are obtained from videos that contain speakers' facial expressions and the corresponding speech. Because the acoustic features and text features come from the same video, the time interval and duration of each pronunciation element identified by the text features can be determined from the acoustic features. A first correspondence is established between the time interval and duration of each pronunciation element and the expression features, and an expression model is trained on this correspondence. The expression model can therefore assign different sub-expression features to the same pronunciation element when it appears with different durations in the text features. This enriches the variation patterns of the synthesized speaking expressions: the speaking expressions generated from the target expression features by the expression model vary for the same pronunciation element, which alleviates, to some degree, unnaturally abrupt changes in the synthesized expressions.
Owner:TENCENT TECH (SHENZHEN) CO LTD
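
As a hedged illustration of the first correspondence, the sketch below (with hypothetical data structures; the patent does not specify formats) pairs each phoneme's time interval, as derived from the acoustics, with the expression-feature frames that fall inside it:

```python
from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str
    start: float   # seconds, from alignment against the acoustic features
    end: float

# Hypothetical expression features sampled at 30 fps (one vector per video frame).
FPS = 30
expression_frames = [[0.1 * i, 0.2 * i] for i in range(90)]  # 3 s of dummy features

phonemes = [Phoneme("h", 0.00, 0.12), Phoneme("ai", 0.12, 0.50)]

# First correspondence: (pronunciation element, duration) -> its expression frames.
correspondence = []
for p in phonemes:
    frames = expression_frames[int(p.start * FPS):int(p.end * FPS)]
    correspondence.append(((p.symbol, round(p.end - p.start, 3)), frames))

for (symbol, duration), frames in correspondence:
    print(symbol, duration, len(frames), "frames")
# Training pairs like these let the model learn different sub-expression
# features for the same pronunciation element at different durations.
```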

Multi-feature fusion behavior identification method based on key frame

A multi-feature fusion behavior identification method based on key frames comprises the following steps: first, extract the human-body joint-point feature vector x(i) of each video frame with the OpenPose human pose extraction library to form a sequence S = {x(1), x(2), ..., x(N)}; second, use the K-means algorithm to obtain K final cluster centers c = {c_i | i = 1, 2, ..., K}, extract the frame closest to each cluster center as a key frame of the video, and obtain a key-frame sequence F = {F_i | i = 1, 2, ..., K}; then obtain the RGB information, optical-flow information and skeleton information of the key frames, feed the processed RGB and optical-flow information into a two-stream convolutional network model to obtain their higher-level feature expression, and feed the skeleton information into a spatio-temporal graph convolutional network model to construct spatio-temporal graph expression features of the skeleton; finally, fuse the softmax outputs of the networks to obtain the final identification result. This process avoids the extra time cost and accuracy loss caused by redundant frames, so the information in the video is used more effectively to express behaviors and the recognition accuracy is further improved.
Owner:NORTHWEST UNIV(CN)
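
A minimal sketch of the key-frame selection step, assuming pose vectors have already been extracted (sklearn's KMeans stands in for the patent's clustering):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 36))  # N=200 frames, 18 joints x (x, y) per pose vector

K = 5
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(S)

# For each cluster center c_i, keep the frame closest to it as a key frame.
key_frames = []
for c in km.cluster_centers_:
    key_frames.append(int(np.argmin(np.linalg.norm(S - c, axis=1))))
key_frames.sort()
print("key-frame indices:", key_frames)  # F = {F_i | i = 1..K}
```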

Text classification method based on feature information of characters and terms

The invention discloses a text classification method based on the feature information of characters and terms. The method comprises the following steps: a neural network model is used for joint pre-training of character and term vectors, yielding initial term-vector representations of terms and initial character-vector representations of Chinese characters; a short text is represented as a matrix composed of the term vectors of all its terms, and a convolutional neural network extracts term-level features; the short text is also represented as a matrix composed of the character vectors of all its Chinese characters, and the convolutional neural network extracts character-level features; the term-level and character-level features are concatenated to obtain the feature-vector representation of the short text; and a fully connected layer classifies the short text, with the model trained by stochastic gradient descent to obtain the classification model. By extracting both character-level and term-level expression features, the method alleviates the problem that short texts carry insufficient semantic information, mines their semantics more fully, and classifies short texts more accurately.
Owner:SUN YAT SEN UNIV
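
A minimal PyTorch sketch of the two-branch architecture, assuming pre-trained embeddings are loaded into the embedding tables (vocabulary sizes, kernel widths and dimensions are invented for illustration):

```python
import torch
import torch.nn as nn

class CharTermCNN(nn.Module):
    """Two CNN branches over term vectors and character vectors, concatenated."""
    def __init__(self, n_terms=5000, n_chars=3000, dim=64, n_classes=4):
        super().__init__()
        self.term_emb = nn.Embedding(n_terms, dim)   # initialize from pre-training
        self.char_emb = nn.Embedding(n_chars, dim)
        self.term_conv = nn.Conv1d(dim, 32, kernel_size=3, padding=1)
        self.char_conv = nn.Conv1d(dim, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, n_classes)           # fully connected classifier

    def branch(self, emb, conv, ids):
        h = conv(emb(ids).transpose(1, 2)).relu()    # (B, 32, L)
        return h.max(dim=2).values                   # max-over-time pooling

    def forward(self, term_ids, char_ids):
        feats = torch.cat([self.branch(self.term_emb, self.term_conv, term_ids),
                           self.branch(self.char_emb, self.char_conv, char_ids)], dim=1)
        return self.fc(feats)

model = CharTermCNN()
logits = model(torch.randint(0, 5000, (2, 10)), torch.randint(0, 3000, (2, 20)))
print(logits.shape)  # (2, 4); train with SGD and cross-entropy as the patent describes
```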

Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment

The invention provides a three-dimensional face mesh model processing method and processing equipment. The method comprises the following steps: obtaining an original three-dimensional face mesh model corresponding to an original two-dimensional face image, where the original three-dimensional face mesh model contains second expression feature points corresponding to the first expression feature points of the original two-dimensional face image; calculating a camera parameter matrix of the original three-dimensional face mesh model according to a formula (1); and mapping the second expression feature points onto the original two-dimensional face image according to the camera parameter matrix, judging the degree to which the second expression feature points match the first expression feature points, and adjusting the original three-dimensional mesh model according to the result. Because the matching degree between the original three-dimensional face mesh model and the original two-dimensional face image is judged from the camera parameters, and the model is adjusted when the matching degree is low, the adjusted three-dimensional face mesh model matches the original two-dimensional face image better.
Owner:广东思理智能科技股份有限公司
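
Formula (1) is not reproduced in the abstract, so the sketch below substitutes a standard least-squares fit of an affine camera matrix from 3D/2D landmark pairs and measures the matching degree as mean reprojection error; all names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X3d = rng.normal(size=(68, 3))              # second expression feature points (3D)
P_true = rng.normal(size=(2, 4))            # unknown camera parameter matrix
X2d = np.c_[X3d, np.ones(68)] @ P_true.T    # first expression feature points (2D)

# Least-squares camera fit: solve A @ P ~= X2d with A = [X3d | 1].
A = np.c_[X3d, np.ones(68)]
P, *_ = np.linalg.lstsq(A, X2d, rcond=None)  # (4, 2); the camera matrix is P.T

# Matching degree: mean reprojection error of the mapped feature points.
reproj = A @ P
err = np.linalg.norm(reproj - X2d, axis=1).mean()
print("mean reprojection error:", err)       # adjust the mesh if this is large
```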

Three-dimensional human head and face model reconstruction method based on an arbitrary face image

The invention provides a three-dimensional human head and face model reconstruction method based on an arbitrary face image. The method includes: building a face bilinear model and an optimization algorithm from a three-dimensional face database; progressively separating, via the two-dimensional feature points, the spatial pose of the face, the camera parameters, and the identity and expression features that determine the geometric shape of the face, and correcting the generated three-dimensional face model with Laplacian deformation to obtain a low-resolution three-dimensional face model; and finally computing the face depth and achieving high-precision three-dimensional reconstruction of the target face by registering the high-resolution template model to the point-cloud model, so that the reconstructed face model conforms to the shape of the target face. The method removes face distortion artifacts while keeping the main original details of the face, so the reconstruction is more accurate; in face-detail reconstruction in particular, detail distortion and expression influences are effectively reduced, and the generated face model looks more realistic.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
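
A minimal sketch of evaluating a face bilinear model (a core tensor contracted with identity and expression weights; the dimensions are toy values, and the patent's optimization and Laplacian correction are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
# Core tensor: (3 * n_vertices, n_identities, n_expressions); a toy 100-vertex face.
core = rng.normal(size=(300, 50, 25))

w_id = rng.normal(size=50)    # identity features
w_exp = rng.normal(size=25)   # expression features

# Bilinear model: vertices = core x_2 w_id x_3 w_exp.
vertices = np.einsum('vie,i,e->v', core, w_id, w_exp).reshape(-1, 3)
print(vertices.shape)  # (100, 3); fitting would optimize w_id, w_exp, pose and
                       # camera against the 2D feature points, as the patent describes
```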

Robot system and method for detecting human faces and recognizing emotion

CN103679203A (inactive)
The invention discloses a robot system and method for detecting human faces and recognizing emotion. The system comprises a facial expression library collecting module, an original expression library building module, a feature library rebuilding module, a live expression feature extracting module and an expression recognizing module. The facial expression library collecting module collects a large number of facial-expression color image frames through a video collecting device and processes them into a facial expression library. The original expression library building module removes redundant image information from the training images in the facial expression library and extracts expression features to form an original expression feature library. The feature library rebuilding module rebuilds the original expression feature library into a structured hash table using a distance-hashing method. The live expression feature extracting module collects live facial-expression color image frames through the video collecting device and extracts live expression features. The expression recognizing module recognizes the facial expression by applying the k-nearest-neighbor classification algorithm to the live expression features within the rebuilt feature library.
Owner:江苏久祥汽车电器集团有限公司
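
As an illustration of the rebuilt hash-table lookup, the sketch below uses random-hyperplane hashing (a common distance-preserving hash; the patent's exact distance-hashing method is not specified) to bucket library features, then runs k-NN inside the probed bucket:

```python
import numpy as np
from collections import defaultdict, Counter

rng = np.random.default_rng(3)
library = rng.normal(size=(1000, 128))          # original expression feature library
labels = rng.integers(0, 7, size=1000)          # 7 expression classes

planes = rng.normal(size=(12, 128))             # 12-bit random-hyperplane hash

def hash_key(x):
    return tuple((planes @ x > 0).astype(int))  # nearby features tend to collide

table = defaultdict(list)                       # structured hash table
for i, f in enumerate(library):
    table[hash_key(f)].append(i)

def classify(query, k=5):
    bucket = table.get(hash_key(query), range(len(library)))  # fall back to full scan
    idx = sorted(bucket, key=lambda i: np.linalg.norm(library[i] - query))[:k]
    return Counter(labels[i] for i in idx).most_common(1)[0][0]

print(classify(library[0] + 0.01 * rng.normal(size=128)))
```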

Recording and lighting control system and method based on face recognition and facial expression recognition

The invention discloses a recording and lighting control system and method based on face recognition and facial expression recognition. The system comprises a face image collection unit, a storage, an operation processing unit, a wireless transmission module and a lighting theme controller. The face image collection unit collects face images and converts them into feature data for storage; the storage holds offline-trained facial expression features and the face registration templates collected at enrollment; the operation processing unit performs face recognition and facial expression recognition, and sends corresponding wireless control signals through the wireless transmission module according to the recognition result; the lighting theme controller stores lighting themes suited to different moods, receives the control signals sent by the wireless transmission module, and generates lighting-theme control signals matching the recognized mood, which are transmitted to the lighting system. Mood recording and home control based on face and expression recognition are thus achieved, giving the user a good experience and enriching home entertainment.
Owner:SHANDONG UNIV
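
A toy sketch of the controller-side mapping (the mood labels, theme fields and send function are hypothetical; the patent does not specify a wire format):

```python
# Hypothetical mood -> lighting-theme table held by the lighting theme controller.
THEMES = {
    "happy":   {"hue": 45,  "brightness": 0.9},   # warm, bright
    "sad":     {"hue": 220, "brightness": 0.4},   # cool, dim
    "neutral": {"hue": 60,  "brightness": 0.7},
}

def send_wireless(payload: dict) -> None:
    """Stand-in for the wireless transmission module."""
    print("sending control signal:", payload)

def on_recognition(person_id: str, mood: str) -> None:
    """Map the recognized user and mood to a lighting-theme control signal."""
    theme = THEMES.get(mood, THEMES["neutral"])
    send_wireless({"user": person_id, "mood": mood, **theme})

on_recognition("user-001", "happy")
```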

Tristimania diagnosis system and method based on attention and emotion information fusion

The invention provides a tristimania (depression) diagnosis system and method based on the fusion of attention and emotion information. The system comprises an emotion stimulating module, an image collecting module, a data transmission module, a data pre-processing module, a data processing module, a feature extraction module and an identification feedback module. The emotion stimulating module sets up a plurality of emotion-stimulating tasks and presents them to the subject; the image collecting module collects eye images and face images of the subject while the tasks are performed; the data transmission module obtains and forwards the eye images and face images; the data pre-processing module pre-processes them; the data processing module computes the subject's gaze position and pupil diameter; the feature extraction module extracts attention-type and emotion-type features; and the identification feedback module performs tristimania diagnosis and identification on the subject. Using gaze-center-distance features, attention-bias scores, emotion-zone width and facial expression features, the system can identify tristimania comprehensively, systematically and quantitatively.
Owner:BEIJING UNIV OF TECH +1
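
A hedged sketch of two of the attention-type features (the definitions are assumptions, since the abstract gives no formulas): the mean distance of gaze points from a stimulus center, and an attention-bias score comparing time spent on negative versus positive stimulus regions:

```python
import numpy as np

rng = np.random.default_rng(4)
gaze = rng.normal(loc=[512, 384], scale=80, size=(300, 2))  # gaze points (px)
stimulus_center = np.array([512, 384])

# Attention-point center-distance feature: mean gaze distance from the stimulus.
center_distance = np.linalg.norm(gaze - stimulus_center, axis=1).mean()

# Attention-bias score: fraction of samples inside the negative-stimulus region
# minus the fraction inside the positive one (the boxes are illustrative).
def in_box(pts, x0, y0, x1, y1):
    return ((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
            (pts[:, 1] >= y0) & (pts[:, 1] <= y1)).mean()

bias_score = in_box(gaze, 0, 0, 512, 768) - in_box(gaze, 512, 0, 1024, 768)
print(center_distance, bias_score)
```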

Expression recognition method and system based on local and global attention mechanism

The invention discloses an expression recognition method and system based on local and global attention mechanisms. The method comprises the following steps: first, construct a neural network model based on local and global attention mechanisms, composed of a shallow feature extraction module, a spatial-domain local and global attention module, a residual network module, a multi-scale feature extraction module, a channel-domain local and global attention module, a fully connected layer and a classification layer; then train the neural network model on sample images from a facial expression image library; finally, feed the face image under test into the trained neural network model for expression recognition. The multi-scale feature extraction module extracts texture features at different scales from the face image, avoiding the loss of discriminative expression features; the local and global attention modules in the spatial and channel domains enhance the features that are most discriminative for expression recognition, which effectively improves the accuracy and robustness of expression recognition.
Owner:NANJING UNIV OF POSTS & TELECOMM
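
The abstract does not give the attention formulas; as a rough sketch, here are minimal spatial-domain and channel-domain attention blocks of the kind such models commonly chain (the 7x7 mask convolution and the squeeze-and-excitation style are assumptions, not the patent's design):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Toy spatial-domain attention: a 1-channel mask reweights feature maps."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=7, padding=3),
                                  nn.Sigmoid())

    def forward(self, x):
        return x * self.mask(x)          # emphasize discriminative regions

class ChannelAttention(nn.Module):
    """Toy channel-domain attention (squeeze-and-excitation style)."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // r), nn.ReLU(),
                                nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))  # global average pool -> channel weights
        return x * w[:, :, None, None]

x = torch.randn(2, 32, 56, 56)
y = ChannelAttention(32)(SpatialAttention(32)(x))
print(y.shape)
```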

Expandable three-dimensional display remote video communication method

The invention discloses an expandable three-dimensional display remote video communication method. At the sending end, an image of the first user is captured with an RGB-D camera; it comprises a texture image and a depth image. Facial feature information and expression feature information are extracted from the texture image, body feature information is extracted from the depth image, and a point cloud of the first user is reconstructed. All the feature information is optimized against the point cloud; the optimized feature information is used to generate a three-dimensional model A, which is projected onto the corresponding texture image to extract the corresponding texture information. The captured voice information, the optimized feature information and the texture data are then sent to the second user. At the receiving end, the second user extracts the optimized feature information from the received data to generate a three-dimensional model B, renders it with the texture data, outputs the rendering through a three-dimensional display device, and plays the voice information.
Owner:ZHEJIANG UNIV
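
A toy sketch of the sender-side payload (the field names and framing are hypothetical; the point is that only compact feature information, texture and voice cross the network, and the receiver rebuilds the model):

```python
import json, zlib

def pack_frame(features: dict, texture: bytes, audio: bytes) -> bytes:
    """Sender: bundle optimized feature info + texture + voice into one payload."""
    head = json.dumps(features).encode()
    body = b"".join(len(p).to_bytes(4, "big") + p for p in (head, texture, audio))
    return zlib.compress(body)

def unpack_frame(payload: bytes):
    """Receiver: recover the features (to regenerate model B), texture and voice."""
    raw, parts, i = zlib.decompress(payload), [], 0
    while i < len(raw):
        n = int.from_bytes(raw[i:i + 4], "big")
        parts.append(raw[i + 4:i + 4 + n]); i += 4 + n
    head, texture, audio = parts
    return json.loads(head), texture, audio

feats = {"face": [0.3], "expression": [0.1, 0.2], "body": [0.4]}
print(unpack_frame(pack_frame(feats, b"texture-bytes", b"pcm-audio"))[0])
```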

A facial expression conversion method based on identity and expression feature conversion

The invention provides a facial expression conversion method based on identity and expression feature conversion, mainly addressing personalized facial expression synthesis. Most existing facial expression synthesis work tries to learn conversions between expression domains, which requires paired samples and labeled query images. By building two encoders, the identity information and the expression feature information of the original image can be preserved, with the target facial expression features used as a condition tag. The method mainly comprises the following steps: first, facial expression training: pre-process a neutral-expression picture and pictures of other facial expressions, then extract the identity feature parameters of the neutral expression and the target facial expression feature parameters, and build a matching model; second, facial expression conversion: feed the neutral-expression picture into the conversion model and apply the model's output parameters to expression synthesis to produce the target expression image. Paired datasets of different expressions with the same identity are no longer required, the two encoders effectively preserve the identity information of the original image, and conversion from a neutral expression to different expressions is achieved.
Owner:CENT SOUTH UNIV
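
A minimal two-encoder sketch under broad assumptions (the layer sizes, conditioning scheme and losses are invented for illustration; the patent only states that one encoder keeps identity while the target expression features act as a condition):

```python
import torch
import torch.nn as nn

class Enc(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),
                                 nn.Flatten(), nn.LazyLinear(out_dim))
    def forward(self, x):
        return self.net(x)

class Dec(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 32 * 16 * 16)
        self.net = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 32, 16, 16))

id_enc, exp_enc, dec = Enc(64), Enc(16), Dec(64 + 16)
neutral = torch.rand(2, 3, 64, 64)   # neutral-expression input (identity source)
target = torch.rand(2, 3, 64, 64)    # image carrying the target expression
out = dec(torch.cat([id_enc(neutral), exp_enc(target)], dim=1))
print(out.shape)  # (2, 3, 64, 64): identity of `neutral`, expression of `target`
```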

An occluded expression recognition algorithm combining dual dictionaries and an error matrix

The invention discloses an occluded expression recognition algorithm combining dual dictionaries and an error matrix. The steps are as follows: first, separate the expression features and identity features in each class of expression images by low-rank decomposition, and perform dictionary learning on the low-rank matrix and the sparse matrix respectively to obtain an intra-class correlation dictionary and a difference-structure dictionary. Second, when classifying occluded images, original sparse coding ignores coding errors and therefore cannot accurately describe the errors caused by occlusion; the algorithm instead represents the occlusion-induced error with a separate matrix, which can be separated from the feature matrix of the unoccluded training images. Subtracting the error matrix from a test sample recovers a clean image. The clean image sample is then decomposed into identity features and expression features in a low-rank manner using dual-dictionary collaborative representation, and classification is finally performed according to the contribution of each class's expression features in the joint sparse representation. The method is robust to randomly occluded expression recognition.
Owner:GUANGDONG UNIV OF TECH
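
A loose sketch of the final classification step in the style of sparse-representation classification, using sklearn's Lasso as the sparse coder and class-wise reconstruction residuals (the patent's dual dictionaries, low-rank decomposition and error-matrix estimation are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_classes, atoms_per_class, dim = 3, 20, 100
# Toy per-class expression dictionaries, stacked column-wise.
D = rng.normal(size=(dim, n_classes * atoms_per_class))
D /= np.linalg.norm(D, axis=0)

# A test sample built from class 1's atoms (so class 1 should win).
x = D[:, atoms_per_class:2 * atoms_per_class] @ rng.random(atoms_per_class)

code = Lasso(alpha=0.01, max_iter=5000).fit(D, x).coef_  # joint sparse code

# Classify by which class's atoms reconstruct x with the smallest residual,
# i.e. by each class's contribution to the joint sparse representation.
residuals = []
for c in range(n_classes):
    part = np.zeros_like(code)
    sl = slice(c * atoms_per_class, (c + 1) * atoms_per_class)
    part[sl] = code[sl]
    residuals.append(np.linalg.norm(x - D @ part))
print("predicted class:", int(np.argmin(residuals)))
```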