76 results about "Lip feature" patented technology

Face image processing method and device, electronic equipment and computer storage medium

Embodiments of the invention provide a face image processing method, along with a device, electronic equipment, and a computer storage medium. The method comprises the following steps: obtaining a lip makeup adjustment instruction for a to-be-processed face image, where the instruction includes lip makeup parameter modification information; detecting lip feature points in the to-be-processed face image; performing interpolation on the lip feature points to obtain interpolation feature points; determining a lip region based on the lip feature points and the interpolation feature points; and processing that lip region according to the lip makeup parameter modification information to obtain an adjusted face image. Under this scheme, once a lip makeup adjustment instruction is received, the face image is processed according to it, so the user's lip makeup in an image can be adjusted with a single action instead of being edited manually, which shortens processing time and improves the user experience.
Owner: SHENZHEN LIANMENG TECH CO LTD
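
A minimal sketch of the interpolation-and-mask step described in the abstract above: sparse lip landmarks are densified with a periodic spline, and the closed contour is rasterized into a lip-region mask. The landmark coordinates, image size, and spline settings are illustrative assumptions, not values from the patent.

```python
# Densify sparse lip landmarks by interpolation, then rasterize the closed
# contour into a boolean lip-region mask.
import numpy as np
from scipy.interpolate import splprep, splev
from matplotlib.path import Path

def lip_mask(landmarks: np.ndarray, shape: tuple, n_dense: int = 200) -> np.ndarray:
    """Interpolate lip landmarks into a dense closed contour and fill it."""
    # Fit a periodic cubic spline through the (x, y) landmark points.
    tck, _ = splprep([landmarks[:, 0], landmarks[:, 1]], s=0, per=True)
    xs, ys = splev(np.linspace(0, 1, n_dense), tck)
    contour = np.stack([xs, ys], axis=1)
    # Fill the contour: mark every pixel whose center lies inside it.
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1)
    return Path(contour).contains_points(pts).reshape(h, w)

# Toy example: eight landmarks roughly outlining a mouth in a 120x160 image.
marks = np.array([[40, 60], [60, 50], [80, 48], [100, 50], [120, 60],
                  [100, 72], [80, 76], [60, 72]], dtype=float)
mask = lip_mask(marks, (120, 160))
print(mask.sum(), "pixels inside the interpolated lip region")
```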

Personalized voice and video generation system based on phoneme posterior probability

The invention discloses a personalized voice and video generation system based on phoneme posterior probabilities. The system mainly comprises the following steps: S1, extracting phoneme posterior probabilities with an automatic speech recognition system; S2, training a recurrent neural network to learn the mapping between phoneme posterior probabilities and lip features, so that audio from any target speaker can be fed through the network to output the corresponding lip features; S3, synthesizing the lip features into corresponding face images through face alignment, image fusion, optical flow, and related techniques; and S4, generating the final talking video of the speaker from the generated face sequence through dynamic programming and related techniques. The invention relates to the technical fields of speech synthesis and voice conversion. Because the lip shape is generated from phoneme posterior probabilities, the amount of video data required from the target speaker is greatly reduced; meanwhile, a video of the target speaker can be generated directly from text content, without additionally recording the speaker's audio.
Owner: 深圳市声希科技有限公司
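
A minimal sketch of step S2 above, assuming a small GRU: a recurrent network that maps per-frame phoneme posterior probabilities (PPGs) to lip feature vectors. The dimensions, the GRU architecture, and the mean-squared-error loss are assumptions for illustration; the patent text does not specify them at this level of detail.

```python
# Map per-frame phoneme posterior probabilities (PPGs) to lip features
# with a recurrent network; dummy tensors stand in for real data.
import torch
import torch.nn as nn

class PPG2Lip(nn.Module):
    def __init__(self, n_phonemes: int = 40, lip_dim: int = 20, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_phonemes, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, lip_dim)   # per-frame lip feature vector

    def forward(self, ppg: torch.Tensor) -> torch.Tensor:
        # ppg: (batch, frames, n_phonemes), each frame a probability vector
        out, _ = self.rnn(ppg)
        return self.head(out)                    # (batch, frames, lip_dim)

model = PPG2Lip()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ppg = torch.softmax(torch.randn(4, 100, 40), dim=-1)   # dummy PPG sequences
lips = torch.randn(4, 100, 20)                         # dummy lip feature targets
loss = nn.functional.mse_loss(model(ppg), lips)        # assumed L2 regression loss
loss.backward(); opt.step()
print(f"training loss: {loss.item():.4f}")
```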

Lip characteristic and deep learning based smiling face recognition method

Inactive · CN105956570A · Improve recognition accuracy · Suppresses the effects of non-Gaussian noise · Acquiring/recognising facial features · Feature vector · Positive sample
The invention discloses a lip feature and deep learning based smiling face recognition method. The method first crops positive sample images containing a smiling face and negative sample images without one to obtain lip image training samples, performs feature extraction on every lip image training sample to obtain a feature vector for each sample, and trains a deep neural network with the feature vectors of the training samples. For an image to be recognized, the lip feature vector of the face in the image is acquired by the same method and input into the trained deep neural network for recognition, yielding a result indicating whether or not the face is smiling. By combining lip features with the feature-learning capacity of a deep neural network, the method improves smiling face recognition accuracy under complicated conditions; in addition, the influence of non-Gaussian noise is suppressed by improving the overall cost function used when training the deep neural network, further raising recognition accuracy.
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA
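
A hedged sketch of the training and inference flow above: a small feed-forward network classifies a precomputed lip feature vector as smiling or not. The 64-dimensional feature size, the network width, and the standard cross-entropy loss are assumptions; the patent's improved overall cost function for suppressing non-Gaussian noise is not specified here, so an ordinary loss stands in.

```python
# Binary smile classification from precomputed lip feature vectors.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),                 # logits: [no-smile, smile]
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

feats = torch.randn(256, 64)                      # dummy lip feature vectors
labels = torch.randint(0, 2, (256,))              # dummy smile labels
for _ in range(5):                                # a few toy epochs
    opt.zero_grad()
    # Standard cross-entropy stands in for the patent's modified cost function.
    loss = nn.functional.cross_entropy(classifier(feats), labels)
    loss.backward(); opt.step()

pred = classifier(torch.randn(1, 64)).argmax(dim=1)
print("smiling" if pred.item() == 1 else "not smiling")
```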

Mouth-movement-identification-based video marshalling method

Disclosed in the invention is a mouth-movement-identification-based video marshalling method. Based on the differing distributions of the hue (H), saturation (S), and value (V) components between the lip-color and skin-color areas of a color image, three color feature vectors are selected; the binary image produced by a Fisher classifier through classification and threshold segmentation is then filtered and region-connected; the extracted lip feature is matched against the animation-picture lip features in a material library; and a transition image between two frames is obtained by image interpolation synthesis, thereby realizing automatic video marshalling. Constructing the Fisher classifier from judiciously selected color information in the HSV color space provides more information for lip-color and skin-color segmentation and enhances the reliability and adaptivity of mouth matching-feature extraction in complex environments. Moreover, generating transition images between matched video frames by image interpolation improves the smoothness and viewing quality of the marshalled video, yielding fluid, complete video content.
Owner: COMMUNICATION UNIVERSITY OF CHINA
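
A hedged sketch of the segmentation step above: each pixel's HSV vector is classified as lip or skin with a Fisher linear discriminant (scikit-learn's LDA), producing a binary mask. The training pixels here are synthetic stand-ins; a real system would sample labeled lip and skin regions from video frames.

```python
# Fisher-discriminant lip/skin segmentation on HSV pixel features.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic training data: reddish "lip" pixels vs. tan "skin" pixels (RGB in [0,1]).
lip_rgb = np.clip(rng.normal([0.70, 0.30, 0.35], 0.05, (500, 3)), 0, 1)
skin_rgb = np.clip(rng.normal([0.85, 0.65, 0.55], 0.05, (500, 3)), 0, 1)
X = rgb_to_hsv(np.vstack([lip_rgb, skin_rgb]))        # HSV feature vectors
y = np.r_[np.ones(500), np.zeros(500)]                # 1 = lip, 0 = skin

fisher = LinearDiscriminantAnalysis().fit(X, y)

# Segment a dummy frame: flatten to pixels, classify, reshape to a binary mask.
frame = rng.random((120, 160, 3))
hsv = rgb_to_hsv(frame).reshape(-1, 3)
mask = fisher.predict(hsv).reshape(120, 160).astype(bool)
print(f"{mask.mean():.1%} of pixels classified as lip")
```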

Lip language recognition method combining graph neural network and multi-feature fusion

The invention discloses a lip language (lip-reading) recognition method that combines a graph neural network with multi-feature fusion. The method comprises the following steps: extracting and constructing a face change sequence; marking face feature points; correcting the lip deflection angle; preprocessing with a trained lip semantic segmentation network; training a lip language recognition network on a graph structure of single-frame feature-point relationships and a graph structure of adjacent-frame feature-point relationships; and finally generating the lip language recognition result with the trained network. CNN lip features extracted from the recognition-network dataset and the lip-semantic-segmentation dataset are fused with GNN features extracted from the lip-region feature points, and the fused result is input into a BiGRU for recognition; this addresses the difficulty of extracting temporal features and the susceptibility of lip feature extraction to external factors. The method effectively extracts both the static features of the lip and the dynamic features of lip movement, and is characterized by strong lip-change feature extraction capability and high recognition accuracy.
Owner: HEBEI UNIV OF TECH
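
A hedged sketch of the fusion architecture described above: per frame, a CNN lip-feature vector is fused with a GNN embedding of the lip landmark points, and the fused sequence is fed to a bidirectional GRU for classification. The dimensions, the single matrix-multiply graph layer, and the class count are illustrative assumptions; the patent does not pin down the layers at this granularity.

```python
# Fuse per-frame CNN lip features with a GNN embedding of lip landmarks,
# then classify the sequence with a bidirectional GRU.
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, cnn_dim=256, n_points=20, gnn_dim=32, n_classes=50):
        super().__init__()
        self.point_proj = nn.Linear(2, gnn_dim)     # landmark (x, y) -> embedding
        self.gnn_w = nn.Linear(gnn_dim, gnn_dim)    # one graph-conv-style layer
        self.bigru = nn.GRU(cnn_dim + gnn_dim, 128, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(256, n_classes)

    def forward(self, cnn_feats, points, adj):
        # cnn_feats: (B, T, cnn_dim); points: (B, T, N, 2); adj: (N, N) normalized
        h = torch.relu(self.point_proj(points))     # embed each landmark
        h = torch.relu(adj @ self.gnn_w(h))         # aggregate over the lip graph
        gnn_feat = h.mean(dim=2)                    # pool landmarks: (B, T, gnn_dim)
        fused = torch.cat([cnn_feats, gnn_feat], dim=-1)
        out, _ = self.bigru(fused)                  # temporal modeling
        return self.head(out[:, -1])                # word-class logits per clip

model = LipReader()
adj = torch.eye(20)                                # stand-in adjacency (self-loops)
logits = model(torch.randn(2, 30, 256), torch.randn(2, 30, 20, 2), adj)
print(logits.shape)                                # torch.Size([2, 50])
```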

Intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction

Active · CN103425987A · Overcome the effects of recognition errors · Improve recognition rate · Character and pattern recognition · Feature vector · Wheelchair
The invention discloses an intelligent wheelchair man-machine interaction method based on double-mixture lip feature extraction, and relates to the field of feature extraction and recognition control in lip recognition technology. The method first applies DT-CWT (dual-tree complex wavelet transform) filtering to the lip, and then applies a DCT to the lip feature vector extracted by the DT-CWT, so that the extracted lip features are concentrated in the large coefficients obtained after the DCT; the resulting feature vector contains the largest possible amount of lip information while dimensionality reduction is achieved at the same time. Because the DT-CWT is approximately shift-invariant, the feature values of the same lip at different positions in the ROI differ only slightly after DT-CWT filtering, eliminating the recognition errors caused by positional offsets of the lip within the ROI. The method greatly improves the lip recognition rate and the robustness of the lip recognition system.
Owner: CHONGQING UNIV OF POSTS & TELECOMM
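
A hedged sketch of the feature pipeline above: DT-CWT filtering of a lip region of interest, followed by a DCT that compacts the wavelet energy into a few large coefficients forming a low-dimensional feature vector. The sketch assumes the third-party dtcwt package as one available DT-CWT implementation; the level count and number of retained coefficients are illustrative.

```python
# DT-CWT filtering of a lip ROI, then DCT energy compaction into a
# low-dimensional feature vector.
import numpy as np
from scipy.fft import dct
import dtcwt  # third-party DT-CWT implementation (assumed available)

def lip_feature(roi: np.ndarray, nlevels: int = 3, n_coeffs: int = 64) -> np.ndarray:
    """DT-CWT magnitudes -> 1-D DCT -> keep the leading (largest-energy) coefficients."""
    pyramid = dtcwt.Transform2d().forward(roi, nlevels=nlevels)
    # Concatenate the magnitudes of all complex highpass subbands; the
    # magnitudes inherit the transform's approximate shift invariance.
    mags = np.concatenate([np.abs(hp).ravel() for hp in pyramid.highpasses])
    coeffs = dct(mags, norm='ortho')        # compact energy into low indices
    return coeffs[:n_coeffs]                # dimensionality reduction

roi = np.random.rand(64, 64)                # stand-in lip region of interest
vec = lip_feature(roi)
print(vec.shape)                            # (64,)
```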