64 results about "Expression - action" patented technology

Intelligent home steward central control system with somatosensory function and control method thereof

The invention relates to an intelligent home steward central control system with a somatosensory function, and to a control method for the system. The system comprises a wireless communication module, a face recognition system for acquiring face information, and a somatosensory recognition system for acquiring human body motion information. The somatosensory recognition system, a power management system, an operation instruction output module and a human-machine interface are each electrically connected to a master control module; the master control module also connects over the network to an external cloud terminal through the wireless communication module, and the operation instruction output module wirelessly controls smart home devices through the same module. Instead of requiring the user to launch smart-home software manually, the system performs the corresponding control after recognizing a limb or facial action of a designated person, omitting the tedious operations of the prior art, realizing a "human-like" intelligent home steward, and giving the user an efficient, convenient and comfortable operating experience.
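The dispatch logic described above — act on a recognized limb or facial action only when face recognition confirms a designated person — can be sketched as follows. The gesture labels, device commands and the `dispatch` helper are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the central-control dispatch: a recognized action
# triggers a device command only for an authorized, face-recognized user.
# All gesture and device names here are illustrative.
GESTURE_COMMANDS = {
    "wave": ("light", "toggle"),
    "thumbs_up": ("curtain", "open"),
    "palm_down": ("tv", "off"),
}

def dispatch(face_id, gesture, authorized):
    """Return the (device, command) pair to send, or None to do nothing."""
    if face_id not in authorized:
        return None                       # unknown person: ignore the gesture
    return GESTURE_COMMANDS.get(gesture)  # unmapped gestures are ignored too

print(dispatch("alice", "wave", {"alice"}))  # ('light', 'toggle')
print(dispatch("bob", "wave", {"alice"}))    # None
```

The same table-lookup shape extends naturally to facial expressions by adding expression labels as keys.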
Owner:ELITE ARCHITECTURAL CO LTD

Interactive editing and extended expression method for network video link information

The present invention relates to a method for interactively editing and expressing network video linkage information. The method comprises: collecting network videos and establishing a system network video library; building a system network video player, an editing platform and a playback system; interactively linking videos in the library with associated information through the editing platform, and storing the edited associated information in the library's associated-information database; and, during playback, detecting the content of the playing video in real time against the associated information obtained by a client-side script, so that when content corresponding to an associated region is detected, the player notifies the client-side script to present the associated information accordingly. The invention solves the technical problem in the background art that a network video cannot be correlated with existing or user-defined information on the network. The correlation mechanism supports all information-expression actions and presentation styles that can appear on web pages.
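The playback-time detection step — presenting associated information when the played content reaches a correlated region — amounts to an interval lookup against the annotation database. The annotation schema below is an assumption for illustration only.

```python
def find_linked_info(playhead, annotations):
    """Return the info payloads of all annotations whose [start, end)
    time interval (seconds) covers the current playhead position."""
    return [a["info"] for a in annotations
            if a["start"] <= playhead < a["end"]]

# Illustrative annotation records as an editing platform might store them.
annotations = [
    {"start": 0.0, "end": 5.0, "info": "product link"},
    {"start": 4.0, "end": 9.0, "info": "related clip"},
]
print(find_linked_info(4.5, annotations))  # ['product link', 'related clip']
print(find_linked_info(9.0, annotations))  # []
```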
Owner:张伟华 +2

Three-dimensional human face animation editing and synthesis based on motion propagation and Isomap analysis

The invention discloses a method for editing and synthesizing 3D face animation based on motion propagation and Isomap analysis. First, the user chooses control points on the 3D face model and specifies constraints on the expression action in a two-dimensional plane; the system trains a prior probability model on face-animation data and propagates the sparse user constraints to the rest of the face mesh, generating a complete, lively facial expression. Then 3D face-animation knowledge is modeled with the Isomap learning algorithm: the key frames specified by the user are combined, a smooth geodesic on the high-dimensional surface is fitted, and a new face-animation sequence is generated automatically. With this method, facial-expression editing is carried out entirely in the two-dimensional projection space, making it easier; owing to the constraint of the prior probability model, the resulting 3D facial expressions satisfy the principles of human anatomy; and new animation sequences are produced by combining user-specified key frames, simplifying the production of facial-expression animation.
Owner:ZHEJIANG UNIV

Multi-feature fusion driver abnormal expression recognition method

The invention discloses a multi-feature-fusion method for recognizing abnormal driver expressions. The method comprises the steps: S1, tracking and monitoring the driver's expression actions in real time through a camera installed on the driver's side; S2, precisely identifying expression details in the real-time driver video; S3, detecting the position of the eyes and judging whether they show fatigue; S4, locating the edge contour of the mouth and judging whether a yawn occurs; S5, detecting head motion and judging whether it indicates fatigue; S6, weighting the detection results for the eye state, mouth state and head-motion state, making a final fatigue judgment, and outputting the result; and S7, using the recognition result of the current frame as the estimated position for recognition in subsequent frames, detecting actions in those frames to achieve continuous detection and recognition of abnormal driver behavior. The invention enables real-time monitoring and alarm triggering to warn and remind the driver, preventing traffic accidents and ensuring safety while driving.
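The weighted-fusion step S6 can be sketched as a convex combination of per-cue fatigue scores. The weights and threshold below are illustrative assumptions; the patent does not publish its values.

```python
def fuse_fatigue(eye_score, mouth_score, head_score,
                 weights=(0.5, 0.3, 0.2), threshold=0.6):
    """Weighted fusion of per-cue fatigue scores in [0, 1] (step S6).
    The weights and decision threshold are illustrative, not from the patent."""
    combined = (weights[0] * eye_score +
                weights[1] * mouth_score +
                weights[2] * head_score)
    return combined, combined >= threshold  # (score, fatigued?)

score, fatigued = fuse_fatigue(0.9, 0.7, 0.4)
print(score, fatigued)  # 0.5*0.9 + 0.3*0.7 + 0.2*0.4 = 0.74 -> True
```

Per-frame scores from steps S3–S5 would feed this function each frame, with step S7 smoothing across frames.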
Owner:WUHAN INSTITUTE OF TECHNOLOGY

High-simulation robot head structure and motion control method thereof

The invention discloses a high-simulation robot head structure and a motion control method thereof. The head structure comprises a shell assembly, a face motion assembly and an internal control assembly. The face motion assembly comprises an eye assembly, a mouth assembly and a neck assembly; the eyeball assembly is connected to the internal control assembly through a first motor, the eyelid assembly through a second and a third motor, and the mouth assembly through a fourth motor. An independently developed expression control and management module for realizing expression actions can be added to the internal control assembly; it works together with the face assembly beneath the external silica-gel skin to produce changes of expression. The structure and control method address the problems of prior-art service robots, whose facial expressions are stiff because a traditional actuator provides the motion, which lack the ability to interact with real humans, and which cannot offer consumers in the service industry a novel and interesting experience.
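An expression control module of this kind reduces, at its simplest, to a mapping from a named expression to target positions for the four motors. The channel order and angle values below are assumptions for illustration, not taken from the patent.

```python
# Illustrative expression-to-servo mapping for the four motors described
# above; channel order and angles in degrees are assumed, not from the patent.
EXPRESSION_POSES = {
    # (eyeball, eyelid_left, eyelid_right, mouth)
    "neutral": (0, 0, 0, 0),
    "smile": (0, 10, 10, 25),
    "blink": (0, 80, 80, 0),
}

def pose_for(expression):
    """Return servo targets for an expression, defaulting to neutral."""
    return EXPRESSION_POSES.get(expression, EXPRESSION_POSES["neutral"])

print(pose_for("smile"))    # (0, 10, 10, 25)
print(pose_for("unknown"))  # (0, 0, 0, 0)
```

A real controller would interpolate between poses over time rather than jump, which is where the stiffness the patent criticizes is avoided.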
Owner:深圳市小村机器人智能科技有限公司

Face micro-expression recognition method based on video magnification and deep learning

Status: Inactive · Publication: CN109034143A · Effects: improves accuracy; increases the range of facial expressions · Classifications: acquiring/recognising eyes; neural architectures · Concepts: data set; micro-expression
The invention provides a method for recognizing facial micro-expressions based on video magnification and deep learning. The method comprises: amplifying the motion amplitude of the micro-expression video data with an interference-cancelling video magnification technique; splitting the enlarged video into frame images and extracting all image sequences belonging to micro-expressions according to the micro-expression tags in the data set, forming a new data set; performing face-cropping preprocessing on the processed video, uniformly cropping all video image sequences into 110*110 gray-scale images; and feeding the preprocessed data into a convolutional neural network model for training, extracting micro-expression features to accomplish the recognition task. The proposed technique enlarges the amplitude of the expression action by applying interference-cancelling video magnification to the complete data set and simultaneously trains a neural network model, effectively improving micro-expression recognition accuracy on the basis of full classification of the emotion labels.
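The face-cropping preprocessing step — converting frames to gray-scale and normalizing them to 110*110 — can be sketched with NumPy. A real pipeline would crop around detected facial landmarks; the center crop here is a simplifying assumption.

```python
import numpy as np

def to_gray(frame):
    """Luma conversion of an H x W x 3 uint8 frame to a gray-scale image."""
    return (frame @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def center_crop(gray, size=110):
    """Center-crop to size x size, matching the 110*110 inputs in the patent.
    A real pipeline would crop around detected face landmarks instead."""
    h, w = gray.shape
    top, left = (h - size) // 2, (w - size) // 2
    return gray[top:top + size, left:left + size]

frame = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for a video frame
patch = center_crop(to_gray(frame))
print(patch.shape)  # (110, 110)
```

Each preprocessed sequence of such patches would then be fed to the convolutional network for training.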
Owner:YUNNAN UNIV

Multi-channel information emotional expression mapping method for facial expression robot

The invention discloses a multi-channel-information emotional-expression mapping method for a facial expression robot. The method comprises the steps: S1, pre-establishing an expression library, a voice library, a gesture library, an expression output library, a voice output library and a gesture output library; S2, acquiring the interlocutor's voice and identifying a sound expression by comparison with the voice library, acquiring the interlocutor's facial expression and identifying an emotional expression by comparison with the expression library, acquiring the interlocutor's gesture and identifying a gesture expression by comparison with the gesture library, and fusing the sound, emotional and gesture expressions into a combined expression instruction; and S3, according to the combined instruction, selecting voice stream data from the voice output library for output, and selecting an expression action instruction from the expression output library to perform the facial expression. The method realizes multi-channel-information emotional expression for a facial expression robot, and is simple, convenient to use and low in cost.
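The fusion in step S2 can be sketched as a vote across the three channels. The patent does not specify its fusion rule, so the majority-vote policy below (with ties falling back to the face channel) is an assumption for illustration.

```python
from collections import Counter

def fuse_channels(sound_emotion, face_emotion, gesture_emotion):
    """Majority vote across the three recognition channels (step S2).
    Ties fall back to the face channel -- an assumed policy, since the
    patent does not specify its fusion rule."""
    votes = Counter([sound_emotion, face_emotion, gesture_emotion])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else face_emotion

print(fuse_channels("happy", "happy", "neutral"))  # happy (majority)
print(fuse_channels("sad", "happy", "neutral"))    # happy (tie -> face)
```

The fused label would then index the output libraries in step S3 to pick the voice stream and expression action to perform.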
Owner:ANHUI SEMXUM INFORMATION TECH CO LTD

Method, system, device and medium for extracting virtual character expressions and actions

The invention discloses a method, device, apparatus and storage medium for extracting virtual character expression actions. The method comprises the following steps: acquiring a character action video; extracting character action information from the video and constructing a character action library; obtaining a corresponding lip image from a voice signal and embedding the lip image into the matching face image in the character action library; generating an image containing the character's expressions and actions; and extracting the character expression actions from that image. By constructing a character action library and simply modifying two-dimensional point coordinates or the shape of a two-dimensional mask, different second-label information can be generated, enriching the content of the library. The method simplifies the extraction of character expression actions while allowing different expression actions to be extracted at any time, provides rich character action libraries to which new actions can conveniently be added, and improves working efficiency. It is widely applicable in the technical field of image processing.
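The idea of enriching the action library by editing two-dimensional point coordinates can be sketched as below. The landmark values and the `derive_action` helper are illustrative assumptions, not the patent's representation.

```python
# Hypothetical sketch of a character-action library keyed by action name,
# each entry holding 2D landmark coordinates. Values are illustrative only.
action_library = {
    "smile": [(30, 60), (50, 70), (70, 60)],  # assumed mouth landmarks (x, y)
}

def derive_action(base_name, new_name, dy):
    """Add a new library entry by shifting the y-coordinates of an
    existing action's landmarks -- the 'simply modify 2D point
    coordinates' idea from the abstract."""
    base = action_library[base_name]
    action_library[new_name] = [(x, y + dy) for x, y in base]

derive_action("smile", "frown", dy=-15)
print(action_library["frown"])  # [(30, 45), (50, 55), (70, 45)]
```

Editing a 2D mask shape instead of point coordinates would follow the same pattern with a different entry payload.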
Owner:RES INST OF TSINGHUA PEARL RIVER DELTA +1