
1636 results about How to "Improve interactive experience" patented technology

Mobile terminal and design method of operation and control technology thereof

The invention discloses a mobile terminal and a design method for its operation and control system. The mobile terminal comprises a body and an embedded central processing unit: a touch panel is mounted on the back surface of the body, three-state buttons are mounted at the lower middle of each side of the body, and the embedded central processing unit runs an operation and control system comprising finger-action semantic recognition, a virtual keyboard, and an input method. The design method comprises designing the shape of the body, designing the finger-action semantic recognition with algorithmic support from the operation and control system, and designing the virtual keyboard and input method within that system. The invention provides a back-touch mobile terminal built on the concept of natural operation and control. It frees the front touch screen to the greatest possible extent, makes operation, and in particular one-handed text input, easy, and improves intelligent control by fully combining the system's software algorithms with its software and hardware, so that operation matches the user's habitual thinking and natural reactions and remains comfortable over long periods of use.
Owner:DALIAN POLYTECHNIC UNIVERSITY
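
The abstract above does not disclose the recognition algorithm itself. As an illustration only, here is a minimal sketch of how finger-action semantics might be derived from raw back-panel touch strokes; all names and thresholds are hypothetical, not from the patent:

```python
# Hypothetical sketch: classify a back-panel touch stroke into a gesture
# "semantic" (tap, or swipe in one of four directions). Not patent code.
import math
from dataclasses import dataclass

@dataclass
class TouchStroke:
    x0: float; y0: float       # touch-down position (panel coordinates)
    x1: float; y1: float       # lift-off position
    duration_ms: float

TAP_MAX_DIST = 10.0            # assumed thresholds, tuned per device
TAP_MAX_MS = 200.0

def classify(stroke: TouchStroke) -> str:
    dx, dy = stroke.x1 - stroke.x0, stroke.y1 - stroke.y0
    if math.hypot(dx, dy) < TAP_MAX_DIST and stroke.duration_ms < TAP_MAX_MS:
        return "tap"
    # The dominant axis decides the swipe direction.
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# Semantic layer: gestures map to control actions, e.g. one-handed input.
ACTIONS = {"tap": "select", "swipe_left": "delete_char",
           "swipe_right": "space", "swipe_up": "shift_keyboard",
           "swipe_down": "confirm_word"}

if __name__ == "__main__":
    gesture = classify(TouchStroke(10, 10, 120, 14, 90))
    print(gesture, "->", ACTIONS[gesture])   # swipe_right -> space
```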

Household information acquisition and user emotion recognition equipment and working method thereof

The invention discloses household information acquisition and user emotion recognition equipment comprising a shell, a power supply, a main controller, a microcontroller, multiple environmental sensors, a screen, a microphone, a speaker, multiple health sensors, a pair of robot arms, and a pair of cameras. The microphone is arranged on the shell; the power supply, main controller, microcontroller, environmental sensors, speaker, and cameras are arranged symmetrically on the left and right sides of the screen; and the robot arms are arranged on the two sides of the shell. The main controller is in communication with the microcontroller and directs it to drive the robot arms through their motors; the power supply is connected to the main controller and the microcontroller and supplies power to both. The equipment integrates intelligent speech recognition, speech synthesis, and facial expression recognition technologies, making it more convenient to use and its feedback more reasonable.
Owner:HUAZHONG UNIV OF SCI & TECH
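
The abstract describes hardware, not software; purely for illustration, a minimal control-loop sketch of how the integrated recognition and synthesis channels might be orchestrated. Every function here is a stub with an invented name, not the patent's implementation:

```python
# Hypothetical control loop: the main controller fuses speech and facial
# expression channels, then drives a spoken reply and an arm gesture.
import random

def recognize_speech(audio: bytes) -> str:
    return "how is the air quality today"               # stub ASR result

def recognize_expression(frame: bytes) -> str:
    return random.choice(["neutral", "happy", "sad"])   # stub classifier

def plan_reply(text: str, emotion: str, sensors: dict) -> str:
    reply = f"PM2.5 is {sensors['pm25']} micrograms per cubic metre."
    if emotion == "sad":
        reply += " Shall I open the window for some fresh air?"
    return reply

def synthesize(text: str) -> bytes:
    return text.encode()                                # stub TTS

def move_arms(gesture: str) -> None:
    print(f"[microcontroller] arm gesture: {gesture}")

if __name__ == "__main__":
    sensors = {"pm25": 35, "temperature": 22.5}
    text = recognize_speech(b"...")
    emotion = recognize_expression(b"...")
    print(plan_reply(text, emotion, sensors))
    move_arms("wave" if emotion == "happy" else "rest")
```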

Speech synthesis method and related equipment

The present application provides a speech synthesis method and related equipment. The method includes the following steps: the identity of the user is determined from the user's current input speech; an acoustic model is obtained from an acoustic model library according to that speech; basic speech synthesis information is determined according to the user's identity, where the basic information characterizes variations of the acoustic model's preset speed, preset volume, and preset pitch; a reply text is determined; enhanced speech synthesis information is determined according to the reply text and context information, where the enhanced information characterizes variations of the acoustic model's preset timbre, tone, and rhythm; and speech synthesis is performed on the reply text through the acoustic model according to both the basic and the enhanced speech synthesis information to obtain the reply speech for the user. With the speech synthesis method and related apparatus provided by the embodiments of the invention, a device can deliver a personalized speech synthesis effect during man-machine interaction, improving the user's speech interaction experience.
Owner:HUAWEI TECH CO LTD
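
A minimal sketch of the parameter flow the abstract describes: per-user "basic" offsets (speed/volume/pitch) and reply-dependent "enhanced" offsets (timbre/tone/rhythm) are merged before synthesis. The profile table and keyword heuristic are invented stand-ins, not the patent's method:

```python
# Hypothetical merge of basic and enhanced speech synthesis information.
from dataclasses import dataclass

@dataclass
class SynthesisParams:
    speed: float = 1.0; volume: float = 1.0; pitch: float = 1.0
    timbre: str = "default"; tone: str = "neutral"; rhythm: float = 1.0

USER_PROFILES = {              # basic information, keyed by user identity
    "alice": {"speed": 0.9, "volume": 1.1, "pitch": 1.05},
}

def enhanced_info(reply_text: str, context: str) -> dict:
    # A real system infers these from the reply text and dialogue context;
    # a keyword heuristic stands in here.
    if "congratulations" in reply_text.lower():
        return {"tone": "cheerful", "rhythm": 1.1}
    return {}

def build_params(user_id: str, reply_text: str, context: str) -> SynthesisParams:
    params = SynthesisParams()
    for key, value in USER_PROFILES.get(user_id, {}).items():
        setattr(params, key, value)        # apply basic offsets
    for key, value in enhanced_info(reply_text, context).items():
        setattr(params, key, value)        # apply enhanced offsets
    return params

if __name__ == "__main__":
    print(build_params("alice", "Congratulations on the new job!", ""))
```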

Virtual learning environment natural interaction method based on multimode emotion recognition

The invention provides a natural interaction method for virtual learning environments based on multimodal emotion recognition. The method comprises the following steps: expression, posture, and voice information representing a student's learning state is acquired, and multimodal emotion features are constructed from the color image, depth information, voice signal, and skeleton information; face detection, preprocessing, and feature extraction are performed on the color and depth images, and a support vector machine (SVM) combined with AdaBoost performs facial expression classification; the voice emotion information is preprocessed, emotion features are extracted, and a hidden Markov model recognizes the voice emotion; the skeleton information is regularized to obtain body-posture representation vectors, and a multi-class SVM performs posture emotion classification; and the recognition results of the three modalities are fused at the decision-making layer by a constructed quadrature-rule fusion algorithm, with the expression, voice, posture, and other emotional behavior of a virtual intelligent agent generated according to the fusion result.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
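
The abstract names decision-level fusion but not its exact formula; as a stand-in for the patent's quadrature-rule algorithm, here is a sketch using a weighted product rule over the three per-modality class distributions. The class list and weights are assumed for illustration:

```python
# Decision-level fusion sketch: each modality (face / voice / posture)
# outputs a probability distribution over emotion classes; the
# distributions are combined by a weighted product rule.
import math

EMOTIONS = ["happy", "neutral", "confused", "bored"]
WEIGHTS = {"face": 0.5, "voice": 0.3, "posture": 0.2}   # assumed weights

def fuse(scores: dict[str, list[float]]) -> list[float]:
    fused = []
    for i in range(len(EMOTIONS)):
        # Weighted product rule, computed in log space for stability.
        log_p = sum(WEIGHTS[m] * math.log(max(scores[m][i], 1e-9))
                    for m in scores)
        fused.append(math.exp(log_p))
    total = sum(fused)
    return [p / total for p in fused]

if __name__ == "__main__":
    per_modality = {
        "face":    [0.70, 0.20, 0.05, 0.05],   # SVM + AdaBoost output
        "voice":   [0.40, 0.30, 0.20, 0.10],   # HMM output
        "posture": [0.50, 0.25, 0.15, 0.10],   # multi-class SVM output
    }
    fused = fuse(per_modality)
    print(EMOTIONS[max(range(len(fused)), key=fused.__getitem__)])  # happy
```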

Deep learning-based robot conversation interaction method and apparatus

The invention provides a deep learning-based robot conversation interaction method. The method comprises the steps of receiving a conversation statement input by a user; extracting semantic feature units from the statement and retrieving similar conversation statements from a knowledge base according to those units; generating, in a learning model, sentence-type vectors and sentence vectors for both the input statement and each similar statement, and outputting the similarity between the combined sentence-vector and sentence-type-vector representations of the two; and selecting and outputting the reply statement corresponding to the similar conversation statement with the highest similarity. With this method, a robot can judge not only the semantics of individual Chinese words but also the semantics of statements composed of those words, and the similarity between different statements built from the same words, so it finds and outputs an appropriate reply that more accurately matches the intention of the conversation partner, greatly improving the man-machine interaction experience.
Owner:BEIJING GUANGNIAN WUXIAN SCI & TECH
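
A minimal sketch of the retrieve-and-rank step: knowledge-base candidates are scored by cosine similarity between concatenated (sentence vector, sentence-type vector) pairs. The real model learns these vectors; the fixed toy vectors below are purely illustrative:

```python
# Rank knowledge-base candidates by similarity of combined representations.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def combined(sent_vec, type_vec):
    return sent_vec + type_vec     # concatenation, as in the abstract

def best_reply(query, candidates):
    q = combined(*query)
    # Each candidate: (sentence vector, sentence-type vector, reply text).
    return max(candidates, key=lambda c: cosine(q, combined(c[0], c[1])))[2]

if __name__ == "__main__":
    query = ([0.2, 0.8, 0.1], [1.0, 0.0])      # (sentence vec, type vec)
    kb = [
        ([0.1, 0.9, 0.2], [1.0, 0.0], "It opens at nine."),
        ([0.9, 0.1, 0.3], [0.0, 1.0], "Yes, it is."),
    ]
    print(best_reply(query, kb))               # It opens at nine.
```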

Video processing method, device and system, terminal equipment and storage medium

The embodiment of the invention discloses a video processing method and device, terminal equipment, and a storage medium. The method comprises the steps of obtaining a to-be-processed video; obtaining a target audio clip and a face image sequence from that video; performing emotion analysis on the face image sequence to obtain emotion features; performing voice analysis on the target audio clip to obtain statement features, which represent keywords in the clip; determining the reply content and expression behavior of a virtual character based on the emotion features and the statement features; and generating and outputting a reply video for the to-be-processed video, where the reply video contains voice content corresponding to the reply content and a virtual character performing the expression behavior. According to the embodiment of the invention, a speaker's emotion features and statement features can be obtained from a video of the speaker, and a matching virtual-character video is generated as the reply, improving the realism and naturalness of human-computer interaction.
Owner:SHENZHEN ZHUIYI TECH CO LTD
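
A sketch of the decision step in that pipeline: emotion features from the face-image sequence and keyword ("statement") features from the audio clip jointly select the virtual character's reply text and expression behavior. The mapping table below is invented for illustration:

```python
# Hypothetical mapping from (emotion, keywords) to reply and expression.
def decide(emotion: str, keywords: set[str]) -> tuple[str, str]:
    if "refund" in keywords:
        reply = "I understand. Let me start the refund process for you."
        behavior = "apologetic" if emotion == "angry" else "neutral_nod"
    elif "thanks" in keywords:
        reply, behavior = "You're welcome!", "smile"
    else:
        reply, behavior = "Could you tell me a bit more?", "attentive"
    return reply, behavior

if __name__ == "__main__":
    emotion = "angry"                      # from emotion analysis
    keywords = {"refund", "order"}         # from voice analysis
    reply, behavior = decide(emotion, keywords)
    print(reply, "|", behavior)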

Robot realization method, control method, robot and electronic device

The invention provides a robot realization method, a control method, a robot, and an electronic device. In the robot realization method, the robot sends an acquired real-scene image together with its synchronization information; a control end determines a target position from the real-scene image and sends the target position and the synchronization information back to the robot; the robot resolves its own target position from the received target position and synchronization information; a planned path is generated from the obstacle information of the current scene and the robot's target position; and the robot moves to the target position along the planned path. With this technical scheme, the user only needs to mark a target point on the real-scene image uploaded by the robot, and the robot navigates to it autonomously. Motion control becomes more convenient, the required control frequency and instruction volume drop, and a friendly interactive experience can be provided even over a poor remote-control network.
Owner:BEIJING DEEPGLINT INFORMATION TECH
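
The abstract does not name a specific planner; as a stand-in, here is a sketch that plans a path on an occupancy grid (built from the scene's obstacle information) to the user-picked target cell using breadth-first search:

```python
# Plan a shortest 4-connected path on an occupancy grid with BFS.
from collections import deque

def plan(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle. Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                   # visited set + backpointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                   # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return []                              # target unreachable

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan(grid, (0, 0), (2, 0)))
    # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```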

Vote interaction method and device based on live broadcast room video stream bullet screens

Status: Active | Publication: CN106792229A | Effects: guaranteed time limit for voting, ensured fairness | Classifications: selective content distribution, multimedia, interaction method
The invention provides a vote interaction method and device based on live broadcast room video-stream bullet screens. The method comprises the steps of receiving a vote request, containing bullet-screen information, sent by the anchor user of a live broadcast room; synthesizing a drawn bullet-screen layer into the anchor user's to-be-uploaded video stream according to the vote request, where the layer displays the vote options corresponding to the bullet-screen information and the vote result information of the live broadcast room member users associated with those options; determining the vote result information by accumulating the vote information for the options selected by the member users in the live broadcast room, and updating it in the bullet-screen layer of the to-be-uploaded video stream accordingly; and pushing the to-be-uploaded video stream with the synthesized bullet-screen layer to the member users. The method and device improve the timeliness of the data, reduce the machine load on the live broadcast room member users, and improve the interactive experience among users.
Owner:GUANGZHOU HUYA INFORMATION TECH CO LTD
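
A minimal sketch of the server-side tally the abstract describes: member votes are accumulated per option, and the rendered bullet-screen layer is refreshed with the running totals before the stream is pushed. Class and method names are illustrative, not from the patent:

```python
# Accumulate member votes and render the bullet-screen vote layer.
from collections import Counter

class BulletScreenVote:
    def __init__(self, question: str, options: list[str]):
        self.question = question
        self.options = options
        self.tally = Counter()
        self.voted: set[str] = set()       # one vote per member user

    def cast(self, user_id: str, option: str) -> bool:
        if user_id in self.voted or option not in self.options:
            return False                   # duplicate or invalid vote
        self.voted.add(user_id)
        self.tally[option] += 1
        return True

    def render_layer(self) -> str:
        # Stand-in for drawing the layer that is composited into the
        # anchor's to-be-uploaded video stream.
        rows = [f"{option}: {self.tally[option]}" for option in self.options]
        return f"{self.question}\n" + "\n".join(rows)

if __name__ == "__main__":
    vote = BulletScreenVote("Next game?", ["Chess", "Trivia"])
    vote.cast("user1", "Chess")
    vote.cast("user2", "Trivia")
    vote.cast("user1", "Trivia")           # rejected: duplicate vote
    print(vote.render_layer())
```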

A three-dimensional display system and method based on cloud computing

The invention discloses a three-dimensional display system and method based on cloud computing. The system comprises a three-dimensional data acquisition module, a three-dimensional cloud model library, a rendering and compression optimization module, an interactive display module, and a big data analysis back end. The 3-D data of an object to be displayed is intelligently processed by a cloud computing service to obtain the object's original three-dimensional model and a corresponding link; the object is then displayed interactively either through the obtained link or through the rendering- and compression-optimized three-dimensional model, so the object can interact with the user and the interaction experience is good. Because display works through either the link or the optimized model, it is cross-platform and also supports multi-screen interaction and air imaging, making the functions richer. The system can additionally analyze and count user behavior while a user views the interactive display content, so as to identify the points of interest of different viewers for convenient business use. The system and method can be widely applied in the display technology field.
Owner:GUANGDONG KANG YUN TECH LTD
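
Purely as an illustration of the link-or-optimized-model choice described above, a sketch in which the service returns either a link to the original model or the render- and compression-optimized copy depending on client capability. All fields, thresholds, and the URL are hypothetical:

```python
# Hypothetical delivery choice: link to original model vs. optimized copy.
from dataclasses import dataclass

@dataclass
class Client:
    platform: str              # "web", "mobile", "air_imaging", ...
    bandwidth_mbps: float

MODEL_LIBRARY = {              # invented entries, not real data
    "vase-42": {"link": "https://example.invalid/models/vase-42",
                "optimized_size_mb": 8, "original_size_mb": 120},
}

def choose_delivery(model_id: str, client: Client) -> dict:
    entry = MODEL_LIBRARY[model_id]
    # Low-bandwidth or mobile clients get the optimized model; others may
    # follow the link to the original.
    if client.platform == "mobile" or client.bandwidth_mbps < 20:
        return {"mode": "optimized_model",
                "size_mb": entry["optimized_size_mb"]}
    return {"mode": "link", "url": entry["link"]}

if __name__ == "__main__":
    print(choose_delivery("vase-42", Client("mobile", 12.0)))
```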

Functional modeling method for indoor scene

The invention provides a functional modeling method for an indoor scene. The method comprises the following steps: first, extracting the pose of a depth camera, integrating it into an ICP algorithm, and registering and reconstructing three-dimensional surfaces in real time; second, segmenting and clustering to obtain a semantic segmentation of the scene; third, extracting the point-cloud sets whose area exceeds a preset threshold; fourth, judging whether point-cloud sets perpendicular to the ground exist; fifth, extracting a handle; sixth, pulling the handle out, recording the opening process, the closing process, and the internal structure, and applying functional labels; seventh, judging whether point-cloud sets parallel to the ground exist; eighth, performing Euclidean clustering and segmentation to obtain a number of individual objects; ninth, moving the individual objects away, recording the occluded parts, and applying functional labels; and tenth, summarizing to obtain a functional model of the indoor scene. With this method, functional operations can be performed on the labeled parts, greatly improving the user's interactive experience.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
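
A minimal sketch of the orientation tests in steps four and seven: a segmented point-cloud set is classified as perpendicular or parallel to the ground by comparing its fitted plane normal with the gravity direction. The up axis and angular tolerance are assumed for illustration:

```python
# Classify a point-cloud set's plane orientation relative to the ground.
import math

UP = (0.0, 0.0, 1.0)                       # assumed gravity-aligned axis
ANGLE_TOL_DEG = 10.0                       # illustrative tolerance

def angle_to_up(normal) -> float:
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_a = max(-1.0, min(1.0, nz / norm))
    # Use the absolute value so a flipped normal gives the same angle.
    return math.degrees(math.acos(abs(cos_a)))

def classify_surface(normal) -> str:
    angle = angle_to_up(normal)
    if angle < ANGLE_TOL_DEG:
        return "parallel_to_ground"        # e.g. a table top (step seven)
    if abs(angle - 90.0) < ANGLE_TOL_DEG:
        return "perpendicular_to_ground"   # e.g. a cabinet front (step four)
    return "other"

if __name__ == "__main__":
    print(classify_surface((0.01, 0.02, 0.99)))   # parallel_to_ground
    print(classify_surface((0.70, 0.71, 0.03)))   # perpendicular_to_ground
```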