84 results about "Multimodal interaction" patented technology

Multimodal interaction provides the user with multiple modes of interacting with a system. A multimodal interface provides several distinct tools for input and output of data. For example, a multimodal question answering system employs multiple modalities (such as text and photo) at both question (input) and answer (output) level.
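The text-and-photo question answering example above can be sketched as follows. This is a minimal illustrative stand-in (the `Question` type and `answer` function are hypothetical, not from any patent here): a question bundles text with an optional photo, and the answer carries whichever modalities are available.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Question:
    text: str
    photo: Optional[bytes] = None  # raw image bytes, if the question includes one

def answer(question: Question) -> dict:
    """Return a multimodal answer: always text, plus a photo when one was given."""
    if question.photo is not None:
        # Both modalities present: a real system would run joint text+image reasoning here.
        return {"text": f"answer to {question.text!r} using the attached photo",
                "photo": question.photo}
    return {"text": f"answer to {question.text!r}"}
```

The point of the sketch is the interface shape: input and output are both dictionaries of modalities rather than a single string.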

Virtual robot multimodal interaction method and system applied to live video platform

The invention discloses a virtual robot multimodal interaction method and system applied to a live video platform. The platform's application is provided with a virtual robot associated with the live video, and the virtual robot has multimodal interaction capability. The method comprises the following steps: an information collection step, collecting public opinion information of the live broadcast in a specific live room, where the public opinion information includes viewers' text feedback; a public opinion monitoring step, invoking text semantic understanding capability to generate a public opinion monitoring result for that live room; and a scene event response step, judging the event characterized by the monitoring result, invoking the multimodal interaction capability, and outputting multimodal response data through the virtual robot. With the disclosed method and system, viewer feedback can be monitored and flagged in real time, and the virtual robot assists live video operation in sustaining user engagement, thereby improving the user experience.
Owner:BEIJING GUANGNIAN WUXIAN SCI & TECH
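The three-step pipeline in the abstract (collect feedback, monitor opinion, respond to events) can be sketched as below. The keyword-based sentiment score and the response threshold are illustrative stand-ins for the patent's text semantic understanding capability, not its actual method.

```python
NEGATIVE_WORDS = {"boring", "lag", "quit"}  # toy lexicon, assumption only

def collect_feedback(chat_log):
    """Information collection step: keep only textual viewer feedback."""
    return [m for m in chat_log if isinstance(m, str)]

def monitor_opinion(messages):
    """Public opinion monitoring step: fraction of messages with a negative keyword
    (a crude stand-in for real text semantic understanding)."""
    hits = sum(1 for m in messages if any(w in m.lower() for w in NEGATIVE_WORDS))
    return hits / max(len(messages), 1)

def scene_event_response(score, threshold=0.3):
    """Scene event response step: trigger multimodal output from the virtual robot
    once the negative-opinion score crosses the threshold."""
    if score >= threshold:
        return {"speech": "Let's switch things up!", "animation": "wave"}
    return None  # no event detected, no response
```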

Instant generation and usage of HTTP URL based unique identity for engaging in multi-modal real-time interactions in online marketplaces, social networks and other relevant places

A system is provided that allows any person or entity to sign up with basic identity information and a service description, instantly receiving an HTTP URL through which outside parties can contact the provider and engage in multimodal interactions involving voice, video, chat, and media sharing. A single user or a group of users can sign up and optionally supply contact information such as phone numbers. The instantly generated HTTP URL can be advertised as a hyperlink directly by the users, such as on a website, in an email, or by any other means. Providers who have signed up can remain signed in to the system so as to engage with users who contact them via the published URL. Any user with an internet connection who clicks the URL is directed to the system and presented with a conversation window through which they can engage with the provider via chat, voice, and video (webcam), and share media such as pictures and videos. If the provider is not signed in, they can still engage with the user via SMS and email. The HTTP URL serves as the starting point for users to reach the provider of the service behind it, and the system can advertise it in various online marketplaces, on the provider's social networks, and on the web for discovery by search engines.
Owner:USHUR INC
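The signup-to-URL flow described above can be sketched as follows. The base domain, registry structure, and slug format are all placeholder assumptions for illustration; only the idea of instantly minting a unique HTTP URL from basic identity information comes from the abstract.

```python
import re
import secrets

REGISTRY = {}  # url -> provider record; in-memory stand-in for the system's store

def signup(name, service_description, base="https://example.invalid/c/"):
    """Register a provider and instantly mint a unique contact URL."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    url = f"{base}{slug}-{secrets.token_urlsafe(6)}"  # random suffix for uniqueness
    REGISTRY[url] = {"name": name, "service": service_description,
                     "channels": ["chat", "voice", "video", "media"]}
    return url
```

The returned URL can then be published anywhere as a hyperlink; resolving it back through `REGISTRY` is what a conversation window would do on click.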

Health information management television based on multimodal human-computer interaction

The invention provides a health information management television based on multimodal human-computer interaction. The television comprises: an information input module for entering the user's health information; an input interaction module for acquiring image information, sound information, environment information, text input information, and the like; a health monitoring module for judging the user's health status from the acquired image and sound information in a monitoring state, executing a corresponding operation according to the category of the health abnormality, and giving corresponding hints based on the acquired environment information; and an output interaction module for interacting with the user through images, text, and voice in an interaction state according to the acquired image, sound, and environment information, and for transmitting health data to a health management center in the cloud. The television can understand the user's behavioral intention, manage health information through multimodal interaction between the user and the television, and supervise, judge, and recognize user behavior.
Owner:SHANGHAI JIAO TONG UNIV
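The health monitoring module's logic (judge status from sensed inputs, categorize the abnormality, add an environment hint) might look like the rule-based sketch below. The thresholds, categories, and input fields are invented for illustration; the patent does not specify them.

```python
def assess_health(heart_rate, posture, room_temp_c):
    """Classify health abnormalities by category and attach an environment hint.
    Inputs stand in for features the TV would extract from image/sound/sensors."""
    alerts = []
    if heart_rate > 100 or heart_rate < 50:
        alerts.append(("vital", "abnormal heart rate"))      # vital-sign category
    if posture == "fallen":
        alerts.append(("emergency", "possible fall detected"))  # emergency category
    # Environment-based hint, independent of the abnormality categories.
    hint = "room is hot, consider ventilation" if room_temp_c > 28 else None
    return {"alerts": alerts, "hint": hint}
```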

Interaction method and system based on behavior standard of virtual human

The invention provides an interaction method based on the behavior standard of a virtual human. The virtual human, displayed on an intelligent device, activates its voice, emotion, vision, and perception capabilities when in the interactive state. The method comprises the following steps: acquiring multimodal interaction data and analyzing it to obtain the user's interaction intent; generating, according to that intent, multimodal response data together with matching virtual human emotion expression data, where the emotion expression data conveys the virtual human's current emotion through its facial expression and body movement; and outputting the multimodal response data with the emotion expression data. The invention thus provides a virtual human capable of multimodal interaction with the user; by outputting emotion expression data alongside the response data, it conveys the virtual human's current emotion so that the user enjoys a human-like interactive experience.
Owner:BEIJING GUANGNIAN WUXIAN SCI & TECH
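The pairing of multimodal response data with matching emotion expression data can be sketched as a lookup from interaction intent to a facial expression plus a body movement. The intents and poses below are illustrative assumptions, not the patent's actual behavior standard.

```python
# Illustrative intent -> emotion expression mapping (facial expression + body movement).
EMOTION_BY_INTENT = {
    "greeting":  {"face": "smile",     "body": "wave"},
    "complaint": {"face": "concerned", "body": "lean_in"},
}

def build_response(intent: str, text: str) -> dict:
    """Bundle the multimodal response data with emotion expression data
    matched to the user's interaction intent; unknown intents get a neutral pose."""
    expression = EMOTION_BY_INTENT.get(intent, {"face": "neutral", "body": "idle"})
    return {"text": text, "emotion": expression}
```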

Multi-mode interactive speech language function disorder evaluation system and method

The invention provides a multimodal interactive speech-language function disorder evaluation system comprising a user login module, a subject management module, a scale selection and testing module, and a scale assessment and result generation module. The user login module provides entry points for users to log in, register, and retrieve passwords; the subject management module manages subject information; and the scale selection and testing module selects a scale and performs multimodal interactive testing according to it to obtain test data. The scale selection and testing module comprises a visual function module for collecting data related to the subject's visual function, a listening function module for collecting data related to the subject's listening function, a writing module for collecting the subject's writing data, and a drawing module for collecting the subject's drawing data. The scale assessment and result generation module assesses the test data to generate an assessment result. The invention further provides a corresponding multimodal interactive speech-language function disorder evaluation method.
Owner:SHENZHEN INST OF ADVANCED TECH
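The relationship between the four testing sub-modules and the assessment module can be sketched as below: a scale is a list of (module, maximum score) pairs, and assessment clamps each module's raw score and totals them. The module names match the abstract; the score values and clamping rule are assumptions for illustration.

```python
# One scale: which modules it tests and the maximum score per module (values assumed).
SCALE = [("visual", 10), ("listening", 10), ("writing", 5), ("drawing", 5)]

def assess(test_data):
    """Scale assessment and result generation: clamp each module's raw score
    to its maximum and report a per-module breakdown plus a total."""
    breakdown = {module: min(test_data.get(module, 0), max_score)
                 for module, max_score in SCALE}
    return {"breakdown": breakdown, "total": sum(breakdown.values())}
```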

A vibrotactile feedback device design method based on cyber-physical interaction

The invention discloses a vibrotactile feedback device design method based on cyber-physical interaction, in which some information of the virtual-real interaction system is fed back through a non-visual channel, namely a vibrotactile feedback device coordinated with the visual feedback system. Building on the hand's real tactile feedback, the device superposes vibrations representing collisions between real and virtual objects, simulating collision interaction between the virtual object and the physical object. A person can thereby sense contact and collision between real and virtual objects in a virtual-real fusion environment, producing a more realistic interaction experience and improving multimodal interaction between the person, the AR environment, and the virtual objects. The combined visual-tactile multimodal interaction helps expand the information feedback channel, improve coordination between people and the system, enhance seamless fusion among participants, the real environment, and the virtual environment in a cyber-physical fusion system, and achieve natural, harmonious human-machine interaction.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
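The vibration superposition described above amounts to summing two actuator drive signals: the waveform from real-object contact and a synthetic waveform triggered by virtual-object collisions. A minimal sketch, assuming normalized samples in [-1, 1] and a simple gain on the virtual channel (both assumptions, not from the patent):

```python
def superpose(real_wave, virtual_wave, gain=1.0):
    """Sum real-contact and virtual-collision vibration samples element-wise,
    clipping to the actuator's normalized drive range [-1, 1]."""
    n = min(len(real_wave), len(virtual_wave))
    out = []
    for i in range(n):
        s = real_wave[i] + gain * virtual_wave[i]
        out.append(max(-1.0, min(1.0, s)))  # clip to actuator range
    return out
```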