
Multi-mode output method and apparatus applied to intelligent robot

A multimodal output technology for intelligent robots, applied in the field of intelligent robots, which can solve the problems of mismatch between voice output and action output, poor robot intelligence and anthropomorphism, and loss of user interest, and achieves the effect of improving intelligence and anthropomorphism.

Inactive Publication Date: 2017-04-05
BEIJING GUANGNIAN WUXIAN SCI & TECH

AI Technical Summary

Problems solved by technology

[0003] The actions performed by robots currently on the market while chatting with users are usually fixed-pattern actions or random actions unrelated to the meaning expressed in the language, which adds only a limited degree of interest. In the robot's internal processing, the voice system and the action system are simply superimposed, so the voice output and the action output do not match, resulting in poor intelligence and anthropomorphism of the robot.
As a result, users quickly grow bored with the meaningless repetition while chatting with the robot and lose interest in continuing the interaction.



Examples


Embodiment 1

[0030] Figure 1 is a schematic flow chart of Embodiment 1 of the multimodal output method applied to an intelligent robot according to the present invention. The method of this embodiment mainly includes the following steps.

[0031] In step S110, the robot receives multimodal input information.

[0032] Specifically, during the interaction between the user and the robot, the robot may receive multimodal input information through a video collection unit, a voice collection unit, a human-computer interaction unit, and the like. The video collection unit may consist of an RGBD camera, the voice collection unit should provide complete voice recording and playback functions, and the human-computer interaction unit may be a touch display screen through which the user inputs multimodal information.

[0033] It should be noted that the multimodal input information mainly includes audio data, video data, image data, and program instructions for enab...
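As a minimal illustrative sketch (the class and unit interfaces below are hypothetical, not taken from the patent), the multimodal input described in paragraphs [0032]-[0033] could be bundled into a single record collected from the three input units in step S110:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MultimodalInput:
    """One round of multimodal input (hypothetical structure, following [0033])."""
    audio: Optional[bytes] = None                                   # from the voice collection unit
    video_frames: List[bytes] = field(default_factory=list)        # RGBD frames from the video collection unit
    images: List[bytes] = field(default_factory=list)              # still image data
    program_instructions: List[str] = field(default_factory=list)  # commands entered via the touch screen

def receive_multimodal_input(video_unit, voice_unit, hci_unit) -> MultimodalInput:
    """Step S110: poll each collection unit and bundle the results (assumed unit APIs)."""
    return MultimodalInput(
        audio=voice_unit.record(),
        video_frames=video_unit.capture_frames(),
        program_instructions=hci_unit.read_commands(),
    )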

Embodiment 2

[0051] Figure 2 is a schematic flow chart of Embodiment 2 of the multimodal output method applied to an intelligent robot according to the present invention. The method of this embodiment mainly includes the following steps; steps similar to those of Embodiment 1 are marked with the same reference numerals and their specific content is not described again, and only the distinguishing steps are described in detail.

[0052] In step S110, the robot receives multimodal input information.

[0053] In step S120, the multimodal input information is analyzed, and it is determined from the analysis result whether corresponding speech text information exists. If the result is "yes", step S130' is executed; otherwise, step S160 is executed to process the input according to the analysis result.

[0054] In step S130', it is determined whether a specific vocabulary is present in the obtained speech text information. If yes, step S140 is executed; otherwise, step...
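The branch structure of steps S120 through S160 can be sketched roughly as follows; the function names are placeholders, and the behavior of the branch truncated above is an assumption (voice-only output when no specific vocabulary is found):

def handle_round(robot, multimodal_input):
    """Hypothetical sketch of the Embodiment 2 control flow (assumed robot API)."""
    analysis = robot.analyze(multimodal_input)            # step S120: analyze the multimodal input

    speech_text = analysis.get("speech_text")
    if speech_text is None:
        return robot.process_other(analysis)              # step S160: no corresponding speech text

    specific_words = robot.extract_specific_vocabulary(speech_text)   # step S130'
    if specific_words:
        action = robot.generate_action_instruction(specific_words)    # step S140: matched action
        return robot.output(speech_text, action)          # synchronized voice and action output
    return robot.output(speech_text, None)                # assumed branch: voice output only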

Embodiment 3

[0066] Figure 3 is a structural block diagram of a multimodal output device 300 applied to an intelligent robot according to an embodiment of the present application. As shown in Figure 3, the multimodal output device 300 of this embodiment mainly includes: a multimodal information receiving module 310, a text information generating module 320, an action instruction generating module 330, and a multimodal output module 340.

[0067] The multimodal information receiving module 310 receives multimodal input information.

[0068] The text information generating module 320 is connected to the multimodal information receiving module 310; it analyzes the multimodal input information and generates corresponding speech text information according to the analysis result.

[0069] The action instruction generating module 330 is connected to the text information generating module 320 and extracts specific vocabulary from the speech text information, ...
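The four modules of device 300 could be composed roughly as below; this is a sketch under assumed module interfaces, not the patented implementation:

class MultimodalOutputDevice:
    """Hypothetical composition of modules 310-340 from Embodiment 3."""

    def __init__(self, receiver, text_generator, action_generator, outputter):
        self.receiver = receiver                  # module 310: multimodal information receiving
        self.text_generator = text_generator      # module 320: speech text generation
        self.action_generator = action_generator  # module 330: action instruction generation
        self.outputter = outputter                # module 340: multimodal output

    def run_once(self):
        raw = self.receiver.receive()                           # module 310
        speech_text = self.text_generator.generate(raw)         # module 320
        action = self.action_generator.from_text(speech_text)   # module 330
        self.outputter.emit(speech_text, action)                # module 340: voice + action output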



Abstract

The invention discloses a multi-mode output method and apparatus applied to an intelligent robot. The multi-mode output method comprises the steps of receiving multi-mode input information; analyzing the multi-mode input information, and generating voice text information corresponding to the multi-mode input information according to an analyzing result; extracting specific vocabulary from the voice text information, and generating an action instruction matched with the specific vocabulary; and completing voice output and intelligent robot action output according to the voice text information and the action instruction. By adoption of the multi-mode output method and apparatus, the intelligence and personification capability of the robot can be improved, so that interaction experience between a user and the robot can be improved.
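To illustrate the core idea of matching the action output to the meaning of the speech, a toy vocabulary-to-action lookup might look like the following; the trigger words and action names are illustrative only and do not come from the patent:

# Illustrative vocabulary-to-action table (hypothetical, not from the patent).
ACTION_TABLE = {
    "hello": "wave_hand",
    "goodbye": "bow",
    "happy": "raise_arms",
}

def generate_action_instruction(speech_text: str):
    """Return the action matched to the first specific word found in the speech text."""
    lowered = speech_text.lower()
    for word, action in ACTION_TABLE.items():
        if word in lowered:
            return action
    return None  # no specific vocabulary: the voice output proceeds without a matched action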

Description

Technical Field

[0001] The invention relates to the field of intelligent robots, and in particular to a multimodal output method and device applied to an intelligent robot.

Background Technique

[0002] With the continuous development of science and technology and the introduction of information technology, computer technology, and artificial intelligence technology, robot research has gradually moved beyond the industrial field and expanded into medical care, health care, family, entertainment, and service industries. People's requirements for robots have likewise been upgraded from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy, and interaction with other robots. Human-computer interaction has become an important factor determining the development of intelligent robots.

[0003] The actions of robots currently on the market when chatting with users are often fixed-pattern actions or r...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F 3/01; B25J 9/16
CPC: B25J 9/16; G06F 3/011
Inventor: 石琰, 郭家
Owner: BEIJING GUANGNIAN WUXIAN SCI & TECH