Virtual human visual processing method and system based on multimodal interaction

A visual processing and virtual human technology, applied in the field of human-computer interaction. It addresses the problems that prior-art virtual robots cannot carry out multi-modal interaction and cannot achieve realistic, smooth, and anthropomorphic interaction effects, thereby meeting user needs and improving user experience.

Inactive Publication Date: 2018-03-06
BEIJING GUANGNIAN WUXIAN SCI & TECH
AI Technical Summary

Problems solved by technology

The virtual robot in the prior art cannot perform multi-modal interaction, and always presents ...



Examples


Embodiment 1

[0039] Figure 1 is a schematic diagram of the application scenario of the virtual-human-based multi-modal interaction system according to the first embodiment of the present application. The virtual person A can be displayed to the user as a holographic image or on a display interface through the smart device that carries it, and can track the user's face during multi-modal interaction with the user, further carrying out the multi-modal interaction process or skill display. In this embodiment, the system mainly includes a cloud brain (cloud server) 10 and a smart device 20 for multi-modal interaction with users. The smart device 20 can be a traditional PC, a laptop computer, a holographic projection device, and the like, or a portable terminal device that can access the Internet through a wireless local area network, a mobile communication network, or other wireless means. In the embodiment of this application, wireless term...
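The face-tracking step above maps the detected position of the user's face to "execution parameters" that turn the virtual human's head toward the target face. A minimal sketch of that mapping is shown below; the function name `head_turn_parameters` and the camera field-of-view values are illustrative assumptions, not taken from the patent, and a pinhole-camera approximation is used.

```python
import math

def head_turn_parameters(face_center, frame_size, h_fov_deg=60.0, v_fov_deg=40.0):
    """Map a detected face's pixel position to yaw/pitch angles (degrees)
    that the virtual human's head should turn by so it faces the user.

    face_center: (x, y) pixel coordinates of the target face centre
    frame_size:  (width, height) of the camera frame
    h_fov_deg / v_fov_deg: assumed camera field of view (hypothetical values)
    """
    (x, y), (w, h) = face_center, frame_size
    # Normalised offsets from the optical centre, each in [-0.5, 0.5].
    dx = x / w - 0.5
    dy = 0.5 - y / h  # image y grows downward; positive dy = face above centre
    # Pinhole-camera approximation: angle = atan(2 * offset * tan(fov / 2))
    yaw = math.degrees(math.atan(2 * dx * math.tan(math.radians(h_fov_deg / 2))))
    pitch = math.degrees(math.atan(2 * dy * math.tan(math.radians(v_fov_deg / 2))))
    return yaw, pitch

# A face exactly at the frame centre needs no head turn.
print(head_turn_parameters((320, 240), (640, 480)))  # → (0.0, 0.0)
```

These two angles would then be handed to the rendering side as the execution parameters that animate the head turn in the preset display region.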

Embodiment 2

[0073] In this example, the virtual person A can be displayed to the user as a holographic image or on a display interface through the smart device that carries it. Unlike the first embodiment, the virtual person A can not only track the user's face but can also make eye contact with the user during multi-modal interaction.

[0074] In this example, the description of content that is the same as or similar to the first embodiment is omitted, and the description focuses on the content that differs from the first embodiment. On the cloud brain 10 side, the visual recognition interface 12 obtains not only the relative position information of the target face and the virtual person, but also the binocular information of the target face. Specifically, the visual recognition interface 12 preprocesses the face image and then locates the eye area in the preprocessed face image. Within the located eye area, the pupil is precisely located, and the center of the pupil is obtained in t...
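The pupil-localisation step described above can be sketched as follows, under the simple assumption that the pupil is the darkest area of the already-located eye region. The function name `pupil_center` and the threshold fraction are illustrative; a production system would refine the estimate with ellipse fitting or gradient-based methods.

```python
import numpy as np

def pupil_center(eye_region):
    """Estimate the pupil centre inside a grayscale eye-region image.

    Sketch of the 'precise pupil localisation' step: threshold the darkest
    pixels (assumed to be the pupil) and return the centroid of that mask.
    """
    eye = np.asarray(eye_region, dtype=np.float64)
    # Keep the darkest ~10% of the intensity range as the pupil candidate mask.
    thresh = eye.min() + 0.1 * (eye.max() - eye.min())
    ys, xs = np.nonzero(eye <= thresh)
    # Centroid of the dark mask, as (x, y) in eye-region coordinates.
    return float(xs.mean()), float(ys.mean())

# Synthetic eye region: bright background with a dark "pupil" blob
# centred at column 12, row 8.
eye = np.full((20, 30), 200.0)
eye[6:11, 10:15] = 20.0
print(pupil_center(eye))  # → (12.0, 8.0)
```

The resulting pupil centre, combined for both eyes, gives the binocular information that the visual recognition interface 12 uses for gaze interaction.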



Abstract

The invention discloses a virtual human visual processing method and system based on multimodal interaction. A virtual human runs on an intelligent device. The method comprises the following steps: when the virtual human is in an awakened state, it is displayed in a preset display region, where the virtual human has a specific personality and attributes; multimodal data is acquired, comprising data from the surroundings and multimodal input data from interaction with a user; a virtual human ability interface is called to analyze the multimodal input data and decide the multimodal output data; relative position information is used to calculate execution parameters for turning the virtual human's head towards a target face; and the execution parameters are displayed in the preset display region. Through these embodiments, when the virtual human interacts with the user, it is guaranteed that the virtual human tracks the user's face at all times for face-to-face multimodal interaction, so that user needs are met and user experience is improved.

Description

Technical Field

[0001] The present invention relates to the field of human-computer interaction, and in particular to a virtual human visual processing method and system based on multimodal interaction.

Background Technique

[0002] With the continuous development of science and technology and the introduction of information technology, computer technology, and artificial intelligence technology, robot research has gradually moved beyond the industrial field and expanded into medical care, health care, family, entertainment, and service industries. Accordingly, people's requirements for robots have been upgraded from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy, and interaction with other robots. Human-computer interaction has thus become an important factor in the development of intelligent robots.

[0003] At present, robots include physical robots with entities and virtual robots ...

Claims


Application Information

IPC(8): G06F3/01 G06K9/00 G06F3/16
CPC: G06F3/011 G06F3/16 G06F2203/012 G06V40/161 G06V40/168 G06V40/174 G06V40/20
Inventors: 尚小维, 李晓丹
Owner BEIJING GUANGNIAN WUXIAN SCI & TECH