Method and device of facial expression generation with data fusion and head-mounted display

A facial expression and data fusion technology, applied to 3D image data, the input/output process of data processing, image data processing, etc. It addresses problems such as limited facial expression capture, restricted muscle movement under the head-mounted display, and facial expressions that are out of sync between the user and the simulated user.

Inactive Publication Date: 2020-03-27
XRSPACE CO LTD
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, the simulated user's facial expressions are limited by the head-mounted display, resulting in out-of-sync facial expressions between the simulated user and the user.
The traditional way of capturing facial expressions is to use an external camera to record a time series of images of the face of the user wearing the head-mounted display, and to extract facial features from the images to recognize expressions. With a head-mounted display, however, most of the user's face is covered and his/her muscle movements are restricted, making it difficult to use external cameras for facial recognition in virtual reality systems.

Method used


Image

  • Method and device of facial expression generation with data fusion and head-mounted display

Examples


Embodiment Construction

[0023] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments.

[0024] Please refer to FIG. 1, which is a schematic diagram of a virtual reality system according to an embodiment of the present invention. It should be noted that the present invention is not limited to virtual reality systems; it can also be used in augmented reality / mixed reality / extended reality systems. The positional tracking mechanism in virtual reality systems (such as HTC VIVE) allows users to move and explore freely in the virtual reality environment. Specifically, the virtual reality system includes a head-mounted display (HMD) 100, controllers 102A-102B, lighthouses 104A-104B, and a computing device 106 (such as a personal computer). The lighthouses 104A-104B are used to emit infrared rays, and the controllers 102A-102B are used to generate control signals to the computing device 106, so that the user ca...


PUM

No PUM

Abstract

A method and a device of facial expression generation with data fusion, and a head-mounted display, are disclosed. The method comprises obtaining facial information of a user from a plurality of data sources, wherein the plurality of data sources includes real-time data detection and data pre-configuration; mapping the facial information to facial expression parameters for simulating a facial geometry model of the user; performing a fusion process according to the facial expression parameters, to generate fusing parameters associated with the facial expression parameters by weighting; and generating a facial expression of an avatar according to the fusing parameters.
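The fusion step described above can be sketched as a weighted blend of expression parameters from the two data sources. The sketch below is illustrative only: the function name, the dict-of-blendshape-weights representation, and the simple weighted average are assumptions, not the patent's actual formulation.

```python
# Hypothetical sketch of the fusion process in the abstract: facial
# information from two data sources (real-time detection and a
# pre-configured profile) has been mapped to expression parameters,
# which are then blended with per-source weights into fused parameters
# that drive the avatar's facial expression. All names are illustrative.

def fuse_expression_parameters(realtime, preconfigured, w_realtime=0.7):
    """Blend two sets of expression parameters (e.g. blendshape weights).

    realtime / preconfigured: dicts mapping parameter name -> value in [0, 1].
    w_realtime: weight given to the real-time detection source; the
    pre-configured source receives the complementary weight.
    """
    w_pre = 1.0 - w_realtime
    fused = {}
    # Union of parameter names: a parameter missing from one source
    # contributes 0 from that source.
    for name in set(realtime) | set(preconfigured):
        rt = realtime.get(name, 0.0)
        pc = preconfigured.get(name, 0.0)
        fused[name] = w_realtime * rt + w_pre * pc
    return fused

# Example: a smile detected live, combined with a neutral preset.
live = {"mouth_smile": 0.9, "eye_blink": 0.2}
preset = {"mouth_smile": 0.1, "jaw_open": 0.3}
params = fuse_expression_parameters(live, preset, w_realtime=0.7)
```

In practice the per-source weights could reflect sensor confidence, so that the pre-configured data dominates when real-time detection is unreliable (e.g. when the face is largely occluded by the HMD).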

Description

technical field [0001] The invention relates to a method and a related device for generating facial expressions through data fusion in a virtual reality / augmented reality / mixed reality / extended reality system. Background technique [0002] Most virtual reality / augmented reality / mixed reality / extended reality systems can track a user's actions in a room-sized area through the user interface device worn by the user. User interface devices (such as game pads, controllers, touch panels, etc.) provide interaction between the user and the system software; for example, the system software may include a virtual reality game executed by a computing device. In addition, a head-mounted display (HMD) worn by the user displays interactive images generated by the computing device, allowing the user to experience virtual reality. [0003] In order to increase the user's willingness to immerse in virtual reality, the prior art proposes simulated users (avatars) with facia...

Claims


Application Information

Patent Timeline
No application
Patent Type & Authority: Applications (China)
IPC(8): G06K9/00; G06K9/62; G06F3/01
CPC: G06F3/012; G06V40/174; G06V40/168; G06F18/25; A63F13/25; A63F13/65; A63F2300/8082; G06T13/40; G06T17/10; G06T2200/04; G06V40/171; G06V40/176; G06F18/241; G06F18/254
Inventor: 周永明, 朱峰森, 林鼎傑, 王铨彰
Owner XRSPACE CO LTD