Auxiliary judgment method based on deep learning and virtual reality

A virtual reality and deep learning technology, applied to neural learning methods, input/output for user-computer interaction, speech analysis, and related fields, with the effect of meeting individual needs.

Status: Pending · Publication Date: 2022-02-11
Owner: SHANDONG INSPUR SCI RES INST CO LTD
Cites: 0 · Cited by: 1

AI-Extracted Technical Summary

Problems solved by technology

[0004] The diagnosis of some traditional diseases, such as depression, is mainly based on medical history, clinical symptoms, course...

Method used

Step 117, continuously collect the data in the detection p...

Abstract

The invention provides an auxiliary judgment method based on deep learning and virtual reality, belonging to the technical field of auxiliary detection. The method comprises the following steps: constructing a virtual reality environment and selecting a virtual dialogue character and scene; wearing a virtual reality (VR) device and a smart wearable sensor to interact with the virtual reality environment and collect data; combining that data with models from the cloud data center to make predictions and auxiliary prompts; rendering the virtual reality scene in real time on an edge computing node or workstation through speech recognition, text conversion, speech synthesis, and expression generation; and assisting in completing the real-time interaction and judgment.

Application Domain

Input/output for user-computer interaction · Speech recognition +5

Technology Topic

Real-time rendering · Speech sound +10

Image

  • Auxiliary judgment method based on deep learning and virtual reality

Examples

  • Experimental program(1)

Example Embodiment

[0031] In order to make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below. The described embodiments are only some of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative labor fall within the protection scope of the present invention.
[0032] As shown in Figure 1, a virtual reality environment is constructed; based on the preferences and condition of the person to be tested, a virtual dialogue character and scene are selected, and interaction with the virtual reality environment and data collection are achieved by wearing a virtual reality (VR) device and a smart wearable sensor. Combining the measured data, models from the cloud data center are used to make predictions and auxiliary prompts; through speech recognition, text conversion, speech synthesis, expression generation, and similar services, the virtual reality scene is rendered in real time on an edge computing node or workstation, helping to complete the real-time interaction with, and detection of, the person to be tested. Specifically:
[0033] The virtual reality environment is a computer-generated simulated environment, including created virtual scenes and virtual characters, that allows the person to be tested to be immersed in it and experience the virtual world. The virtual reality VR device includes a VR helmet, VR data gloves, a VR cabin, a VR data suit, a microphone, stereo speakers, cameras, and the like; the VR helmet display is the core of the visual system, the microphone takes speech input, sound output is perceived through the stereo speakers, and the cameras capture the appearance, expressions, and actions of the person to be tested. The person to be tested must wear the VR device, while the remote examiner can choose to wear a VR device or interact through a display device; remote mode is supported. The smart wearable sensor detects health data such as heart rate, blood pressure, and blood oxygen saturation, which are processed in real time by the edge computing node and uploaded to the cloud data center as needed. The edge computing node or workstation provides computing, storage, and network functions; it accelerates real-time rendering of the virtual reality environment through GPU hardware and completes, on the edge side near the VR device, visual system processing, auditory system processing, interactive feedback, wearable data acquisition, prediction, and other services. The cloud data center aggregates large amounts of computing, storage, and network resources; it manages the edge computing nodes and workstations in a unified way, and provides database and data storage services, model training services, and models for speech recognition, semantic analysis, text conversion, speech generation, and expression and action generation, as well as the personalized virtual character generation model, the virtual environment scene and character recommendation model, the prediction model, and the prompt model, pushing personalized models and services according to each edge computing node's resources and business conditions. The text-to-speech model is a neural network model that synthesizes the personalized voice of the output virtual character from text input. The semantic expression conversion text model uses deep-learning autoencoder (Auto-Encoder) and generative adversarial network (GAN) techniques: it performs semantic analysis on the input text and, according to the character's set wording and language expression habits, generates text that matches the character's manner of speaking. The core of the personalized virtual character generation model is a neural network model that collects the features of a real-world person and generates the corresponding virtual character in the virtual reality environment. The virtual environment scene and character recommendation model recommends the scene and character for the current detection based on the basic information and personal preferences of the person to be tested. The prediction model uses the collected smart wearable and virtual reality interaction data to make the auxiliary prediction. The core of the prompt model is a self-attention sequence generation model that recommends questions according to the information of the person to be tested and the actual phase of the detection.
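
To make the last point concrete, here is a minimal PyTorch sketch of a self-attention sequence generator of the kind the prompt model's core could be. The vocabulary size, model dimensions, and placeholder history encoding are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class PromptModel(nn.Module):
    """Self-attention sequence generator proposing the next question token."""
    def __init__(self, vocab_size=8000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        # Causal mask: each position may only attend to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        x = self.encoder(self.embed(token_ids), mask=mask)
        return self.head(x)  # next-token logits at every position

# Toy usage: score the next question token given the encoded interview history.
model = PromptModel()
history = torch.randint(0, 8000, (1, 12))    # placeholder token ids (assumption)
next_token = model(history)[0, -1].argmax()  # greedy choice of the next token
```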
[0034] The method of the present invention will be described in detail below in conjunction with specific embodiments.
[0035] The auxiliary judgment includes the following steps:
[0036] Step 101: design the VR virtual reality scenes and characters, generate the environment scene and character models, and save them to the cloud data center;
[0037] Step 102: design the virtual characters' voice characteristics, and build the text-to-speech model and the semantic expression conversion text model based on the artificial intelligence services of the cloud data center;
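
A minimal sketch of a text-to-speech model of the general kind step 102 describes: character ids in, mel-spectrogram frames out, with a per-character speaker embedding for personalized voices. The architecture, sizes, and the toy one-frame-per-character alignment are assumptions; a vocoder stage is omitted.

```python
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    def __init__(self, n_chars=100, n_speakers=16, d_hidden=128, n_mels=80):
        super().__init__()
        self.chars = nn.Embedding(n_chars, d_hidden)
        # One embedding per virtual character gives each its own voice timbre.
        self.speaker = nn.Embedding(n_speakers, d_hidden)
        self.rnn = nn.GRU(d_hidden, d_hidden, batch_first=True)
        self.to_mel = nn.Linear(d_hidden, n_mels)

    def forward(self, char_ids, speaker_id):
        x = self.chars(char_ids) + self.speaker(speaker_id).unsqueeze(1)
        x, _ = self.rnn(x)
        return self.to_mel(x)  # toy alignment: one mel frame per input character

tts = TinyTTS()
mel = tts(torch.randint(0, 100, (1, 20)), torch.tensor([3]))  # (1, 20, 80)
```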
[0038] Step 103: based on the character image models and voice characteristics, collect the features of real-world persons and construct the personalized virtual character generation model;
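
A hedged sketch of step 103's generator: a small network that maps features captured from a real person, plus noise in GAN-generator style, to avatar appearance parameters. All dimensions and the output parameterization are assumptions.

```python
import torch
import torch.nn as nn

class CharacterGenerator(nn.Module):
    def __init__(self, feature_dim=64, noise_dim=16, avatar_dim=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(feature_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, avatar_dim), nn.Tanh(),
        )

    def forward(self, person_features):
        # Concatenate captured real-person features with noise, GAN-style.
        z = torch.randn(person_features.size(0), self.noise_dim)
        return self.net(torch.cat([person_features, z], dim=1))

gen = CharacterGenerator()
avatar_params = gen(torch.randn(1, 64))  # parameters for the rendered character
```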
[0039] Step 104: based on a large collection of personal basic information, personal preferences, and similar data, design and train the virtual environment scene and character recommendation model;
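
One plausible shape for step 104's recommendation model is a two-tower scorer: one tower embeds the subject's profile, the other embeds each candidate scene or character, and a dot product ranks the candidates. The feature encodings and sizes below are assumptions.

```python
import torch
import torch.nn as nn

class RecommendationModel(nn.Module):
    def __init__(self, profile_dim=32, item_dim=24, d=64):
        super().__init__()
        self.profile_tower = nn.Sequential(
            nn.Linear(profile_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.item_tower = nn.Sequential(
            nn.Linear(item_dim, d), nn.ReLU(), nn.Linear(d, d))

    def score(self, profile, items):
        u = self.profile_tower(profile).squeeze(0)  # (d,)
        v = self.item_tower(items)                  # (n_candidates, d)
        return v @ u                                # one score per candidate

model = RecommendationModel()
profile = torch.randn(1, 32)      # encoded basic information and preferences
candidates = torch.randn(10, 24)  # encoded candidate scenes/characters
best = model.score(profile, candidates).argmax()  # index of recommended item
```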
[0040] Step 105: collect a large amount of related material and, combining it with theory, design and train the prediction model and the prompt model based on the artificial intelligence services of the cloud data center;
[0041] Step 106: the edge computing node or workstation requests models and services from the cloud data center, which, based on the request and the node's resources and business conditions, pushes personalized models and services;
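
A hypothetical sketch of the step 106 exchange: the edge node reports its resources and asks the cloud data center to push suitable models. The endpoint, payload fields, and response format are invented for illustration; the patent does not specify a protocol.

```python
import requests

CLOUD_API = "https://cloud-datacenter.example/api"  # placeholder URL (assumption)

def request_models(node_id: str) -> dict:
    """Report this node's resources and ask the cloud to push suitable models."""
    payload = {
        "node_id": node_id,
        "resources": {"gpu": True, "vram_gb": 8},   # illustrative fields
        "services": ["speech_recognition", "tts", "prediction", "prompt"],
    }
    resp = requests.post(f"{CLOUD_API}/model-push", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. download locations of the pushed models
```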
[0042] Step 107: on the edge computing node or workstation, combined with the VR device, construct the virtual reality environment;
[0043] Step 108: the person to be tested wears the VR device and the smart wearable device, enters the virtual reality environment, and begins the dialogue;
[0044] Step 109 (optional): based on the actual situation of the person to be tested, select a relevant real person and use the personalized virtual character generation model to generate the dialogue character;
[0045] Step 110: the examiner can choose to wear a VR device or use a display device to interact with the person to be tested;
[0046] Step 111: according to the basic situation of the person to be tested and the continuous input from the smart wearable sensors and the VR device, use the prediction model and the prompt model to provide predictions and prompts for the examiner;
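
A minimal sketch of the step 111 prediction call, assuming the wearable readings arrive as a fixed window of heart rate, blood pressure, and blood oxygen samples scored by a small 1-D convolutional classifier. The feature layout and architecture are assumptions.

```python
import torch
import torch.nn as nn

class VitalsPredictor(nn.Module):
    def __init__(self, n_channels=4, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, window):            # window: (batch, channels, time)
        return self.net(window).softmax(dim=-1)

predictor = VitalsPredictor()
window = torch.randn(1, 4, 120)           # 120 samples of HR, SBP, DBP, SpO2
risk = predictor(window)[0, 1].item()     # auxiliary score shown to the examiner
```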
[0047] Step 112: based on the auxiliary predictions and prompts, the examiner speaks into the microphone; the speech recognition function transcribes the speech into text, the semantic expression conversion text model converts it into text that matches the virtual character's characteristics, and the text-to-speech model then generates the character's voice;
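
The step 112 pipeline chains three models. The sketch below shows only the chaining; each stage is a placeholder stub standing in for the recognition service, the semantic expression conversion text model, and the character-voice synthesis model.

```python
def speech_to_text(audio: bytes) -> str:
    # Stub standing in for the speech recognition service.
    return "how have you been sleeping lately"

def style_convert(text: str, character_id: int) -> str:
    # Stub standing in for the semantic expression conversion text model.
    return text.capitalize() + "?"

def text_to_speech(text: str, character_id: int) -> bytes:
    # Stub standing in for the character-voice synthesis model.
    return text.encode("utf-8")

def examiner_turn(audio: bytes, character_id: int) -> bytes:
    """One step-112 turn: recognize, restyle for the character, synthesize."""
    text = speech_to_text(audio)
    styled = style_convert(text, character_id)
    return text_to_speech(styled, character_id)

audio_out = examiner_turn(b"<microphone bytes>", character_id=1)
```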
[0048] Step 113: synthesize the generated voice with the virtual character's image and present it through the VR device to the person to be tested;
[0049] Step 114: the person to be tested responds through the VR device; the system captures information such as voice, actions, and emotions, collects the smart wearable data, stores the data on the edge computing node or workstation, and transmits it to the examiner's side;
[0050] Step 115: repeat steps 111 to 114 until the detection and judgment are completed;
[0051] Step 116: form the report for this session based on the results provided by the prediction model; the examiner determines the final judgment result based on the report and the detection results;
[0052] Step 117: continuously collect data during the detection process and optimize the models to improve accuracy.
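
A minimal sketch of the step 117 loop, assuming new labelled interaction records accumulate in a buffer and periodically drive a fine-tuning step on whichever model is being optimized. The record format and training call are assumptions.

```python
import torch

def optimize_on_new_data(model, buffer, optimizer, loss_fn, batch_size=32):
    """Fine-tune on the most recent labelled interaction records."""
    if len(buffer) < batch_size:
        return  # wait until enough new records have accumulated
    xs, ys = zip(*buffer[-batch_size:])   # buffer holds (input, label) tensors
    x, y = torch.stack(xs), torch.stack(ys)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```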
[0053] The above embodiments are intended to illustrate the technical solutions of the present invention, not to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention is included in the scope of protection of the present invention.


Similar technology patents

Adaptive fault detection method for airplane rotation actuator driving device based on deep learning

Inactive · CN104914851A · improve accuracy · reduce the detection false alarm rate
Owner:BEIHANG UNIV

Video monitoring method and system

Owner:深圳辉锐天眼科技有限公司

Classification and recommendation of technical efficacy words

  • improve accuracy

Golf club head with adjustable vibration-absorbing capacity

Inactive · US20050277485A1 · improve grip comfort · improve accuracy
Owner:FUSHENG IND CO LTD

Stent delivery system with securement and deployment accuracy

Active · US7473271B2 · improve accuracy · reduces occurrence and/or severity
Owner:BOSTON SCI SCIMED INC

Method for improving an HS-DSCH transport format allocation

Inactive · US20060089104A1 · improve accuracy · increase benefit
Owner:NOKIA SOLUTIONS & NETWORKS OY

Catheter systems

Active · US20120059255A1 · increase selectivity · improve accuracy
Owner:ST JUDE MEDICAL ATRIAL FIBRILLATION DIV

Gaming Machine And Gaming System Using Chips

Active · US20090075725A1 · improve accuracy
Owner:UNIVERSAL ENTERTAINMENT CORP