[0031] In order to make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described below. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative labor shall fall within the protection scope of the present invention.
[0032] As shown in Figure 1, a virtual reality environment is constructed; based on the preferences and condition of the person to be tested, a virtual conversation character and scene are selected, and the person to be tested wears a virtual reality VR device and smart wearable sensors to achieve interaction with the virtual reality environment and data collection. The measured data are fed to models from the cloud data center to make predictions and auxiliary prompts; through speech recognition, text conversion, voice synthesis, expression generation, and the like, the virtual reality scene is rendered in real time on the edge computing node or workstation, helping the examiner complete real-time interaction with, and detection of, the person to be tested. Wherein:
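The turn-by-turn flow just described (examiner speech, recognition, character-style text, synthesized voice, rendered virtual character) can be sketched as follows. All function names and bodies below are hypothetical placeholders standing in for the cloud and edge services; they are not part of the specification.

```python
# Minimal sketch of one interaction turn; every stub is an invented placeholder.

def recognize_speech(audio: bytes) -> str:            # speech recognition (cloud model)
    return "How are you feeling today?"               # placeholder result

def convert_style(text: str, character: str) -> str:  # semantic expression conversion
    return f"[{character}] {text}"                    # placeholder styled text

def synthesize_voice(text: str) -> bytes:             # text-to-speech model
    return text.encode()                              # placeholder waveform

def render_to_vr(voice: bytes, character: str) -> None:  # edge-side real-time rendering
    print(f"render {character} speaking {len(voice)} bytes")

def interaction_turn(examiner_audio: bytes, character: str) -> None:
    """One examiner turn: ASR -> style conversion -> TTS -> VR rendering."""
    text = recognize_speech(examiner_audio)
    styled = convert_style(text, character)
    voice = synthesize_voice(styled)
    render_to_vr(voice, character)

interaction_turn(b"\x00\x01", character="virtual_relative")
```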
[0033] The virtual reality environment is a computer-generated simulated environment, including created virtual scenes and virtual characters, allowing the person to be tested to be immersed in the environment and experience the virtual world. The virtual reality VR device includes a VR helmet, VR data gloves, a VR cabin, a VR data suit, a microphone, stereo speakers, a camera, and the like; among them, the VR helmet display is the core of the visual system, the microphone handles voice input, the stereo speakers provide the perceivable sound output, and the camera captures the appearance, expressions, and actions of the person to be tested. The person to be tested must wear the VR device, while the remote examiner can choose to wear a VR device or interact through a display device; remote mode is supported. The smart wearable sensors detect health data such as heart rate, blood pressure, and blood oxygen saturation, which are processed in real time by the edge computing node and uploaded to the cloud data center as needed. The edge computing node or workstation provides computing, storage, and network functions, accelerates real-time rendering of the virtual reality environment via GPU hardware, and, on the edge side close to the VR device, completes virtual reality environment rendering and processing, visual system processing, auditory system processing, interactive feedback, wearable data acquisition, prediction, and other services. The cloud data center aggregates large amounts of computing, storage, and network resources, uniformly manages the edge computing nodes and workstations, provides database and data storage services, model training services, and models for speech recognition, semantic analysis, text conversion, voice generation, and expression and action generation, as well as the personalized virtual character generation model, the virtual environment scene and character recommendation model, the prediction model, and the prompt model, and pushes personalized models and services according to the resources and business conditions of the edge computing nodes. The text-to-speech model adopts a neural network model to synthesize the personalized voice of the virtual character from text input. The semantic expression conversion text model uses deep learning auto-encoder (Auto-Encoder) and generative adversarial network (GAN) techniques to perform semantic analysis on input text and, according to the set character's wording and language expression habits, generate text that conforms to the character's way of speaking. The core of the personalized virtual character generation model is a neural network model that collects the characteristics of a real-world person and generates the corresponding virtual character in the virtual reality environment. The virtual environment scene and character recommendation model recommends the virtual environment scene and character for the current detection based on the basic information and personal preferences of the person to be tested. The prediction model takes the smart wearable data and virtual reality interaction data as input and is used to make the detection prediction. The core of the prompt model is a self-attention mechanism sequence generation model which, according to the information and data of the person to be tested and the actual stage of the detection, recommends questions to the examiner.
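The specification characterizes the prompt model only as a self-attention sequence generation model. For concreteness, below is a minimal PyTorch sketch of such a generator; the vocabulary size, model dimensions, greedy decoding loop, and the omission of a causal mask are simplifications and assumptions for illustration, not the model claimed here.

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Minimal self-attention sequence generator (illustrative only)."""
    def __init__(self, vocab_size=1000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                   # tokens: (batch, seq)
        h = self.encoder(self.embed(tokens))     # self-attention over the context
        return self.head(h)                      # next-token logits per position

# Greedy decoding: extend subject-context tokens into a recommended question.
model = PromptGenerator()
model.eval()
context = torch.tensor([[1, 5, 42]])             # encoded subject info + detection stage
with torch.no_grad():
    for _ in range(8):
        logits = model(context)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        context = torch.cat([context, next_tok], dim=1)
print(context)                                   # token ids of the generated prompt
```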
[0034] The method of the present invention will be described in detail below in conjunction with specific embodiments.
[0035] The auxiliary detection and judgment process includes the following steps:
[0036] Step 101: Design VR virtual reality scenes and characters, generate the environment scene and character models, and save them to the cloud data center;
[0037] Step 102: Design the virtual characters' voice characteristics, and build the text-to-speech model and the semantic expression conversion text model based on the artificial intelligence services of the cloud data center;
[0038] Step 103: Based on the character image models and voice characteristics, collect the features of real-world persons and construct the personalized virtual character generation model;
[0039] Step 104: Based on collected large amounts of basic personal information, personal preferences, and similar data, design and train the virtual environment scene and character recommendation model (a recommendation sketch is given after this step list);
[0040] Step 105: Collect a large amount of related material and, combined with theory, design and train the prediction model and the prompt model based on the artificial intelligence services of the cloud data center;
[0041] Step 106: The edge computing node or workstation requests models and services from the cloud data center, which, based on the request and the resource and business conditions of the edge computing node, pushes personalized models and services (a request sketch is given after this step list);
[0042] Step 107: On the edge computing node or workstation, combined with the VR device, construct the virtual reality environment;
[0043] Step 108: The person to be tested wears the VR device and smart wearable devices, enters the virtual reality environment, and begins the dialogue;
[0044] Step 109 (optional): Based on the actual situation of the person to be tested, select a relevant real person and generate the corresponding virtual dialogue character using the personalized virtual character generation model;
[0045] Step 110: The examiner chooses to wear a VR device or use a display device to interact with the person to be tested;
[0046] Step 111: According to the basic situation of the person to be tested and the continuous input of smart wearable detection data and VR device data, use the prediction model and the prompt model to provide predictions and prompts for the examiner (a streaming sketch is given after this step list);
[0047] Step 112: Based on the auxiliary predictions and prompts, the examiner speaks into the microphone; the speech recognition function converts the speech into text, the semantic expression conversion text model converts it into text that conforms to the virtual character's characteristics, and the text-to-speech model then generates the voice;
[0048] Step 113: Synthesize the generated voice with the virtual character image and present it to the person to be tested through the VR device;
[0049] Step 114: According to the received information, the person to be tested gives feedback through the VR device; information such as voice, actions, and expressions is captured, the smart wearable data is collected, and the data is stored on the edge computing node or workstation and transmitted to the examiner side;
[0050] Step 115: Repeat Steps 111 to 114 until the detection and judgment are completed;
[0051] Step 116: Form the detection report based on the results provided by the prediction model; the examiner determines the final judgment result based on the report and the detection results;
[0052] Step 117: Continuously collect data during the detection process, optimize the models, and improve accuracy.
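As a concrete illustration of the scene and character recommendation in Step 104 (the specification does not fix an algorithm), the following sketch matches a subject preference vector against scene embeddings by cosine similarity; the feature layout, scene names, and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical scene embeddings (e.g., calm / outdoor / familiar-setting affinity).
scenes = {
    "living_room": np.array([0.9, 0.1, 0.8]),
    "park":        np.array([0.7, 0.9, 0.3]),
    "office":      np.array([0.2, 0.1, 0.4]),
}

def recommend_scene(preference: np.ndarray) -> str:
    """Return the scene whose embedding is most similar to the preferences."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(scenes, key=lambda name: cosine(scenes[name], preference))

subject_pref = np.array([0.8, 0.2, 0.9])   # derived from basic info + stated preferences
print(recommend_scene(subject_pref))        # -> "living_room"
```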
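Step 106's model request could, for example, take the form of a simple HTTP exchange between the edge node and the cloud data center. The endpoint, payload fields, and protocol below are assumptions for illustration only; the specification does not define this interface.

```python
import json
import urllib.request

# Hypothetical cloud endpoint and payload; the real interface is not specified.
CLOUD_URL = "https://cloud.example.com/api/v1/models/request"

payload = {
    "node_id": "edge-07",
    "resources": {"gpu": "1x", "memory_gb": 32},       # node resource report
    "services": ["tts", "style_transfer", "prompt"],   # requested model services
}

req = urllib.request.Request(
    CLOUD_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# The cloud data center would answer with pushed model bundles or download links:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```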
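For Step 111, one minimal way to turn the continuous smart-wearable input into examiner prompts is a sliding-window baseline check, sketched below; the window size, the three-sigma threshold, and the prompt text are invented for illustration and stand in for, rather than implement, the trained prediction and prompt models described above.

```python
from collections import deque
from statistics import mean, pstdev

class VitalsWindow:
    """Sliding window over smart-wearable readings (illustrative thresholds)."""
    def __init__(self, size: int = 30):
        self.heart_rate = deque(maxlen=size)

    def add(self, hr: float) -> None:
        self.heart_rate.append(hr)

    def prompt(self) -> str | None:
        """Emit an examiner prompt when the latest reading leaves the baseline."""
        if len(self.heart_rate) < self.heart_rate.maxlen:
            return None                          # not enough data yet
        *baseline, latest = self.heart_rate      # compare newest against the rest
        mu, sigma = mean(baseline), pstdev(baseline)
        if latest > mu + 3 * sigma:
            return "Heart rate spiking: consider revisiting the last question."
        return None

window = VitalsWindow(size=5)
for hr in [72, 74, 73, 75, 71, 95]:              # synthetic readings
    window.add(hr)
    if (tip := window.prompt()):
        print(tip)
```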
[0053] The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit the scope of the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.