
VR-based binocular video fusion training method and device

A training method and technology of a training device, applied in the field of vision training, can solve problems such as failure to achieve training effects, difficulty in improving binocular vision fusion ability, etc., to improve spatial orientation, facilitate binocular vision fusion training, and improve training effects. Effect

Pending Publication Date: 2021-07-13
HANGZHOU SHENRUI BOLIAN TECH CO LTD +1

AI Technical Summary

Problems solved by technology

[0003] In view of the above problems, embodiments of the present invention provide a VR-based binocular video fusion training method and device, which solve the technical problems that, in existing binocular video fusion training, users tend to rely too heavily on one eye to complete the training, so the intended training effect is not achieved and binocular vision fusion ability is difficult to improve.




Embodiment Construction

[0037] To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawings and specific embodiments. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.

[0038] In view of the shortcomings of the prior art, an embodiment of the present invention provides a specific implementation of a VR-based binocular vision fusion training method. As shown in Figure 1, the method specifically includes:

[0039] S110: Independently display a first image and a second image for the user's two eyes in the same virtual training scene, the first image including a controlled train...
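The dichoptic (per-eye) display of step S110 can be modeled with a minimal sketch. All names below (`EyeFrame`, `build_training_frames`, the scene identifier, and the choice of which eye sees which target) are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class EyeFrame:
    """One eye's image: the shared scene plus an eye-specific target."""
    scene_id: str
    target: str            # "controlled" (movable) or "fixed" (task target)
    target_pos: tuple      # (x, y, z) position in scene coordinates

def build_training_frames(scene_id, controlled_pos, fixed_pos):
    """Compose separate left/right frames set in the same virtual scene.

    One eye sees only the controlled (movable) training target and the
    other sees only the fixed task target, so completing the task requires
    fusing both views rather than relying on a single eye.
    """
    left = EyeFrame(scene_id, "controlled", controlled_pos)
    right = EyeFrame(scene_id, "fixed", fixed_pos)
    return left, right

left, right = build_training_frames("scene-01", (0.0, 0.0, 1.0), (2.0, 1.0, 1.0))
```

In a real VR pipeline the two frames would be submitted to the left- and right-eye render targets of the headset; here they are plain data objects so the separation logic is visible.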



Abstract

The invention provides a VR-based binocular video fusion training method and device. The method comprises the steps of: independently displaying a first image and a second image for the user's two eyes in the same virtual training scene, wherein the first image comprises a controlled training target that is allowed to move in the virtual training scene, and the second image comprises a fixed task target; receiving path feedback information generated as the user, after observing the position of the fixed task target, controls the controlled training target to move relative to that position; and adaptively reconfiguring the controlled training target and the fixed task target according to the path feedback information, and displaying the reconfigured targets. Through separated display training in a virtual reality scene, binocular video fusion training becomes more interesting, effective, and convenient; compliance is enhanced so that the user actively participates in the training; and adaptive configuration of the visual training content forms an iterative loop of binocular video fusion training, thereby improving the training effect.
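The adaptive reconfiguration step described above could be realized as a simple difficulty controller driven by the path feedback. The update rule, the deviation tolerance, the step size, and the distance bounds below are all invented for illustration; the patent does not disclose concrete numbers or formulas:

```python
def adapt_difficulty(target_distance, path_deviation, tolerance=0.2, step=0.25):
    """Reconfigure target placement from one round of path feedback.

    target_distance: current separation between the controlled training
                     target's start and the fixed task target (scene units).
    path_deviation:  mean deviation of the user's guided path from the
                     ideal path to the fixed task target.

    An accurate run (small deviation) widens the separation to make the
    next round harder; an inaccurate run narrows it. Clamping keeps the
    task solvable, which closes the training-iteration loop.
    """
    if path_deviation < tolerance:
        target_distance *= (1 + step)   # accurate run -> harder task
    else:
        target_distance *= (1 - step)   # inaccurate run -> easier task
    return max(0.5, min(target_distance, 5.0))
```

For example, `adapt_difficulty(2.0, 0.1)` returns `2.5` (separation grows after an accurate run), while `adapt_difficulty(2.0, 0.4)` returns `1.5`.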

Description

Technical Field

[0001] The invention relates to the technical field of vision training, and in particular to a VR-based binocular vision fusion training method and device.

Background

[0002] The binocular vision fusion function is based on normal simultaneous visual perception by both eyes: through analysis and processing in the brain, two images that fall on slightly different corresponding points of the two retinas are synthesized into a single complete image of the object. In some people, the binocular vision fusion function is weakened or severely lost due to strabismus or pathological anisometropia; for such people, appropriate vision training is required to restore or improve the defective binocular vision fusion function. Existing binocular vision fusion training has both eyes focus on the same training target at the same time. Users tend to over-rely on one eye during the training process, and since there is no mechanism for displaying separate content to each eye, it is difficult to empha...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): A61H5/00
CPC: A61H5/00, A61H5/005, A61H2201/1604, A61H2201/165, A61H2201/50, A61H2201/5007, A61H2201/5023, A61H2201/5043, A61H2205/024
Inventors: 袁进, 李劲嵘, 封檑, 李奇威, 李子奇, 任鸿伦, 哈卿, 李一鸣, 俞益洲, 乔昕
Owner: HANGZHOU SHENRUI BOLIAN TECH CO LTD