
Mixed reality interactive guidance system and method based on adaptive visual differences

The patent relates to visual difference and mixed reality technology, applied to input/output for user/computer interaction and to navigation, mapping and route guidance. It addresses the problems that existing systems do not consider differentiated presentation for people with different vision states and are unfriendly to people with vision defects, with the effects of improving user experience, indoor positioning accuracy and the immersive experience.

Active Publication Date: 2022-02-25
HUBEI UNIV OF TECH

AI Technical Summary

Problems solved by technology

Although MR technology is currently applied in other fields, existing applications do not consider differentiated presentation of mixed reality images for people with different vision states, which is unfriendly to people with visual impairments.



Examples


Embodiment 1

[0063] The mixed reality interactive guidance system of this embodiment performs indoor wayfinding and navigation in public places; see Figure 1. The system comprises an instruction receiving module, an image scanning module, a vision detection module, a storage module, a processing module, a virtual imaging module, and first and second data transmission modules. The instruction receiving, image scanning and vision detection modules send data to the processing module through the first data transmission module, and the processing module sends control instructions to the virtual imaging module through the second data transmission module. The processing module is also connected to the storage module and can retrieve data from it. The system is embodied as head-mounted MR glasses, with each module embedded at the corresponding position on the glasses frame...
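The module topology described above (sensor modules feeding a processing module over one transmission module, with control instructions going out over a second) can be sketched as follows. All class and variable names are illustrative; the patent does not specify an implementation.

```python
class Module:
    """Base class for a named system module (illustrative)."""
    def __init__(self, name):
        self.name = name

class DataBus:
    """Stands in for a data transmission module: forwards messages and logs them."""
    def __init__(self, name):
        self.name = name
        self.log = []
    def send(self, source, target, payload):
        self.log.append((source.name, target.name, payload))
        return target.receive(payload)

class StorageModule(Module):
    """Holds the indoor 3-D scene model (stubbed)."""
    def __init__(self):
        super().__init__("storage")
        self._data = {"scene_model": "indoor-3d-model"}
    def get(self, key):
        return self._data[key]

class ProcessingModule(Module):
    """Directly connected to storage; turns sensor data into imaging instructions."""
    def __init__(self, storage):
        super().__init__("processing")
        self.storage = storage
    def receive(self, payload):
        scene = self.storage.get("scene_model")  # retrieve from storage module
        return {"instruction": "render_path", "scene": scene, "input": payload}

class VirtualImagingModule(Module):
    """Superimposes the virtual path on the real scene (stubbed)."""
    def __init__(self):
        super().__init__("virtual_imaging")
    def receive(self, payload):
        return f"overlay({payload['instruction']})"

storage = StorageModule()
processing = ProcessingModule(storage)
imaging = VirtualImagingModule()
bus1 = DataBus("first_transmission")   # sensor modules -> processing
bus2 = DataBus("second_transmission")  # processing -> virtual imaging

scan = Module("image_scanning")
plan = bus1.send(scan, processing, {"scene_features": [0.1, 0.4]})
result = bus2.send(processing, imaging, plan)
print(result)  # overlay(render_path)
```

The two separate buses mirror the patent's split between inbound sensor data and outbound control instructions; the processing module's direct handle on storage mirrors its direct connection in the text.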

Embodiment 2

[0073] On the basis of Embodiment 1, the system of this embodiment also provides facility identification: it identifies a target facility selected by the wearer and presents that facility's operation method. Target facilities are generally smart facilities deployed in public places, such as ticket vending machines, vending machines and shared equipment.

[0074] In this embodiment, the storage module also stores three-dimensional object models and operation methods of the indoor facilities in the public place, and the processing module further includes a facility identification sub-module. When the image scanning module receives a facility identification instruction, it scans the target facility and extracts facility feature information, namely the depth image of the target facility, and transmits this information to the processing module through the first data transmission...
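The patent says only that the facility identification sub-module compares scanned depth-image information against the stored 3-D models; nearest-neighbour matching on a feature vector is one plausible sketch of that comparison. The facility names, feature values and guide texts below are all illustrative.

```python
import math

# Illustrative stored models: each facility is reduced to a small depth-feature
# vector (e.g. height, width, depth in metres -- an assumed representation).
STORED_FACILITIES = {
    "ticket_vending_machine": [1.8, 0.9, 0.6],
    "vending_machine":        [1.9, 1.0, 0.8],
    "shared_locker":          [1.2, 0.5, 0.5],
}
OPERATION_GUIDES = {
    "ticket_vending_machine": "Select destination, pay, collect ticket.",
    "vending_machine":        "Choose item, pay, collect from tray.",
    "shared_locker":          "Scan code, open door, store item.",
}

def identify_facility(depth_features):
    """Return the stored facility whose feature vector is closest (Euclidean)."""
    best, best_dist = None, math.inf
    for name, ref in STORED_FACILITIES.items():
        dist = math.dist(depth_features, ref)
        if dist < best_dist:
            best, best_dist = name, dist
    return best, OPERATION_GUIDES[best]

name, guide = identify_facility([1.85, 0.95, 0.65])
print(name)   # ticket_vending_machine
print(guide)  # Select destination, pay, collect ticket.
```

A real system would match full depth images (e.g. with point-cloud registration) rather than three-number vectors; the lookup-then-return-guide flow is the part that corresponds to the text.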

Embodiment 3

[0077] This embodiment provides a method for constructing the correspondence between vision states and interface presentation schemes, where a presentation scheme includes a layout and a color scheme.

[0078] The specific construction process is as follows:

[0079] (1) Predefine several different vision states and the diopters corresponding to each vision state.

[0080] In this embodiment, the diopter is x D (-3 ≤ x ≤ 2), and vision is divided into six states with diopters of -3D, -2D, -1D, 0D, 1D and 2D, where D is the diopter unit. 0D denotes normal vision; 1D and 2D denote presbyopia of 100 and 200 degrees; -1D, -2D and -3D denote myopia of 100, 200 and 300 degrees, respectively.

[0081] (2) Determine the interface layout suitable for people with various visual conditions.

[0082] In this embodiment, the interface is divided into three areas: Guidance Area 1, Prompt Area 2 and Selection Area 3; for details, see Figure 4. The guida...
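Since the layout details are truncated above, the concrete scheme values in this sketch are illustrative only; what comes from the text is the three-area split and the idea of a per-vision-state presentation scheme. The "worse eyesight gets larger text and higher contrast" policy is an assumption.

```python
AREAS = ("guidance", "prompt", "selection")   # the three interface areas

def presentation_scheme(vision_state):
    """Pick a per-area font scale and contrast for one of the six predefined
    vision states (-3..2 D). Larger |state| (worse eyesight) gets larger text
    and higher contrast -- an assumed policy, not from the patent."""
    severity = abs(vision_state)           # 0 (normal) .. 3
    font_scale = 1.0 + 0.25 * severity     # illustrative scaling rule
    contrast = "high" if severity >= 2 else "normal"
    return {area: {"font_scale": font_scale, "contrast": contrast}
            for area in AREAS}

scheme = presentation_scheme(-2)
print(scheme["guidance"])   # {'font_scale': 1.5, 'contrast': 'high'}
```

In a full system this table would be built once per vision state and stored in the storage module, then selected at runtime from the vision detection module's refraction reading.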



Abstract

The invention discloses a mixed reality interactive guidance system and method based on adaptive visual differences. The system mainly comprises an instruction receiving module, an image scanning module, a vision detection module, a storage module, a processing module and a virtual imaging module. The instruction receiving module receives an instruction and identifies the destination; the image scanning module scans the indoor scene and extracts scene feature information; the vision detection module detects the refraction data of the wearer's two eyes; the storage module stores an indoor three-dimensional scene model and interface presentation schemes for different vision states; the processing module mainly plans the navigation path; and the virtual imaging module superimposes a virtual image of the navigation path on the real scene according to the corresponding interface presentation scheme. By combining visual positioning and mixed reality technologies, the method realizes wayfinding navigation and facility guidance and identification, precisely matches and fuses the virtual image with the real scene, and brings the user a better immersive experience.

Description

technical field

[0001] The invention belongs to the technical field of mixed reality, and in particular relates to a mixed reality interactive guidance system and method based on adaptive visual differences.

Background technique

[0002] With the advance of social modernization and the deepening of smart city construction, the spatial layouts of commercial complexes, hospitals, subways, museums and other public places are becoming more and more complex, and the smart facilities arranged in public places are increasingly diverse. Although mobile apps can be used for navigation, a phone's GPS positioning module has a large error indoors, and the information display interface is small. MR (Mixed Reality) technology can accurately match and fuse virtual images with real scenes; if applied to indoor wayfinding and navigation in public places, it can give users a better immersive experience. Although MR technology is currently applied ...

Claims


Application Information

IPC(8): G06F3/01; G01C21/20
CPC: G06F3/011; G01C21/206; G06F2203/012
Inventor 胡珊荣令达贾琦韩嘉林蓝峰张东王凯华孙锐奇
Owner HUBEI UNIV OF TECH