
Virtual-real fusion rendering method and device based on monocular camera reconstruction

A virtual-real fusion, monocular-camera technology, applied in the field of medical image processing, which can solve the problem of a lack of depth perception in the display

Pending Publication Date: 2022-05-17
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0004] In order to overcome the defects of the prior art, the technical problem to be solved by the present invention is to provide a virtual-real fusion rendering method based on monocular camera reconstruction, which can solve the problem of lack of depth perception when virtual and real objects mutually occlude each other, improve the doctor's depth perception in augmented reality display, obtain the positional relationship between the virtual model and the real anatomical structure more accurately, and help expand the surgical field of view.



Examples


Embodiment Construction

[0025] An effective way to solve the invisibility problem caused by mutual occlusion of virtual and real objects is to obtain, in real time, the depth of the real target and of the virtual model relative to the camera, and to render the virtual model in layers based on that depth information: the occluded virtual part shows only its edge contour, while unoccluded virtual parts display all of their points normally. Because the occluded points are replaced by an edge contour, the occluded part of the virtual model no longer appears to float in front of the real target and occlude it.
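The layered rendering described above can be sketched per point: project each virtual-model point into the image and compare its depth against the reconstructed real-scene depth along the same viewing ray. This is a minimal illustration, not the patent's implementation; the depth-map representation, the function name, and the tolerance `eps` are assumptions:

```python
import numpy as np

def classify_virtual_points(virtual_pts_cam, real_depth, K, eps=1e-3):
    """Split virtual-model points into visible and occluded sets.

    virtual_pts_cam : (N, 3) virtual model points in camera coordinates
    real_depth      : (H, W) depth map of the reconstructed real scene
    K               : (3, 3) camera intrinsic matrix

    A virtual point is occluded when the real surface along its viewing
    ray lies closer to the optical center than the point itself.
    """
    H, W = real_depth.shape
    z = virtual_pts_cam[:, 2]
    # Project each virtual point into the image plane.
    uv = (K @ virtual_pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)

    occluded = np.zeros(len(z), dtype=bool)
    d_real = real_depth[v[inside], u[inside]]
    # Real surface in front of the virtual point -> the point is occluded.
    occluded[inside] = (d_real > 0) & (d_real + eps < z[inside])
    visible = ~occluded & inside
    return visible, occluded
```

Points in the `visible` set would be rendered normally; points in the `occluded` set would contribute only to the edge contour.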

[0026] As shown in Figure 1, the virtual-real fusion rendering method based on monocular camera reconstruction includes the following steps:

[0027] (1) Obtain the 3D point cloud information of the real target through the real-time reconstruction of the real scene by the monocular camera;

[0028] (2) Determine whether the virtual model is occluded by calculating the distance from the optical center of the monocular camera to the real target's 3D point cloud and the distance from the optical center to the virtual model;

(3) Render only the edge contour of the occluded part of the virtual model, and render and display the unoccluded part normally.
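Once the occluded part of the projected virtual model is known as an image mask, its edge contour can be obtained by suppressing interior pixels, for example with a simple 4-neighbour erosion. A NumPy-only sketch; the mask representation and function name are illustrative, not taken from the patent:

```python
import numpy as np

def occluded_edge_contour(occluded_mask):
    """Return the one-pixel edge contour of an occluded-region mask.

    occluded_mask : (H, W) boolean mask of the occluded part of the
    projected virtual model. The contour is the set of occluded pixels
    that touch at least one non-occluded neighbour, i.e. what remains
    after interior pixels are suppressed.
    """
    m = occluded_mask.astype(bool)
    # Pad so border pixels are treated as adjacent to the outside.
    p = np.pad(m, 1, constant_values=False)
    # 4-neighbour erosion: a pixel is interior if all four neighbours
    # (up, down, left, right) are also occluded.
    interior = m & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return m & ~interior  # edge = mask minus interior
```

For a filled 3x3 occluded block, only the eight boundary pixels survive, so the occluded part of the model is drawn as an outline rather than a solid region.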


PUM

No PUM

Abstract

The invention discloses a virtual-real fusion rendering method and device based on monocular camera reconstruction. The method and device solve the problem that the display lacks depth perception when virtual and real objects occlude each other, improve the doctor's depth perception in augmented reality display, obtain the positional relationship between the virtual model and the real anatomical structure more accurately, and help expand the surgical field of view. The method comprises the following steps: (1) reconstruct the real scene in real time with a monocular camera to obtain 3D point cloud information of the real target; (2) judge whether the virtual model is occluded by calculating the distance from the optical center of the monocular camera to the real target's 3D point cloud and the distance from the optical center to the virtual model; (3) render only the edge contour of the occluded part of the virtual model, and render and display the unoccluded part normally.

Description

Technical field

[0001] The present invention relates to the technical field of medical image processing, in particular to a virtual-real fusion rendering method based on monocular camera reconstruction and a corresponding virtual-real fusion rendering device, which are mainly used for virtual-real occlusion processing of targets in augmented reality display.

Background technique

[0002] Current medical-grade augmented reality display technology superimposes a virtual model on the real target, but this superimposition does not handle depth: it only places the virtual model at a specific position in space. As a result, even when the virtual model should be blocked by the real target, it still floats in front of the target, lacking realism in the depth direction.

[0003] In the medical field, when the 3D virtual model of important tissues is enhanced and fused with the real environment, if t...

Claims


Application Information

IPC(8): G06T15/20, G06T19/00
CPC: G06T19/006, G06T15/205
Inventor: 武潺, 杨健, 邵龙, 王涌天, 范敬凡, 艾丹妮 (Wu Chan, Yang Jian, Shao Long, Wang Yongtian, Fan Jingfan, Ai Danni)
Owner: BEIJING INSTITUTE OF TECHNOLOGY