
Three-dimensional scene semantic analysis method based on HoloLens space mapping

A computer-vision technology for three-dimensional scene semantic analysis. It addresses the problem that prior-art HoloLens systems cannot perform semantic analysis of 3D scenes from their 3D data, and achieves the effect of improving HoloLens's spatial mapping capability.

Active Publication Date: 2021-07-16
深圳清元文化科技有限公司
Cites: 9 · Cited by: 5

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a 3D scene semantic analysis method based on HoloLens space mapping, which solves the prior-art problem that HoloLens cannot perform semantic analysis of 3D scenes from its 3D data.



Examples


Embodiment 1

[0057] The three-dimensional scene semantic analysis method based on HoloLens space mapping of the present invention, as shown in Figure 1, proceeds through the following specific steps:

[0058] Step 1: Scan and reconstruct the indoor real scene with HoloLens to obtain the mesh data a of the scene's three-dimensional spatial mapping;

[0059] Step 2: Convert the mesh data a into point cloud data b, and complete the preprocessing and labeling of the point cloud data b;
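The patent does not specify how the mesh-to-point-cloud conversion of Step 2 is done. One common approach is area-weighted sampling of points on the triangle faces, sketched below with numpy; the function name and the toy square mesh are illustrative, not from the patent.

```python
import numpy as np

def mesh_to_point_cloud(vertices, faces, n_points, seed=0):
    """Sample a point cloud from a triangle mesh by area-weighted
    barycentric sampling -- one plausible realization of Step 2."""
    rng = np.random.default_rng(seed)
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Triangle areas via the cross product; larger faces receive more samples.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u = rng.random((n_points, 1))
    v = rng.random((n_points, 1))
    flip = (u + v) > 1.0          # reflect samples that fall outside the triangle
    u[flip] = 1.0 - u[flip]
    v[flip] = 1.0 - v[flip]
    return v0[idx] + u * (v1[idx] - v0[idx]) + v * (v2[idx] - v0[idx])

# Two triangles forming a unit square in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
cloud = mesh_to_point_cloud(verts, tris, n_points=1024)
print(cloud.shape)  # (1024, 3)
```

Area weighting keeps the sampled density uniform over the surface, so large flat regions (walls, floors) are not under-represented relative to small faces.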

[0060] Step 3: Repeat Steps 1 and 2 until the required indoor data has been collected and labeled, producing an indoor point cloud dataset and a category information lookup table. The category information lookup table is shown in Figure 2, and the dataset production process is shown in Figure 3;
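A category lookup table as in Step 3 typically maps integer class ids to category names, with per-point labels stored alongside the coordinates. The categories and helper below are hypothetical stand-ins (the patent's actual table is in its Figure 2, which is not reproduced here):

```python
import numpy as np

# Hypothetical indoor categories; the patent's actual lookup table is not shown.
CATEGORY_LOOKUP = {0: "floor", 1: "wall", 2: "table", 3: "chair"}

def make_labeled_sample(points, label_id):
    """Append a per-point label column, yielding the (x, y, z, label) rows
    that point-cloud segmentation datasets commonly store."""
    labels = np.full((points.shape[0], 1), label_id, dtype=points.dtype)
    return np.hstack([points, labels])

scan = np.random.default_rng(1).random((100, 3))   # stand-in for point cloud b
sample = make_labeled_sample(scan, label_id=2)     # every point labeled "table"
print(sample.shape, CATEGORY_LOOKUP[int(sample[0, 3])])  # (100, 4) table
```

Keeping labels as integers and names in a separate lookup table keeps the dataset compact and lets category names be changed without rewriting the point data.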

[0061] Step 4: Train the three-dimensional scene semantic neural network and save the trained model M;

[0062] Step 5: Create the HoloLens scene semantic...



Abstract

The invention discloses a three-dimensional scene semantic analysis method based on HoloLens space mapping, implemented through the following steps: scan and reconstruct an indoor real scene with HoloLens to obtain the mesh data a of the scene's three-dimensional spatial mapping; convert the mesh data a into point cloud data b, and complete the preprocessing and labeling of the point cloud data b; repeat the first two steps until the required indoor data has been acquired and labeled, producing an indoor point cloud dataset and a category information lookup table; train the three-dimensional scene semantic neural network and save the trained model M; and build a HoloLens scene semantic analysis toolkit to complete scene information labeling and spatial region division, improving HoloLens's cognition of the space. The method improves the spatial mapping capability of HoloLens, allows the distribution and category of objects in a space to be observed directly in HoloLens, and enhances HoloLens's perception of its spatial environment.

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision and relates to a three-dimensional scene semantic analysis method based on HoloLens space mapping.

Background Technique

[0002] With innovations in hardware technology, virtual reality (VR), augmented reality (AR), and mixed reality (MR) have greatly improved three-dimensional space cognition. Mixed reality technology combines real and virtual scenes and allows users to interact with both, enhancing the user's sense of realism.

[0003] HoloLens is a mixed reality device launched by Microsoft. Wearing HoloLens, users see the real environment through the lenses of the glasses while virtual digital models and animations are displayed through the same lenses. HoloLens acquires 3D scanning data of the surrounding real scene through its sensors and uses the HoloToolKit toolkit to perform spatial mapping processing on the 3D data, making it mesh data cl...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Application (China)
IPC (IPC8): G06T7/73, G06K9/62, G06N3/04, G06N3/08
CPC: G06T7/73, G06N3/04, G06N3/08, G06T2207/20081, G06T2207/20084, G06T2207/20028, G06T2207/10028, G06T2207/20024, G06F18/2433, G06F18/253
Inventor: 吴学毅, 李云腾
Owner: 深圳清元文化科技有限公司