
A large-scale scene 3D modeling method and device based on a depth camera

A depth-camera-based modeling technology, applied in the field of 3D modeling, which solves the prior-art problems of huge data volume, high implementation difficulty, and poor flexibility, and achieves the effects of low storage-space requirements, a wide field of application, and flexible use.

Active Publication Date: 2021-07-20
HANGZHOU GUANGPO INTELLIGENT TECH CO LTD

AI Technical Summary

Problems solved by technology

[0003] In the prior art, one approach uses laser radar to scan the scene and reconstructs the resulting point cloud to model it. This method directly obtains high-precision, dense 3D point cloud data, but the equipment is expensive and cumbersome, making it unsuitable for portable measurement; in addition, measurement takes a long time and reconstruction complexity is high. The other approach uses multiple cameras to collect images from different viewpoints and then stitches them together to generate the 3D structure of the environment. This method is simple and direct, but the volume of data to process is very large, and it supports only fixed-point measurement, not dynamic measurement. Moreover, because each camera's viewing angle is limited, this method requires a large camera array to achieve 3D modeling of large-scale scenes, making the cost very high and the implementation even more difficult.
[0004] Both of the above solutions share two major disadvantages. On the one hand, because every acquired frame must be processed, the data volume and computational cost are very high and model reconstruction takes a long time, posing a considerable challenge for hardware cost and real-time reconstruction. On the other hand, because traditional methods describe the reconstruction result as a 3D point cloud without further processing such as meshing, the reconstructed model is very large and inflexible, and cannot support switching between multiple resolutions.
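The multi-resolution limitation noted above is the motivation for grid-based representations: once points are bucketed into voxels, coarsening the model is just a cheap re-bucketing. A minimal sketch of that idea follows; the voxel sizes and sample points are illustrative assumptions, not values from the patent:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map an N x 3 point cloud to its set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(int)
    return {tuple(i) for i in idx}

# Three sample points; two are 5 cm apart, one is about half a metre away.
pts = np.array([[0.01, 0.02, 0.00],
                [0.06, 0.01, 0.02],
                [0.51, 0.52, 0.50]])

fine = voxelize(pts, 0.05)   # 5 cm voxels: all three points stay distinct
coarse = voxelize(pts, 0.5)  # 50 cm voxels: the two nearby points merge
```

An octree stores exactly this hierarchy of voxel sizes, so switching resolution amounts to reading a different tree depth rather than reprocessing the raw point cloud.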

Method used




Description of Embodiments

[0035] The present invention will be described in detail below with reference to the accompanying drawings and in combination with embodiments.

[0036] Referring to Figures 1-4, a large-scale scene 3D modeling method based on a depth camera, as shown in Figure 1, includes the following steps:

[0037] S1. Obtain the depth map information and pose information of the current frame. The depth camera acquires the depth map of the current frame at the current position; the pose information includes position information and attitude information. In an outdoor environment, the pose is acquired by combining differential GPS with an IMU (Inertial Measurement Unit) sensor; in an indoor environment, the pose calculated from the depth image is fused with the IMU sensor information.
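The pose in S1 can be represented as a 4x4 homogeneous transform assembled from position and attitude. The sketch below assumes that common convention; the function name, the identity attitude, and the sample GPS position are illustrative, not taken from the patent:

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous pose from a 3x3 rotation R and position t."""
    T = np.eye(4)
    T[:3, :3] = R   # attitude (e.g. from the IMU)
    T[:3, 3] = t    # position (e.g. from differential GPS)
    return T

# Outdoor case: position from differential GPS, attitude from the IMU.
R_imu = np.eye(3)                   # identity attitude, for the sketch
t_gps = np.array([10.0, 5.0, 1.5])  # metres in the world frame (hypothetical)
T_wc = make_pose(R_imu, t_gps)      # camera-to-world transform
```

The indoor case would compute R and t from the depth image (e.g. by registration against a previous frame) and fuse the result with the IMU reading before calling the same assembly step.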

[0038] S2. Solve the depth map to obtain the 3D point cloud of the current frame, and use a coordinate transformation to uniformly convert the depth...
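A common way to solve a depth map into a camera-frame point cloud is pinhole back-projection, followed by a coordinate transform into the world frame using the pose from S1. The sketch below assumes that model; the intrinsics fx, fy, cx, cy are hypothetical parameters, since the patent text does not specify them:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 camera-frame cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels

def to_world(points, T_wc):
    """Apply a 4x4 camera-to-world pose to an N x 3 point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T_wc.T)[:, :3]
```

With an identity pose the world-frame cloud equals the camera-frame cloud, which makes the pair of functions easy to sanity-check.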



Abstract

The invention provides a large-scale scene 3D modeling method based on a depth camera. The steps include: obtaining the depth map information and pose information of the current frame; solving the depth map to obtain the 3D point cloud of the current frame; calculating the motion of the current frame relative to the key frame and comparing it against a motion threshold; transforming the key frame's 3D point cloud coordinates; and finally constructing the scene 3D model. The invention also relates to a large-scale scene 3D modeling device based on a depth camera. Because only key frames are used to build the 3D model, modeling time and storage consumption are small. By combining the 3D point cloud with an octree grid map, the storage-space requirements during modeling are very low, the representation is flexible, and arbitrary, fast switching between multiple resolutions is possible. The invention combines a single depth camera with other sensors, which is economical and practical, and the large-scale scene 3D modeling device has a wide field of application.
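The abstract's motion-threshold test for selecting key frames could be sketched as below. The threshold values and the trace-based rotation metric are illustrative assumptions; the patent text does not specify how the motion amount is computed:

```python
import numpy as np

# Hypothetical thresholds: promote a frame to key frame once it has moved
# more than 0.3 m or rotated more than 15 degrees since the last key frame.
TRANS_THRESH = 0.3              # metres
ROT_THRESH = np.deg2rad(15.0)   # radians

def is_new_keyframe(T_key, T_cur):
    """Check whether the current 4x4 pose moved enough relative to the key frame."""
    T_rel = np.linalg.inv(T_key) @ T_cur
    trans = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    return trans > TRANS_THRESH or angle > ROT_THRESH
```

Frames that fail this test are discarded after updating the map, which is what keeps the per-frame processing and storage costs low compared with the process-every-frame schemes criticized in the background section.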

Description

Technical field

[0001] The invention relates to 3D modeling technology, in particular to a large-scale scene 3D modeling method and device based on a depth camera.

Background technique

[0002] With the development of computer vision technology and the emergence of depth cameras, 3D modeling technology, especially for large-scale scenes, has played a significant role in navigation, urban planning, and environmental observation.

[0003] In the prior art, one approach uses laser radar to scan the scene and reconstructs the resulting point cloud to model it. This method directly obtains high-precision, dense 3D point cloud data, but the equipment is expensive and cumbersome, making it unsuitable for portable measurement; in addition, measurement takes a long time and reconstruction complexity is high. The other approach uses multiple cameras to collect images from different viewpoints and then stitch...

Claims


Application Information

Patent Timeline: no application information
Patent Type & Authority: Patent (China)
IPC(8): G06T17/00; G06T7/55
CPC: G06T17/00
Inventor: 余小欢, 钱锋, 白云峰, 符建, 姚金良
Owner: HANGZHOU GUANGPO INTELLIGENT TECH CO LTD