Large scale scene 3D modeling method and device based on depth camera

A depth-camera-based modeling technology, applied in the field of 3D modeling, that addresses the problems of high reconstruction complexity, difficulty, and huge data volume, achieving low time and space consumption, low storage requirements, and a wide range of applications.

Active Publication Date: 2017-08-01
HANGZHOU GUANGPO INTELLIGENT TECH CO LTD


Problems solved by technology

[0003] In the prior art, one approach uses laser radar to scan the scene and reconstructs the resulting point cloud to model it. This method can directly obtain high-precision, dense 3D point cloud data, but the equipment is expensive and cumbersome, making it unsuitable for portable measurement; in addition, measurement takes a long time and reconstruction complexity is high. Another approach uses multiple cameras to collect images from different viewpoints and then stitches them to generate the 3D structure of the environment. This method is simple and direct, but the amount of data to process is very large, only fixed-point measurement is possible, and dynamic measurement cannot be realized. Moreover, because of the limited viewing angle of each camera, this method requires a large camera array to model large-scale scenes, resulting in very high cost and implementation difficulty.
[0004] Both of the above solutions share two major disadvantages. On the one hand, because every acquired frame must be processed, the data volume is very large, the computational cost is high, and model reconstruction takes a long time, which poses a considerable challenge to hardware cost and real-time reconstruction. On the other hand, because traditional methods describe the reconstruction result as a raw 3D point cloud without further processing such as meshing, the reconstructed model is very large and inflexible, and cannot support switching between multiple resolutions.




Embodiment Construction

[0035] The present invention will be described in detail below with reference to the accompanying drawings and in combination with embodiments.

[0036] Referring to Figures 1-4, a large-scale scene 3D modeling method based on a depth camera, as shown in Figure 1, includes the following steps:

[0037] S1. Obtain the depth map information and pose information of the current frame: the depth camera acquires the depth map of the current frame at the current position. The pose information includes position information and attitude information. In an outdoor environment, the pose is obtained by combining differential GPS with an IMU (Inertial Measurement Unit) sensor; in an indoor environment, the pose calculated from the depth image is fused with the IMU sensor information.
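The pose assembly in S1 can be sketched as follows; the function names and the (w, x, y, z) quaternion convention are illustrative assumptions, not the patent's implementation. The sketch composes a 4x4 camera-to-world transform from a position (e.g. from differential GPS) and an orientation quaternion (e.g. from an IMU):

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def make_pose(position, quaternion):
    """Assemble a 4x4 homogeneous camera-to-world transform
    from a 3-vector position and an orientation quaternion."""
    T = np.eye(4)
    T[:3, :3] = quat_to_rot(np.asarray(quaternion, dtype=float))
    T[:3, 3] = position
    return T

# Identity orientation, translated 1 m along x.
pose = make_pose([1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
```

How the GPS/IMU (or depth-image/IMU) measurements are filtered into a single position and orientation is not detailed in this excerpt; the sketch only shows the final composition into a pose matrix.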

[0038] S2. Resolve the depth map to obtain the current-frame 3D point cloud, and use coordinate transformation to uniformly convert the depth...
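The depth-map-to-point-cloud step in S2 is conventionally done by back-projecting each pixel through the pinhole camera model; the following is a minimal sketch under that assumption (the intrinsic parameters fx, fy, cx, cy and the helper names are illustrative, not from the patent):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points
    with the pinhole model: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def transform_points(points, T):
    """Apply a 4x4 pose to Nx3 camera-frame points, giving world-frame points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homo.T).T[:, :3]
```

Applying `transform_points` with the pose from S1 is one way to realize the "coordinate transformation to uniformly convert" the per-frame clouds into a common world frame.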



Abstract

The invention provides a large-scale scene 3D modeling method and device based on a depth camera. The method comprises the steps of: obtaining the current-frame depth map information and pose information; resolving the depth map to obtain a current-frame 3D point cloud; solving the motion amount of the current frame relative to a key frame; performing a motion-amount threshold determination; performing key-frame 3D point cloud coordinate transformation; and finally establishing the scene 3D model. The invention also relates to a large-scale scene 3D modeling device based on the depth camera. Because only key frames are used for 3D model construction, modeling time and storage consumption are low; because the 3D point cloud is combined with an octree grid map, the modeling process requires very little storage space, is very flexible, and supports rapid switching among multiple resolutions. The device combines a single depth camera with other sensors, making it economical and practical; it is flexible and portable and can be placed on various carriers, such as vehicle-mounted, airborne, and hand-held platforms, so it has a wide field of application.

Description

Technical field

[0001] The invention relates to 3D modeling technology, and in particular to a large-scale scene 3D modeling method and device based on a depth camera.

Background technique

[0002] With the development of computer vision technology and the emergence of depth cameras, 3D modeling technology, especially for large-scale scenes, plays a significant role in navigation, urban planning, and environmental observation.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00, G06T7/55
CPC: G06T17/00
Inventor: 余小欢, 钱锋, 白云峰, 符建, 姚金良
Owner: HANGZHOU GUANGPO INTELLIGENT TECH CO LTD