Foreground interference extraction method in three-dimensional reconstruction
An extraction method and three-dimensional reconstruction technology, applied in the field of foreground interference extraction in three-dimensional reconstruction. It addresses problems such as background interference and image regions left unconsidered or unoptimized during filling, and achieves complete and reasonable steps, fewer error points, and excellent overall performance.
Examples
Embodiment 1
[0031] The method for extracting foreground interference in 3D reconstruction of this embodiment proceeds as follows:
[0032] Step 1, compute the image depth maps: as shown in Figure 1, S denotes the modeling object and P_i (i = 1, 2, 3) denote the images taken from different angles. A structure-from-motion algorithm performs feature extraction, feature matching, and bundle adjustment on the captured images, computing the spatial position of each image; after dense matching, the depth map of each image is obtained;
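The patent gives no code for step 1. The core geometric operation that dense matching relies on, once camera positions are known, is triangulating a 3D point from its observations in two images. The following is an illustrative sketch of linear (DLT) triangulation with numpy; the function name, the camera setup, and all parameter values are this sketch's own assumptions, not taken from the patent.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (assumed already recovered,
             e.g. by the structure-from-motion stage).
    x1, x2 : 2D pixel observations of the same point in each image.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P row 3) - (P row 1/2) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

In a full pipeline this would run per matched pixel pair, and the resulting depths along each camera's viewing rays form the per-image depth maps.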
[0033] Step 2, divide the depth map into blocks according to depth value: since the object surface is continuous and the foreground object occupies a large proportion of the image, the depth values of background objects are usually comparatively large, so deleting points with large depth values removes the larger background blocks. When there a...
Embodiment 2
[0039] In this embodiment, the foreground interference extraction method in 3D reconstruction described in Embodiment 1 is used to process specific images. Taking large-scale scene modeling as an example, the processing steps are as follows:
[0040] (1) Compute the image depth maps: obtain the original images, two of which are shown in Figure 6(a) and Figure 6(b); compute the depth maps of the original images, obtaining Figure 7(a) from Figure 6(a) and Figure 7(b) from Figure 6(b);
[0041] (2) Further process the depth maps of all original images according to steps 2-4 of Embodiment 1 to obtain depth maps with interference removed, obtaining Figure 8(a) from Figure 7(a) and Figure 8(b) from Figure 7(b);
[0042] (3) According to step 5 of Embodiment 1, fuse the depth maps of all original images into a three-dimensional point cloud model, as shown in Figure 9. The image...
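The fusion step builds the point cloud from the cleaned depth maps. For a single view, this amounts to back-projecting every remaining valid depth pixel through the camera intrinsics; a full pipeline would additionally transform each view's points by its camera pose before merging. The function name and intrinsic values below are assumptions of this sketch, not from the patent.

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth map into a 3D point cloud (camera frame).

    depth : HxW array of per-pixel depth; zero marks removed/invalid pixels
            (e.g. background blocks deleted in step 2).
    K     : 3x3 camera intrinsic matrix.
    Returns an Nx3 array of 3D points, one per valid pixel.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    mask = depth > 0
    # Homogeneous pixel coordinates of the valid pixels.
    pix = np.stack([u[mask], v[mask], np.ones(mask.sum())])
    rays = np.linalg.inv(K) @ pix      # normalized camera rays
    return (rays * depth[mask]).T      # scale each ray by its depth
```

Concatenating the per-view clouds (after pose alignment) yields the fused point cloud model of Figure 9.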


