
Underwater scene reconstruction method and storage medium based on motion restoration

A technology for scene reconstruction based on motion recovery, applied in 3D modeling, 3D image processing, image analysis, etc. It solves the problems of reduced precision, increased operational difficulty and cost, and image mismatching, achieving good efficiency and precision together with enhanced accuracy and robustness.

Active Publication Date: 2020-06-23
JILIN UNIV

AI Technical Summary

Problems solved by technology

However, existing algorithms also have notable disadvantages. The binocular stereo vision algorithm based on feature point matching has high accuracy and low time complexity, but it can only obtain a sparse point cloud of the scene, so the reconstruction effect is not ideal. Although the binocular stereo vision algorithm based on dense matching can obtain dense point cloud data, its accuracy is lower and its time complexity is much higher than that of feature point matching, so 3D reconstruction of large scenes takes too long.
At the same time, because the traditional binocular stereo vision algorithm only considers the relationship between corresponding frames, the reconstructed point cloud lacks interconnection across different viewpoints; the resulting point cloud joins unnaturally, which degrades the reconstruction effect.
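To make the binocular setting concrete, the following is a minimal sketch of the depth-from-disparity relation that underlies feature-matching stereo: for a rectified pair, a matched feature with disparity d at focal length f and baseline B triangulates to depth Z = fB/d. The focal length, baseline, and disparity values below are hypothetical, not taken from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A matched feature with larger disparity lies closer to the cameras.
near = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=48.0)  # 2.0 m
far = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=12.0)   # 8.0 m
```

Since depth is recovered only at matched features, this immediately shows why feature-based stereo yields sparse point clouds: every unmatched pixel contributes no depth sample.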
[0009] During binocular stereo vision reconstruction, similar or repeated parts of the image still cause shortcomings such as mismatching. In response to these problems, multi-view stereo vision reconstruction has been proposed: additional cameras are added on top of the binocular setup to provide more constraint information and improve the final reconstruction accuracy.
Although multi-view stereo vision can reduce mismatching and edge blurring to a certain extent, each added camera increases the number of images to be processed per viewpoint, and the device structure and geometric relationships become further complicated, so the operational difficulty and cost increase greatly and the effect is not ideal.
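The extra constraint that a third camera provides can be sketched as a reprojection-consistency check: a candidate match from the first two views is triangulated, then reprojected into the third view; a large reprojection error flags a mismatch. The cameras below are idealised pinholes translated along the x axis, an assumption made purely for illustration.

```python
def project(point, cam_x, focal=1.0):
    """Project a 3D point with a pinhole camera at (cam_x, 0, 0) looking down +z."""
    x, y, z = point
    return (focal * (x - cam_x) / z, focal * y / z)

def reprojection_error(point, cam_x, observed):
    """Distance between the predicted projection and an observed feature."""
    u, v = project(point, cam_x)
    ou, ov = observed
    return ((u - ou) ** 2 + (v - ov) ** 2) ** 0.5

true_point = (0.5, 0.2, 4.0)                      # point triangulated from views 1 and 2
good_obs3 = project(true_point, cam_x=0.4)        # consistent observation in view 3
bad_obs3 = (good_obs3[0] + 0.05, good_obs3[1])    # a mismatched feature in view 3

err_good = reprojection_error(true_point, 0.4, good_obs3)  # ~0: match confirmed
err_bad = reprojection_error(true_point, 0.4, bad_obs3)    # large: match rejected
```

The trade-off the text describes is visible here: every additional view adds one such consistency test per candidate match, but also one more image stream to calibrate and process.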



Examples


Embodiment 1

[0091] The underwater scene reconstruction method based on motion recovery of the present invention can be implemented in a programming language and run on a development platform to realize the functions described above.

[0092] In Embodiment 1, the underwater scene reconstruction method based on motion recovery was developed in C++ on the Visual Studio 2010 platform, and verified on seabed video captured by a pair of strictly calibrated binocular GoPro Hero2 cameras.

[0093] Two 7-second videos in AVI format are used, shot by the left-eye camera and the right-eye camera respectively. Each video contains 210 frames; after removing redundant images, a set of 34 frames to be reconstructed is obtained. Figures 6(a)-(d) show four of the frames remaining after redundancy removal from the underwater video, and Figure 7 shows the 3D point cloud model of the seabed obtained by reconstruc...
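The reduction from 210 frames to 34 suggests a redundancy-culling pass that keeps a frame only when it differs enough from the last kept frame. The sketch below is a hypothetical illustration of that idea; the difference measure, the threshold, and the scalar "frames" standing in for image descriptors are all assumptions, not the patent's actual criterion (which is based on the extracted motion matrix).

```python
def cull_redundant(frames, diff, threshold):
    """Keep frames whose difference from the last kept frame exceeds threshold."""
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        if diff(kept[-1], f) > threshold:
            kept.append(f)
    return kept

# Toy example: scalars standing in for per-frame motion descriptors.
frames = [0.0, 0.1, 0.2, 1.0, 1.1, 2.5]
kept = cull_redundant(frames, diff=lambda a, b: abs(a - b), threshold=0.5)
# kept -> [0.0, 1.0, 2.5]
```

Frames that barely move relative to the last kept frame add little new geometry but still cost matching time, so culling them shrinks the reconstruction workload with little loss of coverage.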

Embodiment 2

[0095] To further evaluate the effect of the underwater scene reconstruction method of the present invention, two sets of multi-view model images with corresponding laser scanning data are reconstructed with the method of the present invention and subjected to quantitative comparative analysis.

[0096] Fig. 9 is a comparison between the dinosaur model reconstructed by the reconstruction method of the present invention and the laser scanning model, wherein Fig. 9(a) is the laser scanning model, and Fig. 9(b) is the model reconstructed by the algorithm of the present invention.

[0097] Fig. 10 is a comparison between the temple model reconstructed by the reconstruction method of the present invention and the laser scanning model, wherein Fig. 10(a) is the laser scanning model, and Fig. 10(b) is the model reconstructed by the algorithm of the present invention.

[0098] It can be seen that the present invention can achieve better reconstruction resul...
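One common way to make such a comparison quantitative is the root-mean-square nearest-neighbour distance from the reconstructed cloud to the laser-scanned reference. The brute-force O(n·m) sketch below illustrates the metric only; the point sets are toy data, and the patent does not state that this particular metric was used.

```python
import math

def rmse_to_reference(reconstructed, reference):
    """RMS of each reconstructed point's distance to its nearest reference point."""
    def nearest_dist(p, cloud):
        return min(math.dist(p, q) for q in cloud)
    errs = [nearest_dist(p, reference) for p in reconstructed]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]          # stand-in for the laser scan
rec = [(0, 0, 0.1), (1, 0, 0), (0, 1.2, 0)]      # stand-in for the reconstruction
err = rmse_to_reference(rec, ref)
```

Real evaluations typically use a k-d tree for the nearest-neighbour search and report the metric in both directions (reconstruction to scan and scan to reconstruction) to penalise missing geometry as well as spurious points.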



Abstract

The invention discloses an underwater scene reconstruction method based on motion recovery, and a storage medium. The reconstruction method introduces an improved motion recovery algorithm, extracts a motion matrix, and establishes interconnections between video images. After redundant images are culled, feature point matching and point cloud generation are performed in two steps: first, feature point matching is performed on the binocular images and patches are generated from the matched feature points in order to obtain denser point cloud data; then the patches are spread to all viewing angles to complete the reconstruction of the scene model. Finally, color correction is performed on the point cloud model according to the imaging characteristics of the underwater scene. The invention can still achieve good reconstruction results when only a few input images are available, has relatively good efficiency and precision, and at the same time improves the accuracy and robustness of the reconstructed scene.
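The pipeline described in the abstract can be sketched as an ordered sequence of stages. Every stage body below is a stub that merely records that it ran; this makes only the staging order concrete (culling, binocular feature matching, patch seeding, patch spreading across views, underwater colour correction), and all stage names are paraphrases of the abstract, not APIs from the patent.

```python
def make_stage(name, log):
    """Return a stub stage that logs its name and passes data through unchanged."""
    def stage(data):
        log.append(name)
        return data
    return stage

log = []
pipeline = [make_stage(name, log) for name in (
    "cull_redundant_images",         # motion-matrix-based redundancy removal
    "match_feature_points",          # feature matching on the binocular images
    "generate_patches",              # seed patches from matched feature points
    "spread_patches_to_all_views",   # densify the cloud across all viewpoints
    "correct_underwater_colour",     # compensate underwater imaging characteristics
)]

data = object()                      # stand-in for the input frame set
for stage in pipeline:
    data = stage(data)
# log now lists the five stages in execution order
```

Splitting matching and densification into separate stages is the key design point the abstract emphasises: sparse, reliable matches seed the patches, and spreading then recovers density without running dense matching everywhere.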

Description

Technical field

[0001] The invention relates to a three-dimensional reconstruction method, in particular to an underwater scene reconstruction method based on motion recovery and a storage medium, which takes both efficiency and precision into consideration.

Background technique

[0002] The real world is three-dimensional. In order to observe, analyze and even extend the real world, it is necessary to reconstruct three-dimensional models in a computer environment. In recent years, with the rapid development of computer hardware and software, there are more and more methods for constructing 3D models. Related software is widely used in medical image processing, 3D printing, computer games, virtual reality, map drawing, simulated military training, film and television entertainment, and other fields. According to the different methods of obtaining reconstruction data, 3D model construction technology is mainly divided into three directions:...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T17/00; G06T7/246; G06T7/33; G06T7/80; G06T7/90; G06T15/50
CPC: G06T7/248; G06T7/337; G06T7/85; G06T7/90; G06T15/50; G06T17/00; G06T2207/10021; G06T2207/10028; G06T2207/20021
Inventor: Wang Xin; Yang Xi
Owner: JILIN UNIV