
Three-dimensional visual information acquisition method based on two-dimensional and three-dimensional video camera fusion

A 3D visual information acquisition method, applied in image data processing, graphics and image conversion, instruments, etc. It addresses problems such as information loss, limited accuracy and quality of two-dimensional image matching and mapping, and limited accuracy and reliability of three-dimensional visual information.

Inactive Publication Date: 2014-11-05
HUNAN UNIV

Problems solved by technology

However, this method has the following problems: 1) the error introduced by the matching mapping model is further amplified when the interpolated depth image is mapped and projected; 2) the color scene information corresponding to the 3D camera's depth image is only an approximate estimate of the 2D image information, so some information in the original 2D camera image of the scene is lost, which limits the accuracy and quality of the matched and mapped 2D image.
Therefore, the above methods restrict the accuracy and reliability of 3D visual information to a certain extent.

Method used


Examples


Embodiment 1

[0054] Based on computer vision technology, the present invention proposes a fusion matching method of a three-dimensional camera and a two-dimensional camera, which can provide high-quality two-dimensional images of space scenes and obtain corresponding three-dimensional information.

[0055] As shown in Figure 2, the present invention discloses a method for acquiring 3D visual information based on the fusion of 2D and 3D cameras. The basic flow of the method is: 1) a composite camera consisting of a 2D camera and a 3D camera images the scene synchronously, and by establishing a matching mapping model between the 3D camera depth image and the 2D camera image, the pixels of the 3D camera depth image are mapped into the 2D camera image region; 2) the mapped region of the 2D camera image is decomposed into several triangular interpolation regions with the mapping points as vertices; 3) based on the depth information of the vertices and adjacent vertices of each triangu...
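The three-step flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the mapped points and depth values below are synthetic, and Delaunay triangulation with piecewise-linear interpolation stands in for the patent's triangular decomposition and depth-surface interpolation function.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)

# Step 1 (assumed already done): sparse depth samples from the low-resolution
# 3D camera, mapped into 2D-camera pixel coordinates by the mapping model.
uv = rng.uniform(0, 640, size=(200, 2))   # mapped points (u, v), synthetic
depth = 1.0 + 0.001 * uv[:, 0]            # synthetic depth values in meters

# Step 2: decompose the mapped region into triangles whose vertices are the
# mapping points.
tri = Delaunay(uv)

# Step 3: interpolate a depth value for any 2D-camera pixel inside a triangle
# from the depths at its three vertices (a piecewise-linear depth surface).
interp = LinearNDInterpolator(tri, depth)
query = np.array([[320.0, 240.0]])
print(interp(query))  # NaN if the pixel falls outside the convex hull
```

Running the interpolator over every pixel of the high-resolution 2D image would yield the dense depth image described in step 3.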



Abstract

The invention discloses a three-dimensional visual information acquisition method based on the fusion of two-dimensional and three-dimensional video cameras. In the method, a two-dimensional video camera and a three-dimensional video camera form a composite video camera that images a scene synchronously. By establishing a matched mapping model between three-dimensional video camera depth images and two-dimensional video camera images, the matched mapping points and mapping areas of the pixels of the three-dimensional video camera depth images in the two-dimensional video camera images are obtained. The mapping areas in the two-dimensional video camera images are then decomposed into triangles, and a depth curved-surface interpolation function is established for each triangulation area on the basis of the depth information of adjacent mapping points. The depth information of the image pixels in each triangulation area is calculated by interpolation, so that depth images corresponding to the high-resolution two-dimensional video camera images are acquired. The method retains the two-dimensional video camera image information and the three-dimensional video camera depth information to the maximum degree, has the advantages of high precision and low information loss, and can be widely applied in fields such as industrial visual measurement, visual assembly, and robot visual navigation.
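The "matched mapping model" mentioned above can be illustrated with a standard pinhole-camera sketch: back-project a 3D-camera depth pixel to a 3D point, transform it into the 2D camera's frame, and re-project it. All intrinsics and extrinsics below are made-up illustrative values, not parameters from the patent.

```python
import numpy as np

K3 = np.array([[580.0, 0.0, 160.0],   # assumed 3D-camera intrinsics
               [0.0, 580.0, 120.0],
               [0.0, 0.0, 1.0]])
K2 = np.array([[1000.0, 0.0, 640.0],  # assumed 2D-camera intrinsics
               [0.0, 1000.0, 480.0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                         # assumed rotation between the cameras
t = np.array([0.05, 0.0, 0.0])        # assumed 5 cm baseline, in meters

def map_depth_pixel(u, v, z):
    """Map one 3D-camera depth pixel (u, v) with depth z (meters) into
    2D-camera pixel coordinates."""
    p3 = z * np.linalg.inv(K3) @ np.array([u, v, 1.0])  # back-project
    p2 = R @ p3 + t                                     # change of frame
    uvw = K2 @ p2                                       # re-project
    return uvw[:2] / uvw[2]

print(map_depth_pixel(160.0, 120.0, 2.0))
```

Applying this mapping to every depth pixel yields the mapping points that serve as triangle vertices in the interpolation step.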

Description

Technical field

[0001] The invention relates to a three-dimensional visual information acquisition method based on the fusion of two-dimensional and three-dimensional cameras.

Background technique

[0002] In recent years, 3D camera technology has made rapid progress, and 3D cameras represented by the Microsoft Kinect and Swiss-ranger SR3000/4000 have appeared. Such devices can simultaneously acquire 2D images and depth images of scene objects. However, the 2D images of current 3D cameras generally suffer from limitations such as low resolution and poor imaging quality, so it is difficult to use them directly for subsequent scene analysis and target recognition. Traditional 2D cameras, by contrast, offer clear imaging, high resolution, and low distortion, and readily capture target texture and color features. Therefore, the information from a 3D camera and a traditional 2D camera is highly complementary.

[0003] Document [1]...

Claims


Application Information

IPC(8): G06T3/40
Inventor: 余洪山, 赵科, 蔺薛菲, 王耀南, 孙炜, 朱江, 段伟, 代扬, 万琴, 段峰, 谢久亮, 周鸿飞
Owner: HUNAN UNIV