
Method for fusing two-dimensional video information and three-dimensional geographic information

A technology for fusing two-dimensional video with three-dimensional geographic information, applied in the field of geographic information systems. It addresses problems such as poor fusion quality, failure to reference spatial position and context, and inapplicability in practice.

Status: Pending · Publication Date: 2021-02-12
TIANJIN QISUO PRECISION ELECTROMECHANICAL TECH
Cites: 3 · Cited by: 12

AI Technical Summary

Problems solved by technology

[0002] Three-dimensional geographic information offers a wide field of view and can be observed from any angle, but its real-time performance is poor. As terrain changes and cities are built up, the geographic information must be updated periodically, and it cannot dynamically display current changes such as newly added buildings, facilities, and roads.
Two-dimensional video, by contrast, can observe a scene in real time and capture dynamic changes in the terrain. However, video data is relatively scattered and does not intuitively convey the correspondence between the current picture and the surrounding space, so information is easily missed, which hinders macroscopic control of the dynamic information of the entire geographic space.
[0003] Existing techniques for fusing 2D video with 3D geographic information simply embed the video on a 3D map. They fail to reference spatial position and context, can only cover a fan-shaped area centered on a single point, and lack the support of the underlying geographic information. The fusion quality is therefore poor, a relatively complete field of view cannot be provided, and the techniques cannot be applied in practice.




Embodiment Construction

[0079] The present invention is described in further detail below in conjunction with the drawings and specific examples. The following examples are descriptive only, not restrictive, and do not limit the protection scope of the invention.

[0080] A method for fusing two-dimensional video information and three-dimensional geographic information. For the target observation point area, two-dimensional video is captured with a camera, and the camera's parameters are used to construct a viewing frustum that represents the visible range of the video. A shadow-map algorithm judges occlusion relationships within the line of sight, determining the actual shooting range of each camera. Projective texture mapping is then used to update the image projected onto the three-dimensional geographic information, so that the video is displayed in real time. In addition, the fusi...
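The core of paragraph [0080] is a per-point visibility decision followed by a texture lookup. The sketch below illustrates that pipeline in NumPy: a viewing frustum is encoded as a view-projection matrix built from camera parameters, a shadow-map depth comparison decides occlusion, and a visible point fetches its video texel projectively. This is a minimal sketch under assumed conventions, not the patent's implementation: the OpenGL-style matrices, the depth bias, and the assumption that the depth map and video frame share a resolution are all illustrative choices.

```python
# Minimal sketch (assumed parameters and conventions): decide whether a
# terrain point is seen by a camera via a shadow-map depth test, then fetch
# its video texel via projective texture mapping.
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection built from camera intrinsics."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

def look_at(eye, target, up):
    """View matrix from camera extrinsics (position and viewing direction)."""
    fwd = target - eye; fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up); right /= np.linalg.norm(right)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = right, np.cross(right, fwd), -fwd
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def project_point(world_pt, view_proj):
    """Project a world point to normalized device coordinates (NDC)."""
    clip = view_proj @ np.append(world_pt, 1.0)
    if clip[3] <= 0:            # behind the camera
        return None
    return clip[:3] / clip[3]   # x, y, z each in [-1, 1] when inside the frustum

def visible_texel(world_pt, view_proj, depth_map, video_frame, bias=1e-3):
    """Shadow-map test: the point receives the video only if nothing nearer
    occupies the same pixel of the camera's depth map."""
    ndc = project_point(world_pt, view_proj)
    if ndc is None or np.any(np.abs(ndc) > 1.0):
        return None                        # outside the viewing frustum
    h, w = depth_map.shape                 # assumed same size as video_frame
    u = int((ndc[0] * 0.5 + 0.5) * (w - 1))
    v = int((ndc[1] * 0.5 + 0.5) * (h - 1))  # image-row flip omitted for brevity
    if ndc[2] > depth_map[v, u] + bias:
        return None                        # occluded by nearer geometry
    return video_frame[v, u]               # projective texture lookup

# Hypothetical usage: one camera at (0, 0, 10) looking at the origin.
vp = perspective(60.0, 16 / 9, 0.1, 1000.0) @ look_at(
    np.array([0.0, 0.0, 10.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
```

In a full renderer the depth map would first be produced by rendering the scene from the camera's own viewpoint; the comparison then marks exactly which terrain fragments the camera actually sees, so occluded surfaces keep showing the base 3D map instead of a stretched video projection.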



Abstract

The invention relates to a method for fusing two-dimensional video information and three-dimensional geographic information. For a target observation point area, the two-dimensional video is shot by multiple cameras from multiple angles, and a viewing frustum is constructed from each camera's parameters to represent the visible range of its video. The method comprises the following steps:
1. Obtain the two-dimensional video shot by each camera and solve the coverage area of each individual camera.
2. Fuse the coverage areas of the multiple cameras to obtain fused two-dimensional video information.
3. Optimize the fused two-dimensional video information, cutting off two-dimensional data that lies outside the target observation point area (steps 1–3 are sketched in code below).
4. Map the optimized two-dimensional video information into the three-dimensional map information.
5. Update and map the two-dimensional video information in real time to realize real-time fusion of the two-dimensional video and the three-dimensional geographic information.
By completely mapping the dynamic, coherent scene onto the three-dimensional map, the invention improves the coherence and accuracy of the three-dimensional map.
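Steps 1–3 reduce to set operations on the cameras' ground-coverage footprints: a union to fuse them and an intersection to clip to the target area. Below is a minimal sketch using the shapely library; the footprint and target polygons are invented for illustration, and the patent does not name a particular geometry library.

```python
# Minimal sketch (assumed geometries): fuse per-camera ground footprints and
# clip the fused coverage to the target observation area (abstract steps 1-3).
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Step 1: per-camera coverage, e.g. each frustum's intersection with the ground.
cam_footprints = [
    Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
    Polygon([(3, 1), (8, 1), (8, 5), (3, 5)]),
]

# Step 2: fuse the individual coverage areas into one region.
fused = unary_union(cam_footprints)

# Step 3: cut away everything outside the target observation point area.
target_area = Polygon([(1, 0), (7, 0), (7, 4), (1, 4)])
optimized = fused.intersection(target_area)

print(f"fused coverage: {fused.area:.1f}, after clipping: {optimized.area:.1f}")
```

The same union/intersection structure applies whatever the footprint shapes are; for real frusta the footprints would be the quadrilaterals where each viewing frustum meets the terrain.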

Description

technical field
[0001] The invention belongs to the technical field of geographic information systems, and in particular relates to a method for fusing two-dimensional video information and three-dimensional geographic information.
Background technique
[0002] Three-dimensional geographic information offers a wide field of view and can be observed from any angle, but its real-time performance is poor. As terrain changes and cities are built up, the geographic information must be updated periodically, and it cannot dynamically display current changes such as newly added buildings, facilities, and roads. Two-dimensional video, by contrast, can observe a scene in real time and capture dynamic changes in the terrain. However, video data is relatively scattered and does not intuitively convey the correspondence between the current picture and the surrounding space, so information is easily missed, which hinders macroscopic control of the dynamic information of the entire geographic space.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/00; G06T7/80; G06F16/29
CPC: G06T7/80; G06F16/29; G06T2207/10016; G06T2207/20221; G06T2207/30208; G06T2207/20132; G06T3/08
Inventors: 刘经纬 (Liu Jingwei); 于潼 (Yu Tong)
Owner: TIANJIN QISUO PRECISION ELECTROMECHANICAL TECH