
Method for fusing multiple paths of videos and three-dimensional GIS scene

A multi-channel video fusion technology, applied in the field of fusing multi-channel video with 3D GIS scenes, achieving the effect of faster positioning of fusion objects

Active Publication Date: 2020-01-07
CHINESE ACAD OF SURVEYING & MAPPING


Problems solved by technology

[0004] The purpose of the present invention is to provide a method for fusing multi-channel video with a three-dimensional GIS scene, thereby solving the foregoing problems in the prior art.



Examples


Embodiment 1

[0036] As shown in Figure 1, this embodiment provides a method for fusing multi-channel video with a 3D GIS scene, comprising the following steps:

[0037] S1. Define the data structure of the video object and assign an initial value to each parameter of each video object (a data-structure sketch follows the steps);

[0038] S2. Determine the spatial position information, attitude information, and observable area of the camera of each video object in the scene, and abstract the video object into a frustum geometric object from this information (see the frustum sketch below);

[0039] S3. Classify all frustum geometric objects in the scene according to the attribute information of the camera, forming multiple video layers; the attribute information is, for example, the ownership attribute of the camera: public security, urban management, traffic, etc. (see the layer sketch below);

[0040] S4. Establish an R-tree index of all video objects under each video layer in the scene (see the index sketch below);

[0041] S5. Enter the visible range of the 3D scene, store the rendering objects within the visible range in real time, and generate the view frustum of the 3D scene within the visible range in real time (sketches for steps S1-S5 follow).
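A minimal Python sketch of the step S1 video object data structure. Every field name, type, and default value below is an assumption for illustration; the patent does not publish the actual structure.

```python
from dataclasses import dataclass

# Hypothetical layout of the video object described in step S1.
@dataclass
class VideoObject:
    video_id: int = -1                 # unique identifier of the video object
    stream_url: str = ""               # source of the video stream
    position: tuple = (0.0, 0.0, 0.0)  # camera position (x, y, z) in scene coordinates
    attitude: tuple = (0.0, 0.0, 0.0)  # camera attitude (yaw, pitch, roll), degrees
    fov_h: float = 60.0                # horizontal field of view, degrees
    fov_v: float = 45.0                # vertical field of view, degrees
    near: float = 0.1                  # near plane of the observable area
    far: float = 100.0                 # far plane of the observable area
    ownership: str = "unassigned"      # e.g. "public security", "urban management", "traffic"
```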
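For step S2, a sketch of abstracting a camera into a frustum geometric object: the eight corner points are derived from the camera's position, attitude, field of view, and near/far planes. The yaw-pitch-roll convention and the camera's forward axis are assumptions, not the patent's specification.

```python
import numpy as np

def frustum_corners(position, attitude, fov_h, fov_v, near, far):
    """Return the 8 world-space corners of the camera's view frustum.
    Assumed conventions: attitude = (yaw, pitch, roll) in degrees,
    and the camera looks along +X before rotation."""
    yaw, pitch, roll = np.radians(attitude)
    cz, sz = np.cos(yaw), np.sin(yaw)      # rotation about Z (yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)  # rotation about Y (pitch)
    cx, sx = np.cos(roll), np.sin(roll)    # rotation about X (roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    R = Rz @ Ry @ Rx
    corners = []
    for d in (near, far):                  # near plane first, then far plane
        half_w = d * np.tan(np.radians(fov_h) / 2)
        half_h = d * np.tan(np.radians(fov_v) / 2)
        for s_w, s_h in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
            local = np.array([d, s_w * half_w, s_h * half_h])
            corners.append(np.asarray(position) + R @ local)
    return np.array(corners)               # shape (8, 3)
```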
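Step S3 groups the frustum objects into video layers keyed by the camera's attribute information. A sketch using the hypothetical ownership field from the S1 structure above:

```python
from collections import defaultdict

def build_video_layers(video_objects):
    """Classify frustum objects into video layers by the camera's
    ownership attribute (public security, urban management, traffic, ...)."""
    layers = defaultdict(list)
    for vo in video_objects:
        layers[vo.ownership].append(vo)
    return dict(layers)  # layer name -> list of VideoObject
```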
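Step S4 builds an R-tree index per layer. A sketch using the rtree package (a libspatialindex binding), indexing each frustum by its axis-aligned bounding box; the helper names come from the sketches above, and the patent's actual index layout may differ.

```python
from rtree import index  # pip install rtree

def build_layer_index(layer_objects):
    """Build a 3D R-tree over the bounding boxes of one layer's frustums.
    Relies on the VideoObject / frustum_corners sketches above."""
    prop = index.Property()
    prop.dimension = 3
    idx = index.Index(properties=prop)
    for vo in layer_objects:
        pts = frustum_corners(vo.position, vo.attitude,
                              vo.fov_h, vo.fov_v, vo.near, vo.far)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        # Interleaved 3D bounds: (xmin, ymin, zmin, xmax, ymax, zmax)
        idx.insert(vo.video_id, (*lo, *hi))
    return idx
```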
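For step S5, a sketch of querying a layer's R-tree with the scene's current view frustum: the index returns bounding-box hits, which an exact frustum-frustum test could then refine.

```python
def videos_in_view(layer_index, scene_frustum):
    """Return ids of video objects whose bounding boxes intersect the
    bounding box of the scene's view frustum (an (8, 3) corner array).
    A coarse R-tree filter, not an exact frustum intersection test."""
    lo = scene_frustum.min(axis=0)
    hi = scene_frustum.max(axis=0)
    return list(layer_index.intersection((*lo, *hi)))
```

Querying the index this way avoids testing every camera in the scene against the view, which is what lets the method scale to many (more than 4-5) simultaneous videos.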

Embodiment 2

[0064] As shown in Figures 2 to 7, this embodiment compares the fusion effect of the present invention with that of the original virtual-real fusion method based on video projection.

[0065] In this embodiment, the original virtual-real fusion method based on video projection refers to projecting video frame images into a 3D scene using projective texture technology: much like adding a slide projector to a 3D GIS scene, it uses frame georeferencing information to position and orient the projector, and then projects the images onto objects in the scene. It mainly includes two steps:

[0066] Step 1: Determining the objects to be fused and rendered within the video range is the most basic and critical step in correctly fusing video with the 3D GIS scene. (1) First, set a virtual depth camera at the position of the real camera, and set the pose of the depth camera according to the coordinates of the camera, the ...
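To illustrate the depth-camera step of this baseline projection method, here is a shadow-mapping-style sketch of the visibility test: a scene point receives the projected video texture only if its depth matches what the virtual depth camera recorded, i.e. nothing occludes it. The matrix and depth-map inputs are assumed, and the actual method may differ in detail.

```python
import numpy as np

def visible_under_projection(point_world, view_proj, depth_map, bias=1e-3):
    """Decide whether a scene point should be textured by the projected
    video frame. view_proj is the depth camera's 4x4 view-projection
    matrix; depth_map is its rendered HxW depth buffer (NDC depths)."""
    h, w = depth_map.shape
    p = view_proj @ np.append(point_world, 1.0)   # project into clip space
    ndc = p[:3] / p[3]                            # normalized device coords, [-1, 1]
    if np.any(np.abs(ndc) > 1.0):
        return False                              # outside the video frustum
    u = int((ndc[0] * 0.5 + 0.5) * (w - 1))       # texel column
    v = int((ndc[1] * 0.5 + 0.5) * (h - 1))       # texel row
    # Occluded if the depth camera recorded a nearer surface at this texel
    return ndc[2] <= depth_map[v, u] + bias
```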



Abstract

The invention discloses a method for fusing multi-channel video with a three-dimensional GIS scene, comprising the following steps: define the data structure of the video object and assign an initial value to each parameter of each video object; determine the spatial position information, attitude information, and observable-area information of the camera of each video object in the scene, and abstract the video object into a view-frustum geometric object from this information; classify all view-frustum geometric objects in the scene into a plurality of video layers according to the attribute information of the camera; establish an R-tree index of all video objects under each video layer in the scene; enter the visible range of the three-dimensional scene, store the rendering objects within the visible range in real time, and generate the view frustum of the three-dimensional scene within the visible range in real time. By establishing topological information between the videos and the three-dimensional scene, the method effectively improves the speed of locating fusion objects and the efficiency of fusion, and is suitable for virtual-real fusion of multiple (more than 4-5) videos in a three-dimensional scene.

Description

Technical Field

[0001] The invention relates to the technical field of cartography, in particular to a method for fusing multi-channel video with a three-dimensional GIS scene.

Background

[0002] Virtual-real fusion technology is one of the key links in video enhancement of 3D virtual geographic scenes. It plays an important role in reducing the visual difference between GIS virtual scenes and real video pictures, realizing the seamless combination of virtual and real visual senses, and improving the immersive visual experience. Among methods for fusing video with 3D virtual scenes, the virtual-real fusion method based on video projection has become the most commonly used, owing to its advantages: no manual intervention or offline fusion is needed, no vertex texture must be pre-specified for the projected texture, and the degree of scene restoration is high. For example: Stephen et al. of Sarnoff Company in the United States ...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F16/71G06F16/74G06F16/787G06F16/29
CPCG06F16/29G06F16/71G06F16/74G06F16/787
Inventor 李成名刘振东赵占杰戴昭鑫王飞刘嗣超陈汉生
Owner CHINESE ACAD OF SURVEYING & MAPPING