
Three-dimensional video fusion method and system based on WebGL

A 3D video fusion method, applied in the field of video fusion, addressing problems in existing schemes such as poor display quality, narrow applicability, and the absence of camera distortion correction.

Active Publication Date: 2021-03-02
埃洛克航空科技(北京)有限公司
Cites: 8 · Cited by: 7

AI Technical Summary

Problems solved by technology

Although existing video fusion schemes fuse video with the model, they do not correct camera distortion and apply only to a narrow range of scenarios: they suit only scenes where the camera is mounted high and lens distortion is small; otherwise the display quality is poor.


Image

  • Figures 1–3: Three-dimensional video fusion method and system based on WebGL

Examples


Embodiment 1

[0045] Embodiment 1 of the present invention provides a WebGL-based 3D video fusion method which, as shown in Figure 1, comprises the following steps:

[0046] Step 1: Access the video stream.

[0047] In a specific implementation, video stream access supports both local video files and network video data over HTTP, and the HTML5 video tag (HTMLVideoElement) is used to store the video object.
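As a sketch of Step 1, the helper below (hypothetical names, not from the patent text) builds the attribute set one would assign to the HTMLVideoElement, distinguishing an HTTP network stream from a local file; the browser-side wiring appears in the trailing comment.

```javascript
// Build the attributes for the HTMLVideoElement that stores the video object.
// Hypothetical helper, not from the patent text.
function createVideoConfig(source) {
  const isHttp = /^https?:\/\//.test(source);
  return {
    src: source,
    // Network streams need CORS clearance before their frames may be
    // uploaded into a WebGL texture; local files do not.
    crossOrigin: isHttp ? 'anonymous' : null,
    muted: true,     // most browsers require muted video for autoplay
    autoplay: true,
    loop: true,
  };
}

// In the browser one would apply it like this:
// const video = document.createElement('video');
// Object.assign(video, createVideoConfig('http://example.com/cam1.mp4'));
// video.play();
```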

[0048] Step 2: Update the video texture object based on WebGL.

[0049] In a specific implementation, the method follows the WebGL 3D drawing standard, which offers efficient graphics rendering and runs directly in the browser, making access convenient and fast. For the video source accessed in Step 1, a canvas copies the current frame of the HTMLVideoElement on every frame, and the canvas contents are used to update the video texture object rendered into the scene, yielding one WebGL texture object per video frame.
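The per-frame copy described above can be sketched as follows (a minimal illustration assuming a `gl` context and video element already exist; the function names are ours, not the patent's). One common refinement, shown here, is to re-upload the texture only when `currentTime` has advanced, since the render loop usually ticks faster than the video's frame rate.

```javascript
// Re-upload only when the video has produced a new frame: the render loop
// (e.g. 60 fps) typically runs faster than the video decodes frames.
function shouldUploadFrame(lastUploadedTime, currentTime) {
  return currentTime > lastUploadedTime;
}

// Browser-side sketch: copy the current HTMLVideoElement frame into the
// WebGL texture object that is rendered into the scene.
function updateVideoTexture(gl, texture, video, state) {
  if (!shouldUploadFrame(state.lastTime, video.currentTime)) return;
  state.lastTime = video.currentTime;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // texImage2D accepts an HTMLVideoElement (or a canvas copy of it) directly.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
}
```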

[0050] Step 3: Set the parameters of the observation cam...
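Step 3 is truncated here, but the abstract indicates it sets the observation camera's view frustum: near and far clipping planes plus camera position and orientation. A standard WebGL column-major perspective matrix built from those frustum parameters looks like this (our sketch under that assumption, not the patent's code):

```javascript
// Column-major perspective projection matrix in the WebGL convention,
// built from the view-frustum parameters: vertical field of view,
// aspect ratio, and near/far clipping planes.
function perspectiveMatrix(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const inv = 1 / (near - far);
  return [
    f / aspect, 0, 0,                    0,
    0,          f, 0,                    0,
    0,          0, (far + near) * inv,  -1,
    0,          0, 2 * far * near * inv, 0,
  ];
}
```

Moving the near plane in or the far plane out changes only the two depth terms (indices 10 and 14), which is why the method can update the clipping planes per frame without rebuilding anything else.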

Embodiment 2

[0072] Embodiment 2 of the present invention, as shown in Figure 2, provides a WebGL-based three-dimensional video fusion system comprising: a video access and storage module, which accesses the video and stores the video objects; a video fusion processing module, which fuses the video with the scene based on WebGL; a camera distortion correction module, which performs distortion correction on the fused video according to the camera's intrinsic parameters and distortion parameters; and a mask display module, which selectively displays the distortion-corrected fused video based on a mask image covering the region to be displayed, thereby cropping the displayed video region.
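The distortion correction module works from the camera's intrinsic and distortion parameters; a common way to model this (shown here as an assumption — the patent does not name a specific model) is the Brown–Conrady radial/tangential distortion applied in normalized camera coordinates:

```javascript
// Apply radial (k1, k2) and tangential (p1, p2) distortion to a point in
// normalized camera coordinates. Undistortion inverts this mapping,
// typically by fixed-point iteration. Parameter names follow the common
// Brown–Conrady convention; the patent does not specify the model.
function distortPoint(x, y, { k1 = 0, k2 = 0, p1 = 0, p2 = 0 }) {
  const r2 = x * x + y * y;
  const radial = 1 + k1 * r2 + k2 * r2 * r2;
  return {
    x: x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x),
    y: y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y,
  };
}
```

With all coefficients zero the mapping is the identity, which matches the low-distortion scenes the background section says existing schemes are limited to; the module exists precisely so that nonzero coefficients can be compensated.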

[0073] In a specific implementation, the video fusion processing module includes a texture data submo...



Abstract

The invention provides a three-dimensional video fusion method and system based on WebGL. The method requires no pre-processing of the video source and comprises the steps of: accessing an HTTP video stream; updating a video texture object based on WebGL; setting the near clipping plane, far clipping plane, and the position and orientation of the observation camera's view frustum; updating the scene depth from the observation viewpoint; projecting and restoring to the observer camera coordinate system and fusing with the real-scene model; performing distortion correction for the camera; and finally cropping the video region with a mask. This solves the problems in the prior art: three-dimensional video fusion is achieved on the basis of WebGL, the projection area is cropped so that adjacent videos are not displayed overlapping one another, and camera distortion is corrected, so that a good display effect is achieved even for cameras with large distortion or low installation positions.
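The mask-based cropping the abstract describes amounts to a per-pixel visibility test against a mask image; in a fragment shader this is a mask-texture lookup followed by a discard, which the CPU-side sketch below mirrors (our illustration, not the patent's code):

```javascript
// Decide whether a fused-video pixel is shown by sampling a binary mask
// image (row-major Uint8Array, 255 = show, 0 = crop) at texture
// coordinates (u, v) in [0, 1). In the real system this test runs in the
// fragment shader via a mask texture lookup and discard.
function maskVisible(mask, width, height, u, v) {
  const x = Math.min(width - 1, Math.floor(u * width));
  const y = Math.min(height - 1, Math.floor(v * height));
  return mask[y * width + x] > 0;
}
```

Because each camera gets its own mask, trimming the projection footprints where neighbouring cameras overlap is what prevents adjacent videos from being displayed on top of each other.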

Description

technical field

[0001] The invention relates to the technical field of video fusion, and in particular to a WebGL-based three-dimensional video fusion method and system.

Background technique

[0002] At present, in traditional video surveillance systems, it is difficult for supervisors to match a camera's video with its actual geographic location, and global real-time monitoring of large scenes is impossible. Three-dimensional video fusion (also known as panoramic video fusion or full-time-domain dynamic video fusion) technology projects real-time camera images onto a real-scene model, enabling better monitoring of the global scene and convenient cross-camera tracking of vehicles or personnel. Since each video is projected into the 3D scene, its actual position in space is visible, and by rotating the three-dimensional scene the video can be viewed from different angles, which facilitates real-time command. Although the existing vide...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N7/18; H04N5/232; H04N5/262; H04N13/271; G06T5/00; G06T15/00; G06T15/04; G06T15/20
CPC: H04N7/18; H04N13/271; H04N5/2624; G06T15/04; G06T15/205; G06T15/005; H04N23/698; G06T5/80; Y02D10/00
Inventor: 张帅平红燕
Owner: 埃洛克航空科技(北京)有限公司