
A Multi-camera Editing Method Based on Cluster Rendering

A multi-camera editing technology in the field of cluster-rendering-based editing. It addresses problems such as video freezing and reduced editing efficiency, and achieves smooth, stutter-free playback during editing.

Active Publication Date: 2018-05-04
BEIJING DAYANG TECH DEV
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

When playing back many channels of high-definition video in real time (from several up to dozens), selecting and switching among so many camera shots demands a large amount of computation. Existing single-computer rendering and processing systems cannot keep up with such computationally intensive rendering, so the video freezes during editing and editing efficiency suffers.


Image

  • Figure: A Multi-camera Editing Method Based on Cluster Rendering
Examples


Embodiment 1

[0030] This embodiment is a multi-camera editing method based on cluster rendering. The system used by the method is shown schematically in Figure 1. Figure 1 draws only three servers in the computing-rendering server cluster; in practice there can be four, five, six, or more, and there can likewise be two, three, four, or more front-end workstations.

[0031] The steps of the method in this embodiment are as follows, with the flow shown in Figure 2:

[0032] Preparation for multi-camera capture and storage: the video signals of multiple cameras are captured and saved simultaneously. For each captured stream, every frame is stamped with a timecode against a unified time standard, and the resulting original video files are stored centrally on the storage server.

[0033] Capture and storage of multi-camera signals: the video signals of multiple cameras are captured and saved simultaneously, and every frame is timecoded against the unified time standard ...

Embodiment 2

[0064] This embodiment improves on the first embodiment by refining its result-transmission step. In this embodiment, the computing-rendering server transmits each frame of video to the front-end workstation as soon as it is finished. The front-end workstation analyzes, for each set of frames sharing the same timecode, the computation results and the rendered picture information of the multiple channels; each channel's video can be decoded and corrected promptly and sent to display once processing completes, while frames decoded, corrected, and processed ahead of time are cached in the front-end workstation's memory and sent to the display device at the appropriate moment.

Embodiment 3

[0066] This embodiment improves on the first embodiment by refining its result-transmission step. In this embodiment, the front-end workstation judges whether the current network speed and bandwidth meet the transmission requirements of each channel of uncompressed video. If they do, it requests uncompressed video data from the computing-rendering server currently responsible for decoding and processing that channel; if they do not, it requests compressed video data from that server instead, and decodes the compressed data after receiving it.



Abstract

The invention relates to a multi-camera editing method based on cluster rendering, comprising: preparing for multi-camera capture and storage; preparing the computing environment; initiating multi-camera signal-processing tasks; preprocessing the video images to be edited; assigning tasks; computing and rendering; transmitting results; displaying; and completing and terminating tasks. The invention renders and processes video images through the comprehensive utilization and scheduling of multiple computing-rendering servers and front-end workstations by a task-resource scheduling server. The scheduling server allocates rendering tasks according to the idle state of each computing-rendering server, making full use of each server's available resources to complete rendering in real time, effectively avoiding the stuttering that otherwise occurs when editing multi-camera programs.

Description

Technical field

[0001] The invention relates to a multi-camera editing method based on cluster rendering. It is a computer video-processing method, specifically a video editing method used within a television station's network.

Background technique

[0002] Large-scale concerts or gala recordings generally use multi-camera shooting, with multiple cameras configured at different angles, so a large number of simultaneous video and audio files from different viewpoints are generated. The usual workflow synchronizes the video signals onto monitors; the director watches the multiple pictures on the monitors, and staff switch between them for the final broadcast. When editing multi-camera material, the director must select suitable shots from these files, taken from different angles and scenes, to present to the audience while ensuring timecode synchronization. In the process of monitoring multi-c...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N5/262
Inventors: 谷显峰, 骆萧萧, 王付生, 张术芬
Owner: BEIJING DAYANG TECH DEV