Method and device for synchronous video playback
A method and device for synchronous playback of recorded video, applied in image communication, television, and closed-circuit television systems. The technology addresses problems such as insufficient network bandwidth, and achieves the effects of resolving data packet loss and improving playback synchronization.
Examples
Embodiment 1
[0040] Embodiment 1: cameras A, B, C, and D are selected for synchronous playback, and the set recording start and end time is 2016-1-1 8:00–10:00. Take the decoding and synthesis of the video data at 2016-1-1 8:00:00 as an example. Assume that camera A records at 720P (1280*720 pixels) with a frame rate of 25 fps; camera B at 720P (1280*720) with a frame rate of 20 fps; camera C at 1080P (1920*1080) with a frame rate of 25 fps; and camera D at CIF (352*288) with a frame rate of 5 fps. The decoding speed of each camera is derived from its video frame rate: the decoding speeds of cameras A, B, C, and D are 40 ms/f, 50 ms/f, 40 ms/f, and 200 ms/f respectively. The maximum recording frame rate among the cameras is 25 fps, so the synthesis speed is 40 ms/f.
[0041] The back-end server extracts the video data correspond...
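The per-camera decoding speeds in Embodiment 1 follow directly from the frame rates (1000 ms divided by the frame rate), and the synthesis speed follows the highest frame rate among the selected cameras. A minimal sketch of that calculation, with illustrative names not taken from the patent:

```python
# Illustrative sketch of the decode-speed calculation in Embodiment 1.
# Function and variable names are hypothetical.

def decode_interval_ms(frame_rate_fps: float) -> float:
    """Per-frame decode interval in milliseconds: 1000 / frame rate."""
    return 1000.0 / frame_rate_fps

# Frame rates of cameras A-D as given in the embodiment.
cameras = {"A": 25, "B": 20, "C": 25, "D": 5}

# A: 40 ms/f, B: 50 ms/f, C: 40 ms/f, D: 200 ms/f
intervals = {cam: decode_interval_ms(fps) for cam, fps in cameras.items()}

# Synthesis runs at the maximum frame rate among the cameras (25 fps -> 40 ms/f).
synthesis_interval = decode_interval_ms(max(cameras.values()))
```

Driving each decoder at its own interval while compositing at the fastest camera's interval is what keeps the slower streams (such as camera D at 5 fps) from stalling the mosaic.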
Embodiment 2
[0062] Embodiment 2: cameras A, B, C, and D are selected for synchronous playback, and the set recording start and end time is 2016-1-1 8:00–10:00. Assume that the recording resolution and frame rate of each camera are the same, with a frame rate of 25 fps, and that the mosaic image at 2016-1-1 8:30:00 has been completed; that is, the image corresponding to frame number 25 of each camera at 2016-1-1 8:30:00 is stored in the second cache. Assume that the image data next extracted by the back-end server from the first cache is: for cameras A and C, the image data of frame number 1 at 2016-1-1 8:30:01; for camera B, the image data of frame number 1 at 2016-1-1 9:00:00; and for camera D, the image data of frame number 1 at 2016-1-1 8:35:00. That is, the time information of the image data extracted from the first cache this time differs across cameras, so only cameras A and C with the smallest time informati...
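The selection rule in Embodiment 2 can be sketched as follows: of the frames extracted from the first cache, only those whose timestamp equals the minimum across all cameras are taken for composition. This is an illustrative sketch with hypothetical data structures, not the patent's implementation:

```python
# Sketch of the "smallest time information" rule from Embodiment 2.
# Maps each camera to the (timestamp, frame number) of the frame most
# recently extracted from the first cache (values from the embodiment).
from datetime import datetime

frames = {
    "A": (datetime(2016, 1, 1, 8, 30, 1), 1),
    "B": (datetime(2016, 1, 1, 9, 0, 0), 1),
    "C": (datetime(2016, 1, 1, 8, 30, 1), 1),
    "D": (datetime(2016, 1, 1, 8, 35, 0), 1),
}

# Find the earliest timestamp among the extracted frames.
earliest = min(ts for ts, _ in frames.values())

# Only cameras whose frame carries that earliest timestamp are composited
# in this round; the later frames (B and D here) wait in the cache.
selected = [cam for cam, (ts, _) in frames.items() if ts == earliest]
```

With the embodiment's data, only cameras A and C (timestamp 8:30:01) are selected, matching the text above.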
Embodiment 3
[0067] Embodiment 3: cameras A, B, C, and D are selected for synchronous playback, and the set recording start and end time is 2016-1-1 8:00:00–10:00:00. The search finds that cameras A, B, and C contain recording data for 2016-1-1 8:00:00–10:00:00, but camera D only contains recording data for 2016-1-1 8:30:00–10:00:00. Assume that the video frame rate of each camera is 25 fps. Initially, the video data of cameras A, B, and C at 2016-1-1 8:00:00, frame number 1, and the video data of camera D at 2016-1-1 8:30:00, frame number 1, are obtained and decoded respectively to obtain the image data corresponding to each camera, which is put into the first cache. Based on step 104', since the time information of the image data corresponding to camera D extracted from the first cache is greater than that of the other cameras, only the image data of frame number 1 of cameras A, B, and C at 2016-1-1 8:00:00 is put i...
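In Embodiment 3 the same minimum-timestamp gate handles a camera whose recording begins later: camera D's first frame carries a later timestamp, so it remains in the first cache while the others are composited. A minimal sketch under those assumptions (names are illustrative):

```python
# Sketch of step 104' in Embodiment 3: camera D's recording starts at
# 8:30:00, so its first decoded frame is held back in the first cache
# while cameras A, B, and C are composited from 8:00:00 onward.
from datetime import datetime

# Timestamp of each camera's first decoded frame in the first cache.
first_cache = {
    "A": datetime(2016, 1, 1, 8, 0, 0),
    "B": datetime(2016, 1, 1, 8, 0, 0),
    "C": datetime(2016, 1, 1, 8, 0, 0),
    "D": datetime(2016, 1, 1, 8, 30, 0),
}

earliest = min(first_cache.values())

# Frames at the earliest timestamp go to composition now...
to_composite = [cam for cam, ts in first_cache.items() if ts == earliest]

# ...while later frames stay cached until playback time reaches them.
held_back = [cam for cam, ts in first_cache.items() if ts > earliest]
```

Camera D's frame is not discarded; it is simply not extracted until the composited timeline advances to 8:30:00, at which point all four cameras rejoin the mosaic in step.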