Video sequence lost frame prediction and recovery method based on deep neural network

A deep neural network and video sequence technology, applied in the field of predicting and recovering lost frames of a video sequence based on a deep neural network. It can solve problems such as the large amount of computation over pixel blocks and the resulting loss of recovery accuracy, and achieves strong generalization ability together with improved prediction accuracy and efficiency.

Status: Inactive. Publication date: 2018-06-19
安徽优思天成智能科技有限公司
Cites: 4 | Cited by: 18

AI Technical Summary

Problems solved by technology

[0003] Most of the existing lost frame recovery methods use traditional techniques such as Gaussian functions, optical flow, and motion vectors to recover and predict lost frames in units of pixel blocks, which requires a large amount of computation and affects the accuracy of lost frame recovery.




Embodiment Construction

[0028] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts fall within the protection scope of the present invention.

[0029] As shown in Figure 1, the present invention provides a method for predicting and recovering lost frames of a video sequence based on a deep neural network, which specifically comprises the following steps:

[0030] Step S1: collect a predetermined number of consecutive video frame images from the video sequence to construct a data set.

[0031] Since the image of a missing frame is related only to a small number of video frame images preceding it...
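As a concrete illustration of Step S1, the sketch below builds a data set of sliding windows of consecutive frames, where each sample consists of k preceding frames as input and the following frame as the recovery target. The window length k, the frame size, and the OpenCV/NumPy pipeline are assumptions for illustration; the patent text only specifies collecting a predetermined number of consecutive frames.

```python
# Minimal sketch of Step S1: build (k preceding frames -> next frame) samples
# from a video file. Window length, frame size, and the OpenCV/NumPy pipeline
# are illustrative assumptions, not taken from the patent text.
import cv2
import numpy as np

def build_dataset(video_path, k=4, size=(128, 128)):
    """Return inputs of shape (N, k, H, W, 3) and targets of shape (N, H, W, 3)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and scale pixel values to [0, 1] for network training.
        frame = cv2.resize(frame, size).astype(np.float32) / 255.0
        frames.append(frame)
    cap.release()

    inputs, targets = [], []
    # Slide a window of k frames; the (k+1)-th frame is the prediction target.
    for i in range(len(frames) - k):
        inputs.append(np.stack(frames[i:i + k]))
        targets.append(frames[i + k])
    return np.stack(inputs), np.stack(targets)
```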



Abstract

The invention provides a method for predicting and recovering lost frames of a video sequence based on a deep neural network. Drawing on deep learning theory, it uses a deep convolutional network to automatically extract image features and an LSTM long short-term memory network to learn the temporal sequence, trains the network with a fixed number of video frame images as training samples, and then predicts and recovers lost frames in the video sequence. By fully exploiting the intrinsic features of the video frame images and the similarity and coherence between frames, the method improves prediction accuracy and efficiency, has strong generalization ability, and thus has a certain social value and practical significance.
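To make the described architecture concrete, the sketch below combines a small convolutional encoder (per-frame feature extraction) with an LSTM over the resulting feature sequence and a fully connected decoder that reconstructs the missing frame. The layer sizes, frame resolution, PyTorch framework, and pixel-wise MSE loss are illustrative assumptions; the abstract only specifies the combination of a deep convolutional network with an LSTM.

```python
# Illustrative CNN + LSTM lost-frame predictor (PyTorch). Layer sizes, frame
# resolution, and the MSE loss are assumptions; only the overall
# "CNN feature extraction + LSTM temporal modelling" structure follows the abstract.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self, frame_size=(3, 128, 128), feat_dim=256, hidden_dim=256):
        super().__init__()
        c, h, w = frame_size
        # Convolutional encoder: extracts a feature vector from each frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(c, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (h // 8) * (w // 8), feat_dim), nn.ReLU(),
        )
        # LSTM: models the temporal evolution of the per-frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder: maps the final hidden state back to a full frame.
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, c * h * w), nn.Sigmoid())
        self.frame_size = frame_size

    def forward(self, x):
        # x: (batch, k, C, H, W) -- the k preceding frames.
        b, k = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, k, -1)
        _, (h_n, _) = self.lstm(feats)
        out = self.decoder(h_n[-1])
        return out.view(b, *self.frame_size)

# One training step on dummy data shaped (batch, k, C, H, W).
model = FramePredictor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(2, 4, 3, 128, 128)   # 4 preceding frames per sample
y = torch.rand(2, 3, 128, 128)      # frame to be recovered
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optim.step()
```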

Description

Technical field

[0001] The invention belongs to the technical field of video processing and relates to a method for predicting and recovering lost frames of a video sequence, in particular to a method for predicting and recovering lost frames of a video sequence based on a deep neural network.

Background technique

[0002] Video often needs to be transmitted, for example when browsing online videos or transmitting surveillance images. In practice, video transmission is sometimes accompanied by frame loss due to transmission conditions. To improve the quality of the received video, methods are sought to recover and reconstruct these lost frames from the frames that were not lost.

[0003] Most of the existing lost frame recovery methods use traditional techniques such as Gaussian functions, optical flow, and motion vectors to recover and predict lost frames in units of pixel blocks, which requires a large amount of computation and affects the accuracy of lost frame recovery.

Contents of the invention

[0004] Aiming at the deficiencies ...


Application Information

IPC(8): H04N19/65
CPC: H04N19/65
Inventors: 李泽瑞, 杨钰潇, 杜晓冬, 吕文君
Owner: 安徽优思天成智能科技有限公司