
No-reference video quality evaluation method based on deep learning

A no-reference video quality evaluation method based on deep learning, applied in the field of computer vision. It addresses the difficulty of training models on unbalanced and insufficient sample sets and the lack of generality of distortion-specific methods, achieving effective video quality evaluation.

Pending Publication Date: 2021-09-17
HANGZHOU DIANZI UNIV

Problems solved by technology

[0005] Although existing no-reference video quality evaluation methods have achieved notable results, they still face several challenges. In some databases the training samples are unbalanced and too few, which makes model training difficult. Video distortion comes in many types, while standard no-reference methods are designed for one specific distortion type and therefore lack generality. Furthermore, for naturally distorted video databases it remains difficult to achieve good evaluation results.



Embodiment Construction

[0041] The present invention is further described below in conjunction with the accompanying drawings.

[0042] The method of the present invention comprises a pre-trained convolutional neural network, a bidirectional GRU network, and a video quality prediction network that fuses temporal features. Assuming a video has T frames, the model takes the T video frames as input in parallel. First, the pre-trained convolutional neural network extracts content-aware features from each frame; global pooling then processes these features, discarding redundant information while preserving change information. Next, a fully connected layer reduces the feature dimension, and the bidirectional GRU network fuses the temporal features of the preceding and following frames. Finally, fully connected layers compute per-frame quality scores, which are pooled into an overall video quality to produce the prediction score. The network model provided by the method fully and effectively considers the content dependence and the time-lag effect of the video to realize evaluation of the video quality score.
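The pipeline described above (per-frame CNN features, global pooling, FC dimension reduction, bidirectional GRU, per-frame scores, temporal pooling) can be sketched in PyTorch as follows. This is a minimal illustration, not the patented implementation: the patent does not name the backbone or layer sizes, so a small convolutional stack stands in for the pre-trained CNN and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ContentAwareFeatures(nn.Module):
    """Per-frame content-aware features. A tiny convolutional stack stands in
    for the pre-trained CNN (the patent does not specify the backbone).
    Global mean pooling discards redundant spatial detail, while global std
    pooling preserves change information."""

    def __init__(self, channels=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                     # x: (T, 3, H, W)
        fmap = self.cnn(x)                    # (T, C, h, w)
        mean = fmap.mean(dim=(2, 3))          # global average pooling
        std = fmap.std(dim=(2, 3))            # keeps spatial-variation info
        return torch.cat([mean, std], dim=1)  # (T, 2C)


class NRVideoQualityModel(nn.Module):
    """Frame features -> FC dimension reduction -> bidirectional GRU fusing
    preceding/following temporal context -> per-frame scores -> pooled video
    score. Layer sizes are assumptions for illustration only."""

    def __init__(self, feat_dim=64, reduced=32, hidden=16):
        super().__init__()
        self.features = ContentAwareFeatures(feat_dim // 2)
        self.reduce = nn.Linear(feat_dim, reduced)
        self.gru = nn.GRU(reduced, hidden, batch_first=True,
                          bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, frames):                # frames: (T, 3, H, W)
        f = self.features(frames)             # (T, feat_dim)
        f = self.reduce(f).unsqueeze(0)       # (1, T, reduced)
        h, _ = self.gru(f)                    # (1, T, 2*hidden)
        frame_scores = self.score(h).squeeze(-1).squeeze(0)   # (T,)
        return frame_scores, frame_scores.mean()  # simple average pooling
```

The simple average in the last line is a placeholder; the patent's pooling stage models the time-lag effect, for which a more faithful sketch appears with the abstract below in this record.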


Abstract

The invention discloses a no-reference video quality evaluation method based on deep learning. The method adopts a pre-trained convolutional neural network, a bidirectional GRU network that fuses temporal features, and a video quality prediction network. First, the pre-trained convolutional neural network extracts content-aware features from each frame; global pooling processes the features, discarding redundant information while preserving change information. Second, a fully connected layer reduces the feature dimensionality, and the bidirectional GRU network fuses the preceding and following temporal features. Finally, a fully connected layer computes per-frame quality scores, which are aggregated into an overall video quality to generate a prediction score. The network model provided by the invention fully and effectively considers the content dependence and the time-lag effect of the video to realize evaluation of the video quality score.
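The abstract's "time-lag effect" refers to quality-perception hysteresis: viewers' judgments linger on recent quality drops. The patent text only states that frame scores are aggregated, so the sketch below shows one common way such hysteresis-aware pooling is done; the memory window `tau` and blend weight `gamma` are illustrative assumptions, not values from the patent.

```python
import torch


def temporal_hysteresis_pool(frame_scores, tau=6, gamma=0.5):
    """Pool per-frame quality scores into one video score while modeling the
    time-lag (hysteresis) effect. For each frame, a memory term takes the
    minimum score over the previous `tau` frames (the worst recent quality)
    and is blended with the current score by weight `gamma`. Both
    hyperparameters are assumptions for illustration."""
    T = frame_scores.shape[0]
    pooled = torch.empty(T)
    for t in range(T):
        lo = max(0, t - tau)
        memory = frame_scores[lo:t + 1].min()   # worst recent quality
        pooled[t] = gamma * memory + (1 - gamma) * frame_scores[t]
    return pooled.mean()
```

With this scheme, a video with a brief quality dip scores lower than one with the same average quality held constant, matching the asymmetry of human judgments.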

Description

Technical Field

[0001] The invention belongs to the field of computer vision, and in particular relates to a no-reference video quality evaluation method based on deep learning.

Background Technique

[0002] Video quality evaluation refers to perceiving, measuring, and evaluating the changes and distortions between two pieces of video with the same main content through specific methods. It has important applications in video processing, video quality monitoring, and multimedia video applications. Video quality evaluation methods fall into two types: subjective evaluation and objective evaluation. Subjective evaluation means scoring through human visual observation; it best reflects the audience's perception of video quality and is the goal that objective evaluation methods ultimately aim to approximate. However, subjective evaluation consumes a great deal of manpower and time and is not suitable for large-scale applications.

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N17/00, G06K9/62, G06N3/04, G06N3/08
CPC: H04N17/00, G06N3/08, G06N3/045, G06F18/213, G06F18/253, G06F18/214
Inventors: 周晓飞, 费晓波, 张继勇, 颜成钢
Owner: HANGZHOU DIANZI UNIV