No-reference video quality evaluation method based on deep spatio-temporal information

A video quality evaluation technology, applied in the field of video and image processing, which addresses the problems that the temporal memory model does not consider the frame rate, that the results are inaccurate, and that the information used is not comprehensive enough.

Pending Publication Date: 2021-05-11
HANGZHOU DIANZI UNIV

Problems solved by technology

As a result, the information obtained during the evaluation is not comprehensive enough.
[0008] 2. For global temporal information, only a unidirectional GRU is considered; the results obtained by considering a unidirectional GRU alone are therefore inaccurate.
[0009] 3. The sliding window in the temporal memory model does not take the frame rate into account and uses only a fixed window size.
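The frame-rate issue in problem 3 can be illustrated with a minimal sketch in which the pooling window is scaled by the video's frame rate, so the same time span is covered at any fps. The function name, the min-pooling choice, and the 0.5 s base duration are assumptions for illustration, not the patent's specified parameters.

```python
import numpy as np

def adaptive_window(frame_scores, fps, window_seconds=0.5):
    """Min-pool per-frame quality scores over a sliding window whose
    length in frames scales with the frame rate, so the same temporal
    span is covered regardless of fps (hypothetical illustration)."""
    win = max(1, int(round(window_seconds * fps)))  # frames per window
    scores = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(scores)
    for t in range(len(scores)):
        # causal window ending at frame t, clipped at the sequence start
        pooled[t] = scores[max(0, t - win + 1): t + 1].min()
    return pooled
```

Under this sketch a 30 fps clip is pooled over 15 frames and a 60 fps clip over 30 frames, whereas a fixed window would cover different durations for the two clips.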



Embodiment Construction

[0066] Specific embodiments of the present invention will be described in detail below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only used to illustrate and explain the present invention, and are not intended to limit the present invention.

[0067] As shown in Figures 1-4, the focus of the no-reference video quality assessment method based on deep spatio-temporal information is the quality assessment of real-world videos. Since humans are the end users, leveraging knowledge of the Human Visual System (HVS) helps build objective approaches to this problem. Specifically, human perception of video quality is mainly influenced by single-frame image content and short-term memory.

[0068] The present invention is mainly divided into the following modules: content-aware feature extraction and a temporal memory model. Among them, the content-aware feature extraction module uses a ResNet-50 pre-trained deep neural network...
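The content-aware feature step can be sketched as follows: a deep feature map from the backbone is spatially aggregated by concatenating per-channel mean and standard deviation, as the abstract describes. This is a minimal sketch assuming a C x H x W feature map; the ResNet-50 backbone itself is omitted and the 2048 x 7 x 7 shape in the usage line is only the typical final-convolution size, not a value stated in the patent.

```python
import numpy as np

def aggregate_features(feature_map):
    """Spatially aggregate a C x H x W deep feature map into a 2C vector
    by concatenating per-channel mean and standard deviation
    (backbone omitted; shapes are assumptions)."""
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)   # C x (H*W)
    mean = flat.mean(axis=1)            # mean aggregation
    std = flat.std(axis=1)              # standard-deviation aggregation
    return np.concatenate([mean, std])  # length-2C content-aware descriptor

# Hypothetical usage with a ResNet-50-sized final conv output:
fmap = np.random.rand(2048, 7, 7)
feat = aggregate_features(fmap)
assert feat.shape == (4096,)
```

Concatenating the standard deviation alongside the mean preserves how much each channel's response varies across the frame, which a mean alone would discard.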


Abstract

The invention discloses a no-reference video quality evaluation method based on deep spatio-temporal information, comprising the following steps: S1, extract content-aware features: top-layer semantic features are extracted by a ResNet-50 pre-trained deep neural network and aggregated, applying both mean aggregation and standard-deviation aggregation to the feature maps; S2, model the temporal memory effect: for feature integration, a GRU network models the long-term dependency relationship, and for quality aggregation, a subjectively inspired temporal pooling model is proposed and embedded into the network. Existing NR-VQA methods cannot model the long-term dependency relationship in the VQA task well; to solve this problem, the GRU, a gated recurrent neural network model that can integrate features and learn long-term dependencies, is used to integrate the content-aware features and predict a frame-by-frame quality score.
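Step S2's feature integration can be sketched with a single-layer GRU that consumes one content-aware feature vector per frame and emits a score per frame through a linear head. This is a from-scratch sketch of the standard GRU gate equations, not the patent's trained network: the hidden size, random weights, omitted bias terms, and the linear scoring head are all simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRU:
    """Single-layer GRU integrating per-frame content-aware features and
    emitting a frame-by-frame quality score via a linear head.
    Weights are random placeholders; dimensions, initialisation, and the
    absence of bias terms are simplifying assumptions."""

    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim))   # update-gate input weights
        self.Uz = rng.uniform(-s, s, (hid_dim, hid_dim))  # update-gate recurrent weights
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim))   # reset-gate input weights
        self.Ur = rng.uniform(-s, s, (hid_dim, hid_dim))  # reset-gate recurrent weights
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim))   # candidate input weights
        self.Uh = rng.uniform(-s, s, (hid_dim, hid_dim))  # candidate recurrent weights
        self.w_out = rng.uniform(-s, s, hid_dim)          # linear scoring head

    def forward(self, frames):
        h = np.zeros(self.Uz.shape[0])
        scores = []
        for x in frames:                                  # one feature vector per frame
            z = sigmoid(self.Wz @ x + self.Uz @ h)        # update gate
            r = sigmoid(self.Wr @ x + self.Ur @ h)        # reset gate
            h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
            h = (1.0 - z) * h + z * h_tilde               # carries long-term dependencies
            scores.append(float(self.w_out @ h))          # frame-by-frame quality score
        return np.array(scores)
```

The gated update `h = (1 - z) * h + z * h_tilde` is what lets the model retain or overwrite its memory of earlier frames, which is the long-term dependency modelling the abstract attributes to the GRU; the patent's problem list further suggests a bidirectional variant, which would run a second pass over the reversed sequence.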

Description

Technical field

[0001] The invention relates to the technical field of image and video processing, in particular to a no-reference video quality evaluation method based on a deep convolutional network.

Background technique

[0002] With the popularization of wearable devices, smart phones and tablet computers with camera and video-recording functions, the acquisition and storage of video information has become easier and easier. Distortions introduced along the way greatly affect the audience's visual experience. In the entire video link, most modules can be accurately measured, such as capture, upload, preprocessing, transcoding, and distribution. However, the unknown part is precisely the most critical one: how the user's video viewing experience actually is. At present, video quality assessment methods in the industry are divided into two categories: objective quality assessment and subjective quality assessment. The subjective evaluation method relies on the human visual system, which is more acc...


Application Information

IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/049; G06N3/08; G06N3/084; G06V20/46; G06V20/41; G06N3/045
Inventor: 殷海兵, 刘银豪, 周晓飞, 王鸿奎
Owner HANGZHOU DIANZI UNIV