
No-reference video quality evaluation method based on three-dimensional spatial-temporal feature decomposition

A video-quality technology based on spatio-temporal features, applied in the fields of image processing and video processing. It addresses the prior art's insufficient spatio-temporal feature extraction from distorted video, inadequate representation of distortion-related semantic information, and neglect of temporal modeling, achieving accurate results, effectiveness, and high practicability.

Active Publication Date: 2020-12-15
XIDIAN UNIV
12 Cites · Cited by 14

AI Technical Summary

Problems solved by technology

This method addresses shortcomings of the prior art: insufficient spatio-temporal feature extraction from distorted video, inadequate representation of distortion-related semantic information, and neglect of temporal modeling.




Embodiment Construction

[0052] The specific steps of the present invention are described in further detail below with reference to figure 1.

[0053] Step 1, construct the spatio-temporal distortion feature learning module.

[0054] Build a spatio-temporal distortion feature learning module, the structure of which is as follows: rough feature extraction unit → 1st residual subunit → 1st pooling layer → Non-Local unit → 2nd residual subunit → 2nd pooling layer → 3rd residual subunit → 3rd pooling layer → 4th residual subunit → global pooling layer → fully connected layer.
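The chain above can be sketched in PyTorch. This is a hedged illustration, not the patent's implementation: channel widths, kernel sizes, and pooling shapes are assumptions, and the residual subunits and Non-Local unit are stubbed with `nn.Identity` placeholders since their internals are detailed in later paragraphs.

```python
# Sketch of the module ordering described above. All channel widths and
# kernel/pool sizes are illustrative assumptions, not taken from the patent.
import torch
import torch.nn as nn

class SpatioTemporalDistortionModule(nn.Module):
    def __init__(self, num_features=64):
        super().__init__()
        # Rough feature extraction unit (detailed separately in the patent).
        self.rough = nn.Sequential(
            nn.Conv3d(3, num_features, kernel_size=3, padding=1),
            nn.BatchNorm3d(num_features),
            nn.ReLU(inplace=True),
        )
        self.res1 = nn.Identity()       # 1st residual subunit (placeholder)
        self.pool1 = nn.MaxPool3d((1, 2, 2))
        self.non_local = nn.Identity()  # Non-Local unit (placeholder)
        self.res2 = nn.Identity()       # 2nd residual subunit (placeholder)
        self.pool2 = nn.MaxPool3d((1, 2, 2))
        self.res3 = nn.Identity()       # 3rd residual subunit (placeholder)
        self.pool3 = nn.MaxPool3d((1, 2, 2))
        self.res4 = nn.Identity()       # 4th residual subunit (placeholder)
        self.gap = nn.AdaptiveAvgPool3d(1)  # global pooling layer
        self.fc = nn.Linear(num_features, 1)

    def forward(self, x):  # x: (batch, 3, frames, H, W)
        x = self.rough(x)
        x = self.pool1(self.res1(x))
        x = self.non_local(x)
        x = self.pool2(self.res2(x))
        x = self.pool3(self.res3(x))
        x = self.res4(x)
        x = self.gap(x).flatten(1)      # -> (batch, num_features)
        return self.fc(x)               # -> (batch, 1) quality feature/score

clip = torch.randn(2, 3, 8, 32, 32)     # a small distorted-video clip
scores = SpatioTemporalDistortionModule()(clip)
```

The point of the sketch is only the ordering of the stages; swapping in real residual and Non-Local blocks preserves the tensor shapes shown.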

[0055] The structure of the rough feature extraction unit is: input layer → 1st convolution layer → 1st batch normalization layer → 2nd convolution layer → 2nd batch normalization layer → pooling layer.
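A minimal sketch of this conv → BN → conv → BN → pool chain, assuming illustrative kernel sizes, channel widths, and ReLU activations (none of which are specified in this excerpt):

```python
# Rough feature extraction unit: input -> conv -> BN -> conv -> BN -> pool.
# Kernels, widths, and activations below are assumptions for illustration.
import torch
import torch.nn as nn

rough_unit = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1)),   # 1st conv layer
    nn.BatchNorm3d(32),                                           # 1st BN layer
    nn.ReLU(inplace=True),
    nn.Conv3d(32, 64, kernel_size=(1, 3, 3), padding=(0, 1, 1)),  # 2nd conv layer
    nn.BatchNorm3d(64),                                           # 2nd BN layer
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pooling layer: halves H and W only
)

clip = torch.randn(1, 3, 8, 64, 64)  # (batch, channels, frames, H, W)
features = rough_unit(clip)          # spatial dims halved, 64 feature channels
```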

[0056] The 1st, 2nd, 3rd, and 4th residual subunits are three-dimensional extensions of the residual network, in which each 3×3×3 convolution kernel is decomposed into a 3×1×1 one-dimensional temporal convolution and a 1×3×3 two-dimensional spatial...
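A hedged sketch of one such factorized residual subunit. The channel width, the use of batch normalization, and the order of application (spatial before temporal here, in the style of (2+1)D factorizations) are assumptions; the excerpt only states that the 3×3×3 kernel is split into 3×1×1 temporal and 1×3×3 spatial convolutions.

```python
# One factorized residual subunit: a 3x3x3 conv decomposed into a 1x3x3
# spatial conv and a 3x1x1 temporal conv, wrapped in a residual connection.
import torch
import torch.nn as nn

class DecomposedResidualUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))   # 1x3x3 2D spatial conv
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))  # 3x1x1 1D temporal conv
        self.bn1 = nn.BatchNorm3d(channels)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.spatial(x)))
        out = self.bn2(self.temporal(out))
        return self.relu(out + x)  # residual (skip) connection

x = torch.randn(1, 64, 8, 16, 16)
y = DecomposedResidualUnit()(x)  # shape is preserved by the padding choices
```

The factorization keeps the receptive field of a 3×3×3 kernel while separating spatial and temporal modeling into distinct, cheaper convolutions.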



Abstract

The invention discloses a no-reference video quality evaluation method based on three-dimensional spatial-temporal feature decomposition, comprising the following steps: constructing a quality prediction network consisting of a spatial-temporal distortion feature learning module and a quality regression module; generating a no-reference training data set and a no-reference test data set; training the spatial-temporal distortion feature learning module and the quality regression module; and outputting a quality evaluation score for each distorted video in the test set. The method accurately and efficiently extracts quality-aware spatio-temporal features from the input distorted video and produces the corresponding predicted quality score at the network output, giving more accurate results and wider applicability when evaluating no-reference video quality.

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and further relates to a no-reference video quality evaluation method based on three-dimensional spatio-temporal feature decomposition in the technical field of video processing. The invention can be used to extract three-dimensional distortion features from distorted videos without original reference information during video collection, compression, and transmission, and to objectively evaluate video quality according to the extracted features.

Background technique

[0002] In the Internet information age, network multimedia technology and communication technology are developing rapidly, and people can conveniently obtain multimedia information through various channels. Relevant studies have shown that image and video, as the most intuitive and efficient information carriers, account for more than 70% of the information people receive. The explosive growth of terminal equ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62, G06F17/18, G06N3/04, G06N3/08, H04N17/00
CPC: G06N3/049, G06N3/08, H04N17/004, G06F17/18, G06N3/044, G06N3/045, G06F18/253
Inventors: 何立火, 高帆, 柯俊杰, 蔡虹霞, 路文, 高新波, 孙羽晟
Owner XIDIAN UNIV