Video quality evaluation method based on visual saliency area and time-space characteristics

A video-quality evaluation technology applicable to television, electrical components, image communication, and related fields. It addresses the problems that prior methods are computationally complex, that their evaluation results do not agree well with subjective evaluation, and that simple extracted features cannot represent all of the information in a video, thereby achieving more accurate results.

Active Publication Date: 2017-11-03
XIDIAN UNIV

Problems solved by technology

Although this method incorporates HVS features, it still has drawbacks: the floating filter it proposes can be implemented only in the spatial domain, its computation is quite complex, and the spatio-temporal statistical characteristics of the video are not considered, so the evaluation results do not agree well with subjective evaluation. Moreover, it can be applied only to compression-distorted videos and therefore cannot be widely used in practice.
The shortcomings of this method are that the simple features it extracts cannot represent all of the information in the video, and that the influence of human visual characteristics on quality evaluation is not considered, so the evaluation results do not agree well with subjective evaluation.




Detailed Description of Embodiments

[0052] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0053] Referring to Figure 1, the concrete steps of the present invention are as follows.

[0054] Step 1, extract the video.

[0055] Randomly select a video from the 160 videos in the LIVE video quality assessment database.

[0056] Step 2, randomly select an image frame from the selected video.

[0057] Step 3, extract the visually salient regions of the image.

[0058] From the selected frame image, select the maximum gray value and the minimum gray value, respectively, in the image-plane coordinate system.

[0059] Determine an optimal threshold using the maximum between-class variance (OTSU) method.

[0060] The specific steps of the maximum between-class variance method OTSU are as follows:

[0061] Step 1, set the initial threshold of the gray value to 60.

[0062] Step 2, the area enclosed by all the gray values in the selected frame imag...
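The patent's iterative procedure (starting from an initial threshold of 60) is truncated above. As a hedged illustration of the criterion it names, the following is a minimal sketch of textbook OTSU thresholding, which exhaustively picks the gray level maximizing the between-class variance over a 256-bin histogram; the function name is illustrative and this is not the patent's exact iteration.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)^2 over a 256-level histogram (textbook Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()          # normalized gray-level histogram
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = prob[:t].sum()           # weight of the class below t
        w1 = 1.0 - w0                 # weight of the class at/above t
        if w0 == 0.0 or w1 == 0.0:
            continue                  # one class empty: variance undefined
        mu0 = (levels[:t] * prob[:t]).sum() / w0   # mean of lower class
        mu1 = (levels[t:] * prob[t:]).sum() / w1   # mean of upper class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels below the returned threshold fall in one class and the rest in the other, which is how a binary saliency mask could then be formed.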



Abstract

The invention discloses a video quality evaluation method based on a visual saliency area and time-space characteristics, overcoming the shortcomings of the prior art, in which human visual characteristics are not considered and the extracted features cannot completely reflect the video information. The video quality evaluation method comprises the following steps: (1) extracting a video; (2) randomly selecting a frame of image from the selected video; (3) extracting the visual saliency area of the image; (4) judging whether all frames of images are extracted; (5) synthesizing a video; (6) obtaining three-dimensional discrete cosine transform (3D-DCT) coefficients; (7) extracting features; (8) reducing the feature dimension; (9) judging whether all videos are extracted; (10) predicting a quality score; (11) calculating correlation coefficients for the videos; and (12) outputting the correlation coefficients. The video quality evaluation method has the advantages that the distortion type is not limited, the eye focus and time-space characteristics of the videos are fully considered, and the evaluation result better matches the subjective evaluation result.
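Step (6) of the pipeline computes 3D-DCT coefficients over the video volume. As a sketch under stated assumptions (the patent does not disclose block sizes or normalization here; the use of `scipy` and the orthonormal DCT-II are my choices), a 3D-DCT over a stack of frames can be computed with `scipy.fft.dctn`:

```python
import numpy as np
from scipy.fft import dctn

def video_3d_dct(clip):
    """3D DCT-II of a (frames, height, width) volume.

    norm='ortho' makes the transform orthonormal, so the total signal
    energy (sum of squares) is preserved in the coefficient domain.
    """
    return dctn(np.asarray(clip, dtype=float), type=2, norm='ortho')

# The DC coefficient at index (0, 0, 0) captures the overall mean; the
# remaining AC coefficients carry the spatio-temporal detail from which
# quality-related features could be drawn.
```

In practice such transforms are usually applied per small spatio-temporal block rather than to the whole clip; the whole-volume call above just shows the operation itself.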

Description

Technical Field

[0001] The invention belongs to the technical field of image and video processing, and further relates to a video quality evaluation method based on visually salient regions and spatio-temporal characteristics within the field of image and video quality evaluation. The invention can be applied to video used in video coding and video conferencing. According to the differing degrees of attention the human eye pays to different parts of an image, salient areas of the video can be extracted, and the temporal and spatial characteristics of the video can be taken into account to evaluate the video objectively.

Background Technique

[0002] With the rapid development of multimedia technology and computer networks, video signals are widely used in video surveillance, video conferencing, and other services. As people encounter more and more video, the demand for video-related services also increases. On the other hand, from video generation through transmission to the end user, every stage ...
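Objective metrics of this kind are conventionally validated by correlating predicted scores against subjective mean opinion scores, typically via the Pearson (PLCC) and Spearman (SROCC) coefficients; this is presumably what the correlation-coefficient steps of the method compute, though the text truncates before specifying. A minimal sketch using `scipy.stats` (the PLCC/SROCC framing and function name are my assumptions):

```python
from scipy.stats import pearsonr, spearmanr

def agreement_with_subjective(predicted, mos):
    """Pearson (linear) and Spearman (rank-order) correlation between
    predicted quality scores and subjective mean opinion scores (MOS)."""
    plcc, _ = pearsonr(predicted, mos)    # linear agreement
    srocc, _ = spearmanr(predicted, mos)  # monotonic (rank) agreement
    return plcc, srocc
```

Higher values of both coefficients indicate that the objective evaluation better matches subjective judgments.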

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N17/00
CPC: H04N17/00
Inventors: 王俊平, 胡静, 张瑶, 梁刚明, 李勇, 倪洁, 郭佳佳, 白瑞雪
Owner: XIDIAN UNIV