
Virtual reality video quality assessment method based on two-stream convolutional neural network

A virtual reality (VR) video quality evaluation technology based on convolutional neural networks, applied in the field of virtual reality video quality evaluation. It addresses the lack of normative standards and an objective evaluation system for VR video, with the effects of simplifying feature extraction and providing a simple, easily operated preprocessing method.

Active Publication Date: 2020-08-18
TIANJIN UNIV
AI Technical Summary

Problems solved by technology

[0003] Since virtual reality technology has only emerged in recent years, there is still no normative standard or objective evaluation system for VR video [2].

Method used




Embodiment Construction

[0016] A virtual reality video quality evaluation method based on a two-stream convolutional neural network. Each distorted VR video pair consists of a left video V_l and a right video V_r. The evaluation method includes the following steps:

[0017] Step 1: Construct the difference video V_d according to the principle of stereo perception. First, convert each frame of the original VR video and the distorted VR video to grayscale, then obtain the required difference video from the left video V_l and the right video V_r. The value of the difference video V_d at video position (x, y, z) is given by formula (1):

[0018] V_d(x, y, z) = |V_l(x, y, z) - V_r(x, y, z)|   (1)
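The per-pixel construction in formula (1) can be sketched as follows. This is a minimal NumPy illustration; the function name and the (frames, height, width) array layout are assumptions, not taken from the patent:

```python
import numpy as np

def difference_video(v_left: np.ndarray, v_right: np.ndarray) -> np.ndarray:
    """Formula (1): V_d(x, y, z) = |V_l(x, y, z) - V_r(x, y, z)|.

    v_left, v_right: grayscale videos as uint8 arrays of shape
    (frames, height, width). Casting to int16 before subtracting
    avoids uint8 wraparound; the result fits back into uint8.
    """
    diff = v_left.astype(np.int16) - v_right.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```

The absolute difference keeps only the inter-view disparity signal, which is what the stereo-perception step needs.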

[0019] Step 2: According to the characteristics of virtual reality video projection and back-projection, spatially compress video frames at different positions, i.e. down-sample them: down-sampling a video frame of resolution w×h by a factor of s yields a frame of resolution (w/s)×(h/s). The present inventi...
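One way to realize the s-fold spatial compression of step 2 is block averaging. The patent text here specifies only the output resolution (w/s)×(h/s); the averaging strategy and the function name below are assumptions for illustration:

```python
import numpy as np

def downsample(frame: np.ndarray, s: int) -> np.ndarray:
    """Reduce a (h, w) frame to (h // s, w // s) by averaging each
    s x s block. Rows/columns that do not fill a whole block are
    cropped off before averaging."""
    h, w = frame.shape
    h2, w2 = h // s, w // s
    blocks = frame[: h2 * s, : w2 * s].reshape(h2, s, w2, s)
    return blocks.mean(axis=(1, 3))
```

Block averaging acts as a simple anti-aliasing low-pass filter before decimation; other interpolation kernels would also satisfy the stated resolution requirement.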


Abstract

The invention relates to a virtual reality video quality evaluation method based on a two-stream convolutional neural network, comprising the following steps. Video preprocessing: a VR difference video is obtained from the left-view and right-view videos of a VR video; spatial compression is applied to video frames at different positions; frames are uniformly extracted from the compressed difference video and each frame is cut into non-overlapping blocks; the blocks at the same position across frames form a VR video patch, generating enough data to train the convolutional neural network; meanwhile, optical flow is extracted for each VR video. Two convolutional neural network models with the same configuration are established, taking the VR video patches and the optical flow as their respective inputs. The final objective evaluation score is obtained by averaging the scores from the video-patch and optical-flow channels, yielding the final objective quality evaluation score. The method improves the accuracy of objective evaluation.
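The score-fusion step described above (averaging the video-patch stream and the optical-flow stream) can be sketched as follows. The abstract states that the two channel scores are averaged; the function name and the explicit equal weighting here are illustrative assumptions:

```python
def fuse_scores(patch_scores: list[float], flow_scores: list[float]) -> float:
    """Fuse the two CNN streams into one objective quality score.

    patch_scores: per-patch predictions from the spatial (video-patch)
    stream; flow_scores: per-sample predictions from the temporal
    (optical-flow) stream. Each stream is averaged internally, then the
    two stream means are averaged with equal weight.
    """
    s_spatial = sum(patch_scores) / len(patch_scores)
    s_temporal = sum(flow_scores) / len(flow_scores)
    return (s_spatial + s_temporal) / 2.0
```

Equal weighting is the simplest fusion consistent with the abstract; a learned weighting would be a natural variation but is not described in the text.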

Description

Technical field

[0001] The invention belongs to the field of video processing and relates to a virtual reality video quality evaluation method.

Background technique

[0002] As a new simulation and interaction technology, virtual reality (VR) is used in many fields such as architecture, gaming, and the military. It can create a virtual environment consistent with the rules of the real world, or a simulated environment entirely detached from reality, giving people a more realistic audio-visual and immersive experience [1]. As an important carrier of virtual reality, VR video, also known as panoramic stereo video, plays a major role. However, limitations of equipment and processing methods during the collection, storage, and transmission of VR videos inevitably introduce distortions that degrade their quality. It is therefore important to study an evaluation method that can effectively assess the quality of virtual reality videos....

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N17/00; H04N13/106
CPC: H04N17/00; H04N2013/0074
Inventors: 杨嘉琛 (Yang Jiachen), 刘天麟 (Liu Tianlin)
Owner TIANJIN UNIV