Video denoising method based on voxel-level non-local model

A non-local video denoising technology in the field of video processing, addressing problems such as unsatisfactory denoising results, the inability to preserve video details, and the poor performance of existing methods on other data sets.

Pending Publication Date: 2020-09-04
TAISHAN UNIV +1

AI Technical Summary

Problems solved by technology

Most current video denoising methods rely on two-dimensional, image-block-level self-similarity. The popular video denoising method VBM3D [1] performs two-dimensional block matching between frames, but long, narrow signals that span several frames often look like noise within any single frame, so video denoising methods based on two-dimensional image blocks can easily mistake such signals for noise and remove them.
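For context, inter-frame 2D block matching of the kind used in such methods can be sketched as follows. This is a minimal, hypothetical illustration (an exhaustive SSD search inside a small window on grayscale frames), not the VBM3D implementation; the block size and search radius are arbitrary.

```python
import numpy as np

def match_block_2d(ref_frame, target_frame, top_left, block=8, search=12):
    """Minimal sketch of inter-frame 2D block matching (not the VBM3D code):
    find the block in target_frame with the smallest SSD to the reference
    block taken from ref_frame at top_left."""
    r, c = top_left
    ref_blk = ref_frame[r:r + block, c:c + block].astype(np.float64)
    h, w = target_frame.shape
    best_pos, best_ssd = None, np.inf
    # exhaustive search in a (2*search+1)^2 window around the reference position
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= h - block and 0 <= cc <= w - block:
                cand = target_frame[rr:rr + block, cc:cc + block].astype(np.float64)
                ssd = np.sum((cand - ref_blk) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_pos = ssd, (rr, cc)
    return best_pos, best_ssd
```

Because the matching is purely two-dimensional, a thin, elongated structure that is only evident across several frames can look like noise inside a single block, which is the weakness described above.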
Although VBM4D [2] matches three-dimensional cubes and applies a separable four-dimensional transform, its first three-dimensional transform is similar to the separable three-dimensional transform in [1]. Its denoising results are generally better than those of the VBM3D method, but they are still unsatisfactory; in particular, video details are not well preserved during denoising.
[0004] Although video denoising has advanced rapidly in recent years, methods based on two-dimensional block-level self-similarity remain largely ineffective against the signal-dependent noise found in real video. Deep-learning-based methods have also developed quickly, but they rely too heavily on the training data set: when training and testing on the same data set the results are usually good, yet when testing on a different data set the results are very poor.


Examples


Embodiment 1

[0065] As shown in Figures 2 to 5, the method of the present invention was implemented in MATLAB and tested on the video dataset (comprising 8 video sequences) downloaded from http://www.cs.tut.fi/~foi/GCF-BM3D/, the website of reference [1] cited in the background art.

[0066] A comparison of the video frames shows that the method of the present invention not only removes the noise in the video well, but also preserves the video details well.

[0067] Table 1 compares the PSNR values of the denoising results of the method of the present invention with those of the VBM3D and VBM4D methods described in the background art. In each cell, the bottom value is the result of the present invention, the middle value is the result of the VBM4D method, and the top value is the result of the VBM3D method; the best value among them is indicated ...
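For reference, the PSNR metric used in Table 1 can be computed as follows. This is a generic sketch for 8-bit frames (or whole videos), not code from the patent.

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a clean frame (or video)
    and its denoised estimate; higher values indicate better denoising."""
    diff = np.asarray(clean, dtype=np.float64) - np.asarray(denoised, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```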



Abstract

The invention discloses a video denoising method based on a voxel-level non-local model. The method comprises first-stage preliminary denoising and second-stage fine denoising. All blocks matched to a three-dimensional reference block are scanned into column vectors, and row matching is then carried out on the resulting matrix to obtain the most similar voxel group. Compared with existing non-local methods based on two-dimensional image blocks, performing denoising on the similar voxel group has several advantages: first, long and narrow signals that span frames are not mistaken for noise when three-dimensional block matching is performed; second, detail information in the video is better preserved while noise is removed; and third, both Gaussian noise and signal-dependent noise in the video can be effectively removed.
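Reading the abstract, the grouping step might be sketched roughly as below. The cube size, search ranges, number of matched blocks, choice of reference row, and distance measure are all assumptions made for illustration; they are not the patented parameters or procedure.

```python
import numpy as np

def build_voxel_group(video, ref_pos, cube=(4, 8, 8), n_blocks=8, n_rows=16):
    """Rough sketch of the grouping described in the abstract (assumed details):
    1) collect 3D blocks similar to the reference 3D block (here a small
       exhaustive SSD search over temporal and spatial offsets),
    2) scan each matched block into a column vector and stack the columns,
    3) match rows of the resulting matrix and keep those most similar to a
       reference row, giving a 'similar voxel group'."""
    t0, r0, c0 = ref_pos
    ct, cr, cc = cube
    ref = video[t0:t0 + ct, r0:r0 + cr, c0:c0 + cc].astype(np.float64)

    # 1) 3D block matching: score candidate cubes by SSD to the reference cube
    candidates = []
    for dt in range(-2, 3):
        for dr in range(-4, 5, 2):
            for dc in range(-4, 5, 2):
                t, r, c = t0 + dt, r0 + dr, c0 + dc
                if (0 <= t <= video.shape[0] - ct and
                        0 <= r <= video.shape[1] - cr and
                        0 <= c <= video.shape[2] - cc):
                    cand = video[t:t + ct, r:r + cr, c:c + cc].astype(np.float64)
                    candidates.append((np.sum((cand - ref) ** 2), cand))
    candidates.sort(key=lambda x: x[0])
    matched = [cand for _, cand in candidates[:n_blocks]]

    # 2) scan each matched 3D block into a column vector; columns form a matrix
    mat = np.stack([blk.reshape(-1) for blk in matched], axis=1)  # (voxels, blocks)

    # 3) row matching: keep rows closest to the first row (assumed reference row)
    ref_row = mat[0]
    dists = np.sum((mat - ref_row) ** 2, axis=1)
    keep = np.argsort(dists)[:n_rows]
    return mat[keep]  # the 'most similar voxel group' (n_rows x n_blocks)
```

The two denoising stages (preliminary and fine) would presumably then operate on such groups, for example via a transform and coefficient shrinkage, before the filtered voxels are aggregated back into the video; the abstract does not spell out those steps here.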

Description

Technical field

[0001] The invention relates to the technical field of video processing, and in particular to a video denoising method based on similar-voxel matching and the Haar transform.

Background technique

[0002] Even high-end cameras produce videos with some noise when shooting under low-light conditions, and high-speed cameras that use shorter exposure times introduce stronger noise. In addition, cheap, low-quality sensors are still widely used in mobile phones and surveillance cameras; such low-end camera equipment inevitably introduces noise even in videos captured in well-lit environments, and videos obtained under difficult shooting conditions (low light, small sensors, etc.) contain even more noise. Video denoising therefore remains an important step in video processing and low-level machine vision research.

[0003] The existing research literature on video denoising is much smaller than that on image denoising. Most of the current video denoising met...
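The technical field refers to a Haar transform applied to matched similar voxels. As background only, one level of the orthonormal 1D Haar transform and its inverse can be sketched as follows; how the patent applies it to the matched voxel groups, and with what sizes and thresholds, is not detailed in this summary.

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1D Haar transform: pairwise averages
    (low-pass) followed by pairwise differences (high-pass).
    Assumes len(x) is even; illustration only."""
    x = np.asarray(x, dtype=np.float64)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([lo, hi])

def ihaar_1d(y):
    """Inverse of one orthonormal 1D Haar level."""
    y = np.asarray(y, dtype=np.float64)
    half = len(y) // 2
    lo, hi = y[:half], y[half:]
    x = np.empty(len(y))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x
```

In non-local denoising schemes of this kind, shrinking the high-pass coefficients of such a transform applied along the grouping dimension is a common way to suppress noise while keeping structure shared by the matched voxels.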


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00; G06T5/10; G06T5/20
CPC: G06T5/002; G06T5/10; G06T5/20; G06T2207/10016; G06T2207/20016
Inventors: 侯迎坤, 侯昊, 魏本征, 徐君, 郑元杰
Owner: TAISHAN UNIV