
Video super-resolution reconstruction method and system based on temporal feature fusion

A super-resolution reconstruction technology using temporal features, applied in the field of video super-resolution reconstruction based on temporal feature fusion. It addresses problems such as missing image details, ignored global features, and poor super-resolution reconstruction results, achieving an improved reconstruction effect.

Active Publication Date: 2022-07-15
NANKAI UNIV
Cites: 3 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] However, the inventors found that the previous method fused only features in the local time domain and ignored global features; this shortcoming caused problems such as missing image details, which led to poor super-resolution reconstruction results.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0035] Previous convolutional-neural-network-based video super-resolution reconstruction techniques fail to use the local and global information in a video effectively, which degrades reconstruction quality. To address this, this embodiment proposes a video super-resolution reconstruction method based on temporal feature fusion: it filters out the effective features in the local time domain while using complementary features from the global time domain to improve the super-resolution reconstruction of each frame in the video sequence.

[0036] Referring to Figure 1, this embodiment provides a video super-resolution reconstruction method based on temporal feature fusion, which includes the following steps:

[0037] S101: Acquire an image sequence of a video, extract features of the image sequence, and obtain an initial feature sequence.

[0038] In a specific implementation, for a set resolution video image sequence ...
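Step S101 above (per-frame feature extraction) can be sketched as applying the same feature extractor to every frame of the sequence. The following is a minimal sketch: the 8×8 toy frames and the hand-picked edge-detection kernel are illustrative assumptions, standing in for the patent's learned convolutional feature extractor.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 2D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def extract_initial_features(frames, kernel):
    """S101: apply the same feature extractor to every frame,
    yielding the initial feature sequence."""
    return [conv2d_same(f, kernel) for f in frames]

# Toy video: 5 grayscale frames of size 8x8 (illustrative only).
frames = [np.random.rand(8, 8) for _ in range(5)]
edge_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
feats = extract_initial_features(frames, edge_kernel)
```

The key point is only that the extractor is shared across frames, so every element of the initial feature sequence lives in the same feature space before temporal fusion.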

Embodiment 2

[0047] This embodiment provides a video super-resolution reconstruction system based on time-domain feature fusion, which specifically includes the following modules:

[0048] an initial feature extraction module, which is used to obtain the image sequence of the video, extract the features of the image sequence, and obtain the initial feature sequence;

[0049] A local feature fusion module, which is used to fuse the features in the initial feature sequence in the local time domain to obtain a local feature sequence. Each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features; for each boundary feature in the initial feature sequence, two copies of the boundary feature are fused with its single nearest feature;
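The local fusion rule described above can be sketched as a three-frame sliding window. Two points are assumptions on my part: the boundary rule is read as edge replication (the boundary feature counted twice plus its nearest neighbor), and plain averaging stands in for the patent's learned fusion operator.

```python
import numpy as np

def local_temporal_fusion(feats):
    """Fuse each feature with its temporal neighbours in a 3-frame window.
    Non-boundary frame t uses (t-1, t, t+1); at the sequence boundaries the
    boundary feature is repeated (edge replication) -- one plausible reading
    of "two copies of the boundary feature and its nearest feature".
    Averaging is a stand-in for the patent's learned fusion."""
    T = len(feats)
    fused = []
    for t in range(T):
        left = feats[max(t - 1, 0)]
        right = feats[min(t + 1, T - 1)]
        fused.append((left + feats[t] + right) / 3.0)
    return fused
```

With this windowing, every output feature has the same shape as its input, so the local feature sequence stays aligned frame-for-frame with the initial sequence.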

[0050] A global feature fusion module, which is used to input the local feature sequence into a bidirectionally sampled deformable convolutional long short-term memory network and perform global feature supplementation on each feature in the local feature sequence to obtain a global feature sequence.

Embodiment 3

[0054] This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the steps of the video super-resolution reconstruction method based on temporal feature fusion described in Embodiment 1.



Abstract

The present invention provides a video super-resolution reconstruction method and system based on time-domain feature fusion. The method includes: acquiring an image sequence of a video, extracting features of the image sequence, and obtaining an initial feature sequence; performing local time-domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence, where each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and for each boundary feature, two copies of the boundary feature are fused with its single nearest feature; inputting the local feature sequence into a bidirectionally sampled deformable convolutional long short-term memory network and performing global feature supplementation on each feature in the local feature sequence to obtain a global feature sequence; and extracting super-resolution features of the global feature sequence, adding them element-wise to the initial feature sequence, extracting high-resolution upsampling features from the resulting sequence, and obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
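The final stage of the pipeline in the abstract (residual addition of global features onto the initial features, followed by upsampling) can be sketched as follows. The identity residual and nearest-neighbour upsampling are stand-ins for the patent's learned SR-feature extractor and final convolutional network.

```python
import numpy as np

def upsample_nearest(feat, scale=2):
    """Nearest-neighbour upsampling as a stand-in for the learned
    high-resolution upsampling features described in the abstract."""
    return np.repeat(np.repeat(feat, scale, axis=0), scale, axis=1)

def reconstruct_sequence(initial_feats, global_feats, scale=2):
    """Sketch of the final stage: add the (super-resolution features of
    the) global sequence back onto the initial features as a residual,
    then upsample each result to obtain high-resolution frames.
    The learned SR-feature extractor and final CNN are replaced by
    identity / nearest-neighbour stand-ins."""
    out = []
    for init, glob in zip(initial_feats, global_feats):
        residual = init + glob            # element-wise addition per abstract
        out.append(upsample_nearest(residual, scale))
    return out
```

The residual connection back to the initial feature sequence is the design choice worth noting: it lets the temporal fusion stages learn only the correction on top of the per-frame features, which typically stabilizes training of deep SR networks.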

Description

Technical field

[0001] The invention belongs to the field of video super-resolution reconstruction, and in particular relates to a video super-resolution reconstruction method and system based on time-domain feature fusion.

Background technique

[0002] The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

[0003] In recent years, due to the rapid development of liquid crystal display (LCD) and light-emitting diode (LED) technology, monitors on the market can already play video at 4K UHD (3840×2160) or 8K UHD (7680×4320) resolution. However, currently available video is usually in 2K Full HD (FHD, 1920×1080) resolution. To play Full HD video on a UHD TV, the spatial resolution of the Full HD video must be increased to the UHD broadcast standard. Therefore, video super-resolution reconstruction technology is proposed to process low-res...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T3/40; G06V10/42; G06V10/44; G06V10/62; G06V10/771; G06V10/80; G06V10/82; G06K9/62; G06N3/04
CPC: G06T3/4053; G06T2207/10016; G06V10/44; G06N3/044; G06F18/253
Inventor: 徐君 (Xu Jun), 许刚 (Xu Gang), 程明明 (Cheng Mingming)
Owner NANKAI UNIV