An image super-resolution reconstruction method driven by semantic segmentation

A super-resolution reconstruction and semantic segmentation technology, applied to image analysis, image enhancement, and image data processing. It addresses the difficulty that interpolation-based methods have in modeling complex real-world scenes, offers a simple and easily implemented framework, and improves semantic segmentation accuracy.

Active Publication Date: 2019-01-11
FUDAN UNIV

AI Technical Summary

Problems solved by technology

Early interpolation-based reconstruction methods struggle to model complex real-world scenes.



Embodiment Construction

[0042] The embodiments of the present invention will be described in detail below, but the protection scope of the present invention is not limited to the examples.

[0043] Using VDSR as the super-resolution network and DeepLab-V2 as the semantic segmentation network, 4× and 8× reconstruction are performed, with the low-resolution images obtained by down-sampling the high-resolution images. The specific steps are as follows (a code sketch follows step (3)):

[0044] (1) Independently train the super-resolution network VDSR and the semantic segmentation network DeepLab-V2: train the super-resolution network on DIV2K and PASCAL VOC 2012, and train the semantic segmentation network on PASCAL VOC 2012;

[0045] (2) Cascade the independently trained super-resolution network and semantic segmentation network, initializing the parameters of the corresponding parts of the cascaded network with the parameters from step (1);

[0046] (3) Driven by the semantic segmentation task, train the super-resolution network within the cascaded model ...
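A minimal PyTorch sketch of steps (1)–(3) is given below. It assumes a VDSR-style `sr_net` and a DeepLab-V2-style `seg_net` that are already pre-trained as in step (1); the module names, the data-loader format, the hyper-parameters, and the choice to freeze the segmentation weights during step (3) are illustrative assumptions, not details confirmed by the (truncated) patent text.

```python
# Illustrative sketch only: module names, loader format and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def finetune_sr_with_segmentation(sr_net: nn.Module, seg_net: nn.Module, loader,
                                  epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """Steps (2)-(3): cascade the pre-trained networks and fine-tune the SR part
    so that its output serves the segmentation task."""
    # Freeze the segmentation network (one plausible reading of "driven by the segmentation task").
    seg_net.eval()
    for p in seg_net.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(sr_net.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss(ignore_index=255)         # pixel-wise loss; 255 = void label in PASCAL VOC

    sr_net.train()
    for _ in range(epochs):
        for lr_image, seg_label in loader:                    # LR image obtained by down-sampling the HR image
            sr_image = sr_net(lr_image)                       # super-resolved image
            logits = seg_net(sr_image)                        # per-pixel class scores from the cascaded net
            logits = F.interpolate(logits, size=seg_label.shape[-2:],
                                   mode="bilinear", align_corners=False)
            loss = criterion(logits, seg_label)               # segmentation error drives the update
            optimizer.zero_grad()
            loss.backward()                                   # gradients flow through seg_net into sr_net
            optimizer.step()
    return sr_net
```

Because only the parameters of `sr_net` are passed to the optimizer, the segmentation network acts as a fixed, differentiable critic that tells the super-resolution network which image details matter for segmentation.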



Abstract

The invention belongs to the technical field of digital image processing, and in particular to an image super-resolution reconstruction method driven by semantic segmentation. The method specifically comprises the following steps: separately training a super-resolution network model and a semantic segmentation network model; cascading the independently trained super-resolution network and semantic segmentation network; and, driven by the semantic segmentation task, training the super-resolution network. After a low-resolution image is processed by the task-driven network, an accurate semantic segmentation result is obtained. Experimental results show that the invention enables the super-resolution network to adapt better to the segmentation task, provides clear, high-resolution input images for the semantic segmentation network, and effectively improves segmentation accuracy on low-resolution images.
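At inference time the abstract describes a single forward pass: the low-resolution image goes through the task-driven super-resolution network and then through the segmentation network. A minimal sketch, reusing the assumed `sr_net`/`seg_net` modules from the training sketch above:

```python
# Illustrative inference sketch: sr_net and seg_net are the assumed modules from the training sketch.
import torch

@torch.no_grad()
def segment_low_res(lr_image: torch.Tensor, sr_net, seg_net) -> torch.Tensor:
    """Return a per-pixel class map for a low-resolution input image."""
    sr_net.eval()
    seg_net.eval()
    sr_image = sr_net(lr_image)      # reconstruct a clear, high-resolution image
    logits = seg_net(sr_image)       # per-pixel class scores
    return logits.argmax(dim=1)      # predicted semantic label for each pixel
```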

Description

Technical field
[0001] The invention belongs to the technical field of digital image processing, and in particular relates to an image super-resolution reconstruction method, more specifically, an image super-resolution reconstruction method driven by semantic segmentation.
Background technology
[0002] Semantic segmentation is one of the basic tasks in the field of computer vision. It assigns pixels to different categories according to their semantics, and has a wide range of applications in autonomous driving and image content understanding. In recent years, deep convolutional neural networks (DCNNs) have not only made great progress in image classification, but have also achieved breakthroughs in tasks with structured output, such as semantic segmentation.
[0003] In 2015, Long et al. [1] proposed the FCN (fully convolutional network), applying DCNNs to the pixel-level classification task of semantic segmentation for the first time. In order to ...
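The background paragraph above refers to FCN-style pixel-level classification. As a point of reference only (this toy model is not the network used in the invention), a fully convolutional classifier looks like the following: every layer is convolutional, so the output keeps its spatial layout and each pixel receives a vector of class scores.

```python
# Toy fully convolutional classifier (illustrative only, not the patent's network).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 21):      # 21 = PASCAL VOC classes incl. background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)   # 1x1 conv -> per-pixel scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))    # shape: (N, num_classes, H, W)

# Usage: scores = TinyFCN()(torch.randn(1, 3, 64, 64)); labels = scores.argmax(dim=1)
```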


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00
CPC: G06T5/00; G06T2207/20081
Inventor: 颜波, 牛雪静, 谭伟敏
Owner: FUDAN UNIV