Semantic segmentation method for low-illumination scene

A semantic segmentation technology for low-light scenes, applied in the field of computer vision, which can solve problems such as limited applicable scenes, reduced accuracy, and a lack of data sets, and achieve the effects of promoting and accelerating convergence and improving experimental results.

Active Publication Date: 2019-11-15
DALIAN UNIV OF TECH
Cites: 3 · Cited by: 15

AI Technical Summary

Problems solved by technology

[0005] In recent years, many semantic segmentation methods based on deep learning have emerged. However, due to the lack of suitable data sets and other reasons, these methods apply only to a narrow range of scenes and require ideal conditions such as sufficient brightness; once the brightness is insufficient, their accuracy drops severely.

Embodiment Construction

[0035] (1) Network training

[0036] First, import the parameters of the corresponding layers of the ResNet and DeepLabV3 networks to initialize the network and accelerate training convergence; that is, pre-train encoder C, encoder S, and the final semantic segmentation part. Randomly group the collected data set so that each group contains one low-light scene image and one normal scene image, which are fed to the two encoders for feature extraction; this is the retraining process after importing the ResNet pre-trained model. Encoder C extracts the features of the low-light scene image and passes them to the feature migration network; encoder S extracts the features of the normal scene and passes them through two multi-layer perceptrons (MLPs), and the resulting features are fused with the output features of encoder C and migrated through the feature migration part. After the feature migration...
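
A minimal sketch of this two-branch forward pass is given below, assuming a PyTorch implementation. The module names (encoder_c, encoder_s, the MLPs, and the 1×1-convolution stand-in for the feature migration part), the feature dimensions, and the fusion rule are illustrative assumptions rather than the patent's actual code; the DeepLabV3 head initialization mentioned above is also omitted for brevity.

```python
# Illustrative sketch only (PyTorch assumed). It follows the flow in [0036]:
# two ResNet-initialized encoders, two MLPs on the normal-scene branch, feature
# fusion/migration, and a segmentation head. All names and shapes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

def resnet_encoder():
    """ImageNet-initialized ResNet-50 backbone without the classification head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    return nn.Sequential(*list(backbone.children())[:-2])   # (B, 2048, H/32, W/32)

class MLP(nn.Module):
    def __init__(self, dim=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(inplace=True),
                                 nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x)

class LowLightSegNet(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder_c = resnet_encoder()            # encoder C: low-light images
        self.encoder_s = resnet_encoder()            # encoder S: normal-scene images
        self.mlp1, self.mlp2 = MLP(), MLP()          # two MLPs on the normal-scene branch
        self.migrate = nn.Conv2d(2048 * 2, 2048, 1)  # stand-in for the feature migration part
        self.seg_head = nn.Conv2d(2048, num_classes, 1)

    def forward(self, low_light, normal):
        f_c = self.encoder_c(low_light)              # features of the low-light image
        f_s = self.encoder_s(normal)                 # features of the normal-scene image
        pooled = f_s.mean(dim=(2, 3))                # (B, 2048) pooled normal-scene features
        style = self.mlp2(self.mlp1(pooled))[:, :, None, None]
        fused = torch.cat([f_c, style.expand_as(f_c)], dim=1)   # fuse the two branches
        logits = self.seg_head(self.migrate(fused))
        return F.interpolate(logits, size=low_light.shape[-2:],
                             mode="bilinear", align_corners=False)
```

Training would then pair each low-light image with the normal-scene image from the same randomly formed group, as described in the paragraph above.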

Abstract

The invention discloses a semantic segmentation method for a low-illumination scene, and belongs to the technical field of computer vision. The method treats the semantic segmentation of a normal image as the source-domain problem and the semantic segmentation of a low-illumination image as the target-domain problem, and applies the feature migration approach of transfer learning. It makes full use of the rich information available in normal-scene images: useful information is extracted from the normal scene, converted, and combined with the feature information of the low-illumination image to obtain more image information beneficial to semantic segmentation, so that a deep neural network can be trained. Based on this idea, a network model for direct semantic segmentation of low-illumination scenes is designed and realized on the basis of the generative adversarial network, using a transfer learning method. With this model, the semantic segmentation task for low-illumination pictures can be solved effectively.
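
The adversarial component mentioned above can be illustrated with a short sketch. The discriminator architecture and the non-saturating GAN losses below are assumptions chosen for illustration (PyTorch assumed); they show one common way to align migrated low-illumination features with normal-scene features, not the patent's exact formulation.

```python
# Illustrative GAN-style domain alignment (PyTorch assumed). A discriminator tries to
# distinguish normal-scene (source) features from migrated low-light (target) features,
# while the migration network is trained to fool it. All choices here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Per-location prediction of whether a feature map came from the normal-scene domain."""
    def __init__(self, channels=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(channels, 256, 1), nn.LeakyReLU(0.2),
                                 nn.Conv2d(256, 1, 1))
    def forward(self, feat):
        return self.net(feat)   # real/fake logits per spatial location

def adversarial_losses(disc, feat_source, feat_migrated):
    """Discriminator loss, plus the loss that pushes migrated features toward the source domain."""
    real = disc(feat_source.detach())
    fake = disc(feat_migrated.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
              F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    g_loss = F.binary_cross_entropy_with_logits(disc(feat_migrated),
                                                torch.ones_like(fake))
    return d_loss, g_loss
```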

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to deep-learning-based image semantic segmentation for low-light scenes, the most common non-ideal scenes, with the aim of obtaining high-precision semantic segmentation results for dark scenes.

Background technique

[0002] Semantic segmentation is a classic computer vision problem: it takes raw data such as an image as input and outputs a mask of the corresponding regions of interest. Full-pixel semantic segmentation uses a single pixel as the basic unit of classification, which closely resembles how humans perceive and understand a scene, and is a great advance over early computer vision work that focused only on image edges and gradients. Semantic segmentation groups together the pixels belonging to the same part of a picture, which addresses the scene-understanding problem well. Compared with other imag...
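
As a concrete illustration of the "single pixel as the basic unit of classification" idea above, the following minimal example (hypothetical shapes, PyTorch assumed) turns per-pixel class scores into a segmentation mask that assigns one class label to every pixel.

```python
# Illustrative only: converting per-pixel class logits into a segmentation mask.
import torch

num_classes, height, width = 19, 512, 1024            # hypothetical sizes
logits = torch.randn(1, num_classes, height, width)   # one score per class per pixel
mask = logits.argmax(dim=1)                           # (1, H, W): class index for each pixel
print(mask.shape)
```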

Application Information

IPC(8): G06T7/10; G06N3/04
CPC: G06T7/10; G06T2207/10024; G06T2207/20081; G06N3/045
Inventor: 杨鑫, 朱锦程, 王昊然, 魏小鹏, 张强, 尹宝才
Owner: DALIAN UNIV OF TECH