
Scene semantic segmentation method based on deep learning

A semantic segmentation and deep learning technology, applied to neural learning methods, instruments, biological neural network models, etc. It addresses the problem that target contours cannot be segmented and recognized in detail, and achieves the effect of ensuring stability.

Inactive Publication Date: 2021-02-19
SOUTHWEST PETROLEUM UNIV
Cites: 11 · Cited by: 21
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] Existing research methods can achieve image segmentation by target category, but they have shortcomings, chiefly that target contours cannot be segmented and recognized in detail. Therefore, to perform accurate semantic segmentation of a target scene, it must be considered that different scene categories exhibit different deformation characteristics. How to better model images with multiple deformation characteristics is the key to improving the scene-segmentation accuracy of deep learning networks.




Embodiment Construction

[0060] The core idea of the present invention is to provide a scene semantic segmentation method based on deep learning that effectively improves recognition accuracy along scene contours, thereby raising the mean Intersection over Union (MIoU). To make the purpose, technical solutions, and advantages of the present invention clearer, the invention is further described in detail below in conjunction with the accompanying drawings and embodiments. The specific embodiments described are intended only to explain the present invention, not to limit its main idea.
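MIoU, the metric the method aims to improve, averages the per-class Intersection over Union between the predicted and ground-truth label maps. The patent does not give a reference implementation; the following is a minimal pure-Python sketch of the standard definition, with the toy label maps being illustrative only.

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    pred, target: flattened per-pixel class labels of equal length.
    Classes absent from both maps are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2x4 label maps, flattened (class 0 = background, class 1 = object).
pred   = [0, 0, 1, 1, 0, 1, 1, 1]
target = [0, 0, 1, 1, 0, 0, 1, 1]
print(mean_iou(pred, target, num_classes=2))
```

Here class 0 scores 3/4 and class 1 scores 4/5, so the mean is 0.775; sharper predictions along object contours raise exactly these per-class ratios.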

[0061] The embodiment of the present invention and its implementation process are as follows. The overall implementation block diagram of its convolutional neural network is shown in Figure 1; it comprises two phases, a training phase and a testing phase.

[0062] Step 1: Select the semantic segmentation training data set; in this embodiment, PASCAL VOC 2...



Abstract

The invention discloses a scene semantic segmentation method based on deep learning. The method comprises a training stage and a testing stage. The training stage comprises the steps of: employing ResNet101 for pre-training on the COCO data set to obtain a pre-training model, then loading the pre-training model into a constructed convolutional neural network to extract a low-level feature image; performing high-level feature extraction and feature fusion on the low-level feature image through, in sequence, a feature enhancement network, an adaptive deformable atrous spatial pyramid pooling network, and a feature attention network; and finally outputting a semantic segmentation mask image through an up-sampling operation, obtaining the weights of the convolutional neural network semantic segmentation model. The testing stage comprises the steps of: inputting the PASCAL VOC 2012 or Cityscapes test data set into the convolutional neural network semantic segmentation model with the trained weights, obtaining a predicted semantic segmentation mask image. The method can improve the boundary contour precision of the target image and the accuracy of scene semantic segmentation.

Description

Technical Field

[0001] The invention relates to computer vision technology and the field of image semantic segmentation, and in particular to a scene semantic segmentation method based on deep learning.

Background Technique

[0002] Hinton et al. proposed the basic concept of deep learning in 2006, and it was gradually applied to computer vision domains such as image, sound, and text, which accelerated the pace of solving complex tasks in computer vision and improved the accuracy of various tasks.

[0003] In the image classification task, a series of classic networks such as AlexNet, VGG, GoogLeNet, ResNet, and Inception were proposed over the following years. These networks remain active in current convolutional neural network practice; for example, ResNet and Inception are applied in tasks such as image segmentation and object detection, where the features of the image are extracted in the form of a backbone network, ...


Application Information

IPC(8): G06K9/34, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V10/267, G06N3/045, G06F18/253, G06F18/214
Inventors: 赵成明, 陈金令, 李洁, 何东, 王熙
Owner SOUTHWEST PETROLEUM UNIV