
Real-time semantic segmentation method with low calculation amount and high feature fusion

A feature-fusion and semantic-segmentation technology applied in the field of computer vision. It addresses the problem of the huge computational load of semantic segmentation and achieves the effect of improving feature utilization while meeting accuracy requirements.

Pending Publication Date: 2020-08-04
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

However, for applications such as autonomous driving that demand high real-time performance, the huge computational cost of semantic segmentation poses a major challenge.

Examples

Specific embodiments

[0042] Step 1: Construct a semantic segmentation network based on an "encoder-decoder" multi-branch structure. Figure 1 shows the structure of the semantic segmentation network constructed in the present invention. In the "encoder" part, the network consists of three branches; within each branch, deeper features are extracted as the network deepens, while the size of the feature maps is progressively reduced. In the "decoder" part, the network combines the features extracted at the different stages of the "encoder" by splicing (concatenation) and upsamples them by the corresponding multiples to restore the feature-map size, finally producing a semantic segmentation result map of the same size as the original image.
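To make this skeleton concrete, the following is a minimal PyTorch sketch of a three-branch encoder with a splice-and-upsample decoder. The layer counts, channel widths, the 19-class head (as on Cityscapes), and all names are illustrative assumptions, not the exact configuration disclosed in the invention.

```python
# Minimal sketch of the "encoder-decoder" multi-branch skeleton of Step 1.
# Channel widths, depths, and the 19-class head are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def stage(in_ch, out_ch):
    # One encoder stage: a stride-2 convolution that deepens the features
    # while halving the spatial size of the feature map.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MultiBranchEncoderDecoder(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        # Encoder: three branches, each extracting deeper features at smaller sizes.
        self.branch1 = nn.Sequential(stage(3, 16), stage(16, 32))
        self.branch2 = nn.Sequential(stage(3, 32), stage(32, 64))
        self.branch3 = nn.Sequential(stage(3, 64), stage(64, 128))
        # Decoder head: splice (concatenate) the branch features and classify.
        self.fuse = nn.Conv2d(32 + 64 + 128, num_classes, kernel_size=1)

    def forward(self, x1, x2, x3):
        # x1, x2, x3 are the three branch inputs (Step 2 derives them from the
        # original image at different resolutions).
        f1, f2, f3 = self.branch1(x1), self.branch2(x2), self.branch3(x3)
        # Bring all branch features to a common size before concatenation.
        f2 = F.interpolate(f2, size=f1.shape[2:], mode='bilinear', align_corners=False)
        f3 = F.interpolate(f3, size=f1.shape[2:], mode='bilinear', align_corners=False)
        logits = self.fuse(torch.cat([f1, f2, f3], dim=1))
        # Upsample by the corresponding multiple to restore the original image size.
        return F.interpolate(logits, scale_factor=4, mode='bilinear', align_corners=False)
```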

[0043] Step 2: Use images of different resolutions as the inputs of the branches of the multi-branch "encoder". The network structure diagram of Figure 1 shows that the original-size image, the 2x downsa...
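As a usage illustration of the multi-resolution inputs (reusing the MultiBranchEncoderDecoder class sketched under Step 1), the snippet below derives downsampled copies of the original image with bilinear interpolation; the 4x factor for the third branch is an assumption made here because the sentence above is truncated in the source.

```python
# Hedged illustration of Step 2: multi-resolution copies of the original image,
# one per encoder branch. The 4x factor for the third branch is an assumption.
import torch
import torch.nn.functional as F

image = torch.randn(1, 3, 512, 1024)  # original-size input (e.g. a Cityscapes crop)
image_2x = F.interpolate(image, scale_factor=0.5,
                         mode='bilinear', align_corners=False)   # 2x downsampled
image_4x = F.interpolate(image, scale_factor=0.25,
                         mode='bilinear', align_corners=False)   # assumed 4x downsampled

net = MultiBranchEncoderDecoder(num_classes=19)  # class from the Step 1 sketch
logits = net(image, image_2x, image_4x)          # shape (1, 19, 512, 1024)
```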

Abstract

The invention discloses a real-time semantic segmentation method with a low calculation amount and a high degree of feature fusion. The network structure of the method is a multi-branch, cross-layer feature-fusion encoder-decoder structure based on depthwise separable convolution. The method uses original images of different resolutions as the inputs of the multiple branches, extracts features with residual modules built from depthwise separable convolutions, and then transfers features between the different stages of the branches, improving the feature utilization of each stage. Tested on the Cityscapes dataset, the method achieves good experimental results, reaching 112.3 FPS and 65.6% mIoU, so a real-time segmentation effect is obtained while the required accuracy is met.
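As an illustration of the building block named above, the following is a hedged PyTorch sketch of a residual module built from depthwise separable convolutions (a 3x3 depthwise convolution followed by a 1x1 pointwise convolution); the normalization, activation, and shortcut choices are assumptions rather than the exact design of the invention.

```python
# Sketch of a residual module built from depthwise separable convolutions.
# Normalization/activation/shortcut details are assumptions for illustration.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise 3x3: one filter per input channel (groups == in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        # Pointwise 1x1: mixes channels at low cost.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

class SeparableResidualBlock(nn.Module):
    # Two separable convolutions with an identity (or 1x1-projected) shortcut.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = DepthwiseSeparableConv(in_ch, out_ch, stride)
        self.conv2 = DepthwiseSeparableConv(out_ch, out_ch, 1)
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

    def forward(self, x):
        return self.conv2(self.conv1(x)) + self.shortcut(x)
```

Compared with a standard 3x3 convolution, the depthwise-plus-pointwise pair needs far fewer multiply-accumulate operations, which is the source of the low calculation amount claimed above.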

Description

Technical field
[0001] The invention relates to the field of computer vision, and in particular to a real-time semantic segmentation method with a low calculation amount and a high degree of feature fusion.
Background art
[0002] In recent years, with continuous breakthroughs in parallel computing theory and in the level of hardware implementation, the field of computer vision has developed greatly. In particular, in the 2012 ILSVRC (ImageNet Large Scale Visual Recognition Challenge) competition, AlexNet, based on a convolutional neural network, won the classification task, which triggered an upsurge of deep learning, and deep learning technology began to shine. Currently, in the field of computer vision, deep learning, and convolutional neural networks in particular, plays an increasingly important role in various visual recognition tasks.
[0003] Semantic segmentation performs dense prediction on an image, inferring a classification label for each pixel. M...
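Purely as an illustration of dense prediction (not code from the invention), a segmentation network outputs a tensor of per-pixel class logits, and the predicted label map is the argmax over the class dimension:

```python
# Illustration of dense, per-pixel classification.
import torch

logits = torch.randn(1, 19, 512, 1024)  # (batch, classes, height, width), e.g. 19 Cityscapes classes
label_map = logits.argmax(dim=1)        # (1, 512, 1024): one class index per pixel
```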

Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/34; G06K9/62; G06N3/04; G06N3/08; G06T3/40; G06T9/00
CPC: G06T3/4038; G06T9/002; G06N3/08; G06T2200/32; G06T2207/20081; G06T2207/20084; G06V10/267; G06N3/045; G06F18/253
Inventor: 杨绿溪, 朱紫辉, 王路, 顾恒瑞, 李春国, 黄永明
Owner: SOUTHEAST UNIV