
Semantic segmentation method based on double-flow feature fusion

A semantic segmentation and feature fusion technology, applied in image analysis, image enhancement, instrumentation, etc. It addresses the problems of insufficient resolution and a limited ability to reconstruct precise details, and achieves the effects of compensating for losses, expanding receptive fields, and reducing image size.

Active Publication Date: 2020-02-11
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

AI Technical Summary

Problems solved by technology

Although the encoder's final feature map may be highly semantic, its insufficient resolution limits its ability to reconstruct precise details in the segmentation map; this limitation is common in modern backbone models.
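The resolution loss described above can be made concrete with a small worked example (the five-stage, stride-2 backbone is an assumption typical of common models, not a detail from the patent), using the 480×640 input size mentioned later in this document:

```python
# Worked example: spatial size of the final encoder feature map after
# repeated stride-2 downsampling (five stages assumed for illustration).
def output_size(h, w, num_stride2_stages):
    for _ in range(num_stride2_stages):
        h, w = h // 2, w // 2
    return h, w

h, w = output_size(480, 640, 5)
print(h, w)  # 15 20 -> only 1/32 of the input resolution per dimension
```

At 15×20, the feature map carries strong semantics but far too little spatial detail to delineate object boundaries, which is why the decoder must recover resolution.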




Embodiment Construction

[0038] The present invention will be described in further detail below in conjunction with the accompanying drawings and embodiments.

[0039] The present invention proposes a semantic segmentation method based on dual-stream feature fusion, the overall implementation block diagram of which is shown in Figure 1; it includes two processes: a training phase and a testing phase.

[0040] The specific steps of the training phase are as follows:

[0041] Step 1_1: Select the RGB images and depth images of N original images to form a training set. Mark the RGB image of the kth original image in the training set as {R_k(x, y)}, the corresponding depth map as {D_k(x, y)}, and the corresponding real semantic segmentation image as {G_k(x, y)}; where k is a positive integer, 1 ≤ k ≤ N, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W represents the width of the original image, and H represents the height of the original image, for example W = 640, H = 480; R_k(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {R_k(x, y)} ...
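The abstract states that the loss is computed against one-hot coded images obtained from the real semantic segmentation maps {G_k(x, y)}. A minimal sketch of that encoding step (the class count and array names are assumptions for illustration, not from the patent):

```python
import numpy as np

def one_hot(label_map, num_classes):
    """(H, W) integer label map -> (num_classes, H, W) one-hot volume."""
    h, w = label_map.shape
    onehot = np.zeros((num_classes, h, w), dtype=np.float32)
    # advanced indexing: set the channel matching each pixel's class to 1
    onehot[label_map, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return onehot

G_k = np.array([[0, 1], [2, 1]])   # toy 2x2 ground-truth label map
oh = one_hot(G_k, num_classes=3)
print(oh.shape)                     # (3, 2, 2)
print(oh.sum(axis=0))               # all ones: exactly one class per pixel
```

In training, the prediction map is compared against this one-hot volume (e.g. with a cross-entropy loss) to optimize the network's weight vectors and bias terms.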



Abstract

The invention discloses a semantic segmentation method based on double-flow feature fusion. The method comprises the steps of: in a training stage, constructing a convolutional neural network comprising an input layer, a hidden layer and an output layer, where the hidden layer comprises an RGB image processing module, a depth image processing module, a fusion module and a first deconvolution layer; inputting the original images into the convolutional neural network for training to obtain corresponding semantic segmentation prediction graphs; calculating a loss function value between the set formed by the semantic segmentation prediction images corresponding to the original images and the set formed by the one-hot coded images obtained from the corresponding real semantic segmentation images, so as to obtain the optimal weight vector and bias term of the convolutional neural network classification training model; and in a test stage, inputting an indoor scene image to be semantically segmented into the convolutional neural network classification training model to obtain a predicted semantic segmentation image. According to the method, the semantic segmentation efficiency and accuracy of indoor scene images are improved.
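The hidden-layer pipeline described in the abstract can be illustrated numerically. In this sketch the feature shapes, the element-wise fusion rule, and the upsampling step are assumptions for illustration; the patent's actual modules are learned convolution and deconvolution layers:

```python
import numpy as np

rng = np.random.default_rng(0)
# features produced by the RGB branch and the depth branch (shapes assumed)
rgb_feat   = rng.standard_normal((64, 15, 20))   # (channels, H/32, W/32)
depth_feat = rng.standard_normal((64, 15, 20))

# fusion module: element-wise addition is one simple fusion rule (assumed)
fused = rgb_feat + depth_feat

# stand-in for a stride-2 deconvolution layer: nearest-neighbour upsampling x2
upsampled = fused.repeat(2, axis=1).repeat(2, axis=2)
print(upsampled.shape)  # (64, 30, 40)
```

The point of the two-branch design is that the depth stream contributes geometric cues that the RGB stream lacks, and the fusion module combines them before resolution is restored for pixel-wise classification.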

Description

Technical Field

[0001] The invention relates to a semantic segmentation method based on a fully convolutional neural network, in particular to a semantic segmentation method based on dual-stream feature fusion.

Background Technique

[0002] Semantic segmentation is a fundamental technique for many computer vision applications, such as scene understanding and autonomous driving. With the development of convolutional neural networks, especially fully convolutional neural networks (FCNs), many promising results have been achieved on benchmarks. An FCN has a typical encoder-decoder structure: semantic information is first embedded into the feature map through the encoder, and the decoder is responsible for generating the segmentation results. Typically, the encoder is a pre-trained convolutional model that extracts image features, and the decoder contains multiple upsampling components to recover resolution. Although the encoder's final feature map may be highly semantic, its insufficient resolution limits its ability to reconstruct precise details in the segmentation map; this limitation is common in modern backbone models.
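The encoder-decoder pattern described above can be sketched with simple numpy operations. The pooling and upsampling used here are crude stand-ins for the learned layers of a real FCN; the point is only how the encoder shrinks spatial resolution and the decoder restores it:

```python
import numpy as np

def encode(x, stages=2):
    """Encoder stand-in: 2x2 max pooling per stage (learned convs in a real FCN)."""
    for _ in range(stages):
        c, h, w = x.shape
        x = x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
    return x

def decode(x, stages=2):
    """Decoder stand-in: nearest-neighbour upsampling (deconvolutions in a real FCN)."""
    for _ in range(stages):
        x = x.repeat(2, axis=1).repeat(2, axis=2)
    return x

img  = np.random.default_rng(1).standard_normal((3, 8, 8))
feat = encode(img)    # (3, 2, 2): semantically rich but low resolution
out  = decode(feat)   # (3, 8, 8): resolution recovered for per-pixel prediction
print(feat.shape, out.shape)
```

Because the low-resolution feature map cannot express fine boundaries, decoders in practice also fuse higher-resolution encoder features, which is the gap the dual-stream fusion method targets.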


Application Information

IPC(8): G06T7/11; G06T7/90; G06N3/04
CPC: G06T7/11; G06T7/90; G06T2207/20081; G06T2207/20084; G06N3/045
Inventor: 周武杰, 吕思嘉, 袁建中, 黄思远, 雷景生
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY