RGB-D image semantic segmentation method based on multi-modal adaptive convolution

An RGB-D semantic segmentation technology, applied in the fields of image semantic segmentation and deep learning, which solves the problem of low accuracy and achieves the effect of improving semantic segmentation accuracy.

Pending Publication Date: 2020-06-26
BEIJING UNIV OF TECH
Cites: 0 · Cited by: 16

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to propose a new RGB-D image semantic segmentation method and system based on multi-modal adaptive convolution, in order to solve the low accuracy of existing RGB-D image semantic segmentation methods.

Method used



Embodiment Construction

[0029] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and the accompanying drawings.

[0030] As shown in Figure 1, an embodiment of the present invention provides an RGB-D image semantic segmentation method based on multi-modal adaptive convolution; Figure 2 shows the specific structure of the RGB-D image semantic segmentation model based on multi-modal adaptive convolution proposed by the present invention. The method mainly includes the following steps:

[0031] 1) Feed the paired RGB image and depth image into the encoding module, and use two identical encoding branches to extract the RGB features and depth features of the image, as follows:

[0032] Use the encoding module to extract the RGB features of the RGB image and the depth features of the depth image. The encoding module is a dual-branch network, and each branch ...
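A minimal sketch of the dual-branch encoding step described in paragraphs [0031]-[0032], assuming simple convolutional branches; the patent does not disclose the branch architecture here, so the class name `DualBranchEncoder`, the layer choices, and the channel widths are all hypothetical placeholders, not the patented design:

```python
import torch
import torch.nn as nn

class DualBranchEncoder(nn.Module):
    """Hypothetical sketch: two structurally identical CNN branches extract
    RGB features and depth features from a paired RGB-D input."""

    def __init__(self, rgb_channels=3, depth_channels=1, feat=64):
        super().__init__()

        def make_branch(in_ch):
            # Each branch downsamples twice and outputs `feat` channels.
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(feat),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )

        # Two identical branches: one for the RGB image, one for the depth image.
        self.rgb_branch = make_branch(rgb_channels)
        self.depth_branch = make_branch(depth_channels)

    def forward(self, rgb, depth):
        return self.rgb_branch(rgb), self.depth_branch(depth)

encoder = DualBranchEncoder()
rgb = torch.randn(1, 3, 64, 64)     # paired RGB image
depth = torch.randn(1, 1, 64, 64)   # paired depth image
f_rgb, f_depth = encoder(rgb, depth)
print(f_rgb.shape, f_depth.shape)   # both (1, 64, 16, 16) after two stride-2 convs
```

The two branches share a structure but not weights, matching the "two identical encoding branches" of step [0031] while letting each modality learn its own filters.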



Abstract

The invention relates to an RGB-D image semantic segmentation method based on multi-modal adaptive convolution. The method comprises the following steps: an encoding module extracts RGB image features and depth image features; the RGB features and depth features are sent to a fusion module for fusion. In the fusion module, the multi-modal features are first input into a multi-modal adaptive convolution generation module, which computes two multi-modal adaptive convolution kernels of different scales; the multi-modal feature fusion module then performs a depthwise separable convolution operation on the RGB features and depth features with the adaptive convolution kernels to obtain adaptive convolution fusion features; these fusion features are concatenated with the RGB features and depth features to obtain the final fusion features. A decoding module successively up-samples the final fusion features and obtains the semantic segmentation result through a convolution operation. In the invention, the multi-modal features interact cooperatively through adaptive convolution, and the convolution kernel parameters are dynamically adjusted according to the input multi-modal image, making the method more flexible than a traditional convolution kernel with fixed parameters.
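The fusion step in the abstract can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented implementation: only a single kernel scale is modeled (the patent generates two kernels of different scales), the pointwise half of the depthwise separable convolution is omitted, and the names `AdaptiveConvFusion` and `kernel_head` are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConvFusion(nn.Module):
    """Hypothetical sketch: a kernel-generation head predicts a per-channel
    depthwise kernel from the pooled multi-modal features; the kernel is then
    applied to both the RGB and depth features, and the resulting fusion
    feature is concatenated with the original features."""

    def __init__(self, channels=64, k=3):
        super().__init__()
        self.k = k
        # Predict one k x k depthwise kernel per channel from the
        # globally pooled RGB + depth features (2*channels inputs).
        self.kernel_head = nn.Linear(2 * channels, channels * k * k)

    def forward(self, f_rgb, f_depth):
        b, c, h, w = f_rgb.shape
        pooled = torch.cat([f_rgb, f_depth], dim=1).mean(dim=(2, 3))  # (b, 2c)
        kernels = self.kernel_head(pooled).view(b * c, 1, self.k, self.k)

        def depthwise(x):
            # Grouped conv trick: fold the batch into channels so each sample
            # is convolved with its own dynamically generated kernel.
            x = x.reshape(1, b * c, h, w)
            out = F.conv2d(x, kernels, padding=self.k // 2, groups=b * c)
            return out.reshape(b, c, h, w)

        fused = depthwise(f_rgb) + depthwise(f_depth)       # adaptive fusion feature
        return torch.cat([fused, f_rgb, f_depth], dim=1)    # final fusion feature (3c channels)

fusion = AdaptiveConvFusion(channels=8, k=3)
out = fusion(torch.randn(2, 8, 16, 16), torch.randn(2, 8, 16, 16))
print(out.shape)  # (2, 24, 16, 16): fused + RGB + depth features concatenated
```

The key point the sketch illustrates is the abstract's flexibility claim: the kernel weights come from `kernel_head`, so they change with every input image pair, unlike a conventional convolution whose kernel is fixed after training.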

Description

technical field

[0001] The invention relates to the fields of image semantic segmentation and deep learning, and in particular to an RGB-D image semantic segmentation method based on convolutional neural networks.

Background technique

[0002] Image semantic segmentation is one of the basic tasks in the fields of artificial intelligence and computer vision. Its purpose is to identify the semantic category of each pixel in an image according to the image content. As the basis of image and video understanding, semantic segmentation is widely used in intelligent applications such as autonomous driving and robot navigation.

[0003] With the wide application of deep learning in computer vision, deep convolutional neural networks have become the most effective method in the field. In 2015, the fully convolutional network pioneered the use of deep learning for end-to-end image feature extraction and per-pixel semantic classification, which greatly improved performance an...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/10; G06N3/084; G06T2207/10004; G06T2207/10024; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06N3/045; G06F18/241
Inventor: Duan Lijuan (段立娟), Sun Qichao (孙启超), Qiao Yuanhua (乔元华)
Owner BEIJING UNIV OF TECH