
Convolutional neural network-based multispectral image semantic segmentation method

A convolutional neural network and multispectral image technology, applied to biological neural network models, image enhancement, neural architectures, etc. It addresses problems such as interference with the segmentation result, loss of spatial information from high-resolution images, and wasted computing time, achieving the effects of improved precision and improved working efficiency while preserving segmentation accuracy.

Active Publication Date: 2018-11-13
Applicant: THE THIRD RES INST OF CHINA ELECTRONICS TECH GRP CORP

AI Technical Summary

Problems solved by technology

If all low-resolution bands are forcibly interpolated and enlarged to match the high-resolution bands, some of the convolution operations on the low-resolution parts become redundant, which not only wastes a large amount of computing time but may also interfere with the segmentation result.
If, instead, the high-resolution bands are down-sampled to match the low-resolution bands, a large amount of the high-resolution spatial information is lost.



Examples


Embodiment 1

[0022] For a multispectral image, the bands are first separated by wavelength and independent convolution operations are then performed on the different bands; that is, each data channel of the multispectral image is convolved independently by a convolutional neural network, and the feature maps obtained from the independent convolutions are then fused (by concatenation or summation). When each data channel is convolved independently, convolution kernels of different sizes and numbers are selected for different bands, and different numbers of convolution layers are likewise selected for different bands. In this implementation, the convolutional neural network adopts the U-NET architecture.
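A minimal PyTorch sketch of this per-band scheme follows. The branch configurations (kernel sizes, channel counts, layer counts) and the class name are illustrative assumptions, not values taken from the patent; only the structure — independent convolution per band followed by concatenation or summation fusion — reflects the text above.

```python
# Per-band independent convolution (Embodiment 1, sketched): each spectral band
# gets its own branch, and the resulting feature maps are fused.
import torch
import torch.nn as nn


class PerBandConv(nn.Module):
    def __init__(self, band_configs, fuse="concat"):
        # band_configs: list of (kernel_size, out_channels, num_layers), one per band.
        # For fuse="sum", all branches must use the same out_channels.
        super().__init__()
        self.fuse = fuse
        branches = []
        for k, c, n in band_configs:
            layers, in_c = [], 1
            for _ in range(n):
                layers += [nn.Conv2d(in_c, c, k, padding=k // 2), nn.ReLU(inplace=True)]
                in_c = c
            branches.append(nn.Sequential(*layers))
        self.branches = nn.ModuleList(branches)

    def forward(self, x):
        # x: (batch, num_bands, H, W); convolve each band independently
        feats = [branch(x[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        if self.fuse == "concat":
            return torch.cat(feats, dim=1)            # channel-wise concatenation
        return torch.stack(feats, dim=0).sum(dim=0)   # element-wise summation


# Usage: a 4-band image with assumed per-band settings.
model = PerBandConv([(3, 16, 2), (3, 16, 2), (5, 8, 1), (7, 8, 1)])
out = model(torch.randn(1, 4, 128, 128))  # -> (1, 48, 128, 128) with "concat"
```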

Embodiment 2

[0024] When the multispectral image has multiple resolutions, on the basis of using the multi-channel independent convolution in the first embodiment, the second implementation adopts the multi-channel independent convolution and multi-resolution input network. like figure 2 As shown, the U-NET network is transformed into a convolutional neural network that supports multiple resolution inputs. Similar to the traditional U-NET network, the network of the present invention consists of a scale shrinkage part and a scale expansion part, and the scale shrinkage part consists of a classical convolutional network. As the level of convolution increases, the image size is pooled with the convolution The number of convolution kernels increases with the increase of pooling times. The scale expansion part is the same as the scale expansion part of the U-NET network. For each upsampling step in the scale expansion part, the scale is doubled and the number of convolution kernels is halved...
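Since the passage above is truncated, the following is only a hedged sketch of the stated idea: a U-NET-style contracting path that accepts inputs at more than one resolution, with full-resolution bands entering at the top level and a half-resolution band concatenated at the level whose spatial size already matches it, so neither interpolation of the low-resolution band nor down-sampling of the high-resolution bands is needed. Layer widths, the injection level, and all names are assumptions for illustration.

```python
# Multi-resolution input contracting path (Embodiment 2, sketched).
import torch
import torch.nn as nn


def conv_block(in_c, out_c):
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
    )


class MultiResContracting(nn.Module):
    def __init__(self, hi_bands=3, lo_bands=1):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.enc1 = conv_block(hi_bands, 32)        # full resolution
        self.enc2 = conv_block(32 + lo_bands, 64)   # half resolution + low-res band injected here
        self.enc3 = conv_block(64, 128)             # quarter resolution

    def forward(self, x_hi, x_lo):
        # x_hi: (B, hi_bands, H, W); x_lo: (B, lo_bands, H/2, W/2)
        f1 = self.enc1(x_hi)
        f2 = self.enc2(torch.cat([self.pool(f1), x_lo], dim=1))
        f3 = self.enc3(self.pool(f2))
        return f1, f2, f3  # skip features for a U-NET expanding path


feats = MultiResContracting()(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 128, 128))
```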



Abstract

The invention relates to a convolutional neural network-based multispectral image semantic segmentation method. The method comprises the following steps: independently convolving each data channel of a multispectral image with a convolutional neural network; and fusing the feature maps obtained after the independent convolution of the data channels. By using a network with multi-resolution input and multi-channel independent convolution, the method effectively overcomes the limitation that a standard U-NET network can accept only a single same-scale RGB/grayscale image, effectively improves the working efficiency of multispectral image semantic segmentation, and preserves segmentation accuracy.

Description

Technical field

[0001] The invention relates to a multispectral image semantic segmentation method based on a convolutional neural network.

Background technique

[0002] Currently, state-of-the-art semantic segmentation frameworks for RGB images commonly employ end-to-end deep convolutional neural networks (DCNNs). A common practice is to reuse pre-trained classification models, most often VGG, ResNet and similar networks. A DCNN for semantic segmentation typically consists of two parts: the first half is a well-established, commonly used DCNN backbone, and the second half is a network that maps its feature maps to pixel labels. To save training samples, the pre-trained model parameters are used directly for the first half, and only the parameters of the second half are fine-tuned.

[0003] At present, the representative image semantic segmentation network is the fully convolutional network (FCN), and the initial vers...
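As a hedged illustration of the two-part DCNN pattern described in paragraph [0002] (not the patent's own network): a pre-trained classification backbone is kept frozen while only a small head that maps feature maps to per-pixel labels is trained. The sketch uses torchvision's VGG16 features as one common backbone choice and assumes a recent torchvision weights API; the class name and number of classes are placeholders.

```python
# Pre-trained backbone + trainable segmentation head (generic RGB case).
import torch
import torch.nn as nn
from torchvision.models import vgg16


class SimpleSegNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features  # pre-trained first half
        for p in self.backbone.parameters():
            p.requires_grad = False                               # reuse pre-trained weights as-is
        self.head = nn.Conv2d(512, num_classes, kernel_size=1)    # trainable second half

    def forward(self, x):
        f = self.backbone(x)                  # (B, 512, H/32, W/32)
        logits = self.head(f)                 # per-pixel class scores at low resolution
        return nn.functional.interpolate(     # upsample back to the input size
            logits, size=x.shape[-2:], mode="bilinear", align_corners=False)
```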

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/10; G06N 3/04
CPC: G06T 7/10; G06T 2207/20084; G06T 2207/10036; G06N 3/045
Inventors: 李含伦, 戴玉成, 张小博, 张晓灿, 唐文
Owner: THE THIRD RES INST OF CHINA ELECTRONICS TECH GRP CORP