
Division and identification method and device based on dense network image

A dense-network-based segmentation and recognition technology, applied in image analysis, image coding, and image data processing, addressing problems such as low output resolution, the heavy memory cost of pooling-free architectures, and the inability to obtain high-resolution segmentation results.

Active Publication Date: 2018-05-22
SHENZHEN UNIV

AI Technical Summary

Problems solved by technology

With the development of the field, the classic network structure for object segmentation became the fully convolutional network (Fully Convolutional Networks, FCN), and many later networks have been improvements on it. The fully convolutional network is divided into two parts, encoding and decoding: the encoder extracts convolutional features, and the decoder restores the size of the original image, which largely avoids the resolution limitation of conventional convolutional neural networks. However, its output segmentation map is recovered from features at only 1/32 of the original image size, so the resulting resolution is still low and a high-resolution segmentation result map cannot be obtained.
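As a rough illustration of the encoder-decoder idea described above (not the architecture claimed by this patent), the following PyTorch sketch pairs a pooling-based encoder with a transposed-convolution decoder; all layer and channel sizes are placeholder assumptions.

```python
# Illustrative sketch of a minimal FCN-style encoder-decoder.
# The encoder downsamples with pooling; the decoder upsamples back
# toward the input size. Sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        # Encoder: convolutions + pooling extract features and shrink the map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        # Decoder: transposed convolutions restore the original size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    x = torch.randn(1, 1, 64, 64)
    print(TinyFCN()(x).shape)  # torch.Size([1, 2, 64, 64])
```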
To address this problem, a network called DeepLab, based on dilated convolution, was developed for image segmentation. Unlike the previous structures it uses no pooling operation, so the size of the feature maps does not shrink and the resolution is not reduced; however, because convolutions must be performed on high-resolution, high-dimensional feature maps, a large amount of computing memory is required, which places high demands on GPU performance.
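For comparison, a dilated (atrous) convolution of the kind DeepLab relies on keeps the spatial size of the feature map, at the cost of convolving full-resolution, high-dimensional tensors; the snippet below is an illustrative sketch, with the channel and image sizes chosen arbitrarily.

```python
# Illustrative sketch: a dilated convolution enlarges the receptive field
# without pooling, so spatial resolution is preserved, but the convolution
# runs on the full-resolution feature map (higher memory cost).
import torch
import torch.nn as nn

x = torch.randn(1, 64, 128, 128)                 # full-resolution feature map
conv = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
y = conv(x)
print(y.shape)                                   # torch.Size([1, 64, 128, 128])
```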
[0004] In summary, the application of deep learning to medical image segmentation still faces many limitations, and no effective solution has been proposed.

Method used



Examples


Embodiment 1

[0053] Referring to Figure 1 and Figure 2, the segmentation and recognition method based on the dense network image proposed in this embodiment specifically includes the following steps:

[0054] Step S101: use a convolutional layer as a feature extractor to capture the hierarchical features of the dense network image, and input the obtained multi-scale feature maps into the encoding module; at the same time, use skip connections to connect the feature maps of different scales with the feature maps of the corresponding scales in the encoding module, obtaining multiple feature maps.
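A minimal sketch of such a skip connection is shown below; the channel counts, the 1x1 fusion convolution, and the bilinear resizing are assumptions for illustration, not the patent's exact configuration.

```python
# Illustrative sketch: concatenate an extractor feature map with the
# encoder feature map at the corresponding scale, then fuse with a 1x1 conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipConcat(nn.Module):
    def __init__(self, feat_ch, enc_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + enc_ch, out_ch, kernel_size=1)

    def forward(self, feat, enc):
        # Resize the encoder map to the skip feature's size, then concatenate.
        enc = F.interpolate(enc, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        return self.fuse(torch.cat([feat, enc], dim=1))

feat = torch.randn(1, 32, 56, 56)   # feature map from the extractor
enc = torch.randn(1, 64, 28, 28)    # encoder map at a coarser scale
print(SkipConcat(32, 64, 32)(feat, enc).shape)  # torch.Size([1, 32, 56, 56])
```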

[0055] Step S102: use the encoding module to down-sample the feature maps and extract semantic features, and use the decoding module to up-sample the feature maps and recover detailed information, obtaining multi-scale feature maps.
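The sketch below illustrates this down-sample/up-sample pattern and collects the intermediate maps as a multi-scale set; the layer sizes and the use of bilinear interpolation are assumptions, not the claimed design.

```python
# Illustrative sketch: encoder convolutions with pooling extract semantics,
# decoder convolutions with upsampling recover detail; intermediate maps
# are kept as a multi-scale feature set.
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode_decode(x, enc_convs, dec_convs):
    scales = []
    for conv in enc_convs:                       # encoder: conv + downsample
        x = F.relu(conv(x))
        x = F.max_pool2d(x, 2)
        scales.append(x)
    for conv in dec_convs:                       # decoder: upsample + conv
        x = F.interpolate(x, scale_factor=2, mode="bilinear",
                          align_corners=False)
        x = F.relu(conv(x))
        scales.append(x)
    return scales                                # multi-scale feature maps

enc = nn.ModuleList([nn.Conv2d(1, 16, 3, padding=1),
                     nn.Conv2d(16, 32, 3, padding=1)])
dec = nn.ModuleList([nn.Conv2d(32, 16, 3, padding=1),
                     nn.Conv2d(16, 16, 3, padding=1)])
maps = encode_decode(torch.randn(1, 1, 64, 64), enc, dec)
print([m.shape[-1] for m in maps])               # [32, 16, 32, 64]
```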

[0056] Step S103: Perform a chained residual pooling operation on the output feature map obtained by the dense deconvolution operation, wherein the output feature map...
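Chained residual pooling is commonly realized (for example in RefineNet-style networks) as a chain of stride-1 pooling and convolution blocks whose outputs are summed residually onto the input; the sketch below follows that general pattern, with kernel sizes and block count chosen as assumptions rather than the patent's specification.

```python
# Illustrative sketch of chained residual pooling: stride-1 pooling keeps
# the spatial size while gathering local context; each pooled-and-convolved
# path is added back onto the running output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChainedResidualPooling(nn.Module):
    def __init__(self, channels, num_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_blocks)
        )

    def forward(self, x):
        out = F.relu(x)
        path = out
        for conv in self.blocks:
            # 5x5 pooling with stride 1 and padding 2 preserves the map size.
            path = F.max_pool2d(path, kernel_size=5, stride=1, padding=2)
            path = conv(path)
            out = out + path          # residual accumulation
        return out

x = torch.randn(1, 64, 32, 32)
print(ChainedResidualPooling(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```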

Embodiment 2

[0075] Referring to Figure 3, Figure 4 and Figure 5, this embodiment provides a segmentation and recognition device based on a dense network image, which includes: a convolution module 1, used to employ a convolutional layer as a feature extractor to capture the hierarchical features of a dense network image and input the obtained multi-scale feature maps into the encoding module, while using skip connections to connect feature maps of different scales with the feature maps of the corresponding scales in the encoding module to obtain multiple feature maps; a multi-scale feature map acquisition module 2, used to down-sample the feature maps and extract semantic features with the encoding module, and to up-sample the feature maps and recover detailed information with the decoding module, obtaining multi-scale feature maps; and a pooling operation module 3, used to perform chained residual pooling on the output feature map obtained by the dense deconvolution operation...
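A composition sketch of such a device is given below; the three sub-modules mirror the names in this embodiment, but their internals are left abstract and the wiring is an assumption for illustration.

```python
# Illustrative composition sketch: the device chains a feature extractor,
# an encoder-decoder that yields multi-scale maps, and a chained residual
# pooling stage. Internals are placeholders.
import torch
import torch.nn as nn

class SegmentationDevice(nn.Module):
    def __init__(self, extractor, encoder_decoder, pooling):
        super().__init__()
        self.extractor = extractor                # convolution module (1)
        self.encoder_decoder = encoder_decoder    # multi-scale feature map module (2)
        self.pooling = pooling                    # pooling operation module (3)

    def forward(self, x):
        feats = self.extractor(x)
        feats = self.encoder_decoder(feats)
        return self.pooling(feats)

# Placeholder sub-modules just to show the wiring end to end.
device = SegmentationDevice(nn.Identity(), nn.Identity(), nn.Identity())
print(device(torch.randn(1, 1, 8, 8)).shape)      # torch.Size([1, 1, 8, 8])
```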



Abstract

The invention provides a segmentation and recognition method and device based on a dense network image, and relates to the technical field of neural networks. The segmentation and recognition method based on the dense network image comprises the following steps: first, capturing the hierarchical features of the dense network image by using a convolutional layer as a feature extractor; inputting the obtained multi-scale feature maps into an encoding module; meanwhile, connecting feature maps of different scales with the feature maps of the corresponding scales in the encoding module by means of skip connections; performing down-sampling and semantic feature extraction on the feature maps with the encoding module; performing up-sampling on the feature maps with a decoding module and recovering detailed information to obtain the multi-scale feature maps; then performing a chained residual pooling operation on the output feature map obtained by the dense deconvolution operation, so as to improve the accuracy of boundary segmentation and capture local contextual information; and adding deep supervision to three boundary refinement blocks in the encoding module and carrying out the above steps to improve the accuracy of medical image segmentation.
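The deep supervision mentioned here is typically implemented by attaching auxiliary prediction heads to intermediate feature maps and adding weighted auxiliary losses to the main loss; the sketch below shows that general pattern, with the loss weight and head shapes as assumptions rather than the patent's specification.

```python
# Illustrative sketch of deep supervision: auxiliary predictions from
# intermediate blocks are upsampled and contribute weighted loss terms
# alongside the main segmentation output.
import torch
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    # Main loss on the full-resolution prediction.
    loss = F.cross_entropy(main_logits, target)
    for aux in aux_logits_list:
        # Upsample each auxiliary prediction to the target size before the loss.
        aux = F.interpolate(aux, size=target.shape[-2:], mode="bilinear",
                            align_corners=False)
        loss = loss + aux_weight * F.cross_entropy(aux, target)
    return loss

target = torch.randint(0, 2, (1, 64, 64))              # dummy label map
main = torch.randn(1, 2, 64, 64)                       # main prediction
aux = [torch.randn(1, 2, 16, 16) for _ in range(3)]    # three auxiliary blocks
print(deep_supervision_loss(main, aux, target).item())
```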

Description

Technical field
[0001] The invention relates to the technical field of neural networks, and in particular to a segmentation and recognition method and device based on dense network images.
Background technique
[0002] Deep learning is a machine learning method based on representation learning of data. Its motivation is to establish and simulate the neural network of the human brain for analysis and learning, that is, to imitate the mechanism by which the human brain interprets data such as images, sounds and text. In practice, deep learning combines low-level features to form more abstract high-level features that represent attribute categories or characteristics, so as to discover distributed feature representations of the data.
[0003] In recent years, with the remarkable performance of deep learning in the field of object recognition, more and more researchers have applied deep learning to the field of medical image segmentation. The traditional multi-layer nonlinear convolutional neural network has good gener...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/10; G06T9/00; G06K9/46
CPC: G06T7/10; G06T9/00; G06V10/462
Inventor: 雷柏英, 汪天富, 秦璟, 李航, 何鑫子, 倪东
Owner SHENZHEN UNIV