
Image segmentation method based on convolutional network

An image segmentation technology based on a convolutional network, applied in biological neural network models, instruments, character and pattern recognition, etc., which addresses the problems of ignored real-time requirements, excessive category information in complex scenes, and insufficiently clear semantic object contours.

Pending Publication Date: 2020-09-01
SUZHOU UNIV

AI Technical Summary

Problems solved by technology

[0008] At present, convolutional networks have achieved excellent results in image semantic segmentation research, but many methods pursue segmentation accuracy at the expense of real-time requirements. At the same time, complex scenes contain many categories of information, and the contours of semantic objects are not clear enough, which affects the accuracy and adaptability of semantic segmentation.



Examples


Embodiment 1

[0101] Step 1. Image dataset preprocessing.

[0102] The Cityscapes image dataset is used. It contains 5000 images — 2975 for training, 500 for validation and 1525 for testing — at a resolution of 1024×2048, annotated with 34 segmentation categories. Because some categories occupy too small a share of the dataset, their metrics are computed as 0 when the segmentation results are tested, which distorts the overall evaluation; therefore only 11 categories are used in training. By calculation, these 11 categories account for more than 90% of all pixels: road (Road), sidewalk (Sidewalk), building (Building), vegetation (Vegetation), sky (Sky), terrain (Terrain), person (Person), car (Car), bicycle (Bicycle), pole (Pole) and bus (Bus). In addition, the images in the training set were flipped left and right to expand the dataset, yielding 5950 images, and then the image size w...
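The left-right flip augmentation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `flip_augment` is a hypothetical helper that keeps each training image together with its horizontally mirrored copy, doubling the training set (2975 → 5950 images).

```python
import numpy as np

def flip_augment(images):
    """Keep each image plus its left-right mirrored copy (a sketch of the
    augmentation described in the text, doubling the dataset size)."""
    out = []
    for img in images:
        out.append(img)
        out.append(img[:, ::-1].copy())  # flip along the width axis of (H, W, C)
    return out

# Toy example with two tiny "images" of shape (H=1, W=3, C=2).
imgs = [np.arange(6).reshape(1, 3, 2) for _ in range(2)]
augmented = flip_augment(imgs)
```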

Embodiment 2

[0114] This embodiment differs from Embodiment 1 in step 2, where the convolutional network is designed and trained; here a multi-resolution strategy is used for network training. The data are first scaled to three resolutions:

[0115] full resolution 512×1024, half resolution 256×512, and three-quarter resolution 384×768. The half-resolution dataset is trained first, and the resulting network parameters are used to initialize training on the three-quarter-resolution dataset; finally the network is trained on the full-resolution dataset. On the one hand, the different resolutions indirectly expand the dataset; at the same time, encouraging the same image region to take the same label at different resolutions strengthens the interaction between pixels. Other steps and parameters are the same as in Embodiment 1.
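The progressive training schedule above can be sketched in a few lines. Note that `train_one_stage` is a hypothetical stand-in for the actual LBNet training loop — here it only records the resolution and warm-start it was given, to show how the three stages chain together.

```python
# Resolutions from the text, in training order: half, three-quarter, full.
RESOLUTIONS = [(256, 512), (384, 768), (512, 1024)]

def train_one_stage(resolution, init_params):
    """Hypothetical placeholder for one training stage: train at `resolution`,
    warm-started from `init_params`, and return the resulting parameters."""
    return {"trained_at": resolution, "warm_start": init_params}

params = None  # random initialization for the first (half-resolution) stage
for res in RESOLUTIONS:
    params = train_one_stage(res, params)
# The final parameters were trained at full resolution, warm-started from
# the three-quarter-resolution stage, which was warm-started from half.
```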

Embodiment 3

[0117] This embodiment differs from Embodiments 1 and 2 in the model optimization and improvement of step 4: the optimized network model parameters from step 4 are processed so that the BN-layer parameters are folded into the convolutional layer, which speeds up inference of the network model. The BN layer is commonly used in the training phase of a network: by batch-normalizing the input data, it accelerates convergence and avoids the problems of vanishing and exploding gradients. The parameters are merged as follows: assume a convolutional layer of the network has trained weight W and bias b, so the convolution can be written as Y = WX + b, where X is the input from the previous layer. Let the mean in the BN layer be μ, the variance δ, the scaling fac...
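The folding step can be sketched with the standard Conv+BN fusion formula. With Y = WX + b followed by BN(Y) = γ·(Y − μ)/√(δ + ε) + β, the fused layer is Y′ = W′X + b′ with W′ = (γ/√(δ + ε))·W and b′ = (γ/√(δ + ε))·(b − μ) + β. The code below is an illustrative sketch under these standard definitions (ε, γ and β are the usual BN parameters, not symbols from the truncated text):

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mu, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution (standard
    fusion formula). `W` has shape (out_ch, in_ch, kh, kw); the per-channel
    scale is broadcast over the remaining weight axes."""
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    W_fused = W * scale.reshape(-1, 1, 1, 1)
    b_fused = (b - mu) * scale + beta
    return W_fused, b_fused

# Toy check on a 1x1 convolution applied to a single pixel's channels:
# the fused layer must reproduce conv followed by BN exactly.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 1, 1))
b = rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mu, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)

x = rng.normal(size=3)                           # input channels at one pixel
y_bn = gamma * (W[:, :, 0, 0] @ x + b - mu) / np.sqrt(var + 1e-5) + beta

Wf, bf = fuse_conv_bn(W, b, gamma, beta, mu, var)
y_fused = Wf[:, :, 0, 0] @ x + bf
```

Because the fused layer computes the same output with one matrix multiply and one add, the BN layer can be dropped entirely at inference time, which is the speedup the embodiment describes.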



Abstract

The invention discloses an image segmentation method based on a convolutional network, comprising the steps of: 1, preprocessing the data; 2, designing a convolutional network model, called the LBNet network, improved mainly from the ENet network; 3, training and validating the model; 4, optimizing and improving the model, continuously adjusting its hyper-parameters according to the measurement results on the test set in step 3 to optimize the parameters of the convolutional network model established in step 2; and 5, using the model: testing with the finally optimized model obtained in step 4. The method has the beneficial effect that the convolutional network uses an improved ENet network as its backbone, with the original ENet structure modified during implementation.

Description

Technical field

[0001] The invention relates to the field of image segmentation methods, and in particular to an image segmentation method based on a convolutional network.

Background technique

[0002] In computer vision, an image is a collection of distinct pixels. Image segmentation divides pixels with similar characteristics into several disjoint pixel blocks; its purpose is not only to simplify the information the image expresses, but also to make the image easier to understand and analyze. Image segmentation plays a key role in analyzing and understanding image information. Current image segmentation has achieved many results, and commonly used methods include edge segmentation, thresholding, clustering and deep learning. Image segmentation can greatly advance the development of new technologies such as automated medical diagnosis and autonomous driving. For example, in medical image processing, it is nece...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34, G06K9/62, G06N3/04
CPC: G06V10/267, G06N3/045, G06F18/253, G06F18/214, Y02T10/40
Inventor: 陈虹, 连博博
Owner: SUZHOU UNIV