Image segmentation model training method and device, image segmentation method and device, equipment and medium

An image segmentation and model training technology, applied in the field of image processing, which addresses problems such as difficult training, unsatisfactory segmentation results, and overly simplistic upsampling, and achieves the effects of improved output resolution, improved segmentation accuracy, and better preservation of detail information.

Pending Publication Date: 2020-11-13
SPREADTRUM COMM (SHANGHAI) CO LTD

AI Technical Summary

Problems solved by technology

Early semantic segmentation algorithms used FCN (fully convolutional network), an end-to-end convolutional neural network architecture. This algorithm can generate segmentation maps for images of any size, but its problem is that convolution operations at the original image resolution are computationally expensive.
The traditional interpolation method is the simplest and most commonly used: it directly upsamples the feature map bilinearly to a specified multiple. Its disadvantage is that it is too simple, has no parameters to learn, and easily introduces artificial errors; when the upsampling factor is large, the segmentation result is poor. The advantage of transposed convolution is that it can be trained, but its disadvantages are the zero-padding operation, the ease with which artificial errors are introduced, a less than ideal effect, and greater difficulty in training.
Therefore, whether the decoder of the image segmentation model uses an interpolation operation or a transposed convolution operation for upsampling, a satisfactory image segmentation effect cannot be achieved.
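
For concreteness, the two conventional upsampling routes criticized above can be sketched as follows. This is an illustrative PyTorch snippet (the patent does not name a framework); the channel count and the 2x factor are arbitrary assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Feature map from a decoder stage: batch 1, 64 channels, 32x32 spatial size.
feat = torch.randn(1, 64, 32, 32)

# Route 1: parameter-free bilinear interpolation, here upsampling by 2x.
# Simple, but nothing is learned, and large factors blur the segmentation.
up_interp = F.interpolate(feat, scale_factor=2, mode="bilinear", align_corners=False)

# Route 2: transposed convolution, a learnable 2x upsampling.
# Trainable, but the implicit zero padding can introduce artifacts
# and makes training harder.
up_transposed = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)(feat)

print(up_interp.shape, up_transposed.shape)  # both: torch.Size([1, 64, 64, 64])
```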

Examples


Embodiment 1

[0059] Based on the implementation environment shown in Figure 1, this embodiment provides an image segmentation model training method. As shown in Figure 2, the method includes the following steps:

[0060] S11. Acquire a sample data set, where the sample data set includes several training images and each of the training images is labeled with a corresponding segmentation label.

[0061] For example, when the trained image segmentation model is mainly used for portrait segmentation, the sample data set may be EG1800, a well-known data set in the field of portrait segmentation. Its images are manually labeled at the pixel level and come mainly from mobile-phone selfies; the 1800 pictures are divided into two parts, 1600 for the training set and 200 for the test set.

[0062] Of course, the sample data set in this embodiment is not limited to the use of the data set EG1800, and other existing image data sets can also be used as needed, or the sample data set can be obtained by pixel-level labeli...
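
Purely as an illustration of step S11, a sample-data-set loader for EG1800-style portrait data might look like the following; the images/ and masks/ directory layout, the matching file names, and the use of PyTorch are assumptions made for this sketch, not part of the patent:

```python
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class PortraitSegDataset(Dataset):
    """Sample data set for step S11: each training image is paired with a
    pixel-level segmentation label (mask). Layout and naming are assumptions."""

    def __init__(self, root):
        self.image_dir = os.path.join(root, "images")  # hypothetical layout
        self.mask_dir = os.path.join(root, "masks")
        self.names = sorted(os.listdir(self.image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        image = torch.from_numpy(np.array(image)).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(np.array(mask)).long()  # per-pixel class index
        return image, mask
```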

Embodiment 2

[0077] With the widespread use of mobile terminals, more and more applications require image segmentation on mobile terminals. Mainstream segmentation networks such as U-Net (a classic network in medical image segmentation that uses an encoder-decoder structure and skip connections), the DeepLab series (a series of models proposed for semantic segmentation tasks that mainly use deep convolution, probabilistic graphical models and atrous convolution for segmentation), and Mask R-CNN (a network model that extracts masks from the candidate regions of a convolutional network) focus only on accuracy rather than efficiency, so they are not suitable for fast segmentation on mobile terminals. The emergence of the MobileNets model (a lightweight network that can be trained on mobile terminals and mainly uses depthwise separable convolution to improve efficiency) provides an efficient neural network model for mobile vision applications, which is very important for successful applications of machine learning on mobile ...
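
The depthwise separable convolution that gives MobileNets its efficiency can be sketched as below; this is an illustrative PyTorch block, with channel counts and normalization choices assumed for the example rather than taken from the patent's own network:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNets-style building block: a per-channel (depthwise) 3x3 convolution
    followed by a 1x1 pointwise convolution, which costs far fewer multiply-adds
    than a standard 3x3 convolution over all channels."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x
```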

Embodiment 3

[0089] This embodiment provides an image segmentation method. As shown in Figure 10, the method specifically includes the following steps:

[0090] S21, acquiring an image to be segmented;

[0091] S22. Process the image to be segmented based on the image segmentation model trained by the method described in embodiment 1 or embodiment 2, to obtain a target segmentation result of the image to be segmented.

[0092] In this embodiment, by using the image segmentation model trained in Embodiment 1 or Embodiment 2 to perform image segmentation, an image segmentation result with finer boundary information can be obtained.
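
As an illustration of steps S21 and S22 only, inference with an already trained segmentation model might be written as follows; PyTorch is an assumption here, and `model` stands for a hypothetical network trained as in Embodiment 1 or 2:

```python
import torch

def segment_image(model, image_tensor):
    """S21/S22: take an image to be segmented and return a per-pixel label map.
    `model` is assumed to be a trained encoder-decoder segmentation network
    and `image_tensor` a preprocessed (C, H, W) tensor."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))       # (1, num_classes, H, W)
        target_segmentation = logits.argmax(dim=1)[0]   # (H, W) class per pixel
    return target_segmentation
```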

[0093] It should be noted that, for the sake of simple description, the foregoing embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described action sequence, because, according to the present invention, certain steps may be performed...



Abstract

The invention provides an image segmentation model training method and device, an image segmentation method and device, equipment and a medium. The training method comprises the steps of: obtaining a sample data set which comprises a plurality of training images, wherein each training image is marked with a corresponding segmentation label; and training a pre-established image segmentation model according to the sample data set, the image segmentation model comprising an encoder and a decoder, and the decoder performing up-sampling by using a sub-pixel convolution network. According to the invention, sub-pixel convolution is used to replace the interpolation or transposed convolution operations commonly used in the decoder of an image segmentation model to carry out up-sampling; the problem that traditional interpolation or transposed convolution operations introduce too many artificial errors can thus be solved, and the image segmentation precision is improved.
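
The sub-pixel convolution upsampling described in the abstract corresponds to the pixel-shuffle operation available in common frameworks; the following is a minimal PyTorch sketch of such a decoder block, where the 2x factor and channel counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Sub-pixel convolution upsampling: a learnable convolution produces
    r*r times as many channels, and PixelShuffle rearranges them into an
    r-times larger feature map, avoiding the zero padding of transposed
    convolution and the fixed kernels of interpolation."""

    def __init__(self, in_ch, out_ch, upscale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * upscale * upscale,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

feat = torch.randn(1, 64, 32, 32)
print(SubPixelUpsample(64, 32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```

Replacing the decoder's interpolation or transposed convolution with such a block keeps the upsampling learnable without the zero-padding operation that transposed convolution relies on, which is the improvement the abstract claims.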

Description

technical field

[0001] The present invention relates to the field of image processing, and in particular to an image segmentation model training method, an image segmentation method, and a device, equipment and medium.

Background technique

[0002] The image semantic segmentation task refers to assigning a semantic label to each pixel in an image. Early semantic segmentation algorithms used FCN (fully convolutional network), an end-to-end convolutional neural network architecture. This algorithm can generate segmentation maps for images of any size, but its problem is that convolution operations at the original image resolution are computationally expensive. In order to solve this problem, FCN adopts down-sampling and up-sampling processing, but operations such as pooling lose a large amount of information, causing FCN to generate a rough segmentation map. In order to obtain a more efficient segmentation effect, a U-shaped framework including an encoder and a de...
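
To illustrate the U-shaped encoder-decoder framework referred to in the background (not the patent's own network), a toy PyTorch skeleton with one skip connection, using plain bilinear upsampling in the decoder as conventional designs do, might look like this; all layer sizes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Illustrative U-shaped network: the encoder downsamples to cheapen
    convolutions, the decoder upsamples back, and a skip connection reinjects
    the detail information lost by pooling."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = nn.Conv2d(3, 16, 3, padding=1)
        self.enc2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.dec = nn.Conv2d(32 + 16, 16, 3, padding=1)
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        s1 = F.relu(self.enc1(x))                 # full-resolution features
        s2 = F.relu(self.enc2(self.pool(s1)))     # half-resolution features
        up = F.interpolate(s2, scale_factor=2, mode="bilinear",
                           align_corners=False)   # conventional decoder upsampling
        out = F.relu(self.dec(torch.cat([up, s1], dim=1)))  # skip connection
        return self.head(out)                     # per-pixel class scores
```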

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/10
CPC: G06T7/10; G06T2207/20081; G06T2207/20084
Inventor: 宋苗, 张海涛
Owner: SPREADTRUM COMM (SHANGHAI) CO LTD