Semantic segmentation model training method and device, electronic equipment and storage medium

A semantic segmentation model training technology, applied in the field of deep learning, which addresses the problems that weak labels are unfavorable for semantic segmentation model training and that labeling costs are high, achieving good weakly supervised model performance

Pending Publication Date: 2021-10-29
INST OF AUTOMATION CHINESE ACAD OF SCI


Problems solved by technology

[0004] However, the existing image-level category labeling method lacks the spatial location information of targets, which is not conducive to training the semantic segmentation model, while the target box labeling method needs to label objects of the same category multiple times, so the labeling cost remains high



Examples


First Example

[0076] As shown in Figure 2, the training method of a semantic segmentation model provided by the embodiment of the present application includes:

[0077] Step 201: Construct an image training set with sparse point annotations for semantic segmentation model training, containing images I and their corresponding sparse point labels Y, where |Y| is the number of sparse point labels corresponding to the image. Each sparse point label consists of a two-dimensional coordinate (h_k, w_k) describing the position of the point and a class label L_k describing the class of the point. An image can contain an arbitrary number of sparse point labels, as shown in Figure 3;
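The sparse point annotation of Step 201 can be sketched as a simple data structure. The class and field names below are illustrative, not from the patent text; only the shape of a label, a coordinate (h_k, w_k) paired with a class L_k, follows the description above.

```python
import numpy as np

# Hypothetical container for one training sample: an image I and its sparse
# point labels Y. Each label pairs a coordinate (h_k, w_k) with a class L_k.
class SparsePointSample:
    def __init__(self, image, point_labels):
        self.image = image                       # H x W x 3 image array
        self.point_labels = list(point_labels)   # [((h_k, w_k), L_k), ...]

    def __len__(self):
        return len(self.point_labels)            # |Y|, number of sparse labels

# An image may carry any number of sparse point labels.
sample = SparsePointSample(np.zeros((4, 4, 3)), [((0, 1), 2), ((3, 3), 0)])
```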

[0078] Step 202: Construct a semantic segmentation model, which can adopt any deep model architecture whose parameters are updated and learned via gradient backpropagation. In this embodiment, the semantic segmentation model used for training includes a basic network model f = F(I; θ_f), used to m...
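A natural first loss for a model supervised only at sparse points is cross-entropy evaluated only at the labeled pixels (partial cross-entropy). This is a minimal sketch of that idea under the label format above; the patent's actual first loss function is not specified in this excerpt.

```python
import numpy as np

def sparse_point_loss(pred_logits, point_labels):
    """Cross-entropy evaluated only at the sparsely labeled pixels.

    pred_logits: C x H x W class scores from the basic network f = F(I; theta_f).
    point_labels: [((h, w), class_id), ...] sparse point labels.
    """
    losses = []
    for (h, w), c in point_labels:
        z = pred_logits[:, h, w]
        z = z - z.max()                      # numerically stable log-softmax
        logp = z - np.log(np.exp(z).sum())
        losses.append(-logp[c])
    return float(np.mean(losses))

# Uniform logits over 3 classes give loss = ln(3) at every labeled point.
loss = sparse_point_loss(np.zeros((3, 4, 4)), [((0, 0), 1), ((2, 3), 0)])
```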

Second Example

[0087] In this embodiment, optionally, step 204 in the above embodiment can also compute a superpixel set R from the original input image I, where |R| is the number of superpixels and each superpixel r_i is a set containing |r_i| pixels. From the output prediction s of the semantic segmentation model in step 202, the category of each superpixel is computed according to Formula 4, yielding the dense pixel-level pseudo-labels; these pseudo-labels are combined with the segmentation loss function for model training, where Y'(r_i) denotes the category to which r_i belongs;

[0088]
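The superpixel pseudo-labeling step can be sketched as follows. The argmax-over-aggregated-scores form of assigning Y'(r_i) is an assumption, since the image of Formula 4 is not reproduced in this excerpt.

```python
import numpy as np

def superpixel_pseudo_labels(s, segments):
    """Assign each superpixel the class with the highest aggregated prediction,
    then broadcast it to every pixel of the superpixel, giving dense
    pixel-level pseudo-labels Y'.

    s: C x H x W prediction of the segmentation model.
    segments: H x W superpixel id map (e.g. from a SLIC-style algorithm).
    """
    pseudo = np.zeros(segments.shape, dtype=np.int64)
    for r in np.unique(segments):
        mask = segments == r
        scores = s[:, mask].sum(axis=1)       # aggregate s over pixels of r_i
        pseudo[mask] = int(scores.argmax())   # Y'(r_i)
    return pseudo

segments = np.array([[0, 0], [1, 1]])
s = np.array([[[0.9, 0.8], [0.1, 0.2]],      # class-0 scores
              [[0.1, 0.2], [0.9, 0.8]]])     # class-1 scores
pseudo = superpixel_pseudo_labels(s, segments)
```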

[0089] In this embodiment, optionally, step 205 in the above embodiment can also perform contrastive learning according to Formula 3, using the middle-layer feature expression e of the semantic segmentation model in step 202 combined with the pixel-level pseudo-labels obtained in step 204 for model training, the par...
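One common way to realize pixel-level contrastive learning from pseudo-labels is a prototype-based InfoNCE term: each pixel embedding is pulled toward the prototype of its pseudo-label class and pushed away from other class prototypes. This prototype form is an assumption for illustration; the patent's Formula 3 is not reproduced in this excerpt.

```python
import numpy as np

def prototype_contrastive_loss(e, pseudo, tau=0.1):
    """Sketch of a pixel-level contrastive term over middle-layer features
    e (D x H x W), supervised by dense pseudo-labels (H x W)."""
    D = e.shape[0]
    feats = e.reshape(D, -1).T                               # N x D embeddings
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    labels = pseudo.reshape(-1)
    classes = np.unique(labels)
    # One prototype per pseudo-label class: mean of its pixel embeddings.
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    protos = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    sim = feats @ protos.T / tau                             # N x K similarities
    sim = sim - sim.max(axis=1, keepdims=True)               # stable softmax
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    idx = np.searchsorted(classes, labels)                   # own-class prototype
    return float(-logp[np.arange(len(labels)), idx].mean())

rng = np.random.default_rng(0)
pseudo = np.arange(16).reshape(4, 4) % 3                     # 3 classes present
loss = prototype_contrastive_loss(rng.normal(size=(8, 4, 4)), pseudo)
```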


Abstract

The embodiment of the invention discloses a semantic segmentation model training method and device, electronic equipment and a storage medium. The method comprises the steps of: updating the model parameters of the semantic segmentation model for the first time based on a first loss function of the semantic segmentation model evaluated at the sparse point label positions in the semantic segmentation result; obtaining a dense pixel-level pseudo-label corresponding to the image based on the semantic segmentation result and the sparse point labels corresponding to the image, and updating the model parameters of the semantic segmentation model for the second time based on the dense pixel-level pseudo-label and a second loss function of the semantic segmentation model; and updating the model parameters of the semantic segmentation model for the third time based on the feature data of the image, the dense pixel-level pseudo-label and a third loss function of the semantic segmentation model. According to the embodiment of the invention, the information implied in sparse point labeling is fully exploited to train the deep model, so that relatively good weakly supervised model performance is obtained at the lowest possible labeling cost.

Description

Technical Field

[0001] The present application relates to the field of deep learning technology, and in particular to a training method, device, electronic equipment and storage medium for a semantic segmentation model.

Background

[0002] Semantic segmentation divides an input image into multiple regions with semantic information to realize perceptual understanding of the visual scene; it is a valuable and difficult computer vision task.

[0003] Traditional semantic segmentation model training relies on a large amount of manually annotated pixel-level label data. To alleviate the high cost of pixel-level label data, weakly supervised semantic segmentation models try to use only coarse weakly supervised annotations for training. Common forms of weakly supervised labeling include image-level category labeling, target box labeling, etc.

[0004] However, the existing image-level category labeling method...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045, G06F18/24
Inventors: 张兆翔, 谭铁牛, 樊峻菘
Owner INST OF AUTOMATION CHINESE ACAD OF SCI