Image semantic segmentation method based on PU-Learning

A semantic segmentation and image technology applied in the field of computer vision, which addresses the problems that segmentation quality is difficult to guarantee under weak supervision and that pixel-level annotation is costly, in order to achieve good semantic segmentation quality while improving the labeling speed.

Inactive Publication Date: 2020-07-24
FUDAN UNIV

AI Technical Summary

Problems solved by technology

[0003] In existing image segmentation techniques based on supervised learning, training samples usually have to be given pixel-level category labels, i.e. every pixel in the image must be annotated manually; such pixel-level labeling takes 15 minutes on average. The labeling process is therefore time-consuming and expensive, which has motivated methods for image semantic segmentation based on weak supervision. The training samples of such methods do not require pixel-level annotation; only image-level annotations of the training images, or reference images, are used for semantic segmentation. Compared with systems that require laborious pixel-level annotation of the training images, this rough image-level annotation is faster and easier to obtain. However, because no accurate pixel-level annotation is available as a reference for model learning, this type of weakly supervised semantic segmentation is very challenging, and the quality of the segmentation is difficult to guarantee.




Embodiment Construction

[0019] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of those embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

[0020] Referring to figure 1, the present invention provides a technical solution: a method for image semantic segmentation based on PU-Learning, comprising the following steps:

[0021] S1. Data preparation: in the image database to be trained, every category has at least one image with pixel-level labels (the label of unlabeled pixels is uniformly set to 0, and the labeled pixels of the different categories are numbered starting from 1; see the label-encoding sketch after these steps).

[0022] S2. ...
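As an illustration of the labeling convention in step S1, the following is a minimal sketch of how a partially labeled mask could be encoded. Python/NumPy, the helper name encode_partial_mask, and the array layout are assumptions for illustration only and are not prescribed by the patent.

    # Sketch of the S1 label-encoding convention: unlabeled pixels are 0,
    # labeled categories are numbered from 1 upwards.
    import numpy as np

    UNLABELED = 0  # pixels without annotation are uniformly set to 0

    def encode_partial_mask(raw_mask: np.ndarray, annotated: np.ndarray) -> np.ndarray:
        """raw_mask: per-pixel class indices starting at 0 for annotated pixels.
        annotated: boolean array of the same shape, True where a pixel carries a label."""
        encoded = np.full(raw_mask.shape, UNLABELED, dtype=np.int64)
        # labeled categories are shifted to start from 1, as required by S1
        encoded[annotated] = raw_mask[annotated] + 1
        return encoded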


Abstract

The invention provides an image semantic segmentation method based on learning from positive and unlabeled samples (PU-Learning), belonging to the technical field of computer vision. The method comprises a data preparation step, a data preprocessing step, a deep convolutional neural network construction step, a PU-Learning-based loss function design step, a loss function optimization learning step, and an iteratively executed training step, repeated until the training result of the image semantic segmentation model satisfies a predetermined convergence condition. A deep neural network is adopted to extract the features of the image to be segmented, and on this basis a cross-entropy loss function based on PU-Learning is designed. The semantic segmentation model can thus be trained and optimized end to end when only part of the pixels are labeled, while direct pixel-level supervision is retained to a certain extent, so that the data labeling speed is increased while good semantic segmentation quality is guaranteed.
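The abstract does not spell out the exact form of the PU-Learning-based cross-entropy loss, so the sketch below is only one plausible instantiation: the standard non-negative PU risk estimator with a log (cross-entropy) loss applied per pixel for a single foreground class, treating labeled pixels as positives and all remaining pixels as unlabeled. PyTorch, the function name pu_cross_entropy, and the class_prior value are assumptions, not claims about the patented formulation.

    import torch

    def pu_cross_entropy(fg_prob: torch.Tensor,
                         mask: torch.Tensor,
                         class_prior: float = 0.3,
                         eps: float = 1e-7) -> torch.Tensor:
        """fg_prob: (N, H, W) predicted foreground probability for one class.
        mask: (N, H, W) integer tensor, >0 for labeled positive pixels, 0 for unlabeled."""
        pos = (mask > 0).float()
        unl = (mask == 0).float()
        n_pos = pos.sum().clamp(min=1.0)
        n_unl = unl.sum().clamp(min=1.0)

        log_p = torch.log(fg_prob.clamp(min=eps))          # log f(x)
        log_n = torch.log((1.0 - fg_prob).clamp(min=eps))  # log (1 - f(x))

        risk_pos = -(pos * log_p).sum() / n_pos             # positives predicted positive
        risk_pos_as_neg = -(pos * log_n).sum() / n_pos      # positives predicted negative
        risk_unl_as_neg = -(unl * log_n).sum() / n_unl      # unlabeled predicted negative

        # non-negative correction of the negative-class risk estimated from unlabeled pixels
        risk_neg = torch.clamp(risk_unl_as_neg - class_prior * risk_pos_as_neg, min=0.0)
        return class_prior * risk_pos + risk_neg

In a multi-class setting, one such term could be computed per category (with labels numbered from 1 as in step S1) and averaged, although the patent's exact loss may differ.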

Description

Technical field

[0001] The invention relates to the technical field of computer vision, and in particular to an image semantic segmentation method based on PU-Learning.

Background technique

[0002] With the continuous development of big data technology, fifth-generation mobile communication technology, Internet of Things technology and other technologies, the collection, aggregation and storage of multimedia resources such as images and videos are becoming more and more convenient. At present, in some application scenarios (such as automatic driving and medical imaging), the collected images need to be semantically segmented. Image semantic segmentation is a classic problem in the field of computer vision; its purpose is to let the computer predict the category of each pixel in the image, that is, to assign a category label to every pixel.

[0003] In the existing image segmentation technology based on supervised learning, people often need to provide pix...
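To make the per-pixel prediction concrete, the toy snippet below shows how a network's per-class score map is turned into a category label for every pixel; the framework (PyTorch), the 21-class count and the tensor shapes are arbitrary illustrative assumptions.

    import torch

    logits = torch.randn(1, 21, 256, 256)   # (batch, num_classes, H, W) per-pixel class scores
    label_map = logits.argmax(dim=1)        # (batch, H, W): one category index per pixel
    print(label_map.shape)                  # torch.Size([1, 256, 256])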


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/32, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/084, G06V10/25, G06N3/045
Inventor: 汪聪, 浦剑
Owner: FUDAN UNIV