Semantic segmentation network training method, image semantic segmentation method and devices

A semantic segmentation and image training technology, applied in the field of computer vision, which addresses the problems of low semantic segmentation accuracy and poor training and recognition effects at the edges of segmented regions, and achieves the effect of improving the training and recognition effect.

Publication Date: 2018-09-14 (status: Inactive)
上海白泽网络科技有限公司

AI Technical Summary

Problems solved by technology

Existing image semantic segmentation networks, such as FCN and CRF-RNN, have poor training and recognition effects at the edges of segmented regions, and their semantic segmentation accuracy is low.



Examples


First Embodiment

[0029] Please refer to Figure 2, which shows a flow chart of the semantic segmentation network training method provided by the first embodiment of the present invention. The semantic segmentation network training method includes the following steps:

[0030] Step S101, acquiring an image to be trained.

[0031] In the embodiment of the present invention, the image to be trained may be a picture downloaded by the user from the network, or a picture taken by a camera or other shooting device. The image to be trained may include multiple objects of different sizes, for example, people, sky, vehicles, animals, and trees.

[0032] In the embodiment of the present invention, while obtaining the image to be trained, it is also necessary to obtain the original label map of the image to be trained. The original label map is information provided in advance and includes object category information; that is, the original label map annotates the object category to which each pixel of the image to be trained belongs.
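As a minimal sketch of how such a training pair might be loaded, the example below assumes that the image to be trained is stored as an RGB file and that the original label map is a single-channel PNG whose pixel values are class IDs; the file paths and function name are hypothetical, not part of the patent.

```python
import numpy as np
from PIL import Image

def load_training_pair(image_path, label_path):
    """Load an image to be trained together with its original label map."""
    # Image to be trained: H x W x 3 float array scaled to [0, 1].
    image = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32) / 255.0
    # Original label map: H x W array of integer class IDs
    # (e.g. person, sky, vehicle, animal, tree).
    label_map = np.asarray(Image.open(label_path), dtype=np.int64)
    return image, label_map
```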

Implementation

[0042] As an implementation, the processing of the mask convolution feature extraction sub-network can be as follows. First, the pre-extraction feature map, which contains the block and overall features of the image to be trained, is input sequentially into a convolution layer and an Argmax layer to generate the pre-extraction recognition map; the pre-extraction recognition map is the object category map labeled according to the pre-extraction feature map, that is, it marks the object category to which each pixel in the pre-extraction feature map belongs. Then, the pooling layer of the traditional feature extraction sub-network is used to downsample the pre-extraction feature map to obtain a downsampled feature map, and the downsampled feature map is input sequentially into a convolution layer, an Argmax layer, and an upsampling layer to generate the downsampled recognition map, the recognition map after ...
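The following is a minimal PyTorch sketch of the two branches described above: convolution and Argmax applied to the pre-extraction feature map, and pooling, convolution, Argmax, and upsampling applied to the downsampled feature map. The module name, channel count, class count, and kernel sizes are assumptions for illustration only, not the patent's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskConvFeatureExtractor(nn.Module):
    """Sketch of the mask convolution feature extraction processing."""

    def __init__(self, in_channels=256, num_classes=21):
        super().__init__()
        # Convolution layer mapping the pre-extraction feature map to class scores.
        self.score_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Pooling layer borrowed from the traditional feature extraction sub-network.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # Convolution layer applied to the downsampled feature map.
        self.down_score_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, pre_feat):
        # Pre-extraction recognition map: per-pixel object category (Argmax layer).
        pre_recog = self.score_conv(pre_feat).argmax(dim=1)
        # Downsampled feature map obtained via the pooling layer.
        down_feat = self.pool(pre_feat)
        # Convolution, Argmax, and upsampling give the downsampled recognition map
        # restored to the spatial size of the pre-extraction feature map.
        down_scores = self.down_score_conv(down_feat)
        down_scores = F.interpolate(down_scores, size=pre_feat.shape[-2:], mode="nearest")
        down_recog = down_scores.argmax(dim=1)
        return pre_recog, down_recog
```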

Second Embodiment

[0086] Please refer to Figure 7, which shows a flow chart of the image semantic segmentation method provided by the second embodiment of the present invention. The image semantic segmentation method includes the following steps:

[0087] Step S201, acquiring an original image to be segmented.

[0088] In the embodiment of the present invention, the original image to be segmented may be any image that requires semantic segmentation, for example a photo taken by a camera or other shooting device.

[0089] Step S202, input the original image into the semantic segmentation network trained using the semantic segmentation network training method of the first embodiment, and obtain the semantic segmentation result of the original image.

[0090] In the embodiment of the present invention, the semantic segmentation result of the original image includes the object category to which each pixel in the original image belongs.
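A minimal inference sketch consistent with this step is shown below; it assumes a trained PyTorch network `seg_net` that returns per-pixel class scores, and the function name and tensor layout are assumptions for illustration.

```python
import torch

def segment(seg_net, image):
    """Return the object category of each pixel in the original image."""
    # Shape the H x W x 3 float image as the 1 x 3 x H x W tensor expected by the network.
    x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        scores = seg_net(x)                 # 1 x num_classes x H x W class scores
    # Semantic segmentation result: per-pixel class IDs, shape H x W.
    return scores.argmax(dim=1).squeeze(0)
```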



Abstract

The embodiments of the present invention belong to the technical field of computer vision and provide a semantic segmentation network training method, an image semantic segmentation method, and corresponding devices. The semantic segmentation network training method includes the following steps: an image to be trained is acquired; the image to be trained is input into a pre-established semantic segmentation network, the front network layer of the semantic segmentation network is used to extract the features of the image to be trained, and a feature map containing the block, global, and edge features of the image to be trained is obtained; the feature map containing the block, global, and edge features of the image to be trained is input into the rear network layer of the semantic segmentation network for image pixel classification, so that a semantic segmentation image containing segmentation pixel categories is obtained; and the parameters of the semantic segmentation network are updated according to the semantic segmentation image. Compared with the prior art, the method of the invention separately extracts and restores the edge features of the image to be trained, thereby improving the training and recognition effect at the edge of a segmentation region.
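A minimal single training step matching these steps might look as follows; the `front_layers` and `rear_layers` attribute names are hypothetical, and the per-pixel cross-entropy loss is an assumption, since the abstract only states that the parameters are updated according to the semantic segmentation image.

```python
import torch
import torch.nn.functional as F

def train_step(seg_net, optimizer, image, label_map):
    """One parameter update of the semantic segmentation network (sketch)."""
    # Front network layer: extract block, global, and edge features of the image.
    features = seg_net.front_layers(image)          # 1 x C x H x W feature map
    # Rear network layer: classify image pixels into segmentation categories.
    scores = seg_net.rear_layers(features)          # 1 x num_classes x H x W
    # Update the network parameters against the original label map
    # (label_map: 1 x H x W tensor of class IDs).
    loss = F.cross_entropy(scores, label_map)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```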

Description

Technical Field

[0001] The present invention relates to the technical field of computer vision, and in particular to a semantic segmentation network training method, an image semantic segmentation method, and devices.

Background Art

[0002] Image semantic segmentation is one of the three core research issues of computer vision. It combines the traditional image segmentation and target recognition tasks, aiming to divide an image into regions with certain semantic meaning, identify the semantic category of each region, and finally obtain an image with pixel-wise semantic annotations. Existing image semantic segmentation networks, such as FCN and CRF-RNN, have poor training and recognition effects at the edges of segmented regions, and their semantic segmentation accuracy is low.

Summary of the Invention

[0003] The purpose of the embodiments of the present invention is to provide a semantic segmentation network training method, an image semantic segmentation method, and devices, so as to improve the accuracy of image semantic segmentation.

[0004] In order to...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/72, G06N3/04
CPC: G06V30/274, G06N3/045
Inventor: 申晖
Owner: 上海白泽网络科技有限公司