
Semantic segmentation network training method, image semantic segmentation method and device

A semantic segmentation network training and image semantic segmentation technology, applied in the field of computer vision, which solves the problems of low semantic segmentation accuracy and a poor training and recognition effect, and achieves the effect of an improved training and recognition effect.

Publication Date: 2020-07-31 (Status: Inactive)
上海白泽网络科技有限公司

AI Technical Summary

Problems solved by technology

Existing image semantic segmentation networks, such as FCN and CRF-RNN, have a poor training and recognition effect at the edges of segmented regions, and their semantic segmentation accuracy is low.



Examples


First embodiment

[0029] Please refer to FIG. 2, which shows a flow chart of the semantic segmentation network training method provided by the first embodiment of the present invention. The semantic segmentation network training method includes the following steps:

[0030] Step S101, acquiring images to be trained.

[0031] In the embodiment of the present invention, the image to be trained may be a picture downloaded by the user from the network, or a picture taken by a camera or other shooting device. The image to be trained includes multiple objects of different sizes, for example, people, sky, vehicles, animals, trees, and so on.

[0032] In the embodiment of the present invention, while obtaining the image to be trained, it is also necessary to obtain the original label map of the image to be trained. The original label map is information provided in advance and includes object category information; that is, the original label map annotates the object category to which each pixel in the image to be trained belongs.
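
For concreteness, the following is a minimal sketch (not part of the patent) of pairing an image to be trained with such an original label map, assuming both are stored as image files and the label map stores one object category index per pixel:

```python
# Illustrative sketch only: the patent does not fix a data format. The label map is
# assumed to store the object category index of every pixel of the image to be trained.
import numpy as np
from PIL import Image

def load_training_pair(image_path: str, label_path: str):
    """Load an image to be trained together with its original label map."""
    image = np.asarray(Image.open(image_path).convert("RGB"))       # H x W x 3, RGB pixels
    label_map = np.asarray(Image.open(label_path), dtype=np.int64)  # H x W, one class id per pixel
    # Every pixel of the image must have a category annotation in the label map.
    assert image.shape[:2] == label_map.shape
    return image, label_map
```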

Implementation

[0042] As an implementation, the processing of the mask convolution feature extraction sub-network can be as follows. First, the pre-extraction feature map, which contains the block and overall features of the image to be trained, is input into a convolution layer and an Argmax layer in sequence to generate the pre-extraction recognition map; the pre-extraction recognition map is the object category map annotated according to the pre-extraction feature map, that is, it marks the object category to which each pixel of the pre-extraction feature map belongs. Then, the pooling layer of the traditional feature extraction sub-network is used to scale down the pre-extraction feature map to obtain the downsampled feature map, and the downsampled feature map is input into a convolution layer, an Argmax layer and an upsampling layer in turn to generate the downsampled recognition map. The recognition map after ...
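
A minimal PyTorch-style sketch of the processing just described; the channel count, number of classes, and pooling/upsampling factors are illustrative assumptions rather than values fixed by the patent, and the final comparison of the two recognition maps is inferred from context because the paragraph above is truncated:

```python
# Sketch of the described mask convolution feature extraction processing (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskConvFeatureExtraction(nn.Module):
    def __init__(self, in_channels: int = 256, num_classes: int = 21):
        super().__init__()
        # Convolution + Argmax branch applied directly to the pre-extraction feature map.
        self.pre_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Pooling layer borrowed from the traditional feature extraction sub-network.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # Convolution branch applied to the downsampled feature map.
        self.down_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, pre_feature_map: torch.Tensor):
        # 1) Pre-extraction recognition map: conv -> argmax gives a per-pixel category map.
        pre_recognition = self.pre_conv(pre_feature_map).argmax(dim=1)        # N x H x W

        # 2) Scale down the pre-extraction feature map with the pooling layer.
        down_feature_map = self.pool(pre_feature_map)                         # N x C x H/2 x W/2

        # 3) Downsampled recognition map: conv -> argmax -> upsample back to the original size.
        down_recognition = self.down_conv(down_feature_map).argmax(dim=1, keepdim=True).float()
        down_recognition = F.interpolate(
            down_recognition, size=pre_feature_map.shape[-2:], mode="nearest"
        ).squeeze(1).long()                                                   # N x H x W

        # (Inferred, since the paragraph is truncated) pixels where the two recognition
        # maps disagree can be treated as a mask of edge regions.
        edge_mask = pre_recognition != down_recognition
        return pre_recognition, down_recognition, edge_mask
```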

Second embodiment

[0086] Please refer to FIG. 7, which shows a flow chart of the image semantic segmentation method provided by the second embodiment of the present invention. The image semantic segmentation method includes the following steps:

[0087] Step S201, acquiring an original image to be segmented.

[0088] In the embodiment of the present invention, the original image to be segmented is an image on which image semantic segmentation needs to be performed, for example a photo taken by a camera or other shooting device.

[0089] Step S202, input the original image into the semantic segmentation network trained by the semantic segmentation network training method of the first embodiment, and obtain the semantic segmentation result of the original image.

[0090] In the embodiment of the present invention, the semantic segmentation result of the original image includes the object category to which each pixel in the original image belongs.
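
A hedged sketch of Steps S201 and S202, assuming a trained network object `trained_seg_net` that maps an image tensor to per-class scores for every pixel; the preprocessing and output shape below are illustrative assumptions, not fixed by the patent:

```python
# Sketch of inference with a trained semantic segmentation network (assumed interface).
import numpy as np
import torch
from PIL import Image

@torch.no_grad()
def segment_image(trained_seg_net: torch.nn.Module, image_path: str) -> np.ndarray:
    # Step S201: acquire the original image to be segmented.
    image = Image.open(image_path).convert("RGB")
    x = torch.from_numpy(np.asarray(image)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    # Step S202: forward pass through the trained semantic segmentation network;
    # the highest-scoring class per pixel is the semantic segmentation result.
    trained_seg_net.eval()
    logits = trained_seg_net(x)                           # N x num_classes x H x W (assumed)
    return logits.argmax(dim=1).squeeze(0).cpu().numpy()  # H x W map of object categories
```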



Abstract

The embodiments of the present invention relate to the field of computer vision technology and provide a semantic segmentation network training method, an image semantic segmentation method and a device. The semantic segmentation network training method includes: acquiring an image to be trained; inputting the image to be trained into a pre-established semantic segmentation network, and using the front network layers of the semantic segmentation network to perform feature extraction on the image to be trained, obtaining a feature map containing the block, overall and edge features of the image to be trained; inputting this feature map into the back network layers of the semantic segmentation network to classify the image pixels, obtaining a semantic segmentation map containing the category of each segmented pixel; and updating the parameters of the semantic segmentation network according to the semantic segmentation map. Compared with the prior art, the embodiments of the present invention extract and restore the edge features of the image to be trained separately, which improves the training and recognition effect at the edges of segmented regions.
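
For illustration, a minimal single training step matching the abstract, assuming a network `seg_net` whose front layers extract the block, overall and edge features and whose back layers classify each pixel, and assuming a standard pixel-wise cross-entropy loss (the patent does not specify the loss):

```python
# Sketch of one parameter update of the semantic segmentation network (assumed loss and interface).
import torch
import torch.nn as nn

def train_step(seg_net: nn.Module, optimizer: torch.optim.Optimizer,
               image: torch.Tensor, label_map: torch.Tensor) -> float:
    # Front network layers: feature extraction (block, overall and edge features);
    # back network layers: per-pixel classification into the semantic segmentation map.
    logits = seg_net(image)                                   # N x num_classes x H x W
    loss = nn.functional.cross_entropy(logits, label_map)     # label_map: N x H x W class ids

    # Update the parameters of the semantic segmentation network from the result.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```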

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and in particular to a semantic segmentation network training method, an image semantic segmentation method and a device.

Background technique

[0002] Image semantic segmentation is one of the three core research issues of computer vision. It combines the traditional tasks of image segmentation and target recognition, dividing the image into regions and identifying the category of each region, and finally obtains an image with pixel-wise semantic annotations. Existing image semantic segmentation networks, such as FCN and CRF-RNN, have a poor training and recognition effect at the edges of segmented regions, and their semantic segmentation accuracy is low.

Contents of the invention

[0003] The purpose of the embodiments of the present invention is to provide a semantic segmentation network training method, an image semantic segmentation method and a device, so as to improve the accuracy of image semantic segmentation.

[0004] In order to...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/72; G06N3/04
CPC: G06V30/274; G06N3/045
Inventor: 申晖
Owner: 上海白泽网络科技有限公司