Edge-guided refined saliency object segmentation method and system and equipment

An edge-guided object segmentation technology in the field of image processing, which addresses the problems of rough segmentation results, loss of spatial positioning information, and difficulty in retaining spatial details, so as to achieve refined segmentation results and accurate detail information in object edge areas.

Active Publication Date: 2018-11-13
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

However, due to inherent limitations of the fully convolutional neural network, image salient object segmentation results are often rough, and it is especially difficult for them to preserve good detail information in object edge areas.

[0003] A fully convolutional neural network is composed of a series of convolutional layers and downsampling layers stacked on top of each other. While the downsampling layers aggregate semantic information, they also greatly reduce the original resolution and lose most of the spatial positioning information. Even with a final upsampling or deconvolution layer, it is difficult to accurately restore the lost spatial details. This is highly unfavorable for pixel-level classification tasks, such as salient object segmentation, that require accurate spatial position information: the segmentation results are relatively rough and struggle to preserve spatial details, especially in object edge areas.
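To make the resolution argument concrete, the following sketch traces feature-map sizes through a VGG-style fully convolutional backbone. It is a hypothetical illustration, not the patent's network; the layer widths and the 224x224 input size are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical illustration (not the patent's network): five stride-2
# pooling stages, as in a VGG-style fully convolutional backbone, shrink
# a 224x224 input to 7x7, i.e. 1/32 of the original resolution.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 224 -> 112
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 112 -> 56
    nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 56 -> 28
    nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 28 -> 14
    nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 14 -> 7
)

x = torch.randn(1, 3, 224, 224)
feats = backbone(x)  # shape: 1 x 512 x 7 x 7
# A single 32x upsampling must spread each 7x7 feature vector over a
# 32x32 patch of output pixels, so fine edge detail cannot be recovered.
restored = F.interpolate(feats, scale_factor=32, mode="bilinear", align_corners=False)
print(feats.shape, restored.shape)  # (1, 512, 7, 7) (1, 512, 224, 224)
```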




Embodiment Construction

[0059] Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principle of the present invention, and are not intended to limit the protection scope of the present invention.

[0060] Considering the shortcomings of existing salient object segmentation methods based on fully convolutional neural networks, the present invention proposes to use edge information to guide refined salient object segmentation, so that the segmentation results better retain edge area information. The present invention also proposes a focus cross-entropy loss function that makes the network concentrate on pixels that are easily misclassified, such as those in object edge areas, to further refine the salient object segmentation results. The overall network structure includes a salient object segmentation mask sub-network, an edge detection sub-network, and a backward fusion branch network.
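This excerpt does not reproduce the formula of the focus cross-entropy loss. One common way to realize "focus on easily misclassified pixels" is the focal weighting (1 - p_t)^gamma from Lin et al.'s focal loss; the sketch below uses that weighting as a stand-in assumption, and the function name, `gamma` value, and tensor shapes are likewise assumed.

```python
import torch
import torch.nn.functional as F

def focus_bce_loss(logits, targets, gamma=2.0):
    """Per-pixel binary cross-entropy that down-weights pixels the network
    already classifies confidently. Stand-in for the patent's focus
    cross-entropy loss, whose exact form is not given in this excerpt.
    logits, targets: float tensors of shape (N, 1, H, W), targets in {0, 1}.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)  # probability of the true class
    # Hard pixels (p_t small, e.g. along object edges) keep almost full
    # weight; easy pixels (p_t near 1) are suppressed by (1 - p_t)**gamma.
    return ((1 - p_t) ** gamma * bce).mean()
```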



Abstract

The invention relates to the field of image processing, and particularly to an edge-guided refined saliency object segmentation method, system, and equipment, aiming to solve the problem that segmentation results in the prior art are relatively rough. The saliency object segmentation method of the invention comprises: obtaining segmentation mask features of an input image at different scales through forward propagation of a segmentation mask sub-network; obtaining edge detection features of the input image at different scales through forward propagation of an edge detection sub-network; and fusing the segmentation mask features and the edge detection features through a backward fusion branch network to obtain a saliency object segmentation result and an edge detection result. In addition, a focus cross-entropy loss function supervises the training process of the segmentation mask sub-network, making the sub-network focus on error-prone samples such as those in object edge regions. The saliency object segmentation result of the method is thus more refined and retains more accurate detail information in edge regions.
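As a reading aid, here is a minimal sketch of the three-part flow the abstract describes: two forward sub-networks each yield multi-scale features, and a backward fusion branch merges them from coarse to fine into a segmentation result and an edge detection result. The class name, channel widths, and the concrete fusion rule (concatenation, 1x1 convolution, bilinear upsampling, addition) are all assumptions; the excerpt specifies only that the two feature sets are fused.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackwardFusionBranch(nn.Module):
    # Hypothetical sketch: the patent names the three components, but this
    # excerpt does not give their layer-level design.
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # one 1x1 conv per scale, merging mask features with edge features
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * c, channels[0], kernel_size=1) for c in channels
        )
        self.mask_head = nn.Conv2d(channels[0], 1, kernel_size=1)  # saliency mask
        self.edge_head = nn.Conv2d(channels[0], 1, kernel_size=1)  # edge map

    def forward(self, mask_feats, edge_feats):
        # Both lists are ordered fine -> coarse, one tensor per scale, as
        # produced by the segmentation mask and edge detection sub-networks.
        out = None
        for conv, m, e in reversed(list(zip(self.fuse, mask_feats, edge_feats))):
            fused = conv(torch.cat([m, e], dim=1))
            if out is not None:  # upsample the coarser result and merge it in
                out = fused + F.interpolate(out, size=fused.shape[-2:],
                                            mode="bilinear", align_corners=False)
            else:
                out = fused
        return self.mask_head(out), self.edge_head(out)

# Usage with dummy multi-scale features (shapes are assumptions):
scales = [(64, 56), (128, 28), (256, 14)]
mask_feats = [torch.randn(1, c, s, s) for c, s in scales]
edge_feats = [torch.randn(1, c, s, s) for c, s in scales]
mask_logits, edge_logits = BackwardFusionBranch()(mask_feats, edge_feats)
print(mask_logits.shape, edge_logits.shape)  # both 1 x 1 x 56 x 56
```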

Description

Technical Field

[0001] The invention relates to the field of image processing, and in particular to an edge-guided refined salient object segmentation method, system, and device.

Background Technique

[0002] Traditional salient object segmentation methods rely on hand-designed features, but such features struggle to represent complex image variations such as deformation, occlusion, changes in lighting conditions, and complex backgrounds. Thanks to the rapid development of deep convolutional neural networks, the performance of image salient object segmentation methods based on fully convolutional neural networks has improved significantly. However, due to inherent limitations of the fully convolutional neural network, the segmentation results are often rough, and it is especially difficult for them to preserve good detail information in object edge areas.

[0003] A fully convolutional neural network is composed of a series of convolutional layers and downsampling layers stacked on top of each other. While the downsampling layers aggregate semantic information, they also greatly reduce the original resolution and lose most of the spatial positioning information. Even with a final upsampling or deconvolution layer, it is difficult to accurately restore the lost spatial details.


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06T7/13G06T7/11G06N3/08G06N3/04
CPCG06N3/08G06T7/11G06T7/13G06N3/045
Inventor 赵鑫黄凯奇王裕沛
Owner INST OF AUTOMATION CHINESE ACAD OF SCI