
Scene segmentation correction method and system fusing global information

A scene segmentation technology that fuses global information, applied in the fields of machine learning and computer vision, which addresses problems such as discontinuous, incoherent, and inconsistent segmentation results.

Active Publication Date: 2018-01-09
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

Methods of this type mainly suffer from the following problems: (1) inconsistency and discontinuity often appear in the segmentation results; (2) the segmentation boundaries of targets are often inaccurate and incoherent.


Examples


Embodiment Construction

[0036] In order to make the purpose, technical solution, and advantages of the present invention clearer, the global residual correction network proposed by the present invention is described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.

[0037] In order to make better use of the global residual correction network proposed by the present invention, the present invention adopts a cascade framework to correct the segmentation results of the front-end network. The framework consists of three parts: (1) using the currently popular fully residual convolutional network as the front-end model; (2) using the global residual correction network, which exploits global content information, to correct the results; (3) using the local boundary correction network to make local corrections to the segmentation result.
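As a concrete illustration of this three-part cascade framework, the following PyTorch-style sketch chains a front-end model, a global residual correction network, and a local boundary correction network. The class and parameter names are placeholder assumptions for illustration; the patent does not disclose these exact module interfaces.

```python
import torch
import torch.nn as nn


class CascadeSegmentationCorrector(nn.Module):
    """Sketch of the cascade framework: front-end fully residual convolutional
    network, followed by a global residual correction network, followed by a
    local boundary correction network (module internals are assumed)."""

    def __init__(self, front_end: nn.Module,
                 global_corrector: nn.Module,
                 local_corrector: nn.Module):
        super().__init__()
        self.front_end = front_end            # e.g. a fully residual FCN backbone
        self.global_corrector = global_corrector
        self.local_corrector = local_corrector

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # (1) Front-end model produces a per-class confidence map of shape (N, C, H, W).
        confidence_map = self.front_end(image)
        # (2) Global correction: fuse global content information from the image
        #     and add the predicted residual to the confidence map.
        globally_corrected = self.global_corrector(image, confidence_map)
        # (3) Local correction: refine the segmentation boundaries locally.
        locally_corrected = self.local_corrector(image, globally_corrected)
        return locally_corrected
```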



Abstract

The invention relates to a scene segmentation correction method. A global residual correction network is adopted: a fully residual convolutional network is used as the front-end model, the confidence map of the front-end model and the original image are concatenated along the channel dimension and fed as input to the global residual correction network, which outputs a global correction residual; this residual is added to the confidence map to obtain the corrected scene segmentation result. The global residual correction network is trained on a known scene segmentation data set. The method further proposes connecting the global residual correction network and a local boundary correction network in series to form a cascade framework, which performs both global and local correction on the segmentation result of the front-end model, thereby obtaining a more accurate scene segmentation result.
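Below is a minimal sketch of the global residual correction step described in the abstract: the front-end confidence map and the original image are concatenated along the channel dimension, a small convolutional network predicts a global correction residual, and the residual is added back to the confidence map. The layer widths and the training helper are assumptions for illustration, not the patent's exact design; training on a labeled scene segmentation data set is shown here with an ordinary pixel-wise cross-entropy loss on the corrected map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalResidualCorrectionNet(nn.Module):
    """Predicts a global correction residual from [image ; confidence map]
    concatenated along the channel dimension (layer widths are illustrative)."""

    def __init__(self, num_classes: int, image_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(image_channels + num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor, confidence_map: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, confidence_map], dim=1)   # splice by channel
        residual = self.net(x)                          # global correction residual
        return confidence_map + residual                # corrected confidence map


def training_step(corrector, optimizer, image, confidence_map, target_labels):
    """Illustrative training step on one batch from a labeled segmentation data set."""
    corrected = corrector(image, confidence_map)
    loss = F.cross_entropy(corrected, target_labels)    # pixel-wise cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```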

Description

Technical field

[0001] The method belongs to the fields of machine learning and computer vision, and in particular relates to machine learning problems for scene segmentation in computer vision.

Background technique

[0002] Currently popular scene segmentation methods are mainly based on Convolutional Neural Networks (CNNs). Most of these methods adopt the framework of Fully Convolutional Networks (FCNs). Many methods make further improvements on the basis of FCNs, using techniques such as dilated convolutions, adding multiple deconvolution layers, and capturing features from the middle layers of the network. However, these methods improve segmentation accuracy mainly by modifying the network structure.

[0003] Different from the above methods, some other methods aim to improve existing segmentation results. The better-known ones include the "fully connected conditional random field" method and the "multi-scale dilated convolution" method. Th...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/10; G06K9/34; G06K9/00
Inventors: 唐胜, 张蕊, 李锦涛
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI