
Foreground object image segmentation method based on deep convolutional neural network

A deep convolutional neural network technology applied in the field of computer vision. It addresses the problem of decreased positioning accuracy, achieving high precision and strong generalization ability.

Pending Publication Date: 2020-06-12
BEIJING NORMAL UNIV ZHUHAI
Cites: 0 · Cited by: 2

AI Technical Summary

Problems solved by technology

Compared with existing depth-feature-based image region segmentation models, the method achieves higher accuracy and better resolves the decrease in positioning accuracy caused by the invariance of DCNNs.




Embodiment Construction

[0032] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0033] The invention casts the segmentation task as a dense labeling problem and proposes FOSeg, a pixel-level image segmentation model based on a deep convolutional neural network. FOSeg is a foreground object image segmentation model that supports end-to-end training and predicts, for each pixel, the likelihood that it belongs to a foreground object.
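The dense-labeling formulation above amounts to producing one foreground probability per pixel. A minimal NumPy sketch of this idea follows; the `scores` array is a hypothetical stand-in for the network's final per-pixel logits, not the patent's actual model output:

```python
import numpy as np

# Toy stand-in for the network's final per-pixel logit map (assumption:
# the real FOSeg model would produce this from a deep CNN, not random noise).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))

# Dense labeling: convert each pixel's logit to a foreground probability,
# then threshold to obtain a binary foreground mask.
prob_foreground = 1.0 / (1.0 + np.exp(-scores))  # element-wise sigmoid
mask = prob_foreground > 0.5                     # binary foreground mask

print(prob_foreground.shape)
```

The key point is that every pixel gets its own prediction, which is what makes the model trainable end-to-end against pixel-level ground-truth masks.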

[0034] The FOSeg segmentation pipeline, shown in Figure 1, proceeds as follows: the original image is first input and passed through the convolution, pooling, rectified-linear and other operations of the deep convolutional neural network, and the result is then input to...
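The front half of this pipeline (convolution, pooling, rectified-linear activation) can be sketched with plain NumPy. This is an illustrative toy, not the patent's network: the kernel weights, single channel, and tiny input are all assumptions made for brevity:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution over a single-channel image (toy version)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, applied element-wise."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 input image
kernel = np.ones((3, 3)) / 9.0                   # mean filter as stand-in weights
feat = max_pool(relu(conv2d(img, kernel)))       # conv -> ReLU -> pool

print(feat.shape)
```

A real DCNN stacks many such layers with learned multi-channel kernels; the point here is only the order of operations named in the patent text.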



Abstract

The invention discloses a foreground object image segmentation method based on a deep convolutional neural network. It casts the segmentation task as a dense labeling problem and proposes FOSeg, a pixel-level image segmentation model based on a deep convolutional neural network. FOSeg is a foreground object image segmentation model that predicts the likelihood that each pixel is a foreground object and supports end-to-end training. The FOSeg segmentation pipeline is shown in Figure 1. First, an original image is input and passed through the convolution, pooling, rectified-linear and other operations of the deep convolutional neural network. A feature-map score map is obtained using a bilinear interpolation algorithm and input into a shunt aggregation module; a coarse segmentation map is then obtained via an upsampling operation of the bilinear interpolation algorithm. Finally, the coarse segmentation map is sent into a conditional random field model to further refine the segmentation result, yielding a fine segmentation image.
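The abstract leans on bilinear-interpolation upsampling twice (for the score map and for the coarse segmentation map). A self-contained NumPy sketch of bilinear upsampling follows; the implementation details (integer scale factor, edge handling by clamping) are assumptions, not taken from the patent:

```python
import numpy as np

def bilinear_upsample(x, factor):
    """Upsample a 2D score map by an integer `factor` via bilinear interpolation."""
    h, w = x.shape
    new_h, new_w = h * factor, w * factor
    # Map each output coordinate back to a fractional input coordinate.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)   # clamp at the bottom/right edge
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

coarse = np.array([[0.0, 1.0],
                   [2.0, 3.0]])     # toy 2x2 coarse segmentation map
fine = bilinear_upsample(coarse, 2) # 4x4 map; corner values are preserved
print(fine.shape)
```

Upsampling the coarse map this way restores the input resolution smoothly; the CRF stage described in the abstract would then sharpen the blurred object boundaries that bilinear interpolation leaves behind.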

Description

【Technical field】 [0001] The invention relates to the technical field of computer vision, and in particular to a foreground object image segmentation method based on a deep convolutional neural network.

【Background technique】 [0002] Foreground object segmentation models can be divided into two categories according to whether the segmentation is category-dependent. (1) Class-independent segmentation: a segmentation model that extracts only the mask of the foreground object, regardless of how many object classes the image contains. (2) Class-specific segmentation: models that learn from class-labeled data and seek to segment specific classes.

[0003] (1) Class-independent segmentation

[0004] According to segmentation strategy and purpose, class-independent segmentation is divided into the following three types:

[0005] Interactive image segmentation models: for example, the GrabCut [10] model lets users use borders or ...

Claims


Application Information

IPC(8): G06T7/194; G06N3/08; G06N3/04
CPC: G06T7/194; G06N3/08; G06T2207/20084; G06T2207/20081; G06N3/045; Y02T10/40
Inventors: 杨戈, 吴彬
Owner: BEIJING NORMAL UNIV ZHUHAI