Object contour extraction method based on mask-RCNN

A contour extraction technology, applied in biological neural network models, image data processing, instruments, etc., which addresses problems such as incomplete segmentation, inaccurate instance contour detection, and blurred images.

Active Publication Date: 2018-11-27
UNIV OF ELECTRONIC SCI & TECH OF CHINA
Cites: 1 | Cited by: 57

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to solve the problem of inaccurate instance contour detection and incomplete segmentation caused by factors such as image blur and low resolution in existing image contour detection, segmentation, and extraction, and to propose an object contour extraction method based on mask-RCNN: first, obtain a mask-RCNN instance segmentation model through training, p

Method used



Examples


Embodiment 1

[0110] A method for extracting object contours based on mask-RCNN, provided by a preferred embodiment of the present invention, proceeds as follows:

[0111] Duplicate the RGB image from which the object contour is to be extracted, obtaining image IM1 and image IM2, where IM1 and IM2 are identical.

[0112] Step 1: Pre-train on the obtained ImageNet training samples to obtain the mask-RCNN model, then input IM1 into the obtained mask-RCNN model for semantic segmentation, yielding a binary mask image for each object in the scene. Suppose the mask image K1 of object A is currently obtained; K1 is the same size as IM1, with the area where object A is located marked as 1 and all other areas of the mask image marked as 0. Since the object A region of the mask image obtained in practice cannot completely overlap the object A region of the original RGB image, and its edges are rough, further edge refinement operations are perfo...
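The binarization and rough-edge cleanup described in Step 1 can be sketched as follows. This is a minimal illustration, not the patent's actual refinement algorithm: it assumes a soft per-pixel mask (values in [0, 1]) as produced by typical Mask R-CNN implementations, and uses 3x3 morphological opening and closing as a stand-in for the unspecified edge refinement step.

```python
import numpy as np

def binarize_mask(soft_mask, threshold=0.5):
    """Threshold a soft instance mask to the binary {0, 1} mask K1
    described in the embodiment (1 = object region, 0 = background)."""
    return (np.asarray(soft_mask) >= threshold).astype(np.uint8)

def _shift(m, dy, dx):
    """Shift a 2-D mask by (dy, dx), padding with zeros."""
    out = np.zeros_like(m)
    h, w = m.shape
    src_y = slice(max(dy, 0), h + min(dy, 0))
    src_x = slice(max(dx, 0), w + min(dx, 0))
    dst_y = slice(max(-dy, 0), h + min(-dy, 0))
    dst_x = slice(max(-dx, 0), w + min(-dx, 0))
    out[dst_y, dst_x] = m[src_y, src_x]
    return out

def dilate(m):
    """3x3 binary dilation: a pixel is 1 if any neighbour is 1."""
    acc = m.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc |= _shift(m, dy, dx)
    return acc

def erode(m):
    """3x3 binary erosion: a pixel is 1 only if all neighbours are 1."""
    acc = m.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc &= _shift(m, dy, dx)
    return acc

def refine_edges(binary_mask):
    """Opening (erode then dilate) removes isolated speckle noise;
    closing (dilate then erode) fills small gaps along rough edges."""
    opened = dilate(erode(binary_mask))
    return erode(dilate(opened))
```

In practice one would use `cv2.morphologyEx` or `scipy.ndimage` for this; the pure-NumPy version above only makes the neighbourhood logic explicit.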



Abstract

The invention discloses an object contour extraction method based on mask-RCNN. The method first obtains a mask-RCNN model through training, inputs the RGB image from which the object contour is to be extracted into the mask-RCNN model for semantic segmentation, obtains the corresponding binary mask image of the RGB image through processing by the mask-RCNN network, and inputs the RGB image together with its binary mask image into a contour refinement module. A contour feature description method is proposed to accurately represent the direction and angle information of the object contour, the binary mask contour obtained from mask-RCNN is adaptively modified through a contour correction algorithm, and real-time, accurate extraction of image instance contours is finally realized under conditions of low image quality, low resolution, and blurred objects, with low time and space complexity.
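The "direction and angle information" of a contour mentioned in the abstract can be illustrated with a simple local-tangent computation. This is a hedged sketch only: the patent does not disclose its exact descriptor here, so the central-difference formulation below is an assumption, applied to an ordered closed contour given as an (N, 2) array of (x, y) points.

```python
import numpy as np

def contour_direction_angles(points, step=1):
    """Return the local tangent angle (radians) at each point of an
    ordered closed contour, estimated from the central difference of
    the neighbours `step` positions away along the contour.

    NOTE: an illustrative stand-in for the contour feature
    description method named in the abstract, not the patented one.
    """
    pts = np.asarray(points, dtype=float)
    fwd = np.roll(pts, -step, axis=0)  # next point along the contour
    bwd = np.roll(pts, step, axis=0)   # previous point
    d = fwd - bwd
    return np.arctan2(d[:, 1], d[:, 0])
```

For an axis-aligned square traced counter-clockwise, every local tangent lies along a diagonal of the 4-point stencil, so all angles come out as odd multiples of pi/4.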

Description

Technical field

[0001] The invention belongs to the technical field of image object detection and segmentation in computer vision, and in particular relates to an object contour extraction method based on mask-RCNN.

Background technique

[0002] Forward-looking work on image detection and segmentation includes the R-CNN (Regions with CNN features) deep convolutional neural network proposed by Girshick et al. Kaiming et al. proposed the SPP-Net model to address R-CNN's limitations, with processing speeds 30 to 170 times faster than R-CNN. To further reduce the time and space complexity of instance segmentation, Girshick proposed the Fast-RCNN model, which integrates feature extraction and classification into a single framework, improving both model training speed and target detection accuracy. Kaiming et al. added a branch network on the basis of Faster-RCNN to complete target pixel segmentation while realizing target ...

Claims


Application Information

IPC(8): G06T7/13, G06T5/30, G06T5/00, G06N3/04
CPC: G06T5/002, G06T5/30, G06T7/13, G06N3/045
Inventor 张汝民刘致励崔巍魏陈建文王文一曾辽原
Owner UNIV OF ELECTRONIC SCI & TECH OF CHINA