
Adaptive Color Image Segmentation Method Based on Binocular Parallax and Active Contour

A binocular parallax and active contour technology, applied in image analysis, image data processing, instruments, and similar fields. It addresses the problems that existing methods cannot be applied well to binocular color images and cannot set the initial contour position adaptively and accurately, which affects the segmentation results, and achieves the effects of a fast evolution rate, a reduced number of iterations, and improved accuracy.

Publication Date: 2017-01-11 (status: Inactive)
HARBIN NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to propose an adaptive color image segmentation method based on binocular parallax and active contour, addressing two shortcomings of existing active contour models: they are limited to segmenting monocular grayscale images and cannot be applied well to binocular color images; and, because the initial contour is determined mainly from prior experience, its position cannot be set adaptively and accurately, which affects the segmentation results.



Examples


Specific Embodiment 1

[0020] Specific Embodiment 1: The adaptive color image segmentation method based on binocular parallax and active contour described in this embodiment is implemented according to the following steps:

[0021] Step 1: adaptive initial contour setting based on binocular parallax;

[0022] Step 2: conversion of the color space;

[0023] Step 3: establishment of an energy functional based on the improved LCV model (a background sketch of the underlying energy appears after this list);

[0024] Step 4: evolution of the contour curve;

[0025] Step 5: output of the segmentation result. This embodiment can be understood with reference to Figure 1.
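Step 3 names an "improved LCV model", but this excerpt does not give the functional itself. For background only, the standard local Chan-Vese (LCV) energy that such improvements typically build on combines a global fitting term, a local fitting term computed on a smoothed-difference image, and a regularization term. The notation below (u_0 the image, C the evolving contour, g_k a k x k averaging kernel) is the conventional one and is an assumption, not the patent's actual functional:

    E^{LCV} = \alpha\, E^{G} + \beta\, E^{L} + E^{R}

    E^{G} = \int_{\mathrm{inside}(C)} \lvert u_0(x,y) - c_1 \rvert^{2}\, dx\, dy
          + \int_{\mathrm{outside}(C)} \lvert u_0(x,y) - c_2 \rvert^{2}\, dx\, dy

    E^{L} = \int_{\mathrm{inside}(C)} \lvert g_k * u_0(x,y) - u_0(x,y) - d_1 \rvert^{2}\, dx\, dy
          + \int_{\mathrm{outside}(C)} \lvert g_k * u_0(x,y) - u_0(x,y) - d_2 \rvert^{2}\, dx\, dy

Here c_1 and c_2 are the mean intensities inside and outside C, d_1 and d_2 are the corresponding means of the difference image g_k * u_0 - u_0, and E^{R} is a length and level-set regularization term. The improved model of the patent presumably extends such an energy to the Y, Cb, and Cr channels obtained in step 2, but the exact form is not given in this excerpt.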

Specific Embodiment 2

[0026] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that the initial contour setting described in step 1 is realized according to the following steps (a code sketch follows the list):

[0027] Step 1(1): using the left-view image of the binocular stereo pair as the target image and the right-view image as the reference image, apply an adaptive weighted stereo matching algorithm to obtain the disparity map of the left-view image;

[0028] Step 1(2): perform threshold segmentation on the disparity map to extract the target object region of interest, and then apply median filtering to suppress noise in the disparity map;

[0029] Step 1(3): set the boundary of the obtained target region of interest as the initial contour of the active contour model. The specific process is as follows: select the target object region; since the surface of the object is generally smooth, the projection of points on the object's surface onto the image is con...
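The sketch below illustrates this initial-contour step with off-the-shelf OpenCV (4.x) components, assuming NumPy is available. The semi-global matcher StereoSGBM and the simple global threshold are stand-ins for the adaptive weighted stereo matching algorithm and the threshold selection used in the patent; all names and parameter values are illustrative.

    import cv2
    import numpy as np

    def initial_contour_from_disparity(left_bgr, right_bgr):
        """Step 1 sketch: disparity map -> threshold -> median filter -> initial contour."""
        gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
        gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

        # Step 1(1): disparity of the left (target) view against the right (reference) view.
        # StereoSGBM stands in for the adaptive weighted matching algorithm of the patent.
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

        # Step 1(2): threshold the disparity map to keep the near, large-disparity object,
        # then suppress noise with a median filter.
        valid = disparity[disparity > 0]
        thresh = float(valid.mean()) if valid.size else 0.0   # crude global threshold
        mask = (disparity > thresh).astype(np.uint8) * 255
        mask = cv2.medianBlur(mask, 5)

        # Step 1(3): use the boundary of the largest extracted region as the initial
        # contour of the active contour model.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None, mask
        initial_contour = max(contours, key=cv2.contourArea)
        return initial_contour, mask

In a full pipeline the returned boundary (or the binary mask) would serve as the initial level set for the contour evolution of steps 3 and 4.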

Specific Embodiment 3

[0032] Specific Embodiment 3: This embodiment differs from Specific Embodiment 1 or 2 in that the conversion of the color space described in step 2 is realized by the following conversion formulas between the RGB and YCbCr color spaces (a code sketch follows the formulas):

[0033] Y = 0.299R + 0.587G + 0.114B

[0034] Cb = 0.564(B − Y)

[0035] Cr = 0.713(R − Y), where Y, Cb, and Cr denote the luminance, blue chrominance, and red chrominance of the YCbCr color space, and R, G, and B denote the red, green, and blue components of the RGB color space. This embodiment can be understood with reference to Figure 4. Other steps and parameters are the same as those in Specific Embodiment 1 or 2.
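The three formulas above map directly into code. The sketch below assumes NumPy and an H x W x 3 RGB array with values in [0, 255]; it implements exactly the equations of this embodiment (the analogue form, with no offset added to Cb and Cr).

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Convert an RGB image to the Y, Cb, Cr components of Embodiment 3."""
        rgb = np.asarray(rgb, dtype=np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance Y
        cb = 0.564 * (b - y)                    # blue chrominance Cb
        cr = 0.713 * (r - y)                    # red chrominance Cr
        return np.stack([y, cb, cr], axis=-1)

Digital 8-bit YCbCr variants typically add an offset of 128 to Cb and Cr; whether the patent does so is not stated in this excerpt.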



Abstract

The invention relates to an adaptive color image segmentation method based on binocular parallax and an active contour, and belongs to the technical field of computer vision processing. The method solves two problems: existing active contour models are limited to segmenting monocular grayscale images and cannot be applied well to binocular color images; and, because the initial contour is determined mainly from prior experience, it cannot be set accurately and adaptively, which influences the segmentation result. The method is carried out according to the following steps: adaptively setting the initial contour on the basis of binocular parallax, converting the color space, building an energy functional based on an improved LCV model, evolving the contour curve, and outputting the segmentation result. The method is suitable for three-dimensional image segmentation, three-dimensional video compression preprocessing, target recognition, and the like.

Description

Technical field

[0001] The invention relates to stereoscopic image processing; it is an adaptive color image segmentation method based on binocular parallax and active contour, and belongs to the technical field of computer vision processing.

Background technique

[0002] Most of the information people obtain comes from the visual system, but what human eyes see has a three-dimensional sense, whereas ordinary images are two-dimensional. With the advancement of science and technology, binocular stereoscopic images are gradually taking a place in people's lives, and their applications in various fields, such as object tracking, automatic navigation, medical aided diagnosis, virtual reality, and map drawing, are becoming more and more important. Image engineering can usually be divided into three levels: image processing, image analysis, and image understanding. As a key step in the progression from image processing to image analysis, image segmentation has been the focus a...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/00
Inventor: 于晓艳冯金蕾荣宪伟尹燕宗励强华
Owner: HARBIN NORMAL UNIVERSITY