
Human body video segmentation method fusing multi-cues

A human body video segmentation technology fusing multiple cues, applied in image analysis, image data processing, instruments, etc., which addresses problems such as erroneous segmentation results and segmentation errors that degrade the segmentation of subsequent frames.

Status: Inactive | Publication Date: 2013-12-25
NINGBO UNIV

AI Technical Summary

Problems solved by technology

During the frame-by-frame processing of such video segmentation methods, if the segmentation of the current frame contains wrongly labeled pixels, the influence of those pixels is amplified in the segmentation of subsequent frames, seriously degrading the segmentation of those frames.
Fan et al. (Transductive segmentation of live video with non-stationary background. In: IEEE Conference on Computer Vision and Pattern Recognition, 2010) proposed a method combining a global dynamic color model with a fast local kernel density estimation model, fusing local color with global color. This method is prone to erroneous segmentation results when processing frames with complex backgrounds.
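The kernel density estimation color model mentioned above can be illustrated with a short sketch. This is a simplification for intuition only, not the patent's or Fan et al.'s actual implementation; the function names and the bandwidth value are hypothetical. Each pixel's color likelihood under the foreground or background is a Gaussian-kernel density over sampled colors, and the pixel takes whichever label gives the higher likelihood:

```python
import math

def kde_likelihood(pixel, samples, bandwidth=15.0):
    """Gaussian kernel density estimate of an RGB pixel's likelihood
    under a set of sampled RGB colors (e.g. known foreground pixels)."""
    norm = (2.0 * math.pi * bandwidth ** 2) ** 1.5  # 3-D Gaussian normalizer
    total = 0.0
    for s in samples:
        d2 = sum((p - q) ** 2 for p, q in zip(pixel, s))  # squared RGB distance
        total += math.exp(-d2 / (2.0 * bandwidth ** 2))
    return total / (len(samples) * norm)

def classify(pixel, fg_samples, bg_samples):
    """Label a pixel 'fg' if its foreground likelihood dominates, else 'bg'."""
    return ("fg" if kde_likelihood(pixel, fg_samples)
            > kde_likelihood(pixel, bg_samples) else "bg")
```

A global model would draw `samples` from all labeled pixels; a local model would restrict them to a window around the pixel, which is where the "fast local" variant saves work.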
Bai et al. (Dynamic Color Flow: A Motion-Adaptive Color Model for Object Segmentation in Video. In: 11th European Conference on Computer Vision) proposed a video segmentation method combining color, motion, and shape information: it adds motion estimation to the color model, adaptively adjusts the sampling window size of the local color model according to the local characteristics of the motion, and incorporates shape information. Price et al. (LIVEcut: Learning-based interactive video segmentation by evaluation of multiple propagated cues. In: IEEE International Conference on Computer Vision, 2009) proposed the LIVEcut method, which adaptively integrates color, gradient, shape, spatio-temporal, and motion information into the energy term of a graph cut. Although the methods of Bai et al. and Price et al. can achieve better segmentation results, they still require user interaction during the segmentation of subsequent frames.



Examples


Embodiment Construction

[0068] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0069] This embodiment proposes a human body video segmentation method fusing multiple cues. Its basic flow is shown in Figure 1 and includes the following steps:

[0070] ① As shown in Figure 3a, an initial frame is taken from a video sequence containing a human body. The initial frame is processed with the HOG human detection method to obtain a rectangular human body detection box, and all pixels outside the detection box in the initial frame are marked as background. Then, according to their own judgment, the users mark the pixels inside the detection box that can clearly be judged as foreground or background, obtaining part of the foreground pixels and part of the background pixels within the detection box, all foreground pixels (only in the human...
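The labeling half of step ① can be sketched as follows. This is a minimal illustration, assuming the detection box has already been obtained (in practice from an HOG detector such as OpenCV's `HOGDescriptor` with its default people detector); the function name and label strings are hypothetical. Everything outside the box is marked background, while pixels inside the box start as unknown, awaiting the user's foreground/background strokes:

```python
def init_labels(width, height, box):
    """Initial per-pixel labels from a human detection box.

    box = (x0, y0, x1, y1), half-open.  Pixels outside the box are
    certain background ('bg'); pixels inside are unknown ('?') until
    the user marks some of them as foreground or background.
    """
    x0, y0, x1, y1 = box
    return [["?" if (x0 <= x < x1 and y0 <= y < y1) else "bg"
             for x in range(width)]
            for y in range(height)]
```

For example, `init_labels(4, 3, (1, 1, 3, 3))` leaves the 2x2 interior unknown and labels the border pixels background.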



Abstract

The invention discloses a human body video segmentation method fusing multiple cues. The method obtains the foreground and background pixels of an initial frame through simple interaction, constructs a foreground model and a background model, marks the pixels of the initial frame with these models, and then obtains the segmentation result of the initial frame with an image segmentation method. When processing a subsequent frame, the foreground model, background model, and initial labeling of that frame are obtained from the segmentation result of the preceding frame, and the segmentation result of the subsequent frame is obtained with a binary image segmentation method that fuses the frame's motion information with the shape prior information derived from it. Because the fused color, motion, and shape prior information is propagated between subsequent frames, the temporal influence of a preceding frame on the frame to be processed is reduced, the amplification of a preceding frame's segmentation errors during the segmentation of subsequent frames is effectively prevented, and no further user interaction is needed.
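The binary image segmentation step named in the abstract is conventionally realized as a minimum graph cut: each pixel is linked to a foreground terminal and a background terminal with capacities given by its unary (model-likelihood) costs, neighboring pixels are linked by smoothness weights, and the minimum s-t cut yields the labeling. The following is a stdlib-only toy sketch using Edmonds-Karp max-flow on a dense capacity matrix; it is not the patent's implementation, real systems use optimized solvers such as Boykov-Kolmogorov, and all names here are hypothetical:

```python
from collections import deque

def max_flow_labels(n, unary, pairwise):
    """Binary labeling of n pixels by minimum graph cut.

    unary[i] = (cost_bg, cost_fg); pairwise = [(i, j, w)] smoothness edges.
    Source = foreground terminal, sink = background terminal.  Cutting the
    source->i edge (capacity cost_bg) pays the cost of labeling i background;
    cutting i->sink (capacity cost_fg) pays the cost of labeling i foreground.
    Returns a list of 'fg'/'bg' labels realizing the minimum-cost cut.
    """
    S, T = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, (cost_bg, cost_fg) in enumerate(unary):
        cap[S][i] = cost_bg
        cap[i][T] = cost_fg
    for i, j, w in pairwise:          # symmetric smoothness links
        cap[i][j] += w
        cap[j][i] += w

    def bfs():                        # shortest augmenting path in residual graph
        parent = [-1] * (n + 2)
        parent[S] = S
        q = deque([S])
        while q:
            u = q.popleft()
            for v in range(n + 2):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:                       # Edmonds-Karp: augment until no path to T
        parent = bfs()
        if parent[T] == -1:
            break
        f, v = float("inf"), T
        while v != S:                 # bottleneck capacity along the path
            f = min(f, cap[parent[v]][v])
            v = parent[v]
        v = T
        while v != S:                 # push flow, update residual capacities
            cap[parent[v]][v] -= f
            cap[v][parent[v]] += f
            v = parent[v]

    reach = bfs()                     # source side of the min cut = foreground
    return ["fg" if reach[i] != -1 else "bg" for i in range(n)]
```

On a 3-pixel chain where the ends strongly prefer opposite labels, the middle pixel follows its own (weaker) preference and the smoothness terms, e.g. `max_flow_labels(3, [(10, 1), (8, 2), (1, 10)], [(0, 1, 3), (1, 2, 3)])` yields `["fg", "fg", "bg"]`.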

Description

Technical field

[0001] The invention relates to a video segmentation method, and in particular to a human body video segmentation method fusing multiple cues.

Background technique

[0002] Video segmentation refers to distinguishing and separating the foreground object from the background environment in a video. It is the premise and foundation of many video applications, such as video cut-and-paste, video compression, human-computer interaction, and video understanding. Among video segmentation tasks, human body video segmentation is of special significance: human body video is not only representative of many non-rigid objects but also central to many video applications, since object tracking, pose estimation, human body recognition, behavior analysis, etc. all rely on human body video segmentation methods. However, existing video segmentation methods still have many problems. In addition to the common difficulties with the existing image...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/20
Inventors: 肖波, 郭立君, 张荣, 赵杰煜
Owner: NINGBO UNIV