Method and system for processing split online video on the basis of depth sensor

A depth sensor and video segmentation technology, applied to image data processing, instruments, image analysis and related fields. It addresses the problems that depth-sensor-based online video segmentation is prone to errors at object boundaries and that existing methods cannot balance accuracy with real-time performance, with the effects of avoiding video flicker and ensuring the temporal consistency of the segmentation results.

Active Publication Date: 2013-03-27
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to solve the problem that online video segmentation based on a depth sensor is prone to errors at object boundaries, and the problem that the prior art attends to one of accuracy and real-time performance at the expense of the other.



Examples


Embodiment Construction

[0063] In recent years, as depth sensors have become progressively smaller and cheaper, it has become practical and feasible to use the depth information they directly capture to assist video segmentation. The robustness of depth information to illumination changes and dynamic shadows improves the quality of image segmentation. Figure 3 shows an example of the online video segmentation result for one video frame, obtained with the scene segmentation API in OpenNI on a Kinect depth sensor: Figure 3(a) is the video frame, Figure 3(b) is the corresponding depth image acquired by the depth sensor, Figure 3(c) is the foreground segmented from the depth image, and Figure 3(d) is an enlarged view of the region marked in Figure 3(c). As can be seen from Figure 3(c), online video segmentation based on the depth sensor can obtain b...
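As an illustration of this kind of depth-assisted foreground extraction (not the OpenNI scene segmentation API itself), the following sketch assumes a depth image already registered to the color frame and produces a rough binary foreground mask by thresholding depth; the depth range, array names and clean-up step are illustrative assumptions, not values or algorithms taken from the patent.

    import numpy as np
    import cv2

    def depth_foreground_mask(depth_mm, near_mm=500, far_mm=1500):
        """Rough foreground mask from a depth image with values in millimetres.

        Pixels whose depth falls inside [near_mm, far_mm] are treated as
        foreground; zero-depth pixels (no sensor reading) are background.
        The range limits are hypothetical, chosen only for illustration.
        """
        valid = depth_mm > 0                                    # sensor returned a reading
        in_range = (depth_mm >= near_mm) & (depth_mm <= far_mm)
        mask = (valid & in_range).astype(np.uint8) * 255
        # Light morphological clean-up; residual errors tend to concentrate
        # at depth discontinuities (object boundaries), which is exactly the
        # failure mode the patent targets.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

A foreground cut of the color frame can then be taken with cv2.bitwise_and(frame, frame, mask=mask); the boundary errors and holes visible in such raw masks motivate the post-processing steps described in the abstract below.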



Abstract

The invention discloses a method and a system for processing online video segmentation on the basis of a depth sensor. The method includes: firstly, extracting depth-sensor features from a video frame and its corresponding depth image, and performing foreground-background segmentation of the video frame on those features to obtain a binary segmentation image; secondly, detecting and filling foreground holes in the binary segmentation image to obtain a hole-filled binary image; thirdly, optimizing the boundary of the hole-filled binary image to obtain an optimized binary image; and fourthly, fusing the optimized binary image with a virtual background and the video frame to generate a virtual-real fused image. The method solves the problems that depth-sensor-based online video segmentation easily errs at depth discontinuities and that the prior art cannot balance accuracy against real-time performance, providing a depth-sensor-based post-processing method for online video segmentation and a virtual-real fusion system that meets real-time requirements with high quality.
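As a minimal sketch of the four-step pipeline described in the abstract, the code below assumes a binary foreground mask already produced by a depth-based segmenter (step one) and uses common OpenCV operations as stand-ins for the patent's own algorithms: hole filling is approximated by filling external contours, and boundary optimization is approximated by feathering the mask into an alpha matte. Function names and parameters are hypothetical.

    import numpy as np
    import cv2

    def fill_foreground_holes(mask):
        """Step 2: detect and fill holes inside the foreground of a binary mask."""
        filled = mask.copy()
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
        return filled

    def optimize_boundary(mask, blur_px=7):
        """Step 3: soften the mask boundary into an alpha matte (a stand-in
        for the patent's boundary optimization)."""
        return cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (blur_px, blur_px), 0)

    def fuse_virtual_real(frame, virtual_bg, alpha):
        """Step 4: composite the real foreground over a virtual background."""
        alpha3 = alpha[..., None]
        fused = alpha3 * frame.astype(np.float32) + (1.0 - alpha3) * virtual_bg.astype(np.float32)
        return fused.astype(np.uint8)

    def process_frame(frame, depth_mask, virtual_bg):
        """Steps 1-4 chained; depth_mask is the binary foreground from step 1."""
        filled = fill_foreground_holes(depth_mask)
        alpha = optimize_boundary(filled)
        return fuse_virtual_real(frame, virtual_bg, alpha)

In the patented system this processing would run per frame under real-time constraints, with the summary above additionally noting temporal consistency across frames so that the composited output does not flicker.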

Description

Technical Field

[0001] The present invention relates to the fields of video content analysis, image processing and computer vision, and in particular to a processing method and system for online video segmentation based on a depth sensor.

Background Art

[0002] With the development of ubiquitous computing, video coding and broadband network technologies, remote video communication between people in different places over the Internet has become a new hot spot of the 21st century and shows broad application prospects. Beyond traditional administrative and office meetings, remote video interaction now extends to telemedicine, distance education, remote business meetings and legal settings. In recent years, remote video interaction has gradually developed towards providing an immersive experience, the purpose of which is to make participants fe...


Application Information

IPC(8): G06T7/00, G06T5/50
Inventors: 黄美玉, 陈益强, 纪雯
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI