Video consistent fusion processing method

A processing method for the consistent fusion of video, in the field of television, color television, and color-television components, that improves efficiency while keeping user interaction simple

Active Publication Date: 2011-01-12
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0005] Aiming at the shortcomings of existing video fusion technology, the present invention proposes a new video-consistency fusion technique, which improves the efficiency of existing video object extraction…




Embodiment Construction

[0027] Each part is described in detail below, following the flowchart of the present invention:

[0028] 1. Interactive selection and automatic extraction of video objects

[0029] To improve the accuracy of video object extraction in complex scenes, multiple features must be considered to guide the extraction: (1) foreground extraction should draw on several features, such as color, texture, shape, and motion; among these, shape deserves particular attention as an important factor in maintaining locally consistent recognition. (2) These features should be evaluated both locally and globally to improve extraction accuracy. During video object extraction, the system first extracts the foreground contours of the key frames through manual interaction, and then automatically generates the foreground contours of the remaining frames by forward propagation from the key frames. The main steps of the process…
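The multi-feature scoring and key-frame forward propagation described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual formulation: the feature weights, the temporal-coherence blend, and the toy per-pixel cue dictionaries are all assumptions introduced for demonstration.

```python
# Illustrative sketch: combine several per-pixel feature cues (color, texture,
# shape, motion) into one foreground score, then forward-propagate a key-frame
# mask to the remaining frames. Weights and cue values are hypothetical.

def foreground_score(cues, weights):
    """Weighted combination of normalized feature cues, each in [0, 1]."""
    total_w = sum(weights.values())
    return sum(weights[k] * cues[k] for k in cues) / total_w

def propagate_labels(key_frame_mask, frames, threshold=0.5):
    """Forward-propagate a key-frame mask: each frame's result seeds the next.

    Here a 'frame' is a list of per-pixel cue dicts (a toy stand-in for video).
    Returns one boolean mask per non-key frame.
    """
    weights = {"color": 0.4, "texture": 0.2, "shape": 0.3, "motion": 0.1}
    masks = [key_frame_mask]
    for frame in frames:
        prev = masks[-1]
        mask = []
        for i, cues in enumerate(frame):
            score = foreground_score(cues, weights)
            # Bias toward the previous frame's label for temporal coherence.
            score = 0.7 * score + 0.3 * (1.0 if prev[i] else 0.0)
            mask.append(score >= threshold)
        masks.append(mask)
    return masks[1:]

# Usage: a two-pixel key frame propagated to one following frame.
key = [True, False]
frames = [[{"color": 0.9, "texture": 0.8, "shape": 0.9, "motion": 0.7},
           {"color": 0.1, "texture": 0.2, "shape": 0.1, "motion": 0.0}]]
result = propagate_labels(key, frames)
```

In a real system the cues would come from image analysis (e.g. color models, optical flow) rather than hand-written dictionaries; the sketch only shows how local evaluation and propagation from key frames fit together.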



Abstract

The invention relates to a processing method for the consistent fusion of video images from different sources. The method not only ensures seamless, natural results near the fusion boundary, but also comprehensively considers the global illumination and hue of the target scene. This information can be extracted from the target scene through simple manual interaction and expressed as a generated reference image, which is then diffused into the object to be fused by means of image fusion, finally producing a new video with high realism. The invention can be applied in fields such as film and television post-production, game special-effects design, advertising, news media, and multimedia education; it provides strong theoretical and technical support for rapid production and can reduce production costs, thereby yielding significant economic benefit.
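One common way to express "extract global illumination and hue statistics from the target scene and diffuse them into the object to be fused" is statistics transfer between images. The sketch below is a hedged illustration of that general idea using per-channel mean and standard-deviation matching (a classic color-transfer technique), not the patent's actual reference-image construction.

```python
# Illustrative sketch: match the color statistics of a source object to a
# reference image taken from the target scene, so the fused object picks up
# the scene's global illumination and hue. Images are toy lists of RGB tuples.
from statistics import mean, pstdev

def transfer_color_stats(source, reference):
    """Remap each channel of `source` to the mean/std of `reference`."""
    src_channels = list(zip(*source))
    ref_channels = list(zip(*reference))
    scaled = []
    for ch, ref in zip(src_channels, ref_channels):
        m = mean(ch)
        s = pstdev(ch) or 1.0  # guard against a flat channel
        rm, rs = mean(ref), pstdev(ref)
        scaled.append([(v - m) / s * rs + rm for v in ch])
    return [tuple(px) for px in zip(*scaled)]

# Usage: a dark two-pixel "object" remapped to a brighter "scene".
obj = [(0, 0, 0), (2, 2, 2)]
scene = [(10, 10, 10), (20, 20, 20)]
adjusted = transfer_color_stats(obj, scene)
```

Real implementations typically work in a decorrelated color space and combine this global adjustment with gradient-domain blending near the boundary; the sketch only demonstrates the statistics-transfer step.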

Description

Technical field:

[0001] The present invention relates to a method for the consistent fusion of video images. Specifically, it relates to technologies such as the automatic extraction of video foreground objects, seamless fusion between videos from different sources, and consistent processing of lighting and color tone across different video scenes.

Background technique:

[0002] Extracting foreground objects from still images or video sequences is a very important application of video image editing. Extraction from static images is the basis of video object extraction. Generally, the user interactively specifies some foreground and background regions, and a statistical method then estimates the foreground/background classification of the unknown region from the known information; alternatively, a simple boundary-tracking method is used first, followed by statistical optimization of the alpha matte. At present, research on matting techniques for static images has been relatively…
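The interactive matting workflow in [0002] — the user marks known foreground and background samples, and the unknown region is then classified statistically — can be sketched in a minimal form. This is a toy nearest-class-mean classifier on grayscale values, introduced purely for illustration; real matting estimates a continuous alpha matte from much richer color statistics.

```python
# Illustrative sketch of statistical foreground/background estimation from
# user-marked samples. Values are toy grayscale intensities; function and
# variable names are hypothetical.
from statistics import mean

def classify_unknown(fg_samples, bg_samples, unknown):
    """Label each unknown pixel 'fg' or 'bg' by distance to the class means."""
    fg_mean = mean(fg_samples)
    bg_mean = mean(bg_samples)
    return ["fg" if abs(v - fg_mean) < abs(v - bg_mean) else "bg"
            for v in unknown]

# Usage: bright strokes mark foreground, dark strokes mark background.
labels = classify_unknown([200, 220], [10, 30], [190, 25])
```

A production matting system would replace the hard fg/bg decision with a fractional alpha per pixel and optimize it globally, as the description notes.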


Application Information

IPC(8): H04N5/262
Inventors: 张赟, 童若锋, 唐敏
Owner ZHEJIANG UNIV