
A Video Splicing Method Based on Image Semantic Segmentation

A technique combining semantic segmentation and video splicing, applied in the field of image processing to achieve high-quality splicing

Active Publication Date: 2020-09-29
CHINESE ACAD OF SURVEYING & MAPPING

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a video mosaic method based on image semantic segmentation, thereby solving the aforementioned problems in the prior art.

Method used


Examples


Embodiment 2

[0103] As shown in Figures 2 to 4, video splicing in this embodiment is a technique for seamlessly splicing multiple video sequences with overlapping parts into wide-view or even panoramic videos. Among these cases, video splicing of fixed multi-cameras in static scenes is the most common, for example fixed-angle traffic surveillance cameras and indoor surveillance cameras. The common method of static video stitching is to select matching feature points with the same characteristics in the overlapping areas of multiple videos, and then use these feature points to perform geometric transformation and fusion stitching of the videos. Therefore, the more accurate and numerous the feature points, the better the matching and stitching effect; a large overlapping area better satisfies this requirement, so overly small overlapping areas should be avoided. However, under normal circumstances, static video images have their own characteristics, such as traffic surveill...
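The geometric-transformation step described above is conventionally done by estimating a homography from the matched feature points and warping one video frame into the other's coordinate frame. As an illustrative sketch only (the patent's implementation is in C++ with Caffe, and the function names below are hypothetical), the standard DLT estimation can be written in NumPy as:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of matched feature points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply homography H to (N, 2) points in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

With the homography in hand, each overlapping frame can be warped into a common reference and the overlap region blended; the quality of this step depends directly on the accuracy of the matched points, which is why the patent constrains matching with semantic segmentation.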

Embodiment 3

[0109] In this embodiment, in order to better illustrate the effects of the present invention, the splicing effects of the traditional SIFT-based splicing method and the method of the present invention are compared on actual data in the same environment. The experimental environment is an Intel Core i7-6700K processor with a main frequency of 4.00 GHz and 16 GB of memory; the method is implemented in C++ using the Caffe deep learning framework.

[0110] In this embodiment, the experimental data comprise 54 typical intersections in Linyi City, Shandong Province, with video data from 132 high-definition cameras; the video image size is 1920x1080 pixels. The earth-observation remote sensing data are high-resolution orthophoto images with a resolution of 0.1 m. The video frames of 100 high-definition cameras and 36 orthophotos of the intersection areas are used to make the training set; the video frames of 20 high-definition cameras and 10 orthophotos of the interse...



Abstract

The invention discloses a video mosaic method based on image semantic segmentation. The method includes: acquiring a single-frame video image collected by a video sensor; accurately acquiring a remote sensing image of the target area, according to the video position information combined with the shape features of the ground objects in the single-frame image, as the stitching reference background; using a fully convolutional neural network to semantically segment the video single-frame images and the stitching reference background image; combining the segmentation results with a matching method based on the Euclidean distance of feature vectors, using the segmentation results as the matching constraint for selecting the set of matching feature points; matching each video frame to the stitching reference background image according to the selected matching feature point set; and fusing all matching results in time series to obtain the final video mosaic. The advantages are more accurate feature point matching and high-quality video stitching; the method is suitable for multi-video stitching with large overlapping areas and can also handle multi-video stitching with small overlapping areas.
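The core idea in the abstract is to use segmentation labels as a matching constraint on top of Euclidean-distance descriptor matching: a feature point in one image is only allowed to match candidates of the same semantic class in the other. A minimal NumPy sketch of that idea, with a Lowe-style ratio test (function and parameter names are hypothetical, not from the patent):

```python
import numpy as np

def match_features(desc_a, desc_b, labels_a, labels_b, ratio=0.8):
    """Match descriptors by Euclidean distance, restricting candidates to
    points whose semantic-segmentation labels agree, then applying a
    nearest/second-nearest ratio test. Returns a list of (i, j) index pairs."""
    matches = []
    for i, (d, la) in enumerate(zip(desc_a, labels_a)):
        # Segmentation constraint: only consider points in B of the same class.
        cand = np.flatnonzero(labels_b == la)
        if len(cand) < 2:
            continue
        dists = np.linalg.norm(desc_b[cand] - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Ratio test: accept only clearly unambiguous nearest neighbors.
        if best < ratio * second:
            matches.append((i, int(cand[order[0]])))
    return matches
```

The constraint prunes cross-class false matches (e.g. a road-surface feature matching a building corner), which is what allows reliable matching even when the overlapping area, and hence the number of candidate points, is small.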

Description

Technical Field
[0001] The invention relates to the technical field of image processing, and in particular to a video splicing method based on image semantic segmentation.
Background Technique
[0002] Video stitching is an extension of image stitching; it refers to the technology of seamlessly splicing several overlapping video sequences (multi-temporal, multi-angle, multi-sensor acquisition) into wide-view or even panoramic videos. The stitched panoramic videos can be widely used in the continuous tracking and monitoring of urban conditions such as public security and traffic. According to camera settings and application scenarios, video stitching can be divided into three types: fixed multi-camera video stitching in static scenes, fixed-camera video stitching in moving scenes, and non-rigidly fixed camera video stitching in mixed dynamic and static scenes. Fixed multi-camera video stitching in static scenes is the most common. [0003] Video splicing in static...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T3/40; G06T5/50; G06T7/13; G06K9/34; G06K9/46; G06K9/62
CPC: G06T3/4038; G06T5/50; G06T7/13; G06T2207/10016; G06T2207/20221; G06V10/267; G06V10/462; G06F18/22
Inventor: 李成名, 刘嗣超, 赵占杰, 武鹏达, 王飞, 刘振东, 陈汉生
Owner CHINESE ACAD OF SURVEYING & MAPPING