
Video stitching method based on image semantic segmentation

A semantic-segmentation-based video stitching technology in the field of image processing, achieving high-quality stitching

Active Publication Date: 2020-01-07
CHINESE ACAD OF SURVEYING & MAPPING
11 Cites · 15 Cited by

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a video stitching method based on image semantic segmentation, thereby solving the foregoing problems in the prior art.


Examples


Embodiment 2

[0103] As shown in Figures 2 to 4, in this embodiment, video stitching is a technique for seamlessly combining multiple video sequences with overlapping regions into a wide-view or even panoramic video. Stitching of fixed multi-camera video in static scenes is the most common case, for example fixed-angle traffic surveillance cameras and indoor surveillance cameras. The usual approach to static video stitching is to select matching feature points with identical characteristics in the overlapping regions of the videos, and then use those points to perform geometric transformation and fusion stitching. The more feature points there are, and the more accurate they are, the better the matching and stitching results; a large overlapping area better satisfies this requirement, so in this case an overly small overlapping area must be avoided. Under normal circumstances, however, static video images have their own characteristics, such as traffic surveill...
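The feature-point pipeline this embodiment describes (match points in the overlap region, then compute a geometric transformation from them) can be illustrated with a minimal homography estimation via the direct linear transform (DLT). This is a generic sketch of the standard technique, not the patent's own implementation; the function names and sample points are illustrative, and production systems typically add RANSAC-style outlier rejection on top:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src points to dst points.

    src, dst: (N, 2) arrays of matched feature points, N >= 4,
    not all collinear. Solves the 2N x 9 DLT system A h = 0 by SVD.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The homography is the right singular vector for the smallest
    # singular value, reshaped to 3x3 and normalized.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a 2D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With enough well-spread, accurate matches the recovered transformation is stable, which is exactly why the embodiment stresses the number and accuracy of feature points in the overlap.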

Embodiment 3

[0109] In this embodiment, to better illustrate the effects of the present invention, the stitching results of the traditional SIFT-based method and the method of the present invention are compared on actual data in the same environment. The experimental environment is an Intel Core i7-6700K processor with a 4.00 GHz clock speed and 16 GB of memory. The method is implemented in C++ and uses the Caffe deep learning framework.

[0110] In this embodiment, the experimental data consist of video from 132 high-definition cameras at 54 typical intersections in Linyi City, Shandong Province; the video image size is 1920x1080 pixels. The Earth-observation remote sensing data are high-resolution orthophotos with a resolution of 0.1 m. Video frames from 100 high-definition cameras and 36 orthophotos of the intersection areas are used to build the training set, and video frames from 20 high-definition cameras and 10 orthophotos of the interse...



Abstract

The invention discloses a video stitching method based on image semantic segmentation, comprising the following steps: acquiring a single video frame captured by a given video sensor; using the video position information together with the shape features of ground objects in the frame to accurately acquire a remote sensing image of the target area as the stitching reference background; performing semantic segmentation on the video frame and the stitching reference background image with a fully convolutional neural network; using the segmentation result, combined with a matching method based on the Euclidean distance between feature vectors, as a matching constraint on feature points, and selecting a set of matching feature points; matching each frame of the video to the stitching reference background image according to the selected feature point set; and temporally fusing all matching results to obtain the final stitched video. The method achieves more accurate feature point matching and high-quality video stitching; it is suitable for multi-video stitching with a large overlapping area and also handles multi-video stitching with a small overlapping area well.
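One way to read the abstract's "segmentation result as a matching constraint" step: restrict each feature point's candidate matches to points carrying the same semantic class, then pick the nearest neighbor by descriptor Euclidean distance. The sketch below is a minimal interpretation under that assumption; the function name, the ratio-test threshold, and the toy descriptors are illustrative, not from the patent:

```python
import numpy as np

def match_with_semantic_constraint(desc_a, labels_a, desc_b, labels_b, ratio=0.8):
    """Match descriptors from image A to image B under a semantic constraint.

    desc_a, desc_b: (N, D) feature descriptor arrays.
    labels_a, labels_b: (N,) semantic class ids from segmentation.
    Returns a list of (index_in_a, index_in_b) matches.
    """
    matches = []
    for i, (d, c) in enumerate(zip(desc_a, labels_a)):
        # Semantic constraint: only consider points of the same class.
        cand = np.where(labels_b == c)[0]
        if cand.size < 2:
            continue  # need two candidates for the ratio test
        # Euclidean distance in descriptor space, as in the abstract.
        dists = np.linalg.norm(desc_b[cand] - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Lowe-style ratio test to reject ambiguous matches.
        if best < ratio * second:
            matches.append((i, int(cand[order[0]])))
    return matches
```

Filtering by class first shrinks the candidate pool, which is one plausible mechanism for the "more accurate feature point matching" the abstract claims, especially when the overlap region is small.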

Description

technical field [0001] The invention relates to the technical field of image processing, and in particular to a video stitching method based on image semantic segmentation. Background technique [0002] Video stitching is an extension of image stitching: the technique of seamlessly combining several overlapping video sequences (multi-temporal, multi-angle, multi-sensor) into wide-view or even panoramic videos. The stitched panoramic videos can be widely used for continuous tracking and monitoring of urban conditions such as public security and traffic. Depending on camera setup and application scenario, video stitching can be divided into three types: fixed multi-camera stitching in static scenes, fixed-camera stitching in moving scenes, and non-rigidly fixed camera stitching in mixed dynamic and static scenes. Fixed multi-camera video stitching in static scenes is the most common. [0003] Video stitching in static...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40; G06T5/50; G06T7/13; G06K9/34; G06K9/46; G06K9/62
CPC: G06T3/4038; G06T5/50; G06T7/13; G06T2207/10016; G06T2207/20221; G06V10/267; G06V10/462; G06F18/22
Inventor: 李成名, 刘嗣超, 赵占杰, 武鹏达, 王飞, 刘振东, 陈汉生
Owner CHINESE ACAD OF SURVEYING & MAPPING