
A Real-time Semantic Video Segmentation Method

A video segmentation and semantic segmentation technology in the field of computer vision. It addresses the problems of noisy inter-frame information, time-consuming techniques, and the failure to exploit coherence between video frames, and achieves high accuracy and improved efficiency.

Active Publication Date: 2022-04-19
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

The first type of method treats video as a sequence of image frames. These methods sacrifice a little segmentation accuracy in exchange for real-time performance by reducing the input resolution or pruning the network.
Such methods do not exploit the inter-frame coherence inherent in video.
The second type of method extracts coherent features between frames using optical flow, 3D CNNs, RNNs, and similar techniques, but these techniques are themselves time-consuming and become the bottleneck of semantic video segmentation.
However, the inter-frame coherence information provided by compressed video is noisy compared with that produced by techniques such as optical flow. How to use compressed-domain information while ensuring accurate segmentation is the key problem this method addresses.



Examples


Embodiment

[0070] The following simulation experiment is carried out based on the above method. Since the implementation of this embodiment follows the steps already described, the specific steps are not repeated here; only the experimental results are presented below.

[0071] This embodiment uses ICNet as the lightweight image semantic segmentation CNN model. Several experiments were carried out on the public semantic segmentation dataset Cityscapes, which contains 5000 short video clips, showing that the method significantly improves the efficiency of semantic video segmentation while maintaining accuracy. In the algorithm, the GOP parameter g is set to 12 and the B-frame ratio β is set to 0.
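
To make these settings concrete, the following minimal Python sketch (an illustration, not code from the patent) shows the frame schedule implied by g = 12 and β = 0: one I frame at the start of each group of pictures, every remaining frame coded as a P frame, and no B frames.

```python
def frame_type(index, gop_size=12, b_ratio=0.0):
    """Coding type assumed for frame `index` under the settings above.

    With gop_size = 12, every twelfth frame is an I frame (segmented by a full
    CNN pass); with b_ratio = 0, all remaining frames are P frames, which are
    handled by motion-vector propagation plus residual-based correction.
    """
    assert b_ratio == 0.0, "beta > 0 (B frames) is not covered by this sketch"
    return "I" if index % gop_size == 0 else "P"

# First 13 frames of a clip: I P P P P P P P P P P P I
print("".join(frame_type(i) for i in range(13)))
```

Under this schedule only one frame in twelve requires a full CNN forward pass, which is consistent with the order-of-magnitude efficiency gain reported in the abstract.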

[0072] Comparing the method of the present invention with the traditional frame-by-frame CNN segmentation approach, the algorithm flow shows that the main difference lies in whether the S3-S5 compressed-domain operations are performed. The implementation ...



Abstract

The invention discloses a real-time semantic video segmentation method that greatly accelerates video semantic segmentation. Specifically, it includes the following steps: 1) obtain multiple datasets for training semantic segmentation and define the algorithm objective; 2) train a lightweight CNN model for image semantic segmentation; 3) decode the original video to obtain residual images, motion vectors, and RGB images; 4) if the current frame is an I frame, feed it to the segmentation model obtained in 2) to produce a complete segmentation result; 5) if the current frame is a P frame, use the motion vectors to propagate the segmentation result of the previous frame to the current frame, and use the residual image to select sub-blocks of the current frame for correction; 6) repeat steps 3)-5) until all video frames are segmented. The invention makes full use of the correlation between adjacent frames in a video; the accelerated processing based on compressed-domain information can complete complex segmentation tasks quickly while maintaining high accuracy, with efficiency more than ten times that of common segmentation methods.
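
As an illustration of the per-frame loop in steps 3) to 6), here is a minimal Python sketch of one possible reading of the method. The decoder that yields each frame's type, RGB image, per-block motion vectors, and residual image, the `segment_cnn` callable standing in for the trained lightweight model, the 16-pixel block size, and the residual-energy threshold are all illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def warp_labels(prev_labels, motion_vectors, block=16):
    """Propagate the previous frame's label map by shifting each block
    according to its motion vector (the propagation part of step 5)."""
    h, w = prev_labels.shape
    out = np.empty_like(prev_labels)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block, bx // block]
            sy = int(np.clip(by + dy, 0, h - block))
            sx = int(np.clip(bx + dx, 0, w - block))
            out[by:by + block, bx:bx + block] = prev_labels[sy:sy + block, sx:sx + block]
    return out

def segment_video(decoded_frames, segment_cnn, residual_thresh=1e3, block=16):
    """Steps 3)-6): I frames get a full CNN pass; P frames reuse the previous
    result via motion vectors and re-segment only high-residual blocks."""
    prev = None
    for frame_type, rgb, mv, residual in decoded_frames:
        if frame_type == "I" or prev is None:
            labels = segment_cnn(rgb)                      # step 4: full segmentation
        else:                                              # step 5: P frame
            labels = warp_labels(prev, mv, block)
            fresh = None                                   # lazily computed correction source
            h, w = labels.shape
            for by in range(0, h, block):
                for bx in range(0, w, block):
                    # Select sub-blocks whose residual energy is large, i.e. where
                    # motion compensation alone is unreliable. One simple correction
                    # strategy (the patent's actual scheme may differ): fall back to
                    # a full CNN pass and copy only the flagged blocks.
                    if np.abs(residual[by:by + block, bx:bx + block]).sum() > residual_thresh:
                        if fresh is None:
                            fresh = segment_cnn(rgb)
                        labels[by:by + block, bx:bx + block] = fresh[by:by + block, bx:bx + block]
        prev = labels
        yield labels
```

In this reading a P frame costs only block copies and a per-block threshold test, and the full CNN runs on it only when some block's residual indicates that propagation alone is unreliable, which is how the compressed-domain information yields the claimed acceleration.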

Description

Technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a real-time semantic video segmentation method.

Background technique

[0002] Semantic video segmentation is a computer vision task that assigns a semantic category to each pixel of every frame of a video. Real-time semantic video segmentation places requirements on segmentation speed, generally above 24 frames per second. Current state-of-the-art semantic video segmentation methods are all based on convolutional neural network (CNN) machine learning and can be roughly divided into two categories: those based on sequences of image frames and those based directly on video. The first type treats video as a sequence of image frames; these methods sacrifice a little segmentation accuracy in exchange for real-time performance by reducing the input resolution or pruning the network. Such methods do not exploit the inter-frame coherence inherent in video...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/10
CPC: G06T7/10; G06T2207/10016; G06T2207/20084; G06T2207/20081
Inventor: 冯君逸, 李颂元, 李玺
Owner: ZHEJIANG UNIV