A Video Semantic Scene Segmentation and Labeling Method

A semantic scene and video technology, applied in the field of computer video processing, which solves the problem that videos containing multiple semantic scenes cannot be segmented and labeled according to semantics, and achieves the effect of improving the user's viewing experience and enjoyment.

Inactive Publication Date: 2020-06-30
BEIJING JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

In the current scene recognition technology, there is no effective method to segment and label videos containing multiple semantic scenes according to semantics.



Examples


Embodiment 1

[0051] This embodiment provides a method for semantic scene segmentation and labeling of video sequences. With reference to figure 1, the method is described in detail as shown in figure 1:

[0052] Step 201: use a set of labeled scene images to train a deep convolutional neural network and construct a scene classifier; the scene classifier can predict the probability that an input image belongs to each scene category;

[0053] In this embodiment, the set of labeled scene images may use existing image sets such as Places and SUN397, or images of scenes of interest may be collected to construct the scene image set used to train the scene classifier. The scene categories are the categories available for semantic scene annotation of the video;

[0054] The structure of the deep convolutional neural network can use a classic network structure such as VGG-Net or ResNet. The output of the last layer of the network is the distribution of probabilities over the scene categories.
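A minimal training sketch for step 201 is given below. It assumes PyTorch and torchvision (not named in the patent), a Places/SUN397-style folder of labeled scene images, and a ResNet-18 backbone standing in for the "classic network structure such as VGG-Net or ResNet"; the function names and all hyperparameters are illustrative, not taken from the patent.

```python
# Sketch of step 201: fine-tune a deep CNN as a scene classifier.
# Assumptions (not from the patent text): PyTorch/torchvision, an
# ImageFolder-style directory of labeled scene images, and a ResNet-18
# backbone whose last layer outputs per-category scores.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def build_scene_classifier(num_scene_classes: int) -> nn.Module:
    """ResNet backbone whose final layer scores each scene category."""
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_scene_classes)
    return net

def train_scene_classifier(image_root: str, num_scene_classes: int,
                           epochs: int = 5, device: str = "cpu") -> nn.Module:
    """Train on a labeled scene image set laid out as image_root/<scene>/<img>.jpg."""
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(image_root, transform=tfm)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    model = build_scene_classifier(num_scene_classes).to(device)
    criterion = nn.CrossEntropyLoss()  # softmax + NLL over scene categories
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

def predict_scene_probabilities(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Probability that one preprocessed image (3x224x224) belongs to each scene category."""
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
```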



Abstract

The video semantic scene segmentation and labeling method of the present invention includes the following steps: offline training of a deep convolutional neural network on a labeled scene image set to construct a scene classifier; calculating the similarity between adjacent video frames in the video sequence and grouping video frames according to similarity; adaptively adjusting the similarity threshold to obtain video frame groups with a uniform distribution of frames; merging frame groups with too few frames and splitting frame groups with too many frames to rescale the grouping result; selecting representative video frames for each frame group; using the scene classifier to identify the scene category of each frame group; and performing semantic scene segmentation and labeling of the video sequence. The present invention provides an effective means of solving the video retrieval and management problem, and improves the user's experience and enjoyment of watching videos.
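The grouping and rescaling stages summarized above can be sketched as follows. The abstract does not specify the similarity measure or the adaptive rule, so this sketch assumes HSV color-histogram correlation (via OpenCV/NumPy) as the adjacent-frame similarity, a simple loop that relaxes or tightens the threshold until the average group size is near a target, and the middle frame of each group as its representative; all function names and parameters are illustrative.

```python
# Illustrative sketch of the frame grouping, rescaling, and representative-frame
# selection stages; the similarity measure and adaptive rule are assumptions.
from typing import List
import cv2
import numpy as np

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two BGR frames via HSV color-histogram correlation in [-1, 1]."""
    hists = []
    for frame in (a, b):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hists.append(cv2.normalize(h, h).flatten())
    return float(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL))

def split_by_similarity(frames: List[np.ndarray], threshold: float) -> List[List[int]]:
    """Start a new group whenever adjacent-frame similarity drops below threshold."""
    if not frames:
        return []
    groups, current = [], [0]
    for i in range(1, len(frames)):
        if frame_similarity(frames[i - 1], frames[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def adaptive_grouping(frames: List[np.ndarray], threshold: float = 0.9,
                      step: float = 0.02, target: int = 30,
                      max_iters: int = 10) -> List[List[int]]:
    """Relax or tighten the threshold until the average group size is near
    `target` frames (a crude stand-in for the patent's adaptive adjustment)."""
    groups = split_by_similarity(frames, threshold)
    for _ in range(max_iters):
        mean_size = len(frames) / max(len(groups), 1)
        if mean_size < 0.5 * target:
            threshold -= step      # too fragmented: be more tolerant
        elif mean_size > 2.0 * target:
            threshold += step      # too coarse: be stricter
        else:
            break
        groups = split_by_similarity(frames, threshold)
    return groups

def rescale_groups(groups: List[List[int]], min_len: int = 5,
                   max_len: int = 300) -> List[List[int]]:
    """Merge too-small groups into their predecessor and split too-large ones."""
    merged: List[List[int]] = []
    for g in groups:
        if merged and len(g) < min_len:
            merged[-1].extend(g)
        else:
            merged.append(list(g))
    rescaled: List[List[int]] = []
    for g in merged:
        for start in range(0, len(g), max_len):
            rescaled.append(g[start:start + max_len])
    return rescaled

def representative_frames(groups: List[List[int]]) -> List[int]:
    """Middle frame index of each group as its representative."""
    return [g[len(g) // 2] for g in groups]
```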

Description

technical field

[0001] The invention relates to the technical field of computer video processing, and in particular to a video semantic scene segmentation and labeling method.

Background technique

[0002] With the rapid development of digital multimedia and Internet technology, a large amount of digital video data is generated every day. Massive video data poses a huge challenge to the effective retrieval and management of videos. Segmenting and annotating videos according to semantic scenes plays an important role in solving video retrieval and management problems. In addition, segmenting and labeling video content according to semantic scenes can effectively improve the user's experience and enjoyment of watching videos. Currently, scene recognition mainly includes static image scene recognition and video scene recognition. Static image scene recognition refers to classifying static scene images into corresponding semantic scene categories. Video scene recognition refers to identifying the semantic scene categories contained in a video sequence.


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V20/46, G06V20/49, G06F18/2411, G06F18/22, G06F18/214
Inventor: 白双
Owner: BEIJING JIAOTONG UNIV