
Video semantic scene segmentation and labeling method

A video semantic-scene technology in the field of computer video processing. It addresses the lack of an effective method for segmenting and labeling videos that contain multiple semantic scenes, and improves the user's viewing experience and enjoyment.

Inactive Publication Date: 2018-09-14
BEIJING JIAOTONG UNIV


Problems solved by technology

Current scene recognition technology provides no effective method for segmenting and labeling videos that contain multiple semantic scenes according to their semantics.

Method used



Examples


Embodiment 1

[0051] This embodiment provides a method for semantic scene segmentation and labeling of video sequences. The method is described in detail below with reference to Figure 1:

[0052] Step 201: train a deep convolutional neural network on a set of labeled scene images to construct a scene classifier; the scene classifier predicts the probability that an input image belongs to each scene category;
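The per-category probability output described in this step is typically produced by a softmax over the network's final-layer scores. The following is a minimal sketch, not the patent's implementation: the category names and score values are invented for illustration, and only the score-to-probability conversion is shown.

```python
import numpy as np

def softmax(logits):
    """Convert raw final-layer scores into a probability distribution."""
    z = logits - np.max(logits)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw scores for four scene categories,
# e.g. ["beach", "forest", "street", "indoor"].
logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(logits)

# The predicted scene is the category with the highest probability.
predicted = int(np.argmax(probs))
```

A real scene classifier as described here would obtain `logits` from a trained VGG-Net- or ResNet-style network; the softmax step at the end is the same.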

[0053] In this embodiment, the set of labeled scene images can use existing image sets such as Places and SUN397, or images of scenes of interest can be collected to build the set. This labeled image set is used to train the scene classifier, and its scene categories are the categories available for semantic scene annotation of the video;

[0054] The structure of the deep convolutional neural network can use a classic network architecture such as VGG-Net or ResNet. The output of the last layer of the network is the dist...



Abstract

The video semantic scene segmentation and labeling method of the present invention comprises the following steps: offline training a deep convolutional neural network on a labeled scene image set to construct a scene classifier; calculating the similarity between adjacent video frames in a video sequence and grouping the frames according to that similarity; adaptively adjusting the similarity threshold to obtain frame groups with a uniform distribution of frame counts; merging groups that contain too few frames and splitting groups that contain too many frames to readjust the grouping result; selecting a representative frame for each frame group; using the scene classifier to identify the scene category of each frame group; and performing semantic scene segmentation and labeling on the video sequence. The method provides an effective means for video retrieval and management, and improves the experience and enjoyment of users watching the videos.
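The grouping step in the abstract can be sketched as follows. This is a toy illustration, assuming histogram intersection as the adjacent-frame similarity measure (this excerpt does not specify which similarity metric the invention uses); the frame histograms are invented data with an obvious scene change after the third frame.

```python
def histogram_similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def group_frames(histograms, threshold):
    """Start a new group whenever adjacent-frame similarity drops below threshold."""
    groups = [[0]]
    for i in range(1, len(histograms)):
        if histogram_similarity(histograms[i - 1], histograms[i]) >= threshold:
            groups[-1].append(i)   # similar enough: stay in the current group
        else:
            groups.append([i])     # similarity dropped: open a new group
    return groups

# Toy normalized 3-bin color histograms for six frames.
frames = [
    [0.80, 0.10, 0.10],
    [0.75, 0.15, 0.10],
    [0.70, 0.20, 0.10],
    [0.10, 0.10, 0.80],   # scene change: similarity to the previous frame drops
    [0.15, 0.05, 0.80],
    [0.10, 0.15, 0.75],
]
groups = group_frames(frames, threshold=0.8)
```

Raising or lowering `threshold` changes how readily a new group is opened; that is the knob the adaptive-threshold step of the method tunes until the number of frames per group is roughly uniform, before the merge/split readjustment described above.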

Description

Technical field

[0001] The invention relates to the technical field of computer video processing, and in particular to a video semantic scene segmentation and labeling method.

Background technique

[0002] With the rapid development of digital multimedia and Internet technology, a large amount of digital video data is generated every day. This massive volume of video data poses a huge challenge to effective video retrieval and management. Segmenting and annotating videos according to semantic scenes plays an important role in solving these retrieval and management problems, and can also effectively improve the user's experience and enjoyment of watching videos. Currently, scene recognition mainly comprises static image scene recognition and video scene recognition. Static image scene recognition refers to classifying static scene images into their corresponding semantic scene categories. Video scene recognition refe...


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V20/46, G06V20/49, G06F18/2411, G06F18/22, G06F18/214
Inventor: 白双
Owner: BEIJING JIAOTONG UNIV