
Collaborative filtering-based teaching video labeling method

A teaching-video labeling method based on collaborative filtering, applied in the field of image processing. It addresses the low labeling accuracy caused by the insignificant visual-feature differences among teaching-video scenes, and achieves higher labeling accuracy.

Active Publication Date: 2015-06-10
山西恒奕信源科技有限公司


Problems solved by technology

However, current machine-learning-based video annotation methods rely on the visual features of the video, such as color, shape, and texture. Because the scenes of a teaching video are uniform and their visual features differ little, these methods achieve low accuracy when labeling teaching videos.




Embodiment Construction

[0032] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0033] Referring to figure 1, the implementation steps of the present invention are as follows:

[0034] Step 1: Input the teaching video and extract subtitle key frames from it according to the subtitles, obtaining D key frames.

[0035] The teaching video input in this step is shown in figure 2, which contains 12 screenshots, 2a-2l. The key frames in figure 2 are extracted with the following steps:

[0036] 1.1) Sample one image from the teaching video every 20 frames, obtaining Q image frames, Q > 0;

[0037] 1.2) Select the sub-region in the bottom 1/4 of each image frame, and compute Y_a, the sum of the absolute values of the pixel differences between corresponding positions of this sub-region and those of the other image frames;

[0038] 1.3) Set the threshold P_a to 1/10 of the number of pi...
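Steps 1.1-1.3 can be sketched as follows. This is a minimal illustration, not the patented implementation: frames are assumed to be grayscale numpy arrays, and since the threshold definition is truncated above, the sketch assumes P_a is 1/10 of the subtitle sub-region's pixel count and compares a normalized difference sum against it.

```python
import numpy as np

def extract_subtitle_keyframes(frames, step=20, thresh_ratio=0.10):
    """Sketch of subtitle key-frame extraction (steps 1.1-1.3).

    `frames`: list of grayscale images as 2-D numpy arrays.
    The function name, the normalization, and the threshold rule
    are assumptions made for illustration.
    """
    # 1.1) sample one frame every `step` frames -> Q frames
    sampled = frames[::step]
    if not sampled:
        return []

    h, w = sampled[0].shape
    # 1.2) subtitle sub-region: bottom 1/4 of each sampled frame
    subs = [f[3 * h // 4:, :].astype(np.int64) for f in sampled]

    # 1.3) threshold P_a: assumed to be 1/10 of the sub-region pixel count
    p_a = thresh_ratio * subs[0].size

    keyframes = [sampled[0]]  # keep the first sampled frame
    prev = subs[0]
    for frame, sub in zip(sampled[1:], subs[1:]):
        # Y_a: sum of absolute pixel differences between sub-regions,
        # divided by 255 so it is in "fully changed pixel" units
        # (the normalization is an assumption)
        y_a = np.abs(sub - prev).sum() / 255.0
        if y_a > p_a:  # many subtitle pixels changed -> new key frame
            keyframes.append(frame)
        prev = sub
    return keyframes
```

A subtitle change flips many pixels in the bottom strip at once, so comparing only that sub-region keeps the detector insensitive to motion elsewhere in the frame.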



Abstract

The invention discloses a collaborative filtering-based teaching video labeling method, which mainly addresses the low accuracy of teaching-video labeling in the prior art. The method is implemented through the following steps: inputting a teaching video and extracting subtitle key frames from it according to the captions, obtaining D key frames; extracting the captions from the D key frames with optical character recognition software, then correcting and pruning the extracted text to obtain D text documents; segmenting the teaching video into M shots by combining the D text documents with a Gibbs sampler; labeling a part of the M shots, computing the cosine similarity between the labeled and unlabeled shots through a collaborative filtering method, and selecting the five words with the highest cosine similarity to label each unlabeled shot. Because the method takes the caption information in the teaching video into account, it describes the teaching video effectively, improves labeling accuracy, and is applicable to video teaching.
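The final labeling step can be sketched as below. This is a hedged illustration, not the patented algorithm: the bag-of-words representation of shot text and the scoring rule (summing, for each candidate word, the cosine similarity of every labeled shot that carries it) are assumptions, since the abstract only states that the five words with the highest cosine similarity are selected.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def label_shot(unlabeled_text, labeled_shots, k=5):
    """Sketch of labeling one unlabeled shot.

    `labeled_shots` is a list of (caption_text, labels) pairs for the
    already-labeled shots; the exact aggregation rule is an assumption.
    """
    query = Counter(unlabeled_text.split())
    scores = Counter()
    for text, labels in labeled_shots:
        sim = cosine(query, Counter(text.split()))
        for word in labels:
            # accumulate the similarity of every labeled shot
            # that carries this candidate label word
            scores[word] += sim
    return [w for w, _ in scores.most_common(k)]
```

With k=5 this returns the five best-scoring candidate words, matching the abstract's description of selecting five labels per unlabeled shot.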

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and further relates to a video labeling method in the technical field of pattern recognition, which can be used in network teaching.

Background technique

[0002] With the rapid development of Internet technology and multimedia technology, learning based on online learning platforms has gradually become an important supplement to traditional classroom learning. However, thousands of teaching videos are uploaded to the Internet every hour; efficiently and quickly finding the videos a learner needs among this mass of teaching videos is an urgent research topic. The most common approach is to tag the videos, which helps online learners find the desired videos quickly and efficiently.

[0003] Existing video annotation methods are generally divided into three categories: manual annotation, rule-based annotation, and mac...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06F17/30
Inventors: 王斌, 丁海刚, 关钦, 高新波, 牛振兴, 王敏, 宗汝, 牛丽军
Owner 山西恒奕信源科技有限公司