Virtual slice method for teaching video

A virtual slicing technology for teaching videos, applied in the field of virtual slicing of teaching videos, which solves the problem that existing methods affect the user experience and cannot provide users with slice positioning information, and achieves the effect of improving efficiency.

Active Publication Date: 2018-08-17
创而新(北京)教育科技有限公司

AI Technical Summary

Problems solved by technology

[0005] Therefore, applying the existing scene detection or image detection methods to slice teaching videos cannot provide users with slice positioning information, which affects the user experience.

Examples

Embodiment

[0042] This embodiment discloses a method for virtual slicing of teaching videos; the steps are as follows:

[0043] Step S1: first extract the audio data from the teaching video, then convert the audio data into sentence texts, and combine the sentence texts to obtain the first text set, for example the first text set ST = {st1, st2, st3, ..., stm}, where the elements st1 through stm are the 1st through m-th sentence texts in the first text set, respectively.
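
The following is a minimal sketch of Step S1, not the patent's reference implementation: the `SentenceText` container and the injected `transcribe` callback are assumptions standing in for whatever speech-to-text backend is actually used, and are only expected to return one entry per recognized sentence with its text and its start and end times.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SentenceText:
    text: str     # recognized sentence content
    start: float  # start time within the video, in seconds
    end: float    # end time within the video, in seconds

def build_first_text_set(
    audio_path: str,
    transcribe: Callable[[str], List[Dict]],
) -> List[SentenceText]:
    """Build the first text set ST = [st1, ..., stm] from the audio track.

    `transcribe` is any speech-to-text backend that returns, for each
    recognized sentence, a dict with 'text', 'start' and 'end' keys.
    """
    return [
        SentenceText(s["text"], s["start"], s["end"])
        for s in transcribe(audio_path)
    ]
```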

[0044] In this embodiment, the FFMPEG open source framework is used to extract the audio from the teaching video in MP4 format. When the teaching video is obtained, it is first determined whether the teaching video is in a video format supported by FFMPEG. FFMPEG supports the mainstream video formats on the market, but an unsupported format is still possible; in that case, the teaching video must first be converted to a supported format. In this embodiment, if there are multiple audio tracks in the teaching video extra...
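
As a rough illustration of the extraction described in paragraph [0044], the sketch below drives the ffmpeg command-line tool from Python; the WAV / 16 kHz mono output settings and the choice of the first audio track are assumptions suited to a typical speech-recognition input, not parameters stated in the patent.

```python
import subprocess

def extract_audio(video_path: str, audio_path: str = "audio.wav") -> str:
    """Extract the audio track of a teaching video using the ffmpeg CLI.

    -vn drops the video stream; -map 0:a:0 selects the first audio track
    when several are present; pcm_s16le at 16 kHz mono is a common ASR
    input format (an illustrative assumption).
    """
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", video_path,
            "-vn", "-map", "0:a:0",
            "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1",
            audio_path,
        ],
        check=True,
    )
    return audio_path
```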

Abstract

The invention discloses a virtual slice method for a teaching video. The method comprises the following steps of: firstly, extracting audio data from a teaching video, and combining sentence texts obtained through conversion of the audio data to obtain a first text set; determining the start time, the end time and the content information of each sentence text, and merging the sentence texts in the first text set in order to obtain an initial video slice set; obtaining keywords of each slice in the initial video slice set; and computing the similarity of two adjacent slices according to the keywords, and finally determining whether the two adjacent slices need to be merged or not according to the similarity of the two adjacent slices, the time interval between the two adjacent slices, the respective time lengths of the two adjacent slices and the total number of the respective sentence texts of the two adjacent slices, in order to obtain a final video slice set. According to the method, a teaching video slice list based on semantic similarity can be provided for a user, and the user can directly access specific knowledge point positions in the video according to the keywords.
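
The merge decision summarized in the abstract can be sketched roughly as follows; the Jaccard keyword similarity and every threshold value are illustrative assumptions, since the abstract only states which quantities enter the decision, not how they are combined.

```python
from typing import Dict, Set

def keyword_similarity(kw_a: Set[str], kw_b: Set[str]) -> float:
    """Jaccard similarity between the keyword sets of two slices."""
    if not kw_a or not kw_b:
        return 0.0
    return len(kw_a & kw_b) / len(kw_a | kw_b)

def should_merge(slice_a: Dict, slice_b: Dict,
                 sim_threshold: float = 0.3,
                 max_gap: float = 5.0,
                 max_duration: float = 300.0,
                 max_sentences: int = 40) -> bool:
    """Decide whether two adjacent slices are merged.

    Per the abstract, the decision depends on the keyword similarity of the
    two slices, the time interval between them, their durations and their
    sentence counts; all thresholds here are hypothetical.
    """
    sim = keyword_similarity(slice_a["keywords"], slice_b["keywords"])
    gap = slice_b["start"] - slice_a["end"]
    merged_duration = slice_b["end"] - slice_a["start"]
    merged_sentences = slice_a["n_sentences"] + slice_b["n_sentences"]
    return (sim >= sim_threshold
            and gap <= max_gap
            and merged_duration <= max_duration
            and merged_sentences <= max_sentences)
```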

Description

Technical Field

[0001] The invention relates to the technical field of video processing, and in particular to a method for virtual slicing of teaching videos.

Background Technique

[0002] Teaching videos are a common type of video. When users watch teaching videos, they often want to jump quickly to the position of a specific knowledge point, but it is often difficult to locate it accurately; multiple adjustments, and even frame-by-frame viewing, may be needed to reach the desired position.

[0003] In order to locate content quickly, the producer can manually slice the teaching video and label it with keywords, allowing users to quickly find the location of content of interest based on the slice information. However, with massive numbers of videos, the cost of manual slice labeling is huge and the approach cannot be scaled up.

[0004] In the prior art, most automatic video slicing schemes are based on scene or image detection, for example for film and television video, det...


Application Information

IPC (IPC-8): H04N21/439, H04N21/8405, H04N21/845, G10L15/26
CPC: G10L15/26, H04N21/439, H04N21/4398, H04N21/8405, H04N21/845
Inventors: 任光杰, 黄海晖, 张锐, 韩后, 林振潮, 许骏
Owner: 创而新(北京)教育科技有限公司