
Conference video splitting method and system

A conference-video technology in the field of video processing. It addresses problems of existing splitting approaches, such as conferences differing from news broadcasts, the heavy workload of building a speaker voice-fingerprint database, and the large amount of manpower required by manual splitting, achieving the effects of improved accuracy, guaranteed segment integrity, and reduced computation.

Active Publication Date: 2020-03-13
新华智云科技有限公司


Problems solved by technology

[0004] For transition recognition, since a conference usually contains no scene transitions, it cannot be used to split a conference video;
[0005] For voice fingerprinting, the current technology is not mature enough to segment accurately in scenes with background noise and multi-person conversation; moreover, a conference usually has several speakers, so building a voice-fingerprint database for them involves a heavy workload;
[0006] For face recognition, current approaches judge whether a person is the host by computing the temporal and spatial distribution of the faces appearing in the video. A conference, however, differs from a news broadcast: splitting a conference means segmenting it by the content of each speaker's speech, and because each speaker talks for a different length of time, a speaker's identity cannot be confirmed simply from the proportion of the whole video in which a face appears;
[0007] In summary, existing news-splitting methods cannot be transferred directly to conference reports. Today conference reports are usually split manually, i.e. an operator previews the conference video and marks the split points; this requires a large amount of manpower and is inefficient, so the existing technology needs further improvement.

Examples


Embodiment 1

[0054] Embodiment 1. A method for splitting conference videos, as shown in Figure 1, includes the following steps:

[0055] S100. Acquire the video to be processed, extract the voice text data and the face data from the video to be processed, map the face data to the voice text data according to time, and generate voice sentence data; the voice sentence data includes time data and a face identifier, where the time data is the starting timestamp;

[0056] The start time stamp is the start time of the sentence corresponding to the voice sentence data.
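A minimal sketch of the time-based mapping in S100, assuming hypothetical record types not given in the patent (a transcript sentence with its starting timestamp, and a timestamped face detection with a face identifier); here a sentence's span is taken to run until the next sentence starts, and the face seen most often in that span is attached to the sentence:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FaceDetection:
    """Hypothetical record: one recognized face at a point in time."""
    timestamp: float          # seconds from the start of the video
    face_id: str

@dataclass
class VoiceSentence:
    """Hypothetical 'voice sentence data' record (S100)."""
    text: str
    start_ts: float           # starting timestamp of the sentence
    face_id: Optional[str] = None

def map_faces_to_sentences(sentences: List[VoiceSentence],
                           faces: List[FaceDetection]) -> List[VoiceSentence]:
    """Attach to each sentence the face that appears most often while it is spoken."""
    for i, sent in enumerate(sentences):
        # Treat the sentence span as running until the next sentence begins.
        end_ts = sentences[i + 1].start_ts if i + 1 < len(sentences) else float("inf")
        hits = [f.face_id for f in faces if sent.start_ts <= f.timestamp < end_ts]
        if hits:
            sent.face_id = max(set(hits), key=hits.count)  # majority vote
    return sentences
```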

[0056] S200. Judge whether the identity corresponding to each face identifier is a conference speaker, and obtain a judgment result;

[0058] S300. Generate split point data according to the judgment result and the time data, and split the video to be processed based on the split point data to generate split segments.
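A hedged sketch of the split-point generation in S300, under the same assumed records as above and with the speaker judgment of S200 abstracted into a placeholder predicate `is_speaker`: the start timestamp of each sentence in which a different conference speaker begins talking is collected as a split point.

```python
from typing import Callable, List

def make_split_points(sentences: List[VoiceSentence],
                      is_speaker: Callable[[str], bool]) -> List[float]:
    """Collect split points: the start timestamp of every sentence whose face
    identifier belongs to a conference speaker different from the previous one."""
    points: List[float] = []
    current = None
    for sent in sentences:
        if sent.face_id is None or not is_speaker(sent.face_id):
            continue                  # ignore sentences not tied to a conference speaker
        if sent.face_id != current:   # speaker change: a new segment starts here
            points.append(sent.start_ts)
            current = sent.face_id
    return points
```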

[0059] In this embodiment, the face data and the voice text data are mapped according to time, so...
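The patent does not prescribe how the video is physically cut at the split points; one common option (an assumption here, not the claimed method) is stream-copy extraction with ffmpeg between consecutive split points, where the `duration` argument and the output file names are illustrative:

```python
import subprocess
from typing import List

def cut_video(src: str, split_points: List[float], duration: float) -> None:
    """Cut `src` into consecutive segments at `split_points` (seconds) with ffmpeg
    stream copy; `duration` is the total video length, closing the last segment."""
    bounds = split_points + [duration]
    for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-ss", str(start), "-to", str(end),
             "-c", "copy", f"segment_{i:02d}.mp4"],
            check=True,
        )
```

Stream copy avoids re-encoding but can only cut at keyframes; frame-accurate boundaries would require re-encoding the segments.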

Embodiment 2

[0091] Embodiment 2 changes the time data in Embodiment 1 from "start timestamp" to "end timestamp"; the rest is the same as Embodiment 1.

[0092] In this embodiment, the end timestamp is the end time of the sentence corresponding to the voice sentence data. Here, the end timestamp of the voice sentence data preceding the sentence at which a split occurs is used as the split point to generate the split point data.

Embodiment 3

[0093] Embodiment 3 changes the time data in Embodiment 1 from "start timestamp" to "start timestamp and end timestamp". The specific steps for generating the split point data are:

[0094] Take the start timestamp of the sentence as the first start split point, take the end timestamp of the voice sentence data preceding that sentence as the first end split point, and generate the split point data from the first start split point and the first end split point.

[0095] Because there is a pause between sentences, the design of the first start split point and the first end split point in this embodiment keeps the silent interval out of the head and tail of the resulting segments, improving the user's viewing experience.
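A sketch of this Embodiment-3 variant, assuming the voice sentence record also carries an end timestamp (`end_ts`) as the embodiment requires, and again abstracting S200 as `is_speaker`: each segment opens at the start timestamp of a new conference speaker's first sentence (the first start split point) and closes at the end timestamp of that speaker's last sentence before the next change (taken here as the first end split point), so the inter-sentence silence never leads or trails a segment.

```python
from typing import Callable, List, Tuple

def make_segments(sentences,                       # assumed to carry start_ts, end_ts, face_id
                  is_speaker: Callable[[str], bool]) -> List[Tuple[float, float]]:
    """Return (start, end) pairs bounded by the first start and first end split points."""
    segments: List[Tuple[float, float]] = []
    current, seg_start, seg_end = None, None, None
    for sent in sentences:
        if sent.face_id is None or not is_speaker(sent.face_id):
            continue
        if sent.face_id != current:
            if current is not None:
                segments.append((seg_start, seg_end))   # close the previous speaker's segment
            current, seg_start = sent.face_id, sent.start_ts
        seg_end = sent.end_ts                           # last sentence seen for this speaker
    if current is not None:
        segments.append((seg_start, seg_end))
    return segments
```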


Abstract

The invention discloses a conference video splitting method and system. The method comprises the following steps: obtaining a to-be-processed video; extracting voice text data and face data from the to-be-processed video; mapping the face data into the voice text data according to time to generate voice sentence data, wherein the voice sentence data comprises time data and a face identifier, and the time data comprises a starting timestamp and/or an ending timestamp; judging whether the identity corresponding to each face identifier is a conference speaker and obtaining a judgment result; and generating split point data according to the judgment result and the time data, and splitting the to-be-processed video based on the split point data to generate split segments. According to the invention, conference reports can be split automatically, labor cost is saved, and the splitting efficiency is high.

Description

Technical field

[0001] The invention relates to the field of video processing, in particular to a conference video splitting method and system.

Background technique

[0002] With the development of the Internet, people can watch conference videos instead of attending the conference in person; however, most conferences last a long time, and viewers are usually interested in only some of the clips in the whole conference. In view of this, the industry usually splits the conference video into segments so that users can quickly find the clips they are interested in.

[0003] Nowadays there are many ways to split news video, such as transition recognition, voice fingerprinting and face recognition;

[0004] For transition recognition, since a conference usually contains no scene transitions, it cannot be used to split a conference video;

[0005] For voice fingerprinting, because the current voice fingerprint technology is not ma...


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N7/15; G06K9/00
CPC: H04N7/15; G06V40/172
Inventor: 季学斌, 范梦真
Owner: 新华智云科技有限公司