
Video semantics labeling method and device based on bullet screen and electronic equipment

A semantic-labeling and bullet-screen (barrage) technology, applied in the field of video labeling, which addresses the problems that existing methods ignore the interactive characteristics of bullet comments and produce inaccurate video semantic labels, achieving the effect of improved labeling accuracy

Active Publication Date: 2018-06-01
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0004] However, existing methods that use bullet comments to semantically tag videos determine the time boundaries of plot segments solely from the temporal distribution of the comments. They ignore the interactive nature of bullet comments and do not consider that the content discussed in comments appearing at the current time may relate to the plot of an adjacent time rather than to the plot of the current time. As a result, the plot segmentation is not accurate enough, and the semantic annotation of the video is therefore also not accurate enough.

Method used



Embodiment Construction

[0051] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

[0052] In order to solve the problem that prior-art methods which use bullet comments to semantically tag videos ignore the interactive characteristics of the comments, and do not consider that the content discussed in comments appearing at the current time may relate to the plot of an adjacent time rather than to the plot of the current time, so that the division of ...



Abstract

The embodiment of the invention provides a bullet-screen-based video semantic labeling method. The method includes: obtaining all words in the bullet-screen comments of a target video and their corresponding timestamps; dividing the target video evenly into a preset number of time slices; generating, according to preset probability correspondences between words, topics, and plots, an initial topic set containing the topics corresponding to all the time slices and an initial plot set containing the plots corresponding to all the time slices; generating a dictionary vocabulary set and a vocabulary distribution matrix; calculating temporal prior information for the dictionary vocabulary set; using a preset total probability formula for bullet-screen vocabulary to calculate the probability that each dictionary word corresponds to each topic and each plot; generating plot-topic distribution matrices for the time slices; merging adjacent similar time slices into one time slice; determining the plots corresponding to all time slices; and labeling the target video. Applying the scheme provided by the embodiment of the invention makes video semantic labeling more accurate.
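The formulas and claims behind this abstract are not included in the excerpt, so the Python sketch below only illustrates the overall pipeline the abstract describes: equal-length time slices, a vocabulary distribution matrix, a temporal prior, per-slice plot-topic distributions, and merging of adjacent similar slices. The Gaussian prior, the random topic/plot loadings standing in for the "preset total probability formula", and the cosine-similarity merge threshold are all assumptions made for illustration, not the patented method.

```python
# Minimal sketch of the pipeline described in the abstract. The actual total
# probability formula and prior construction are not disclosed in this excerpt,
# so the probability model below is an illustrative assumption.
import numpy as np

def label_video(words, timestamps, duration, n_slices=20, n_topics=5,
                n_plots=5, merge_threshold=0.9, seed=0):
    rng = np.random.default_rng(seed)
    vocab = sorted(set(words))                        # dictionary vocabulary set
    word_idx = {w: i for i, w in enumerate(vocab)}
    edges = np.linspace(0.0, duration, n_slices + 1)  # equal-length time slices
    centers = (edges[:-1] + edges[1:]) / 2.0

    # Vocabulary distribution matrix: word counts per time slice.
    counts = np.zeros((n_slices, len(vocab)))
    for w, t in zip(words, timestamps):
        s = min(np.searchsorted(edges, t, side="right") - 1, n_slices - 1)
        counts[s, word_idx[w]] += 1

    # Temporal prior: each word's soft affinity to every slice, modelled here
    # as a Gaussian around the word's mean timestamp (an assumption).
    mean_t = np.zeros(len(vocab))
    for w, t in zip(words, timestamps):
        mean_t[word_idx[w]] += t
    mean_t /= np.maximum(counts.sum(axis=0), 1)
    sigma = duration / n_slices
    prior = np.exp(-((centers[:, None] - mean_t[None, :]) ** 2) / (2 * sigma ** 2))

    # Stand-in for the "total probability formula": combine counts with the
    # temporal prior and project onto random topic/plot loadings.
    word_topic = rng.dirichlet(np.ones(n_topics), size=len(vocab))  # p(topic|word)
    word_plot = rng.dirichlet(np.ones(n_plots), size=len(vocab))    # p(plot|word)
    weighted = counts * prior                                       # slice x word
    plot_topic = []                                                 # per-slice plot-topic matrices
    for s in range(n_slices):
        m = (weighted[s][:, None, None]
             * word_plot[:, :, None] * word_topic[:, None, :]).sum(axis=0)
        plot_topic.append(m / max(m.sum(), 1e-12))

    # Merge adjacent slices whose plot-topic distributions are similar.
    segments = [[0]]
    for s in range(1, n_slices):
        a, b = plot_topic[segments[-1][-1]].ravel(), plot_topic[s].ravel()
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if cos >= merge_threshold:
            segments[-1].append(s)
        else:
            segments.append([s])

    # Label each merged segment with its dominant plot index.
    labels = []
    for seg in segments:
        m = sum(plot_topic[s] for s in seg)
        labels.append({"start": edges[seg[0]], "end": edges[seg[-1] + 1],
                       "plot": int(m.sum(axis=1).argmax())})
    return labels
```

A call such as label_video(words, timestamps, duration=2400.0) would return a list of merged time segments, each with a start time, an end time, and a dominant plot index; the parameter values shown are hypothetical.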

Description

Technical Field

[0001] The present invention relates to the technical field of video tagging, and in particular to a bullet-screen-based video semantic labeling method, device, and electronic equipment.

Background

[0002] Online video accounts for a huge share of Internet traffic, and hundreds of videos are uploaded to video platforms worldwide every day, so the management of online video is very important. Video labeling improves the utilization of network traffic resources and the efficiency of online video management, and is a key link in online video management.

[0003] Video websites that have emerged in recent years have added a "bullet screen" (barrage) function, which lets users comment in real time on the episode they are currently watching. Based on this, the rich comment data in the bullet screen can be used to cluster the comments according to their density, and the time slice of each cluster can then be obtained from the earliest and latest...
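The background paragraph breaks off mid-sentence, but the prior-art approach it begins to describe, clustering bullet comments by their temporal density and taking each cluster's earliest and latest timestamps as a time slice, could look roughly like the gap-based one-dimensional clustering sketched below. The max_gap and min_size parameters are hypothetical illustration values, not figures from the patent.

```python
# Illustrative sketch of the density-based prior-art approach mentioned in the
# background: group bullet-comment timestamps into clusters wherever consecutive
# comments are close together, then take each cluster's earliest and latest
# timestamps as a candidate time slice. max_gap and min_size are assumptions.
def density_time_slices(timestamps, max_gap=10.0, min_size=5):
    ts = sorted(timestamps)
    if not ts:
        return []
    clusters, current = [], [ts[0]]
    for t in ts[1:]:
        if t - current[-1] <= max_gap:   # dense enough: same cluster
            current.append(t)
        else:                            # sparse gap: start a new cluster
            clusters.append(current)
            current = [t]
    clusters.append(current)
    # Each sufficiently large cluster yields a (start, end) time slice.
    return [(c[0], c[-1]) for c in clusters if len(c) >= min_size]
```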

Claims


Application Information

IPC (IPC8): G06F17/27; G06F17/30; H04N21/4788; H04N21/488; H04N21/84; H04N21/845
CPC: G06F16/7867; G06F40/284; G06F40/289; H04N21/4788; H04N21/4884; H04N21/84; H04N21/8456
Inventor 王瑞东, 田野, 马建, 王文东, 阙喜戎, 龚向阳
Owner BEIJING UNIV OF POSTS & TELECOMM