
Video annotation anchoring and matching method

A matching method in video technology, applied in the field of anchoring and matching video annotations, which can solve problems such as high complexity, a large amount of calculation, and unsuitability for fast processing on the player side, and achieve the effects of a low calculation amount, rich content, and accurate matching.

Inactive Publication Date: 2016-11-09
GLOBAL TONE COMM TECH
Cites: 5 | Cited by: 3

AI Technical Summary

Problems solved by technology

[0004] Existing audio hashing is based on the calculation of complex features such as linear prediction coefficients and MFCCs, and is designed to resist tampering. It is complex and requires a very large amount of calculation, which is not conducive to fast processing on the player side.

Method used



Examples


Embodiment 1

[0019] Embodiment 1. As shown in Figure 1 and Figure 2, an anchoring method for video annotation includes:

[0020] Step 1, annotate video A;

[0021] Each annotation consists of a position in the video and associated content, including but not limited to video links, advertisements, bullet comments (danmaku), and subtitles; a minimal data-structure sketch follows the examples below.

[0022] Video link annotation: for example, when video A plays from the 20th to the 22nd second, an icon is displayed on an object in the video frame; when the user clicks the icon, specified text, a picture, or a map location is displayed, or a specified URL is opened;

[0023] Advertisement annotation: for example, when video A plays from the 20th to the 22nd second, a graphic advertisement is displayed at a certain position in the video frame; when the user clicks the advertisement, more detailed content is displayed or the view jumps to a purchase page;

[0024] Subtitle annotation: for example, when video A plays from the 20th to the 22nd second, a specified text subtitle is displayed below the video;

...
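The annotation types described above can be represented by a simple record. The following is a minimal sketch in Python; the class name Annotation and its fields (kind, start_s, end_s, x, y, payload) are hypothetical illustrations, since the patent only requires that each annotation carry a position in the video and its associated content.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Annotation:
    """One annotation anchored to a time span and a screen position in video A.

    All field names are illustrative, not taken from the patent text.
    """
    kind: str                  # "link", "ad", "danmaku", "subtitle", ...
    start_s: float             # start of the annotated span, e.g. 20.0
    end_s: float               # end of the annotated span, e.g. 22.0
    x: Optional[float] = None  # horizontal position in the frame (0..1), if any
    y: Optional[float] = None  # vertical position in the frame (0..1), if any
    payload: dict = field(default_factory=dict)  # text, image URL, target URL, ...


# Example: a clickable icon shown on an object between the 20th and 22nd second.
link = Annotation(kind="link", start_s=20.0, end_s=22.0, x=0.6, y=0.4,
                  payload={"url": "https://example.com", "text": "More info"})
```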

Embodiment 2

[0029] Embodiment 2. On the basis of Embodiment 1, more preferably, the hash feature value of the video file is specifically the hash feature value of the video's audio channel.

[0030] After decoding, each audio channel of the video is a continuous sequence of numbers representing the sample values obtained by sampling the sound at a fixed rate per second, and the zero-crossing rate of this signal is robust to editing. The audio samples are down-sampled, and the zero-crossing rate is computed and matched at a lower sampling frequency; the subtle changes introduced by editing have little effect on the matching.
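As a rough illustration of the down-sample-then-zero-crossing-rate idea described above, the sketch below computes a per-window zero-crossing-rate sequence from a mono audio track. The target sampling rate, window length, and decimation-based down-sampling are assumptions made for illustration, not values or methods stated in the patent.

```python
import numpy as np


def zero_crossing_rate_sequence(samples, sample_rate, target_rate=8000, window_s=0.5):
    """Down-sample a mono signal and return one zero-crossing rate per window.

    target_rate and window_s are illustrative choices; the patent only says
    the audio is down-sampled and the zero-crossing rate is computed and
    matched at a lower sampling frequency.
    """
    # Crude down-sampling by decimation (no anti-alias filter, for brevity).
    step = max(1, int(round(sample_rate / target_rate)))
    x = np.asarray(samples, dtype=np.float64)[::step]

    win = int(target_rate * window_s)
    rates = []
    for start in range(0, len(x) - win + 1, win):
        w = x[start:start + win]
        # A zero crossing is a sign change between consecutive samples.
        crossings = np.count_nonzero(np.signbit(w[:-1]) != np.signbit(w[1:]))
        rates.append(crossings / win)
    return np.array(rates)
```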

[0031] More preferably, the annotation information specifically includes, but is not limited to, the hash feature value of the audio channel of the video file, together with a combination of one or more of the following: annotation content, annotation position, annotation start time and end time, and the image feature value of the video frame corresponding to the annotation start time. This standard inf...
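For illustration, an entry of the annotation library described in paragraph [0031] could be stored as follows. This is a minimal sketch that assumes the hypothetical Annotation record and zero_crossing_rate_sequence() from the sketches above; the in-memory dictionary and its field names are illustrative, not the patent's storage scheme.

```python
# Hypothetical in-memory annotation library keyed by a video identifier.
annotation_library = {}


def write_annotations(video_id, audio_feature, annotations, frame_features=None):
    """Store one video's audio feature sequence and its annotations.

    audio_feature is e.g. the zero-crossing-rate sequence of the audio channel;
    annotations is a list of annotation records (content, position, start/end
    time); frame_features optionally maps an annotation start time to an image
    feature value of the corresponding video frame.
    """
    annotation_library[video_id] = {
        "audio_feature": audio_feature,
        "annotations": annotations,
        "frame_features": frame_features or {},
    }
```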

Embodiment 3

[0045] Embodiment 3. As shown in Figure 3, a matching method for video annotation includes:

[0046] A method for matching video annotations, characterized in that, when a video is played, the hash feature value of the video file is sent to an annotation library, the matching annotation information is retrieved from the library, and when the video plays to the corresponding position, the corresponding annotation content is displayed on the interface. The video played by the user may be a derived video rather than the original video itself. Using the audio feature based on the one-way zero-crossing rate, the derived video can be associated with the annotated video, and all annotations of the annotated video, together with the time offset between the two videos, can be retrieved from the annotation library. The annotation information can be matched accurately, with a relatively small amount of calculation and high speed.
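Embodiment 3 retrieves the matching annotations and the time offset between the playing (possibly derived) video and the annotated original. The sketch below aligns two zero-crossing-rate sequences with normalized cross-correlation to estimate that offset; the correlation-based alignment, the score threshold, and the library layout (matching the write_annotations() sketch above) are assumptions made for illustration, not details stated in the patent.

```python
import numpy as np


def best_offset(query_feature, library_feature):
    """Return (offset_in_windows, similarity) aligning the query to the library feature."""
    q = np.asarray(query_feature, dtype=np.float64)
    r = np.asarray(library_feature, dtype=np.float64)
    if len(q) == 0 or len(r) < len(q):
        return 0, 0.0
    q = q - q.mean()
    r = r - r.mean()
    corr = np.correlate(r, q, mode="valid")     # one value per candidate offset
    best = int(np.argmax(corr))
    norm = np.linalg.norm(q) * np.linalg.norm(r[best:best + len(q)]) + 1e-12
    return best, float(corr[best] / norm)


def match_playing_video(query_feature, library, window_s=0.5, min_score=0.6):
    """Find the best-matching library video, its annotations, and the offset in seconds.

    library maps video_id -> {"audio_feature": ..., "annotations": ...};
    window_s must match the window used when the features were extracted.
    """
    best_id, best_off, best_score = None, 0.0, 0.0
    for video_id, entry in library.items():
        off, score = best_offset(query_feature, entry["audio_feature"])
        if score > best_score:
            best_id, best_off, best_score = video_id, off * window_s, score
    if best_id is None or best_score < min_score:
        return None  # no confident match
    return best_id, best_off, library[best_id]["annotations"]
```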



Abstract

The invention discloses a video annotation anchoring and matching method, and relates to the technical field of video annotation. The method solves the technical problem in the prior art that the amount of calculation is large and the calculation speed is slow. The technical scheme of the method comprises the steps of: 1, annotating a video A; and 2, writing annotation information of video file A into an annotation library, wherein the annotation information comprises hash feature values of the video file and annotation content information of the video file, and there are corresponding relationships between the hash feature values of the video file and the annotation content information of the video file.

Description

Technical field

[0001] The invention relates to the technical field of video annotation, in particular to an anchoring and matching method for video annotations.

Background technique

[0002] In the process of video production and editing, operations such as transcoding, compression, and cutting are usually performed, converting video file A into another video file B; video file B is referred to here as a derived file of A. After editing, the length, file size, resolution, encoding format, and sound clarity of the video file may change.

[0003] Perceptual hashing is usually used to match multimedia content with similar perceptual content. A perceptual hash maps multimedia data with the same perceptual content to a unique digital summary; it remains robust to content-preserving operations while discriminating against content tampering, and meets one-way and anti-collision requirements. Perceptual hashing has been widely used in content-based multimedia iden...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): H04N5/262
CPCH04N5/262
Inventor 程国艮王语
Owner GLOBAL TONE COMM TECH