
Video data processing method and device

A video data and video technology, applied in the field of video data processing methods and devices; it solves problems such as the unsatisfactory effect of video data representation learning, and achieves the effects of improved differentiation and an improved learning effect.

Active Publication Date: 2021-11-23
TENCENT TECH (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

At present, representation learning on video data can use supervised training: supervision information for the video data is obtained and used to guide the classification of video data features. However, the existing supervision information is usually a single human-annotated label, so the video data features learned from it are often coarse-grained, and the classification results predicted from those features may differ from the content of the video data itself, which makes the effect of video data representation learning unsatisfactory.



Embodiment Construction

[0075] The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in those embodiments. Apparently, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

[0076] This application involves the following concepts:

[0077] Multi-Task Learning (Multi-Task): Multi-task learning is a training paradigm in which a machine learning model is trained on data from multiple tasks simultaneously, using a shared representation to learn the different tasks.
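As a minimal sketch of this paradigm (all names and numbers here are illustrative, not taken from the patent), a single shared encoder can feed several task-specific heads, so every task reuses the same representation:

```python
# Hypothetical multi-task sketch: one shared representation feeds
# several task-specific "heads". The encoder and heads are toy
# linear stand-ins, not the patent's actual model.

def shared_encoder(x):
    # Shared representation: a toy feature map reused by every task.
    return [v * 0.5 for v in x]

def make_head(weight):
    # Each task gets its own lightweight head on top of the shared features.
    def head(features):
        return sum(f * weight for f in features)
    return head

# Two illustrative tasks, e.g. a category task and a tag task.
heads = {"category": make_head(1.0), "tag": make_head(-2.0)}

x = [1.0, 2.0, 3.0]
features = shared_encoder(x)          # computed once, shared by all tasks
outputs = {task: head(features) for task, head in heads.items()}
print(outputs)  # {'category': 3.0, 'tag': -6.0}
```

In training, losses from all heads would be combined so gradients from every task shape the shared encoder.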

[0078] Video multimodality: A modality is the way in which something happens or is experienced; video data can include modal information such as titles (or subtitles), video streams, and audio.
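One simple way to picture a multi-modal feature (purely illustrative; the encoders below are toy stand-ins, not the patent's model) is to map each modality to a fixed-length vector and concatenate the vectors:

```python
# Illustrative only: each modality (title text, video frames, audio)
# is mapped to a fixed-length vector, and the vectors are concatenated
# into one multimodal feature.

def encode_title(title):
    # Toy text feature: normalized character and word counts.
    return [len(title) / 100.0, len(title.split()) / 10.0]

def encode_frames(frames):
    # Toy visual feature: mean pixel intensity per "frame".
    return [sum(f) / len(f) for f in frames]

def encode_audio(samples):
    # Toy audio feature: overall signal energy.
    return [sum(s * s for s in samples) / len(samples)]

def multimodal_feature(title, frames, samples):
    # Concatenation is one simple fusion strategy.
    return encode_title(title) + encode_frames(frames) + encode_audio(samples)

feat = multimodal_feature("cat video", [[0.0, 1.0], [0.5, 0.5]], [0.1, -0.1])
print(len(feat))
```

Real systems would use learned encoders per modality, but the shape of the idea is the same: per-modality features fused into one representation.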

[...



Abstract

The embodiment of the invention provides a video data processing method and device. The method relates to the technical field of network media and processes video data with a deep learning algorithm. The method comprises the following steps: clustering the video tags in an obtained video tag set to obtain K tag clusters; obtaining the sample video category and sample video tag corresponding to sample video data, and determining a sample cluster identifier for the sample video data according to the tag cluster to which the sample video tag belongs; outputting sample multi-modal features corresponding to the sample video data through an initial video multi-modal model; inputting the sample multi-modal features into N classification components, and outputting N classification results corresponding to the sample video data through the N classification components; and training the initial video multi-modal model according to the N classification results, the sample video category, the sample video tag and the sample cluster identifier. By adopting the embodiment of the invention, the effect of multi-modal representation learning for video can be improved.
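The labeling step in the abstract can be sketched as follows. This is a toy illustration under assumed data (the tags, clusters, and sample records are invented, and the tag clustering itself is taken as given rather than computed): each sample inherits the identifier of the cluster that contains its tag.

```python
# Assumed result of clustering the video tag set into K clusters.
K = 2
tag_clusters = {
    0: {"football", "basketball"},
    1: {"cooking", "baking"},
}

def cluster_id_for_tag(tag):
    # The sample cluster identifier is the id of the tag cluster
    # to which the sample's video tag belongs.
    for cid, tags in tag_clusters.items():
        if tag in tags:
            return cid
    raise KeyError(tag)

# Invented sample records: each has a video category and a video tag.
samples = [
    {"category": "sports", "tag": "football"},
    {"category": "food", "tag": "baking"},
]
for s in samples:
    s["cluster_id"] = cluster_id_for_tag(s["tag"])

print([s["cluster_id"] for s in samples])  # [0, 1]
```

The cluster identifier then serves as an extra, coarser supervision signal alongside the category and the tag when training the classification components.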

Description

Technical field

[0001] The present application relates to the technical field of network media, and in particular to a video data processing method and device.

Background technique

[0002] In the context of Internet big data, it is usually necessary to process and analyze specific data and extract useful information from it, that is, to perform representation learning on the data. How to mine effective information from the massive data on the Internet has attracted widespread attention. At present, representation learning on video data can use supervised training: supervision information for the video data is obtained and used to guide the classification of video data features. However, the existing supervision information is usually a single human-annotated label, so the video data features learned from it are often coarse-grained, and the predicted classification results of the video ...


Application Information

IPC(8): G06K9/62; G06K9/00; G06F40/289; G06N3/04
CPC: G06F40/289; G06N3/044; G06N3/045; G06F18/23; G06F18/24; G06F18/253; G06F18/214
Inventor 罗永盛
Owner TENCENT TECH (SHENZHEN) CO LTD