
Video classification method and device

A video classification technology, applied in the field of data processing, which solves the problems of the low efficiency and low accuracy of existing video classification methods and achieves the effect of improved classification accuracy.

Active Publication Date: 2020-03-13
BEIJING WEIBOYI TECH CO LTD
Cites 8 · Cited by 3

AI Technical Summary

Problems solved by technology

[0004] In view of this, the main purpose of the present invention is to solve the problem of the low efficiency and low accuracy of existing video classification methods.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0020] As shown in Figure 1, the present invention provides a video classification method, comprising:

[0021] Step 101: obtain the feature vector of each key frame in the video to be classified according to the key frames in the video to be classified.

[0022] In this embodiment, the key frame in step 101 is also called an I-frame (intra-coded frame): a frame whose image data is completely preserved in the compressed video, so that decoding it requires only the data of the frame itself. Since the key frames in the video to be classified have little similarity to one another, multiple key frames can fully characterize the video to be classified; by extracting a feature vector from each key frame, the accuracy of classifying the video to be classified can be improved.

[0023] Specifically, the process of obtaining the feature vector through step 101 includes: extracting key frames from the video to be classified ...
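The description of step 101 can be sketched in code. This is an illustrative, dependency-free sketch only: a real implementation would decode I-frames from the compressed stream and run a learned feature extractor, whereas here a simple RGB color histogram (an assumption, not the patent's method) stands in for the key-frame feature.

```python
# Sketch of step 101: one feature vector per key frame.
# A plain color histogram stands in for the real (e.g. CNN) feature,
# so the example stays runnable without any video libraries.

def frame_histogram(frame, bins=4):
    """Return a normalized color histogram for one frame.

    `frame` is a list of (r, g, b) pixel tuples with values in 0..255.
    The result is a flat feature vector of length 3 * bins.
    """
    hist = [0.0] * (3 * bins)
    width = 256 // bins
    for r, g, b in frame:
        hist[r // width] += 1.0            # red channel bin
        hist[bins + g // width] += 1.0     # green channel bin
        hist[2 * bins + b // width] += 1.0 # blue channel bin
    total = float(len(frame)) or 1.0
    return [h / total for h in hist]

def keyframe_features(keyframes, bins=4):
    """Step 101: map each extracted key frame to its feature vector."""
    return [frame_histogram(f, bins) for f in keyframes]

# Two toy 2-pixel "key frames": one dark, one bright.
frames = [[(10, 10, 10), (20, 20, 20)], [(250, 250, 250), (200, 10, 10)]]
vectors = keyframe_features(frames)
print(len(vectors), len(vectors[0]))  # prints "2 12"
```

In practice the key frames would come from the decoder (I-frames are directly addressable without reference frames), and the histogram would be replaced by whatever feature extractor the classifier was trained with.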

Embodiment 2

[0032] As shown in Figure 2, an embodiment of the present invention provides a video classification method, including:

[0033] Steps 201 to 203: obtain the visual classification vector and the text classification vector. This process is similar to steps 101 to 103 shown in Figure 1 and will not be repeated here.

[0034] Step 204: acquire a plurality of video samples, together with the visual classification vector, text classification vector, and category value corresponding to each video sample.

[0035] In step 205, the initial classifier is trained according to the visual classification vector, text classification vector, and category value corresponding to each video sample, to obtain a classification model.

[0036] In this embodiment, the initial classifier in step 205 may be a convolutional neural network model or another model; this is not limited here.

[0037] Step 206, substituting the visual classification vector and the text classification vector into the preset cl...
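Steps 204 to 206 can be sketched as training followed by inference over the concatenated vectors. The embodiment suggests a convolutional neural network for the initial classifier; to keep this sketch runnable and dependency-free, a nearest-centroid classifier stands in (an assumption, not the patent's model), but the data flow is the same: samples of (visual vector, text vector, category) train a model, and a new video's two vectors are substituted into it.

```python
# Sketch of steps 204-206 with a nearest-centroid stand-in classifier.

def train(samples):
    """Step 205: build a model mapping each category to the centroid
    of its samples' concatenated visual + text vectors."""
    sums, counts = {}, {}
    for visual, text, category in samples:
        vec = visual + text  # concatenate the two classification vectors
        if category not in sums:
            sums[category] = [0.0] * len(vec)
            counts[category] = 0
        sums[category] = [s + v for s, v in zip(sums[category], vec)]
        counts[category] += 1
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def classify(model, visual, text):
    """Step 206: substitute the visual and text classification vectors
    into the model and return the category of the nearest centroid."""
    vec = visual + text
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(model, key=lambda c: dist(model[c]))

# Step 204: toy labeled samples (visual vector, text vector, category).
samples = [
    ([1.0, 0.0], [0.9], "food"),
    ([0.9, 0.1], [1.0], "food"),
    ([0.0, 1.0], [0.1], "travel"),
]
model = train(samples)
print(classify(model, [0.95, 0.05], [0.8]))  # prints "food"
```

Swapping the centroid model for a trained neural network changes only the bodies of `train` and `classify`; the surrounding pipeline of the embodiment is unchanged.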

Embodiment 3

[0040] As shown in Figure 3, an embodiment of the present invention provides a video classification device, including:

[0041] The feature acquisition module 301 is used to obtain the feature vector of each key frame in the video to be classified according to the key frame in the video to be classified;

[0042] The visual classification module 302 is connected with the feature acquisition module, and is used to obtain the visual classification vector of the video to be classified according to the feature vector of each key frame in the video to be classified;

[0043] The text classification module 303 is used for obtaining the text classification vector of the video to be classified according to the text contained in the image frame in the video to be classified;

[0044] The category acquisition module 304 is connected to the visual classification module and the text classification module respectively, and is used for substituting the visual classification vector an...
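The module structure of Embodiment 3 can be sketched as four small classes wired the way Figure 3 connects them. All of the internal computations here (mean-intensity features, average pooling, a keyword-based text vector, a threshold "model") are hypothetical stand-ins chosen only so the sketch runs; the point is the data flow between modules 301 to 304.

```python
# Sketch of the device in Embodiment 3: modules 301-304 as classes.
# Every computation inside each module is a stand-in, not the patent's
# actual implementation.

class FeatureAcquisitionModule:            # module 301
    def run(self, keyframes):
        # stand-in feature: mean pixel intensity per key frame
        return [[sum(f) / len(f)] for f in keyframes]

class VisualClassificationModule:          # module 302, fed by 301
    def run(self, feature_vectors):
        # stand-in pooling: average the key-frame feature vectors
        dim = len(feature_vectors[0])
        n = len(feature_vectors)
        return [sum(v[i] for v in feature_vectors) / n for i in range(dim)]

class TextClassificationModule:            # module 303
    def run(self, frame_texts):
        # stand-in text vector: fraction of frames whose extracted
        # text mentions a (hypothetical) keyword
        hits = sum(1 for t in frame_texts if "recipe" in t)
        return [hits / len(frame_texts)]

class CategoryAcquisitionModule:           # module 304, fed by 302 and 303
    def run(self, visual_vec, text_vec):
        # stand-in for substituting both vectors into the preset model
        return "food" if text_vec[0] > 0.5 else "other"

keyframes = [[100, 120, 140], [90, 110, 130]]
texts = ["easy recipe", "recipe steps"]
features = FeatureAcquisitionModule().run(keyframes)
visual = VisualClassificationModule().run(features)
textv = TextClassificationModule().run(texts)
print(CategoryAcquisitionModule().run(visual, textv))  # prints "food"
```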



Abstract

The invention discloses a video classification method and device, and relates to the field of data processing. The objective of the invention is to solve the problems of low efficiency and low accuracy of existing video classification processes. According to the technical scheme provided by the embodiments of the invention, the method comprises the steps of: obtaining a feature vector of each key frame in a to-be-classified video according to the key frames in the to-be-classified video; obtaining a visual classification vector of the to-be-classified video according to the feature vector of each key frame in the to-be-classified video; obtaining a text classification vector of the to-be-classified video according to the text contained in the image frames of the to-be-classified video; and substituting the visual classification vector and the text classification vector into a preset classification model to obtain the category of the to-be-classified video. The method can be applied to fields such as targeted video pushing.

Description

Technical Field

[0001] The invention relates to the field of data processing, in particular to a video classification method and device.

Background Technique

[0002] In recent years, with the rapid development of Internet short video platforms, various videos such as film and television, food, technology, tourism, education, and games have shown explosive growth. These videos have a wide range of sources, low cost, huge daily volume, and extremely fast transmission speed, which brings great challenges to video classification.

[0003] In the prior art, videos are generally classified manually or by extracting keywords from titles. However, the manual method requires a lot of manpower and material resources, and its efficiency is low; and a title may not accurately summarize the content of the video, resulting in a low accuracy rate when classifying videos by extracting keywords. Categories that require semantic understanding, such as workplace management and e...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06F16/75, G06F16/35
CPC: G06F16/75, G06F16/353, G06V20/47, G06V20/40, G06F18/24
Inventor 邓积杰何楠林星白兴安徐扬
Owner BEIJING WEIBOYI TECH CO LTD