
Video classification method and device

A video and video-classification technology, applied in the video field, which can solve problems such as video classification results not being closely aligned with the user.

Active Publication Date: 2021-12-07
TENCENT TECH (SHENZHEN) CO LTD +1
10 Cites, 0 Cited by

AI Technical Summary

Problems solved by technology

[0005] The embodiments of the present invention provide a video classification method and device to at least solve the technical problem that video classification results are not closely aligned with the user, which arises because only the content of the video itself is considered when classifying videos.


Examples


Embodiment 1

[0025] According to an embodiment of the present invention, an embodiment of a video classification method is provided.

[0026] Optionally, in this embodiment, the above video classification method can be applied to the hardware environment constituted by the server 102 and the terminal 104 as shown in Figure 2. As shown in Figure 2, the server 102 is connected to the terminal 104 through a network. The above-mentioned network includes but is not limited to: a wide area network, a metropolitan area network, or a local area network. The terminal 104 is not limited to a PC, a mobile phone, a tablet computer, and the like. The video classification method in the embodiment of the present invention may be executed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. The video classification method of the embodiment of the present invention executed by the terminal 104 may also be executed b...
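As an illustration of this client-server deployment only (the patent does not specify an interface), the following is a minimal sketch in which the server 102 exposes the classification method over HTTP and the terminal 104 acts as a client; the route, the payload format, the toy conversion table, and the classify_video helper are assumptions, not part of the patent.

```python
# Illustrative sketch only: server 102 exposing the video classification
# method over HTTP, with terminal 104 acting as the client. The route,
# payload format, and the toy conversion table are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy conversion table: content category -> social attribute category.
CONTENT_TO_SOCIAL = {"cooking": "home_maker", "esports": "young_gamer"}


def classify_video(video_id: str) -> dict:
    # Placeholder for the patented steps: classify the video content itself,
    # then convert the content category into a social attribute category.
    content_category = "cooking"  # stand-in for a real content classifier
    social_category = CONTENT_TO_SOCIAL.get(content_category, "general")
    return {"video_id": video_id, "category": social_category}


class ClassifyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        result = classify_video(body["video_id"])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())


if __name__ == "__main__":
    # The terminal 104 would POST {"video_id": "..."} to this endpoint.
    HTTPServer(("0.0.0.0", 8000), ClassifyHandler).serve_forever()
```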

Embodiment 2

[0082] According to an embodiment of the present invention, a video classification device for implementing the above video classification method is also provided. Figure 9 is a schematic diagram of an optional video classification device according to an embodiment of the present invention. As shown in Figure 9, the device may include:

[0083] The first obtaining unit 10 is configured to obtain the content category of the video to be classified, wherein the content category is a category obtained by classifying the content of the video itself.

[0084] The conversion unit 20 is configured to convert the content category of the video according to a preset conversion relationship to obtain the social attribute category of the video, wherein the preset conversion relationship is obtained by training on sample data, and the sample data includes the content category of a video and the social attribute category of users who interacted with that video.

...
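As an illustration only (not the patent's implementation), the two units described in paragraphs [0083] and [0084] could be sketched as follows; the class names, the toy conversion table, and the assumption that the content category is already available on the video object are all hypothetical.

```python
# Illustrative sketch of the device units in Figure 9 (all names are assumptions).
from dataclasses import dataclass, field
from typing import Dict


class FirstObtainingUnit:
    """Unit 10: obtains the content category of the video to be classified."""

    def obtain(self, video: dict) -> str:
        # In the patent this comes from classifying the video content itself;
        # here we simply read a precomputed label for illustration.
        return video["content_category"]


@dataclass
class ConversionUnit:
    """Unit 20: converts a content category into a social attribute category."""

    # Preset conversion relationship, obtained by training on sample data that
    # pairs video content categories with the social attribute categories of
    # users who interacted with those videos (toy values below).
    conversion: Dict[str, str] = field(default_factory=lambda: {
        "cooking": "home_maker",
        "esports": "young_gamer",
    })

    def convert(self, content_category: str) -> str:
        return self.conversion.get(content_category, "general")


# Usage: the social attribute category becomes the category of the video.
video = {"content_category": "cooking"}
print(ConversionUnit().convert(FirstObtainingUnit().obtain(video)))  # -> home_maker
```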

Embodiment 3

[0095] According to an embodiment of the present invention, a server or terminal for implementing the above video classification method is also provided.

[0096] Figure 10 is a structural block diagram of a terminal according to an embodiment of the present invention. As shown in Figure 10, the terminal may include: one or more processors 201 (only one is shown in the figure), a memory 203, and a transmission device 205 (such as the sending device in the above-mentioned embodiment). As shown in Figure 10, the terminal may also include an input/output device 207.

[0097] The memory 203 can be used to store software programs and modules, such as the program instructions/modules corresponding to the video classification method and device in the embodiments of the present invention. By running the software programs and modules stored in the memory 203, the processor 201 executes various functional applications and dat...



Abstract

The invention discloses a video classification method and device. The method includes: obtaining the content category of the video to be classified, where the content category is a category obtained by classifying the content of the video itself; converting the content category of the video according to a preset conversion relationship to obtain the social attribute category of the video, where the preset conversion relationship is obtained by training on sample data, and the sample data includes the content category of a video and the social attribute category of users who interacted with that video; and using the social attribute category of the video as the category of the video to be classified. The invention solves the technical problem that video classification results are not closely aligned with the user because only the content of the video itself is considered when classifying the video.
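To make the flow in the abstract concrete, the following is a minimal sketch (not the patent's actual training procedure) of learning a conversion relationship from sample data and applying it; the majority-vote estimator, the toy sample data, and all names are assumptions.

```python
# Minimal sketch: learn a content-category -> social-attribute-category
# conversion from sample data, then use it to classify a video.
# The majority-vote estimator is an assumption, not the patent's training method.
from collections import Counter, defaultdict

# Sample data: (content category of a video, social attribute category of a
# user who interacted with that video). Toy values for illustration.
samples = [
    ("cooking", "home_maker"),
    ("cooking", "home_maker"),
    ("cooking", "student"),
    ("esports", "young_gamer"),
]

# "Training": for each content category, pick the most frequent social
# attribute category among interacting users.
by_content = defaultdict(Counter)
for content_cat, social_cat in samples:
    by_content[content_cat][social_cat] += 1
conversion = {c: counts.most_common(1)[0][0] for c, counts in by_content.items()}


def classify(content_category: str) -> str:
    # The video's content category is converted to a social attribute
    # category, which is used as the final category of the video.
    return conversion.get(content_category, "general")


print(classify("cooking"))  # -> "home_maker"
```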

Description

Technical Field

[0001] The present invention relates to the video field, in particular to a video classification method and device.

Background Technique

[0002] Video classification and content recognition systems center on the content of the video itself and rely on the selection and construction of features for video classification. Traditional methods use static visual features, sound features, and motion features to identify and classify video content. In recent years, with the increasing popularity of deep learning research, features learned with CNN networks have also been used to identify and classify video content. No matter which features are used, existing video content classification technology is based on the content of the video itself. Figure 1 is a framework diagram of video content classification in the prior art. As shown in Figure 1, the video content is classified by artificially designed features or deep learning network le...
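For illustration of the content-centric prior-art pipeline described in paragraph [0002] (handcrafted features fed to a classifier), a minimal sketch follows; the color-histogram feature, the nearest-centroid classifier, and the toy data are assumptions chosen for brevity, not the method of any particular existing system.

```python
# Minimal sketch of content-based video classification with handcrafted
# features (prior-art style): average per-channel color histograms over the
# frames and classify with nearest centroid. Feature and classifier choices
# are illustrative assumptions.
import numpy as np


def video_feature(frames: np.ndarray, bins: int = 8) -> np.ndarray:
    """frames: (num_frames, H, W, 3) uint8 -> normalized per-channel color histogram."""
    hist = np.concatenate([
        np.histogram(frames[..., c], bins=bins, range=(0, 255))[0] for c in range(3)
    ]).astype(float)
    return hist / hist.sum()


def nearest_centroid(feature: np.ndarray, centroids: dict) -> str:
    """Return the content category whose centroid is closest to the feature."""
    return min(centroids, key=lambda cat: np.linalg.norm(feature - centroids[cat]))


# Toy usage: two random "videos" standing in for decoded frame arrays.
rng = np.random.default_rng(0)
video_a = rng.integers(0, 256, size=(16, 32, 32, 3), dtype=np.uint8)
video_b = rng.integers(0, 256, size=(16, 32, 32, 3), dtype=np.uint8)
centroids = {"cooking": video_feature(video_a), "esports": video_feature(video_b)}
print(nearest_centroid(video_feature(video_a), centroids))  # -> "cooking"
```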

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F16/78, G06K9/62, G06Q50/00
CPC: G06Q50/01, G06F18/2411
Inventor: 聂秀山
Owner: TENCENT TECH (SHENZHEN) CO LTD