
Course classification method, device, equipment and medium based on multimodal feature representation

A course classification method using multimodal technology, applied to video data clustering/classification, character and pattern recognition, and data-processing applications. It addresses the inaccuracy caused by insufficient utilization of video course classification information, aiming to avoid feature loss, optimize feature representation, and improve classification accuracy.

Active Publication Date: 2022-02-15
PING AN TECH (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

[0005] The embodiments of the present invention provide a course classification method, device, equipment and medium based on multimodal feature representation, aiming to solve the inaccuracy caused by insufficient utilization of video course classification information.




Detailed Description of the Embodiments

[0031] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

[0032] It should be understood that, when used in this specification and the appended claims, the terms "comprising" and "comprises" indicate the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or collections thereof.

[0033] It should also be understood that the terminology used ...



Abstract

The present invention relates to the field of artificial intelligence and provides a course classification method, device, equipment and medium based on multimodal feature representation. The method enhances weak-modal semantic features while retaining their original characteristics, effectively avoiding feature loss. Modal weights are learned adaptively according to the semantic strength of each modality, and the video, audio and text features are fused according to these weights, so that the resulting representation carries information from all three modalities, optimizing the feature representation of video courses and improving the accuracy of course category prediction. The fused feature of each sample is used to train a preset classification network to obtain a video course classification model, which is then used to classify the video courses to be classified and obtain the classification results, thereby achieving accurate course classification. The present invention also relates to blockchain technology, and the trained model can be stored on blockchain nodes.
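The abstract describes two core steps: learning a weight for each modality from its semantic strength, and fusing the video, audio and text features according to those weights before feeding the result to a classification network. The sketch below is only a minimal illustration of that general pattern, not the patented formulation; the module names, the softmax-over-scores weighting, and all dimensions are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn


class AdaptiveModalityFusion(nn.Module):
    """Minimal sketch of per-sample modality weighting and fusion.

    Assumes the video, audio and text features have already been projected
    to a shared dimension; the softmax-over-scores weighting is an
    illustrative choice, not the patent's specific construction.
    """

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)            # semantic-strength score per modality
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, video: torch.Tensor, audio: torch.Tensor, text: torch.Tensor):
        feats = torch.stack([video, audio, text], dim=1)    # (batch, 3, dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, 3, 1), learned adaptively
        fused = (weights * feats).sum(dim=1)                # weighted fusion across modalities
        return self.classifier(fused), weights


# Example: classify a batch of 4 video courses into 10 hypothetical categories
model = AdaptiveModalityFusion(dim=256, num_classes=10)
v, a, t = (torch.randn(4, 256) for _ in range(3))
logits, modal_weights = model(v, a, t)
print(logits.shape, modal_weights.squeeze(-1))
```

In this sketch the fused feature carries information from all three modalities at once, and the per-modality weights can be inspected to see which modality dominated a given sample.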

Description

Technical Field

[0001] The present invention relates to the technical field of artificial intelligence, and in particular to a course classification method, device, equipment and medium based on multimodal feature representation.

Background Technique

[0002] With the rapid development of Internet technology, online education breaks the boundaries of time and space and is very popular with consumers. To make it easier for users to retrieve courses of interest, it is therefore increasingly important to classify video courses accurately.

[0003] For most videos, the video modality conveys more information than the audio and text modalities, so the video modality should be the focus, and the audio and text modal features should be enhanced accordingly. However, there is still a small number of video courses in which the information conveyed by each frame is limited, and users need to combine the audio and text comments to obtain more knowledge. At this time, audio and text featu...
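The background above distinguishes courses where the video modality dominates from those where the audio and text modalities must be strengthened. One common way to realise "enhancing a weak modality while retaining its own characteristics" is residual cross-attention, where the weak modality queries the strong one and the result is added back onto the original features. The following sketch only illustrates that general idea; it is not taken from the patent, and the class name, attention mechanism and dimensions are assumptions.

```python
import torch
import torch.nn as nn


class WeakModalEnhancer(nn.Module):
    """Illustrative residual cross-attention: a weak modality (e.g. audio or
    text) queries the strong modality (e.g. video), and the attended result
    is added back to the original weak features, enriching the weak modality
    without discarding its own characteristics."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, weak: torch.Tensor, strong: torch.Tensor) -> torch.Tensor:
        # weak, strong: (batch, seq_len, dim) token sequences for each modality
        enhanced, _ = self.attn(query=weak, key=strong, value=strong)
        return self.norm(weak + enhanced)   # residual path keeps the original weak-modal features


# Example: enhance 20 text tokens with 50 video-frame features
enhancer = WeakModalEnhancer(dim=256)
text_feats = torch.randn(2, 20, 256)
video_feats = torch.randn(2, 50, 256)
print(enhancer(text_feats, video_feats).shape)   # torch.Size([2, 20, 256])
```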

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06K9/62G06Q50/20G06F16/75
CPCG06Q50/20G06F18/214G06F18/241G06F18/253
Inventor 乔延柯栾雅理吴志成张茜李婧源
Owner PING AN TECH (SHENZHEN) CO LTD