
A Novel Video Semantic Extraction Method Based on Deep Learning Model

A deep-learning-based extraction technology, applied in character and pattern recognition, instrumentation, and computing, achieving the effect of improved accuracy

Active Publication Date: 2022-04-29
TROY INFORMATION TECHNOLOGY CO LTD

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to overcome the deficiencies of the prior art and provide a new video semantic extraction method based on a deep learning model, which uses a three-dimensional convolutional neural network model together with a connectionist temporal classification (CTC) algorithm to perform semantic extraction, thereby solving the problem of semantic analysis of sports videos.

Method used



Examples


Embodiment Construction

[0038] In order to provide a clearer understanding of the technical features, purposes, and effects of the present invention, specific embodiments of the present invention will now be described with reference to the accompanying drawings.

[0039] A schematic flow chart of the new video semantic extraction method based on a deep learning model proposed by the present invention is shown in figure 1, and the method includes the following steps:

[0040] S1. Based on the physical structure of the video, semantically structured video data is obtained by combining and segmenting the video frame sequence. The physical structure of video data is, from top to bottom: video, scene, shot, and frame, as shown schematically in figure 2. Referring to this physical structure, the semantic structure of the video is defined from top to bottom as: video, behavior, sub-action, and frame, as shown in figure 3; ...
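The segmentation in S1, followed by the sliding-window extraction described in the abstract, can be illustrated with a short sketch. This is a minimal illustration, not the patent's implementation: the window length and stride values below are assumptions chosen for demonstration.

```python
def sliding_window_clips(frames, window=16, stride=8):
    """Split an ordered frame sequence into fixed-length, overlapping clips.

    `frames` is any ordered sequence (e.g. frame arrays or frame indices);
    `window` and `stride` are illustrative values, not taken from the patent.
    A trailing remainder shorter than `window` is dropped.
    """
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]

# 40 frames, 16-frame windows, stride 8 -> clips starting at frames 0, 8, 16, 24
clips = sliding_window_clips(list(range(40)), window=16, stride=8)
```

Each clip then has the fixed temporal extent a three-dimensional convolutional network expects as input.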



Abstract

The invention discloses a new video semantic extraction method based on a deep learning model, which includes the following steps: based on the physical structure of the video, semantically structured video data is obtained by combining and segmenting video frame sequences; using a sliding window, the semantically structured video data is processed into input data for a three-dimensional convolutional neural network; the three-dimensional convolutional neural network model is created, with the output of the sliding window used as training data; the output of the three-dimensional convolutional neural network is used as the input of a connectionist temporal classification (CTC) algorithm, and the parameters of the 3D convolutional neural network are trained through the backpropagation algorithm; the trained 3D convolutional neural network-CTC model is then used as the sports video semantic extraction model to extract video semantics. Through the proposed video semantic structuring method, the present invention combines the three-dimensional convolutional neural network and the connectionist temporal classification algorithm to better capture the connections between actions and improve the accuracy of sports video semantic extraction.
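The abstract pairs the 3D convolutional network with connectionist temporal classification (CTC) to turn per-clip predictions into an action sequence. A minimal sketch of CTC best-path (greedy) decoding, which collapses consecutive duplicate labels and removes blanks, is shown below; the per-frame labels and the "-" blank symbol are hypothetical examples, not values from the patent.

```python
BLANK = "-"  # assumed blank symbol for this sketch

def ctc_greedy_decode(frame_labels):
    """CTC best-path decoding: collapse consecutive duplicates, then drop blanks."""
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            decoded.append(label)
        prev = label
    return decoded

# hypothetical per-clip network outputs for a sports video
seq = ["serve", "serve", "-", "hit", "hit", "-", "-", "hit"]
result = ctc_greedy_decode(seq)
# -> ["serve", "hit", "hit"]
```

The blank symbol lets CTC distinguish a repeated action ("hit", "hit") from one action spanning several clips, which is why the duplicates are collapsed before blanks are removed.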

Description

Technical Field

[0001] The invention relates to the technical fields of artificial intelligence and pattern recognition, and in particular to a new video semantic extraction method based on a deep learning model.

Background Technique

[0002] The concept of "semantics" originated at the end of the 19th century. Semantics is the expression of the meanings that real-world things take on when represented as virtual data, together with the relationships between those meanings; it is the interpretation and logical representation of virtual data in a given field. "Video semantics", moreover, is oriented toward human thinking. When we want computers to understand the "semantics" in videos, computers can only recognize low-level features such as color and shape. We therefore need methods that connect these low-level features into higher-level meanings, so as to better express the information the video is meant to convey.

[0003] Video data is usually unstructured, so the se...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V20/40; G06V10/774; G06V10/82
CPC: G06V20/41; G06F18/2148
Inventor: 姚易佳
Owner: TROY INFORMATION TECHNOLOGY CO LTD