
New video semantic extraction method based on deep learning model

A technology combining deep learning and extraction methods, applied in character and pattern recognition, instruments, computer parts, etc., to achieve the effect of improved accuracy

Active Publication Date: 2018-11-30
TROY INFORMATION TECHNOLOGY CO LTD

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to overcome the deficiencies of the existing technology and provide a new video semantic extraction method based on a deep learning model, which uses a three-dimensional convolutional neural network model together with a continuous time series classification algorithm to perform semantic extraction, thereby addressing the problem of semantic analysis of sports videos.




Embodiment Construction

[0038] In order to provide a clearer understanding of the technical features, purposes, and effects of the present invention, specific embodiments of the present invention will now be described with reference to the accompanying drawings.

[0039] A schematic flow chart of the new video semantic extraction method based on a deep learning model proposed by the present invention is shown in Figure 1; the method includes the following steps:

[0040] S1. Based on the physical structure of the video, semantically structured video data is obtained by combining and segmenting the video frame sequence. The physical structure of video data, from top to bottom, is: video, scene, shot, and frame; a schematic diagram of this structure is shown in Figure 2. By analogy with the physical structure of video data, the semantic structure of a video is defined from top to bottom as: video, behavior, sub-action, and frame; its structural diagram is shown in Figure 3; ...
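The four-level semantic hierarchy described in S1 (video → behavior → sub-action → frame) can be sketched as a small data structure. This is a minimal illustrative sketch; the class names, labels, and frame counts below are assumptions for the example, not taken from the patent text:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubAction:
    """Lowest semantic unit above the frame: a labeled run of frames."""
    label: str
    frames: List[int] = field(default_factory=list)  # frame indices

@dataclass
class Behavior:
    """A behavior is an ordered sequence of sub-actions."""
    label: str
    sub_actions: List[SubAction] = field(default_factory=list)

@dataclass
class Video:
    """Top level of the semantic hierarchy."""
    name: str
    behaviors: List[Behavior] = field(default_factory=list)

    def frame_count(self) -> int:
        # Total number of frames covered by all sub-actions.
        return sum(len(sa.frames)
                   for b in self.behaviors for sa in b.sub_actions)

# Hypothetical example: a tennis "serve" behavior split into two sub-actions.
serve = Behavior("serve", [SubAction("toss", list(range(0, 12))),
                           SubAction("swing", list(range(12, 30)))])
video = Video("match_clip", [serve])
print(video.frame_count())  # 30
```

Structuring the raw frame sequence this way is what lets later stages align clip-level network outputs with behavior-level labels.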


Abstract

The invention discloses a new video semantic extraction method based on a deep learning model. The method comprises the following steps: obtaining semantically structured video data by combining and segmenting a video frame sequence on the basis of the video's physical structure; using a sliding window to process the semantically structured video data into input data for a three-dimensional convolutional neural network; creating a three-dimensional convolutional neural network model and using the output data of the sliding window as training data; using the output of the three-dimensional convolutional neural network as the input of the continuous time series classification algorithm, and completing the training of the three-dimensional convolutional neural network parameters via the backpropagation algorithm; and using the trained three-dimensional convolutional neural network with the continuous time series classification algorithm as a sports video semantic extraction model to extract video semantics. The proposed video semantic structuring method is combined with the three-dimensional convolutional neural network and the continuous time series classification algorithm, which can capture the connections between actions and improve the accuracy of sports video semantic extraction.
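The abstract's second step, cutting the frame sequence into fixed-length clips with a sliding window, can be sketched as follows. The clip length, stride, and frame resolution here are assumed example values (the patent does not specify them in this excerpt):

```python
import numpy as np

def sliding_window_clips(frames: np.ndarray, clip_len: int = 16,
                         stride: int = 8) -> np.ndarray:
    """Cut a frame sequence of shape (T, H, W, C) into overlapping
    fixed-length clips of shape (N, clip_len, H, W, C), suitable as
    input volumes for a 3D convolutional network."""
    t = frames.shape[0]
    starts = range(0, t - clip_len + 1, stride)
    return np.stack([frames[s:s + clip_len] for s in starts])

# 40 dummy frames at an assumed 112x112 RGB resolution.
frames = np.zeros((40, 112, 112, 3), dtype=np.float32)
clips = sliding_window_clips(frames)
print(clips.shape)  # (4, 16, 112, 112, 3)
```

With a stride smaller than the clip length, consecutive clips overlap, which is what allows the downstream sequence classifier to model the connections between adjacent actions rather than treating each clip in isolation.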

Description

Technical Field

[0001] The invention relates to the technical fields of artificial intelligence and pattern recognition, and in particular to a new video semantic extraction method based on a deep learning model.

Background Technique

[0002] The concept of "semantics" originated at the end of the 19th century. Semantics is the expression of the meanings that things in the real world carry when represented as virtual data, together with the relationships between those meanings; it is the interpretation and logical representation of virtual data in a given field. "Video semantics", moreover, is oriented toward human thinking. When we want computers to understand the "semantics" in videos, computers can only recognize low-level features such as color and shape. Therefore, we need methods that connect these low-level features into higher-level meanings, so as to better express the information the video is meant to convey.

[0003] Video data is usually unstructured, so the se...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 20/41, G06F 18/2148
Inventor 姚易佳
Owner TROY INFORMATION TECHNOLOGY CO LTD