
Video action segmentation method based on hybrid temporal convolution and recurrent network

A technology combining temporal convolution and a recurrent network, applied to biological neural network models, character and pattern recognition, instruments, etc.; it addresses problems such as extracting and analysing actions from videos with different degrees of compression.

Inactive Publication Date: 2017-12-01
SHENZHEN WEITESHI TECH
Cites: 0 · Cited by: 15

AI Technical Summary

Problems solved by technology

[0005] To address the problem of extracting and analysing actions in videos with different degrees of compression, the purpose of the present invention is to provide a video action segmentation method based on hybrid temporal convolution and a recurrent network, and to propose a new framework for processing image features based on hybrid temporal convolution and a long short-term memory network.



Examples


Embodiment Construction

[0034] It should be noted that, where no conflict arises, the embodiments of the present application and the features of those embodiments may be combined with each other. The present invention is described in further detail below in conjunction with the drawings and specific embodiments.

[0035] Figure 1 is a system flowchart of the video action segmentation method based on hybrid temporal convolution and a recurrent network of the present invention. It mainly comprises: data input; model structure; model migration and variation; and model parameter setting.

[0036] The model structure comprises the network architecture and action classification.

[0037] The network architecture consists of an input layer, an encoder L_E, a middle layer L_mid, a decoder L_D, and a classifier. The input layer receives the raw video-frame data stream and, after processing by a module composed of convolutional layers and ..., outputs an intermediate signal ...
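The encoder → middle layer → decoder → classifier signal flow described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: all layer sizes and weights are arbitrary, and a plain tanh RNN stands in for the long short-term memory decoder.

```python
import numpy as np

# Hedged sketch of the L_E -> L_mid -> L_D -> classifier pipeline.
# Shapes, random weights, and the RNN stand-in are illustrative assumptions.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def temporal_conv(x, w):
    """Same-padded 1-D convolution over time. x: (T, C_in), w: (k, C_in, C_out)."""
    k, c_in, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.einsum('kc,kco->o', xp[t:t + k], w)
                     for t in range(x.shape[0])])

def max_pool(x, stride=2):
    """Temporal max pooling: (T, C) -> (T // stride, C)."""
    T = (x.shape[0] // stride) * stride
    return x[:T].reshape(-1, stride, x.shape[1]).max(axis=1)

def upsample(x, factor=2):
    """Nearest-neighbour temporal upsampling back to frame rate."""
    return np.repeat(x, factor, axis=0)

def simple_rnn(x, w_x, w_h):
    """Plain tanh RNN: a simplified stand-in for the LSTM in the decoder."""
    h = np.zeros(w_h.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_x + h @ w_h)
        out.append(h)
    return np.stack(out)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy input: 8 frames, each reduced to a 4-dimensional feature vector.
frames = rng.standard_normal((8, 4))

# Encoder L_E: temporal convolution + activation + pooling.
enc = max_pool(relu(temporal_conv(frames, rng.standard_normal((3, 4, 6)) * 0.1)))

# Middle layer L_mid: a further temporal convolution.
mid = relu(temporal_conv(enc, rng.standard_normal((3, 6, 6)) * 0.1))

# Decoder L_D: upsampling + recurrent layer.
dec = simple_rnn(upsample(mid), rng.standard_normal((6, 5)) * 0.1,
                 rng.standard_normal((5, 5)) * 0.1)

# Classifier: per-frame Softmax over 3 hypothetical action classes.
probs = softmax(dec @ rng.standard_normal((5, 3)))
print(probs.shape)  # one class distribution per input frame
```

Note the encoder halves the temporal resolution via pooling and the decoder restores it via upsampling, so the classifier emits one distribution per original frame, which is what frame-level action segmentation requires.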


Abstract

The video action segmentation method based on hybrid temporal convolution and a recurrent network presented by the invention mainly comprises a model structure, model migration and variation, and model parameter setting. The method includes the following steps: an encoder composed of convolutional layers, an activation function, and a pooling layer, a decoder composed of an up-sampling layer and a long short-term memory network, and a Softmax classifier are designed; an original video frame signal is processed by the encoder to obtain an intermediate-layer result; and that result is input to the decoder, processed, and passed to the classifier to segment, identify, and classify video actions. Video signals compressed to different degrees can be processed. A hybrid temporal network is provided to solve the video action segmentation problem, improving the accuracy and efficiency of action-content recognition.
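The classifier described above emits one action label per frame, and segmentation then means grouping runs of identical labels into contiguous segments. A minimal sketch of that final step, with an invented toy label sequence (the label names are illustrative, not from the patent):

```python
# Hedged sketch: turning per-frame class predictions into action segments.
# The label names and toy sequence below are invented for illustration.

def frames_to_segments(labels):
    """Group consecutive identical frame labels into (label, start, end) triples."""
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i - 1))
            start = i
    return segments

per_frame = ["pour", "pour", "pour", "stir", "stir", "pour", "pour"]
print(frames_to_segments(per_frame))
# → [('pour', 0, 2), ('stir', 3, 4), ('pour', 5, 6)]
```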

Description

Technical field

[0001] The present invention relates to the field of video segmentation, and in particular to a video action segmentation method based on hybrid temporal convolution and a recurrent network.

Background

[0002] Video action segmentation and analysis is an important topic in computer vision research and a major step toward understanding human activities; it has attracted widespread attention in recent years. It is a task that embodies the high-level understanding capability of machine learning: the goal is to learn and determine the type and attributes of human activities or actions performed in a video. A mature, easy-to-implement video action segmentation and recognition method would have great potential application value in monitoring, analysis, and interactive control. In monitoring, an automatic surveillance function can be produced that starts up autonomously to learn and understand the la...

Claims


Application Information

Patent Timeline: no application
Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04
CPC: G06V20/49; G06N3/048; G06N3/045
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH