Video dense event description method based on generative adversarial network

A video dense event description method, applied in the field of deep learning and image recognition, which addresses the problems that existing models lack a good way to judge the sentences they generate and that they ignore the features provided by temporal action detection.

Active Publication Date: 2020-07-03
HUAZHONG UNIV OF SCI & TECH

Problems solved by technology

[0004] However, most existing video description generation methods consider only the temporal characteristics of the video and ignore the features covered by temporal action detection. At the same time, existing video description generation models lack a good model for judging whether a generated sentence is grammatical and appropriate to the event itself. It is therefore necessary to design a network model that solves the above problems.




Embodiment Construction

[0041] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.

[0042] To achieve the purpose of the present invention, the technical solution adopted is to combine the characteristics of video events with deep learning algorithms and design a neural network model capable of describing dense video events. A three-dimensional convolutional network (Convolutional 3D Network, C3D) is used to extract the spatial and temporal features of the video frames...
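The C3D feature extraction step above can be sketched with a naive 3D convolution. The following is a minimal numpy illustration of how a single spatio-temporal filter turns a stack of frames into a feature volume; it is not the patent's actual C3D architecture, and all shapes, the kernel size, and the random data are illustrative assumptions.

```python
import numpy as np

def conv3d_single(clip, kernel):
    """Naive valid-mode 3D convolution of one video clip with one kernel.

    clip:   (T, H, W) grayscale frame stack
    kernel: (kt, kh, kw) spatio-temporal filter
    Returns a feature volume of shape (T-kt+1, H-kh+1, W-kw+1).
    """
    T, H, W = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Correlate the kernel with the local spatio-temporal patch.
                out[t, i, j] = np.sum(clip[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

# Illustrative use: 8 frames of 16x16 pixels, one 3x3x3 filter.
rng = np.random.default_rng(0)
clip = rng.standard_normal((8, 16, 16))
kernel = rng.standard_normal((3, 3, 3))
features = conv3d_single(clip, kernel)
print(features.shape)  # (6, 14, 14)
```

A real C3D network stacks many such filters with pooling and nonlinearities; this sketch only shows why a 3D kernel captures motion (temporal) as well as appearance (spatial) structure.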


Abstract

The invention discloses a video dense event description method based on a generative adversarial network, and belongs to the field of deep learning and image recognition. The method constructs a video dense event description network comprising: a video feature extraction module that performs feature extraction on video frames to obtain video frame features; a temporal action feature extraction module that learns from the video frame features, using forward and backward propagation through the video, to obtain the feature of each temporal action; a natural language encoder that fuses the video frame features and the temporal action features with an attention mechanism to obtain natural language sentences; and a discriminator that enhances the accuracy of those sentences. The trained model is then used to perform dense video event description. The method fully considers the bidirectional propagation of the video, makes full use of both video features and temporal action features when learning to generate sentences, and constructs a grammar discriminator and a content discriminator, effectively enhancing the accuracy of the generated natural sentences.
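The attention fusion performed by the natural language encoder can be sketched as follows: a minimal numpy example that weights per-frame features by their compatibility with one temporal-action feature. The bilinear scoring matrix `W`, all shapes, and the function names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(frame_feats, action_feat, W):
    """Fuse per-frame features with a temporal-action feature via a
    bilinear attention score (illustrative sketch).

    frame_feats: (T, d) per-frame features
    action_feat: (d,)   feature of one detected temporal action
    W:           (d, d) learned compatibility matrix (assumed)
    Returns the attention-weighted context vector of shape (d,).
    """
    scores = frame_feats @ W @ action_feat   # (T,) compatibility scores
    alpha = softmax(scores)                  # attention weights, sum to 1
    return alpha @ frame_feats               # weighted sum of frame features
```

In the full model, the resulting context vector would condition each decoding step of the sentence generator, so frames relevant to the current action dominate the description.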

Description

Technical Field

[0001] The invention belongs to the field of deep learning and image recognition, and more specifically relates to a video dense event description method based on a generative adversarial network.

Background Technique

[0002] In recent years, with the gradual popularization of high-definition video surveillance and the rapid development of video applications such as short-video social software and live-streaming software, video data has exploded, and intelligently analyzing this massive video data has become a hot topic in the field of visual analysis. Generally speaking, a video dense event description algorithm generates multiple descriptions for a single video and comprises three parts: first, video feature extraction; second, temporal action detection; and third, video description generation.

[0003] Among these, the task of video description generation is to generate a corresponding natural language description for a video. The c...
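The adversarial part of the pipeline, in which discriminators judge the generated sentences, can be sketched as a toy reward computation. This sketch assumes the grammar discriminator and the content discriminator each reduce to a logistic score over a sentence feature vector; the mixing weight `lam`, the parameter names, and the linear form are all assumptions for illustration, not the patent's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_score(sentence_feat, w, b):
    """Toy discriminator: logistic score in (0, 1) for one sentence
    feature vector. w and b stand in for a trained network."""
    return sigmoid(w @ sentence_feat + b)

def adversarial_reward(sentence_feat, grammar_params, content_params, lam=0.5):
    """Combine a grammar score and a content score into one reward for
    the caption generator (the equal mixing weight is an assumption).

    grammar_params / content_params: (w, b) tuples for each discriminator.
    """
    g = discriminator_score(sentence_feat, *grammar_params)  # is it grammatical?
    c = discriminator_score(sentence_feat, *content_params)  # does it match the event?
    return lam * g + (1.0 - lam) * c
```

During adversarial training, the generator would be updated to raise this reward while both discriminators are trained to separate real captions from generated ones.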


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06F16/78, G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06F16/7867, G06N3/08, G06V20/41, G06N3/045, G06F18/241, Y02D10/00
Inventors: 李玉华, 朱志杰, 李瑞轩, 辜希武
Owner: HUAZHONG UNIV OF SCI & TECH