
Memory network video abstraction method based on multipath features

A network video and memory technology, applied in image communication, selective content distribution, electrical components, etc. It can solve problems such as the summary's lack of connection to the real-time content of the video, the difficulty of quickly finding the required content, and the inability to achieve long-term memory of information.

Inactive Publication Date: 2020-03-27
HEFEI UNIV OF TECH

AI Technical Summary

Problems solved by technology

B. Truong et al. noted that the search and retrieval of large numbers of videos can satisfy the demand for required content, but this does not convey the specific meaning of the actual video content; the difficulty is that the required content remains hard to find quickly. Methods based on content frequency or non-redundancy, although simple and effective, lack a direct connection to the real-time nature of the video.

[0003] In the general video summarization pipeline, most previous models directly feed the sampled video frames into a convolutional neural network (CNN) and take the activations of a certain layer as the output features, i.e., as the features of the extracted video frames. Although this approach is convenient and fast, it ignores the salient regions and objects in the video images, and many currently popular video summarization models likewise ignore the differences between salient regions and objects across video frames. Although a general recurrent neural network (RNN) can alleviate such problems, it has its own functional limitations and cannot achieve long-term memory of information. Therefore, the specific problems to be solved are:




Embodiment Construction

[0029] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the drawings in the embodiments of the present invention.

[0030] As shown in Figures 1-2, the present invention provides a technical solution: a memory network video summarization method based on multi-path features, comprising:

[0031] A video input module, which is used to input the video frames to be processed;

[0032] A feature extraction module, which is used to extract the original feature x_t from the video image; each video is represented by a K*1024-dimensional vector, and the difference between two video frames is taken as the difference feature x_d. The difference features and the original features are then input into the RNN memory network simultaneously. Because the RNN can capture long-term dependencies among video frames, only the temporal memory ...
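The two-path step above can be sketched as follows. This is a minimal illustrative NumPy mock-up, not the patent's actual implementation: the frame features x_t would in practice come from a CNN, the difference features x_d are frame-to-frame deltas, and a hand-rolled LSTM cell stands in for the RNN memory network. All dimensions and weights are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, H = 8, 1024, 256                      # K frames, 1024-dim features, hidden size
x = rng.standard_normal((K, D))             # original features x_t (stand-in for CNN output)
x_d = np.diff(x, axis=0, prepend=x[:1])     # difference features x_d = x_t - x_{t-1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One set of LSTM weights over the concatenated [x_t, x_d, h_{t-1}] input,
# so both feature paths drive the memory update jointly.
W = rng.standard_normal((4 * H, 2 * D + H)) * 0.01
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)                             # cell state: the "long-term memory"
hidden_states = []
for t in range(K):
    z = W @ np.concatenate([x[t], x_d[t], h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                       # gated update lets information persist
    h = o * np.tanh(c)
    hidden_states.append(h)

H_out = np.stack(hidden_states)             # K x H temporal representation
print(H_out.shape)
```

The gated cell state `c` is what gives the memory network its long-term retention, in contrast to a plain RNN whose hidden state is overwritten at every step.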



Abstract

The invention discloses a memory network video summarization method based on multi-path features. The method comprises: a video input module for inputting the video frames to be processed; and a feature extraction module for extracting the original features of the video images, with each video represented by a K * 1024-dimensional vector and the difference between every two video frames taken as a difference feature. The difference features and the original features are input into an RNN memory network simultaneously; since the RNN can capture long-term dependencies among video frames, only the temporal memory network is updated, so that the useful information of the images is better retained. A memory updating module establishes the memory network, so that the relationship between salient regions and objects across video frames can be established, the information contained in the video frames can be effectively memorized over the long term, and key frames can be effectively extracted in combination with the difference information among the multi-path features, achieving the expected effect.
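The final step the abstract describes, extracting key frames from the memory network's per-frame outputs, can be sketched as a scoring-and-selection pass. The scoring weights and the 15% summary budget below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

K, H = 40, 256
hidden = rng.standard_normal((K, H))          # stand-in for per-frame memory-network outputs

w = rng.standard_normal(H) * 0.05             # hypothetical learned scoring weights
scores = 1.0 / (1.0 + np.exp(-(hidden @ w)))  # per-frame importance in (0, 1)

budget = max(1, int(0.15 * K))                # keep ~15% of frames as the summary
keyframes = np.sort(np.argsort(scores)[-budget:])  # indices of the selected key frames
print(keyframes.tolist())
```

In a trained system the scores would reflect how much salient, non-redundant information each frame contributes; the top-scoring frames, kept in temporal order, form the video summary.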

Description

Technical Field

[0001] The invention relates to the technical fields of computer vision and natural language processing, and in particular to a multi-path feature-based memory network video summarization method.

Background Technique

[0002] Video summarization is a learning task involving computer vision and natural language processing: it takes selected videos as input and generates a concise, compact video summary as output. In general, video summarization performs feature extraction on the input video frames in order to select the most representative subset of key frames. Benefiting from the rapid development of deep learning, neural networks, and natural language processing technologies, a large number of research results on video summarization have appeared in recent years. B. Truong et al. proposed that the search and retrieval of large numbers of videos can satisfy the demand for required content, but this does not provide the specific mea...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N21/8549
CPC: H04N21/8549
Inventor: 赵烨, 李巧凤, 刘学亮, 郭艳蓉, 郭丹, 胡珍珍, 吴乐
Owner: HEFEI UNIV OF TECH