Video summarizing method and device based on deep learning, and terminal equipment

A technology combining deep learning and video processing, applied in the field of deep learning, addressing problems such as the long time span of videos, which creates obstacles to users' quick browsing.

Active Publication Date: 2018-05-25
SHENZHEN INST OF ADVANCED TECH

AI Technical Summary

Problems solved by technology

[0002] In today's era of widespread Internet access, online videos on video websites emerge endlessly. At the same time, videos often have a relatively long time span, which creates obstacles for users who want to browse quickly, because users cannot fully browse a large number of online videos in a limited time.



Examples


Embodiment Construction

[0065] Embodiments of the present invention provide a deep learning-based video summarization method, device, and terminal device, which address the problem of how to quickly generate a summary of a video.

[0066] In order to make the purpose, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.

[0067] See figure 1; an embodiment of the deep learning-based video summary method in the embodiment of t...



Abstract

The invention discloses a video summarization method based on reinforcement learning and deep learning. The method comprises the steps of: segmenting a target video to obtain a plurality of video segments; extracting a feature vector for every video frame of each video segment; inputting the feature vectors of the video frames of each video segment into a pre-trained deep neural network to obtain a probability value for each frame; calculating an importance value for each video segment from the probability values of all of its video frames; selecting the video segments with the larger importance values while ensuring that the total duration of the selected segments does not exceed a preset proportion of the total duration of the target video; and arranging the selected video segments to obtain the video summary. A reward function representing the representativeness and diversity of the video summary is defined, and the deep network is trained on unsupervised and supervised data by means of reinforcement learning.
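The listing contains no code, but the pipeline described in the abstract can be sketched in a few lines. The Python sketch below is only an illustration under assumptions: the helper names (`segment_video`, `frame_probabilities`, `segment_importance`, `select_segments`), the fixed-length segmentation, the mean-probability importance, and the greedy selection are hypothetical choices rather than the patent's actual implementation, and the scoring "model" is a random stand-in for the pre-trained deep network.

```python
# Sketch of the summarization pipeline from the abstract (assumptions noted above).
import numpy as np

def segment_video(num_frames, seg_len=60):
    """Split frame indices into fixed-length segments (hypothetical: the patent
    does not specify the segmentation method in this listing)."""
    return [list(range(s, min(s + seg_len, num_frames)))
            for s in range(0, num_frames, seg_len)]

def frame_probabilities(features, model):
    """Feed per-frame feature vectors to a pre-trained deep network and return
    one probability value per frame."""
    return model(features)  # expected shape: (num_frames,)

def segment_importance(probs, segments):
    """Importance of a segment = mean probability of its frames (one plausible
    reading of 'calculated from the probability values of all its frames')."""
    return [float(np.mean(probs[seg])) for seg in segments]

def select_segments(segments, importance, fps, total_duration, ratio=0.15):
    """Greedily pick the most important segments while keeping the summary's
    total duration within `ratio` of the original video's duration."""
    budget = ratio * total_duration
    chosen, used = [], 0.0
    for idx in np.argsort(importance)[::-1]:
        seg_dur = len(segments[idx]) / fps
        if used + seg_dur <= budget:
            chosen.append(int(idx))
            used += seg_dur
    return sorted(chosen)  # arrange selected segments in temporal order

# Usage with dummy data (random features and a random stand-in "model"):
num_frames, fps = 3000, 30
features = np.random.rand(num_frames, 1024)        # e.g. CNN frame features
dummy_model = lambda x: np.random.rand(len(x))     # stand-in for the trained network
segments = segment_video(num_frames)
probs = frame_probabilities(features, dummy_model)
imp = segment_importance(probs, segments)
summary_segments = select_segments(segments, imp, fps, num_frames / fps)
```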

Description

Technical field

[0001] The present invention relates to the technical field of deep learning, and in particular to a deep learning-based video summarization method, device, and terminal equipment.

Background technique

[0002] In today's era of widespread Internet access, online videos on video websites emerge endlessly. At the same time, videos often have a relatively long time span, which creates obstacles for users who want to browse quickly, because users cannot fully browse a large number of online videos in a limited time. In order to help users understand the general content of a video in a short time before watching the complete video, finding a method for quickly summarizing videos is an important research topic for those skilled in the art.

Contents of the invention

[0003] Embodiments of the present invention provide a deep learning-based video summarization method, device, and terminal device, which can quickly summarize videos and grea...
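The abstract above also states that a reward function reflecting the representativeness and diversity of the summary is defined and used to train the deep network by reinforcement learning. The patent's exact reward is not reproduced in this listing; the sketch below shows one common formulation of such a reward (pairwise-dissimilarity diversity plus nearest-selected-frame representativeness), purely as an illustration, with hypothetical function names and random dummy features.

```python
# Hedged sketch of a representativeness + diversity reward; not the patent's exact formula.
import numpy as np

def diversity_reward(feats, selected):
    """Mean pairwise dissimilarity (1 - cosine similarity) among selected frames."""
    if len(selected) < 2:
        return 0.0
    x = feats[selected]
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = x @ x.T
    n = len(selected)
    return float((1.0 - sim).sum() / (n * (n - 1)))  # diagonal contributes zero

def representativeness_reward(feats, selected):
    """How well the selected frames cover all frames: exp(-mean distance from
    each frame to its nearest selected frame)."""
    if len(selected) == 0:
        return 0.0
    dists = np.linalg.norm(feats[:, None, :] - feats[None, selected, :], axis=2)
    return float(np.exp(-dists.min(axis=1).mean()))

def reward(feats, selected):
    """Total reward used as the reinforcement-learning training signal."""
    return diversity_reward(feats, selected) + representativeness_reward(feats, selected)

# Example with random frame features and an arbitrary selection of frame indices:
feats = np.random.rand(200, 512)
print(reward(feats, selected=[3, 40, 95, 150]))
```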


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06F17/30
CPC: G06F16/739; G06V20/40; G06V20/46
Inventor: 乔宇; 周锴阳
Owner: SHENZHEN INST OF ADVANCED TECH