Adaptive video key frame extraction method under emotional arousal

A video key frame extraction technology, applicable to special-purpose data processing, image data processing, instruments, and similar fields. It addresses the problem that extracted video key frames may lack value and representativeness, and achieves representative and effective extraction.

Status: Inactive; Publication Date: 2014-08-27
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

An important property of video key frames is that they ultimately serve users. Methods that fail to consider the problem from the video viewer's perspective, and that fail to locate key frames through their emotional semantics, often extract video key frames that lack value and representativeness.




Detailed Description of the Embodiments

[0028] The technical solution of the present invention will be specifically described below in conjunction with the accompanying drawings.

[0029] A method of the present invention for adaptively extracting video key frames under emotional arousal comprises the following steps:

[0030] Step S1: Extract all video frames from the video footage and compute the visual emotional arousal of each video frame;
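Per the abstract, visual emotional arousal is identified with the motion intensity of each frame. A minimal sketch follows, assuming mean absolute inter-frame difference as a stand-in for the motion-intensity measure, which this excerpt does not give; the function name and normalization are illustrative.

```python
# Sketch of Step S1: per-frame visual arousal from motion intensity.
# Assumption: mean absolute inter-frame difference approximates the
# patent's motion-intensity measure (exact formula not in this excerpt).
import cv2
import numpy as np

def visual_arousal(video_path: str) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    arousal, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # The first frame has no predecessor; define its motion intensity as 0.
        arousal.append(0.0 if prev is None else float(np.mean(np.abs(gray - prev))))
        prev = gray
    cap.release()
    a = np.asarray(arousal)
    return a / a.max() if a.size and a.max() > 0 else a  # normalize to [0, 1]
```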

[0031] Step S2: Compute the auditory emotional arousal of the audio data synchronized with each video frame of step S1;
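Per the abstract, auditory arousal is computed from short-time average energy and pitch. A sketch under stated assumptions: mono audio sampled at `sr`, one window per video frame, pitch estimated by autocorrelation in a 50-400 Hz band, and an equal-weight combination of the two cues (the excerpt does not specify how energy and pitch are combined).

```python
# Sketch of Step S2: auditory arousal from short-time average energy and
# pitch, one audio window per synchronized video frame.
# Assumptions: mono float samples; frame rate fps < 50 so each window is
# long enough for the 50 Hz pitch floor; equal weighting is illustrative.
import numpy as np

def auditory_arousal(samples: np.ndarray, sr: int, fps: float, n_frames: int) -> np.ndarray:
    win = int(sr / fps)                          # audio samples per video frame
    energy, pitch = np.zeros(n_frames), np.zeros(n_frames)
    for i in range(n_frames):
        seg = samples[i * win:(i + 1) * win]
        if seg.size < win:                       # ran out of audio
            break
        energy[i] = float(np.mean(seg ** 2))     # short-time average energy
        ac = np.correlate(seg, seg, mode="full")[seg.size - 1:]  # lags 0..N-1
        lo, hi = int(sr / 400), int(sr / 50)     # search the 50-400 Hz band
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitch[i] = sr / lag                      # crude autocorrelation pitch
    norm = lambda x: x / x.max() if x.max() > 0 else x
    return 0.5 * norm(energy) + 0.5 * norm(pitch)
```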

[0032] Step S3: Fuse the visual emotional arousal and the auditory emotional arousal by linear weighting with equal weights to obtain the video emotional arousal of each video frame, and finally, from the video emotional arousal of each shot, adaptively compute the number KN of video key frames to extract from that shot;
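The equal-weight fusion is stated explicitly, but the excerpt only says that KN is computed adaptively from each shot's arousal. A sketch, assuming a hypothetical allocation of a total key-frame budget in proportion to each shot's arousal range; `shots` holds (start, end) frame-index pairs, end exclusive and nonempty.

```python
# Sketch of Step S3: equal-weight fusion (per the patent) and an
# assumed adaptive allocation of KN per shot (formula not in excerpt).
import numpy as np

def fuse_and_allocate(visual: np.ndarray, auditory: np.ndarray,
                      shots: list[tuple[int, int]], total_kf: int):
    arousal = 0.5 * visual + 0.5 * auditory      # equal weights, per Step S3
    # Illustrative: allocate key frames in proportion to each shot's
    # arousal variation (max minus min), with at least one per shot.
    variation = np.array([arousal[s:e].max() - arousal[s:e].min() for s, e in shots])
    total = variation.sum()
    weights = variation / total if total > 0 else np.full(len(shots), 1.0 / len(shots))
    kn = [max(1, round(total_kf * w)) for w in weights]
    return arousal, kn
```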

[0033] Step S4: Obtain the video emotional arousal curve of each shot, and take the video frames corresponding to the KN highest peaks of the curve as the key frames of that shot.
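Following the abstract's description of Step S4, a peak-picking sketch using scipy; the fallback to the top-KN samples when a shot has fewer than KN local peaks is an assumption, not part of the patent.

```python
# Sketch of Step S4: take the KN highest peaks of a shot's arousal
# curve as its key frames.
import numpy as np
from scipy.signal import find_peaks

def key_frames_for_shot(arousal: np.ndarray, start: int, end: int, kn: int) -> list[int]:
    curve = arousal[start:end]
    peaks, _ = find_peaks(curve)                 # indices of local maxima
    if len(peaks) < kn:                          # assumption: fall back to
        peaks = np.argsort(curve)[::-1][:kn]     # the KN highest samples
    top = peaks[np.argsort(curve[peaks])[::-1][:kn]]
    return sorted(int(start + p) for p in top)
```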



Abstract

The invention relates to an adaptive video key frame extraction method under emotional arousal. The method proceeds from the emotional fluctuations of the video viewer: it computes the motion intensity of the video frames as the viewer's visual emotional arousal while watching, computes short-time average energy and pitch as the auditory emotional arousal, and linearly fuses the two to obtain the video emotional arousal of each frame, generating the video emotional arousal curve of the shot. The number KN of key frames to allocate to the shot is then obtained from the variation of the shot's video emotional arousal. Finally, the video frames corresponding to the KN highest peaks of the video emotional arousal curve are taken as the shot's key frames. The method is simple, works from the perspective of the viewer's emotional fluctuations, and uses the video emotional arousal to semantically guide key frame extraction; the extracted video key frames are more representative and effective.

Description

Technical Field

[0001] The invention relates to the field of video image processing, and in particular to a method for adaptively extracting video key frames under emotional arousal.

Background Technique

[0002] In recent years, the development of multimedia technology and the popularization of portable video devices have generated more and more video data, and browsing and managing this data quickly and efficiently has become an urgent problem. Human time and energy cannot keep pace with the growth of video: on the one hand, people have limited time and energy and cannot browse all the videos they are interested in; on the other hand, the number of videos keeps skyrocketing. Sports video enthusiasts, for example, cannot browse every game video in a given period, and in fact may only care about a few key moments of each game. To save time, it is necessary to analyze the video content to some extent, and e...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30; G06T7/20
CPC: G06T7/20
Inventors: 余春艳, 翁子林, 苏晨涵, 叶东毅, 陈昭炯
Owner: FUZHOU UNIV