
Key frame extraction method and system based on a visual attention model

A key frame extraction technology based on a visual attention model, applied in the field of video analysis, which addresses problems such as the heavy workload of manual annotation and the differing ways people understand the same video.

Active Publication Date: 2014-05-28
SUN YAT SEN UNIV +1

AI Technical Summary

Problems solved by technology

Faced with a large number of videos, this task not only involves a heavy workload; different people also understand the same video differently, so viewers cannot judge from the author's text annotations whether a video matches their own interests.



Examples


Embodiment Construction

[0024] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0025] A video key frame extraction method based on a visual attention model disclosed by the present invention is implemented as follows:

[0026] First, in the spatial domain, binomial coefficients are used to filter the global contrast for saliency detection, and an adaptive threshold is used to extract the target region. The specific steps are as follows (a rough code sketch of steps (11) and (12) is given after the list):

[0027] (11) The binomial coefficients are constructed from Yang Hui's (Pascal's) triangle, where the normalization factor of the N-th layer is 2^N. The fourth layer is selected, giving the filter coefficients B_4 = (1/16)[1 4 6 4 1];

[0028] (12) Let I be the original stimulus intensity and Ī the mean surrounding stimulus intensity, obtained as the convolution of I with B_4; the stimulus strength of each pixel is measured by its vector in the CIELAB color space...
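As a rough illustration of steps (11) and (12), the sketch below builds the binomial coefficients from Yang Hui's (Pascal's) triangle, smooths the CIELAB image with B_4 as the "mean surrounding stimulus", and scores each pixel by its Lab distance to that surround. The exact saliency expression and the adaptive threshold are truncated in the source, so the distance formula and the threshold of k times the mean saliency used here are assumptions, not the patent's definitive method; OpenCV and NumPy are assumed to be available.

```python
import numpy as np
import cv2  # assumed available for color conversion and separable filtering

def binomial_kernel(n: int) -> np.ndarray:
    """Row n of Yang Hui's (Pascal's) triangle, normalized by 2**n."""
    row = np.array([1.0])
    for _ in range(n):
        row = np.convolve(row, [1.0, 1.0])
    return row / (2.0 ** n)  # n = 4 gives (1/16)[1 4 6 4 1]

def spatial_saliency(bgr: np.ndarray) -> np.ndarray:
    """Hypothetical spatial saliency: Lab distance of each pixel to its
    B_4-smoothed surround, normalized to [0, 1]."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    b4 = binomial_kernel(4)
    surround = cv2.sepFilter2D(lab, -1, b4, b4)   # 'mean surrounding stimulus'
    sal = np.linalg.norm(lab - surround, axis=2)  # per-pixel CIELAB distance
    return sal / (sal.max() + 1e-8)

def target_mask(sal: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Adaptive threshold (assumed form): keep pixels above k times the mean."""
    return sal > k * sal.mean()
```

Here binomial_kernel(4) reproduces the B_4 coefficients quoted in step (11); other rows of the triangle would give wider smoothing windows.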



Abstract

The invention discloses a key frame extraction method and system based on a visual attention model. In the spatial domain, the method filters the global contrast with binomial coefficients for saliency detection and uses an adaptive threshold to extract the target region; this algorithm preserves the boundary of the salient target region well and keeps the saliency within the region uniform. In the temporal domain, the method defines motion saliency: the motion of the target is estimated via a homography matrix, key points are used in place of the full target for saliency detection, the result is fused with the spatial-domain saliency, and a boundary extension method based on an energy function is proposed to obtain a bounding box that serves as the salient target region of the temporal domain. Finally, the method reduces the richness of the video through the salient target region, and an online-clustering, shot-adaptive method is adopted for key frame extraction.
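The abstract's temporal-domain and key-frame steps can be pictured with the hedged sketch below: the dominant inter-frame motion is estimated with a homography fitted to matched key points, points that the homography fails to explain are treated as motion-salient, and a simple online clustering pass keeps one frame per cluster as a key frame. The choice of ORB features, RANSAC, the brute-force matcher, the per-frame descriptor used for clustering, and the thresholds are all assumptions; the patent names only the homography, the key points, and online clustering.

```python
import numpy as np
import cv2  # assumed; the patent does not name a specific feature detector

def motion_salient_points(prev_gray: np.ndarray, curr_gray: np.ndarray,
                          reproj_thresh: float = 3.0) -> np.ndarray:
    """Return key points in the current frame whose motion is not explained by
    the dominant homography (a rough proxy for motion saliency)."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2), np.float32)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 4:                      # findHomography needs >= 4 pairs
        return np.empty((0, 2), np.float32)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    if inliers is None:
        return np.empty((0, 2), np.float32)
    return pts2[inliers.ravel() == 0]         # RANSAC outliers = moving targets

def online_cluster_key_frames(descriptors, frames, dist_thresh: float):
    """Toy online clustering: a frame whose descriptor is far from every existing
    cluster centre starts a new cluster and is kept as that cluster's key frame."""
    centres, key_frames = [], []
    for desc, frame in zip(descriptors, frames):
        dists = [np.linalg.norm(desc - c) for c in centres]
        if not dists or min(dists) > dist_thresh:
            centres.append(desc)
            key_frames.append(frame)
    return key_frames
```

In the patent's pipeline, the motion-salient points would then be fused with the spatial saliency and grown into a bounding box via the energy-function boundary extension before clustering; those steps are not sketched here.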

Description

Technical Field

[0001] The invention relates to the technical field of video analysis, and in particular to a method and system for extracting key frames based on a visual attention model.

Background

[0002] With the rapid development of Internet technology, we have entered the era of the information explosion; network applications and multimedia technology have developed rapidly and are widely used. As a common carrier of network information, video is vivid and intuitive, with strong appeal and expressiveness, and has therefore been widely adopted in many fields, leading to a massive increase in video data. Taking the well-known video website YouTube as an example, users upload about 60 hours of video every minute (data from January 23, 2012), and this figure keeps growing. How to quickly and effectively store, manage, and access massive video resources has become an important issue in the current field of video applications. D...


Application Information

IPC(8): G06T 7/00; G06T 7/20
Inventors: 纪庆革 (Ji Qingge), 赵杰 (Zhao Jie), 刘勇 (Liu Yong)
Owner: SUN YAT SEN UNIV