
Video saliency detection method based on global motion estimation

A global-motion-based detection technology, applied in the fields of digital video signal modification, electrical components, image communication, etc. It addresses problems of existing methods such as limited practical applicability, failure to account for the influence of global motion, and inability to fully exploit the detection results of individual features, achieving the effects of high robustness, reasonable design, and strong scalability.

Active Publication Date: 2017-11-24
北京牡丹电子集团有限责任公司数字科技中心

AI Technical Summary

Problems solved by technology

Muthuswamy et al. (Karthik Muthuswamy and Deepu Rajan, "Salient motion detection in compressed domain," IEEE Signal Processing Letters, vol. 20, pp. 996–999, 2013) proposed a two-layer algorithm for distinguishing salient motion, but it does not solve the problem of fusing saliency maps obtained from multiple features.
Fang et al. (Yuming Fang, Zhou Wang, and Weisi Lin, "Video saliency incorporating spatiotemporal cues and uncertainty weighting," in Multimedia and Expo (ICME), 2013 IEEE International Conference on, IEEE, 2013, pp. 1–6) proposed an adaptive fusion method based on uncertainty weighting that can achieve better detection results, but this method needs to know the ground-truth saliency map in advance when calculating the weights, which hinders its practical application, and it is not suitable for scenes with global motion.
[0006] To sum up, among existing video saliency detection methods there are few compressed-domain methods, and those that exist do not consider the impact of global motion on the detection results; moreover, the techniques for fusing saliency maps obtained under multiple features are not mature enough to fully exploit the detection results of each individual feature.

Method used



Examples


Embodiment Construction

[0057] Embodiments of the present invention are described in further detail below in conjunction with the accompanying drawings:

[0058] A video saliency detection method based on global motion estimation, as shown in Figure 1, comprises the following steps:

[0059] Step 1: Extract the spatial-domain features and temporal-domain features from the compressed bitstream, and obtain the spatial-domain saliency map using a two-dimensional Gaussian weight function together with the spatial-domain features.
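The excerpt states only that a two-dimensional Gaussian weight function is combined with the spatial-domain features; it does not give the exact formulation. The sketch below assumes the common center-bias form, with the Gaussian centered on the frame and a hypothetical sigma_ratio parameter, and simply weights a per-block spatial feature map by it.

```python
import numpy as np

def gaussian_center_weight(rows, cols, sigma_ratio=0.3):
    """2-D Gaussian weight over the block grid, centered on the frame.
    The center position and sigma_ratio are illustrative assumptions,
    not values taken from the patent."""
    ys, xs = np.mgrid[0:rows, 0:cols]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0      # assumed frame-center bias
    sigma_y, sigma_x = sigma_ratio * rows, sigma_ratio * cols
    return np.exp(-((ys - cy) ** 2 / (2 * sigma_y ** 2)
                    + (xs - cx) ** 2 / (2 * sigma_x ** 2)))

def spatial_saliency(spatial_feature, sigma_ratio=0.3):
    """Weight a per-block spatial feature map (e.g. one derived from DCT
    coefficients) by the 2-D Gaussian and normalize to [0, 1]."""
    w = gaussian_center_weight(*spatial_feature.shape, sigma_ratio=sigma_ratio)
    s = w * spatial_feature
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```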

[0060] In this step, the original video is compressed with the H.264 reference software version 18.5 (JM18.5), and each frame is divided into 4×4 blocks; for CIF sequences, each frame is thus divided into 88×72 blocks. Extract the motion vector and DCT coefficients corresponding to each block; the motion vector represents the temporal-domain information, and the DCT coefficients of each block include one direct-current component (DC) and fifteen alternating-current components (AC1~AC15), extract t...
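As a rough illustration of the data layout described in this paragraph (one motion vector plus one DC and fifteen AC coefficients per 4×4 block, giving an 88×72 block grid for CIF), the following sketch defines a per-frame feature container. The actual extraction is done by instrumenting the JM18.5 decoder and is not shown; the class and function names are hypothetical.

```python
import numpy as np
from dataclasses import dataclass

# A CIF frame (352x288) split into 4x4 blocks: 352/4 = 88 columns, 288/4 = 72 rows.
BLOCK_COLS, BLOCK_ROWS = 88, 72

@dataclass
class FrameFeatures:
    """Per-block features taken from the compressed stream (illustrative layout only)."""
    mv: np.ndarray    # shape (BLOCK_ROWS, BLOCK_COLS, 2): motion vectors, the temporal feature
    dct: np.ndarray   # shape (BLOCK_ROWS, BLOCK_COLS, 16): DC followed by AC1..AC15, the spatial feature

def split_dct(frame: FrameFeatures):
    """Separate the DC component from the fifteen AC components of each block."""
    dc = frame.dct[..., 0]        # direct-current component of each block
    ac = frame.dct[..., 1:16]     # fifteen alternating-current components
    return dc, ac
```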



Abstract

The invention relates to a video saliency detection method based on global motion estimation, characterized in that it comprises the following steps: extracting spatial-domain features and temporal-domain features from the compressed bitstream, and obtaining a spatial-domain saliency map using a two-dimensional Gaussian weight function together with the spatial-domain features; using a cascade structure to filter out the background motion vectors belonging to the global motion, obtaining a coarse temporal-domain saliency map from the remaining motion vectors, and optimizing the coarse temporal-domain saliency map according to the macroblock information; and adaptively fusing the temporal-domain and spatial-domain saliency maps on the basis of the respective features to obtain the salient regions of the image. The invention has a reasonable design: the features considered in the spatial-domain and temporal-domain saliency detection are comprehensive, so the final saliency map better matches the subjective perceptual quality of the human eye; it is highly robust and does not depend on changes in video content; and it has strong scalability, since additional features can also be incorporated through the fusion method of the invention.
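The following sketch only outlines the processing chain summarized in the abstract, under stated simplifying assumptions: the patent's cascade structure for rejecting background (global-motion) vectors is replaced here by a single crude translational fit, the macroblock-based refinement step is omitted, and the adaptive fusion weight is a dispersion-based placeholder rather than the method actually claimed.

```python
import numpy as np

def filter_global_motion(mv, inlier_thresh=1.0):
    """Suppress motion vectors explained by background (global) motion.
    Stand-in for the patent's cascade structure: a single translational
    model is estimated and vectors close to it are treated as background."""
    global_mv = mv.reshape(-1, 2).mean(axis=0)           # crude global-motion estimate
    residual = np.linalg.norm(mv - global_mv, axis=-1)   # deviation from background motion
    foreground = residual > inlier_thresh
    return np.where(foreground[..., None], mv, 0.0), residual

def temporal_saliency(mv):
    """Coarse temporal saliency from the residual motion; the patent's
    macroblock-based optimization of this map is omitted here."""
    _, residual = filter_global_motion(mv)
    return residual / (residual.max() + 1e-12)

def fuse(spatial_map, temporal_map):
    """Fuse the two maps; the dispersion-based weight below is only a
    placeholder for the patent's feature-adaptive fusion."""
    w_t = temporal_map.std() / (temporal_map.std() + spatial_map.std() + 1e-12)
    return w_t * temporal_map + (1.0 - w_t) * spatial_map
```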

Description

Technical Field
[0001] The invention belongs to the technical field of video detection, and in particular relates to a video saliency detection method based on global motion estimation.
Background Technique
[0002] With the vigorous development of Internet and communication technology, people acquire and exchange more and more information in their daily lives, including text, images, audio, and video. Since video contains a large amount of information and rich content, it has become the main information carrier. However, such a huge amount of information is limited by bandwidth and storage capacity when it is transmitted and stored, so it needs to be processed according to the visual characteristics of the human eye, the receptor of the information, to extract the parts that the human eye pays attention to. Video saliency detection is an important mechanism for analyzing video information according to the visual characteristics of the human eye. It can be ...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): H04N19/51H04N19/107
Inventor: 白旭, 徐俊, 任婧婧
Owner: 北京牡丹电子集团有限责任公司数字科技中心