
Deep video saliency detection method based on motion and memory information

A detection method and deep video technology, applied in the field of computer vision, solving the problems that high-level semantic information in complex, dynamic video is insufficiently understood and that inter-frame information cannot be fully utilized, to achieve the effect of ensuring accuracy

Active Publication Date: 2018-07-03
Owner: TIANJIN UNIV


Problems solved by technology

[0005] Research on video saliency detection, and on eye fixation point detection in particular, currently relies mainly on low-level hand-crafted feature extraction, which performs poorly on video with complex scenes, multiple moving objects, and rich high-level semantic information. Because such video is complex, dynamic, and semantically rich, more in-depth research is needed to solve these difficult problems.
[0006] Research on video eye fixation point detection reveals two main problems. First, high-level semantic information within a single video frame is not sufficiently understood, so the fixation points of a single frame cannot be predicted well. Second, inter-frame information is not fully utilized: there is no collaborative processing of the motion information and memory information between video frames, so past saliency information cannot be applied to the current frame while moving objects are being detected.
[0007] Most existing video eye fixation point detection techniques simply decompose the video into separate images and apply an image saliency detection method to each frame, using neither the motion information between frames nor the fact that video triggers the human memory mechanism and thereby generates memory information. Another group of techniques uses optical flow algorithms in the hope of obtaining motion information, but likewise fails to consider the influence of memory information on video eye fixation point detection.



Examples


Embodiment 1

[0050] The embodiment of the present invention is a deep video eye fixation point detection technique that is based on a fully convolutional neural network and considers motion and memory information cooperatively, analyzing and fully understanding the original video data; see Figure 1 and Figure 2. Its main process is divided into the following five parts:

[0051] 101: Obtain a detection data set consisting of an image salient object detection data set and a video eye fixation point detection data set; compute ground truth maps for the video eye fixation point detection data set to obtain the final eye fixation map of the current frame, as sketched below;
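
The record does not spell out the ground truth computation itself; a common approach for eye fixation data is to blur the recorded fixation points into a dense density map with a Gaussian. The sketch below assumes that approach; the function name and the sigma value are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_ground_truth(fixations, height, width, sigma=25.0):
    """Blur recorded eye fixation points into a dense ground truth map.

    fixations: iterable of (row, col) gaze coordinates for one frame.
    sigma: Gaussian spread in pixels (an assumed value, not from the patent).
    """
    gt = np.zeros((height, width), dtype=np.float32)
    for r, c in fixations:
        if 0 <= r < height and 0 <= c < width:
            gt[int(r), int(c)] = 1.0          # mark each fixation point
    gt = gaussian_filter(gt, sigma=sigma)      # spread points into a density map
    if gt.max() > 0:
        gt /= gt.max()                         # normalize to [0, 1]
    return gt
```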

[0052] 102: Construct four models with different deconvolution layers for extracting local information and global information; a possible construction is sketched below;
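
The record states only that the four models differ in their deconvolution layers. The PyTorch sketch below shows one plausible family in which a small convolutional encoder is followed by a configurable stack of ConvTranspose2d layers; the backbone, channel widths, and the num_deconv values are assumptions for illustration, not the patent's architecture.

```python
import torch
import torch.nn as nn

class SaliencyFCN(nn.Module):
    """Fully convolutional saliency model; variants differ in deconvolution depth."""

    def __init__(self, num_deconv):
        super().__init__()
        # Small illustrative encoder (the patent's exact backbone is not given here).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # Stack of deconvolution layers; each doubles the spatial resolution, so
        # shallow stacks keep coarse global context while deeper stacks recover
        # finer local detail.
        layers, ch = [], 256
        for _ in range(num_deconv):
            layers += [nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            ch //= 2
        self.decoder = nn.Sequential(*layers)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel saliency score

    def forward(self, x):
        return torch.sigmoid(self.head(self.decoder(self.encoder(x))))

# Four variants with different deconvolution depths, as in step 102.
models = [SaliencyFCN(num_deconv=n) for n in (0, 1, 2, 3)]
```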

[0053] 103: Pre-train the four models on the image salient object detection data set, then fine-tune the pre-trained models on the video eye fixation point detection data set; a two-stage training sketch follows;
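
Step 103's two-stage schedule could look like the following minimal sketch; the loss function, optimizer, learning rates, and loader names are assumptions rather than values from the patent.

```python
import torch

def train_stage(model, loader, epochs, lr):
    """One training stage: regress predicted maps onto dense ground truth maps."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.BCELoss()
    for _ in range(epochs):
        for frames, gt_maps in loader:
            opt.zero_grad()
            loss = loss_fn(model(frames), gt_maps)  # compare against ground truth
            loss.backward()
            opt.step()

# Hypothetical usage with the data sets of Tables 1 and 2 (loader names are stand-ins):
# for model in models:
#     train_stage(model, image_saliency_loader, epochs=10, lr=1e-3)  # pre-training
#     train_stage(model, video_fixation_loader, epochs=5, lr=1e-4)   # fine-tuning, lower rate
```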

Embodiment 2

[0076] The scheme of Embodiment 1 is further described below in conjunction with specific calculation formulas, the accompanying drawings, examples, and Tables 1-3; see the following description for details:

[0077] 201: Data set production;

[0078] To improve the generalization ability of the model, this method selects the 8 most commonly used data sets for image saliency detection and video saliency detection and builds a data set suited to this task: 6 image salient object detection data sets (see Table 1) and 2 video eye fixation point detection data sets (see Table 2). The 8 data sets are introduced in Table 1 and Table 2.

[0079] Table 1

[0080]
Data set    MSRA    THUS     THURS    DUT-OMRON    DUTS     ECSSD
Size        1000    10000    6232     5168         15572    1000

[0081] Table 2

[0082]

[0083]

[0084] Among them, the six image salient object detection data sets are MSRA, THUS, THURS, DUT-OMRON, DUTS, and ECSSD...

Embodiment 3

[0185] The feasibility of the schemes in Embodiments 1 and 2 is verified below with concrete experimental data; see the following description for details:

[0186] See Figure 7: (i) is the original data frame, (ii) is the model prediction probability map, and (iii) is the visualized heat map.

[0187] Here, (ii) is the eye fixation point prediction obtained by detecting the original data frame in (i) with the model SGF(E) of the present invention, and (iii) is the heat map obtained by visualizing the detection result in (ii) with a color distribution matrix.
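
The "color distribution matrix" visualization in (iii) is not detailed in this record; a standard way to render a probability map as a heat map is a colormap lookup, sketched below with OpenCV's JET map as a stand-in. The function name and blending weight are illustrative assumptions.

```python
import cv2
import numpy as np

def to_heatmap(prob_map, frame=None, alpha=0.6):
    """Render a [0, 1] probability map as a color heat map, optionally
    blended over the original data frame for inspection."""
    gray = np.uint8(255 * np.clip(prob_map, 0.0, 1.0))
    heat = cv2.applyColorMap(gray, cv2.COLORMAP_JET)   # colormap lookup
    if frame is not None:
        heat = cv2.addWeighted(frame, 1.0 - alpha, heat, alpha, 0.0)
    return heat
```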



Abstract

The invention discloses a deep video saliency detection method based on motion and memory information. The method comprises the following steps: obtaining a detection data set formed by an image salient object detection data set and a video eye fixation point detection data set; computing ground truth maps for the video eye fixation point detection data set to obtain the final eye fixation map of the current frame; building four models with different deconvolution layers for extracting local information and global information; pre-training the four models on the image salient object detection data set and fine-tuning the pre-trained models on the video eye fixation point detection data set; and using a salient moving object boundary detection algorithm to extract motion information between two video frames, using the detection result map of the previous frame as memory information, and integrating the memory information and motion information into the deep model SGF(E), thus realizing end-to-end detection. The method can detect valid eye fixation points in a video.
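
The abstract does not specify how the motion and memory information enter SGF(E); one hedged sketch is to append them as extra input channels, with dense optical flow magnitude standing in for the patent's salient moving object boundary detection algorithm. The channel layout, function name, and the use of Farneback flow are assumptions for illustration.

```python
import cv2
import numpy as np

def build_sgf_input(prev_frame, curr_frame, prev_detection):
    """Stack appearance, a motion cue, and memory into one model input.

    prev_frame, curr_frame: uint8 BGR frames; prev_detection: the previous
    frame's [0, 1] detection result map, reused here as memory information.
    """
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Motion cue: dense optical flow magnitude (a stand-in for the patent's
    # salient moving object boundary detection algorithm).
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.linalg.norm(flow, axis=2)
    motion /= motion.max() + 1e-8
    rgb = curr_frame.astype(np.float32) / 255.0
    # Channels: 3 appearance + 1 motion + 1 memory = a 5-channel input tensor.
    return np.dstack([rgb, motion[..., None], prev_detection[..., None]])
```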

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to a deep video saliency detection method based on motion and memory information.

Background technique

[0002] Saliency detection based on the visual attention mechanism is a very important research topic in the field of computer vision, and saliency detection is essential for image / video analysis. According to biological research, the visual attention mechanism and the memory mechanism are two important psychological regulation mechanisms in human visual information processing. The vast majority of human information comes from visual information. The attention and memory mechanisms help humans allocate processing resources effectively when handling large amounts of visual information, filtering and screening it, that is, focusing only on regions of interest and eliminating irrelevant information. When processing static visual information, the attention mechanism plays the leading role, a...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00
CPC: G06V40/193; G06V20/42
Inventors: 孙美君, 周子淇, 王征
Owner: TIANJIN UNIV