Saliency fusion method based on graph-based 3D fixation point prediction

A saliency fusion and gaze point prediction technique in the field of image processing and computer vision. It addresses the problem that saliency predictions from different modal features are inconsistent, and achieves faster computation, fewer abrupt changes in saliency values, and improved overall performance.

Active Publication Date: 2018-12-07
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0010] In view of the above defects and improvement needs of the prior art, the present invention provides a saliency fusion method based on graph-based 3D gaze point prediction, thereby solving the technical problem in the prior art that, during multi-modal feature fusion, the saliency predicted from different modal features is inconsistent or even contradictory.



Examples


Embodiment Construction

[0028] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with one another as long as they do not conflict.

[0029] A saliency fusion method for graph-based 3D gaze point prediction, comprising saliency map generation and graph-based fusion.

[0030] Saliency map generation comprises obtaining the saliency maps of each original picture frame from the original video sequence; the saliency maps comprise a 2D static saliency map, a motion saliency map, a depth saliency map and a high-level semantic...
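The excerpt does not specify how each modal saliency map is computed, so the following is only a minimal sketch of the per-frame generation step: the four extractors (static, motion, depth, semantic) are hypothetical callables supplied by the caller, and each resulting map is normalized to [0, 1] so the modalities are comparable before fusion.

```python
import numpy as np

def normalize(s):
    """Scale a saliency map to [0, 1] so different modalities are comparable."""
    s = s.astype(np.float64)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def generate_saliency_maps(frames, depth_maps, extractors):
    """Return, for each original frame, the four modal saliency maps.

    frames     : list of HxWx3 arrays (original video sequence)
    depth_maps : list of HxW depth arrays aligned with the frames
    extractors : dict of callables with keys 'static', 'motion', 'depth', 'semantic'
                 (hypothetical interfaces; the patent excerpt does not define them)
    """
    maps_per_frame = []
    for t, frame in enumerate(frames):
        prev = frames[t - 1] if t > 0 else frame  # motion saliency needs a previous frame
        maps_per_frame.append({
            "static":   normalize(extractors["static"](frame)),
            "motion":   normalize(extractors["motion"](prev, frame)),
            "depth":    normalize(extractors["depth"](depth_maps[t])),
            "semantic": normalize(extractors["semantic"](frame)),
        })
    return maps_per_frame
```

These per-frame modal maps are the inputs to the graph-based fusion step sketched after the Abstract below.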



Abstract

The invention discloses a saliency fusion method based on graph-based 3D fixation point prediction for pictures. The method comprises saliency map generation and graph-based fusion. Saliency map generation comprises acquiring the saliency maps of each original picture frame from an original video sequence. Graph-based fusion comprises constructing an energy function for an original picture from its saliency maps, with the objectives of minimizing a saliency smoothness constraint and minimizing the saliency difference between the original picture and the adjacent original picture, and then solving the energy function over the original picture to obtain the target saliency map. Because the method takes into account the saliency smoothness constraint between a super-pixel and its adjacent super-pixels, as well as the saliency difference between the original picture and the adjacent original picture, it achieves consistent saliency when fusing the predictions of different modal features in the multi-modal feature fusion process.
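The abstract does not give the exact form of the energy function, so the following is only a minimal sketch under common assumptions: fused saliency is defined per super-pixel, and the energy combines a data term (stay close to each modal saliency map), a spatial smoothness term over adjacent super-pixels, and a temporal term tying each super-pixel to the previous frame. The weights lam and mu and the graph construction are illustrative, not values from the patent.

```python
import numpy as np

def fuse_frame(modal_sal, adjacency, prev_sal=None, lam=0.5, mu=0.3):
    """Fuse super-pixel saliency by minimizing a quadratic energy (sketch).

    modal_sal : (M, N) array, N super-pixels scored by M modal saliency maps
    adjacency : (N, N) symmetric 0/1 matrix of adjacent super-pixels
    prev_sal  : (N,) fused saliency of the matched super-pixels in the previous
                frame, or None for the first frame
    Returns the (N,) fused (target) saliency vector.
    """
    M, N = modal_sal.shape
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency  # graph Laplacian encodes the smoothness term

    # Energy: sum_m ||s - s_m||^2 + lam * s^T L s + mu * ||s - s_prev||^2
    # Setting the gradient to zero gives the linear system below.
    A = M * np.eye(N) + lam * laplacian
    b = modal_sal.sum(axis=0)
    if prev_sal is not None:
        A += mu * np.eye(N)
        b += mu * prev_sal
    return np.linalg.solve(A, b)
```

Because the energy is quadratic, the minimum is obtained from a single linear solve per frame, which is consistent with the stated effects of fast computation and reduced abrupt changes in saliency values.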

Description

Technical field

[0001] The invention belongs to the fields of image processing and computer vision, and more particularly relates to a saliency fusion method for graph-based 3D gaze point prediction.

Background technique

[0002] In the field of visual attention, there are already quite a few models for 2D visual attention, which can be roughly divided into two categories: human gaze point prediction models and salient object detection models. The former computes salient intensity maps at the pixel scale, while the latter aims to detect and segment salient objects or regions in a scene. There are many visual attention models for human eye gaze prediction, but research on gaze prediction models for 3D videos has only begun in recent years. In a nutshell, the framework of most 3D gaze prediction models is extended from 2D gaze prediction models. The framework mainly includes two steps. The first step is to extract a series of feature maps from the original co...
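The background paragraph is truncated, but the two-step framework it begins to describe (extract a series of feature maps, then combine them into one saliency map) is commonly realized as a fixed per-pixel weighted combination; the sketch below assumes that form, with hypothetical feature extractors and weights, purely to illustrate the prior-art baseline that the graph-based fusion above is meant to improve on.

```python
import numpy as np

def two_step_gaze_prediction(frame, feature_extractors, weights):
    """Generic prior-art pipeline: (1) extract feature maps, (2) linearly fuse them."""
    feature_maps = [extract(frame) for extract in feature_extractors]  # step 1
    stacked = np.stack(feature_maps, axis=0).astype(np.float64)        # (M, H, W)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    fused = (w * stacked).sum(axis=0)                                  # step 2
    return fused / fused.max() if fused.max() > 0 else fused
```

A fixed linear combination of this kind is what can leave the contributions of different modal features inconsistent, which is the problem the graph-based fusion of the present method addresses.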


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50; G06K9/46
CPC: G06T5/50; G06V10/462
Inventor: 刘琼, 李贝, 杨铀, 喻莉
Owner: HUAZHONG UNIV OF SCI & TECH