Visual saliency prediction method based on a generative adversarial network

A visual saliency prediction method, applied in the field of image analysis, which addresses the problem of low stability

Inactive Publication Date: 2017-06-13
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] To address the problem of low stability, the object of the present invention is to provide a visual saliency prediction method based on a generative adversarial network: a saliency generative adversarial network (SalGAN) architecture built from two deep convolutional neural network (DCNN) modules, a generator and a discriminator, whose combination aims to predict a visual saliency map for a given input image. The filter weights in SalGAN are trained with a perceptual loss obtained by combining a content loss and an adversarial loss.
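The two-module data flow described above can be sketched as follows. This is a structural sketch only, with stub classes standing in for the two DCNN modules the patent describes; the class and method names are illustrative assumptions, not the patent's own implementation.

```python
# Structural sketch of the SalGAN-style architecture: a generator
# maps an input image to a saliency map, and a discriminator scores
# an (image, saliency map) pair as real or synthetic. Both classes
# are placeholders so the data flow is visible; real DCNNs are assumed.

class Generator:
    """DCNN stub: maps an input image to a saliency map in [0, 1]."""
    def predict(self, image):
        # Placeholder for a conv encoder-decoder; clamps one value per pixel.
        return [min(max(px, 0.0), 1.0) for px in image]

class Discriminator:
    """DCNN stub: scores an (image, saliency map) pair as P(real)."""
    def score(self, image, saliency_map):
        # Placeholder for a conv classifier; an untrained stub returns 0.5.
        return 0.5

image = [0.2, 0.7, 0.9]               # toy 3-pixel "image"
gen, disc = Generator(), Discriminator()
saliency = gen.predict(image)          # generator proposes a saliency map
p_real = disc.score(image, saliency)   # discriminator judges the pair
```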




Embodiment Construction

[0031] It should be noted that the embodiments of the present application, and the features within those embodiments, may be combined with one another provided there is no conflict. The present invention is further described in detail below with reference to the drawings and specific embodiments.

[0032] Figure 1 is a system flowchart of the visual saliency prediction method based on a generative adversarial network of the present invention. The method mainly comprises the construction and training of a saliency generative adversarial network (SalGAN) based on two deep convolutional neural network (DCNN) modules.

[0033] The filter weights in SalGAN are trained with a perceptual loss obtained by combining a content loss and an adversarial loss. The content loss follows the classical approach, in which the predicted saliency map is compared pixel-wise with the corresponding ground-truth saliency map; the adversarial loss depends on the discriminator, relative to the real/synthetic...
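The combined loss described in [0033] can be sketched in a framework-free form. This is a minimal sketch under stated assumptions: the patent only says the perceptual loss combines a pixel-wise content loss against the ground truth with an adversarial loss from the discriminator; the weighting parameter `alpha` and the function names are illustrative, not taken from the patent.

```python
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flat lists of probabilities."""
    return -sum(
        t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
        for p, t in zip(pred, target)
    ) / len(pred)

def perceptual_loss(pred_map, gt_map, disc_out_fake, alpha=0.05):
    # Content loss: pixel-wise cross-entropy against the ground-truth map.
    content = bce(pred_map, gt_map)
    # Adversarial loss: the discriminator should label the generated
    # map as real (target 1.0), so the generator is penalized when it
    # fails to fool the discriminator.
    adversarial = bce(disc_out_fake, [1.0] * len(disc_out_fake))
    return alpha * content + adversarial
```

A perfect prediction drives the content term to zero, while the adversarial term stays nonzero until the discriminator is fully fooled; `alpha` balances the two objectives.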



Abstract

The invention provides a visual saliency prediction method based on a generative adversarial network. The method mainly comprises constructing and training a saliency generative adversarial network (SalGAN) based on two deep convolutional neural network (DCNN) modules. Specifically, the method comprises: constructing the SalGAN from two DCNN modules, a generator and a discriminator, which are combined to predict the visual saliency map of a given input image; and training the filter weights in SalGAN through a perceptual loss formed by combining a content loss and an adversarial loss. The loss function of the method is a combination of the error from the discriminator and the cross-entropy relative to the ground-truth data, thereby improving the stability and convergence rate of adversarial training. Compared with training on cross-entropy alone, the adversarial training improves performance, achieving higher speed and efficiency.
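The alternating training scheme implied by the abstract (a discriminator update, then a generator update driven by cross-entropy plus discriminator error) can be demonstrated on a deliberately tiny toy problem. Everything here is an assumption-laden sketch, not the patent's networks: the "generator" is a single parameter `theta` whose output `sigmoid(theta)` should match a scalar ground truth of 1.0, the "discriminator" is one logistic unit, and the gradients are written out by hand.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(steps=300, lr=0.1, alpha=1.0):
    """Toy alternating adversarial training (illustrative only).

    theta parameterizes the generator output sigmoid(theta); the
    discriminator d(x) = sigmoid(w*x + b). alpha weights the
    cross-entropy content term, as an assumed hyperparameter.
    """
    theta, w, b = 0.0, 0.0, 0.0    # deterministic init for the sketch
    real = 1.0                      # ground-truth "saliency" value
    for _ in range(steps):
        fake = sigmoid(theta)
        # --- discriminator step: push d(real) up and d(fake) down ---
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        w -= lr * (-(1 - d_real) * real + d_fake * fake)
        b -= lr * (-(1 - d_real) + d_fake)
        # --- generator step: content loss + adversarial loss ---
        d_fake = sigmoid(w * fake + b)            # updated discriminator
        grad_content = -(1 - fake)                        # d/dθ of -log(fake)
        grad_adv = -(1 - d_fake) * w * fake * (1 - fake)  # d/dθ of -log(d(fake))
        theta -= lr * (alpha * grad_content + grad_adv)
    return sigmoid(theta)

final_fake = train()  # generator output after training
```

Over the iterations both terms push the generator output toward the ground-truth value 1.0, illustrating how the combined objective keeps the adversarial updates anchored to the data.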

Description

Technical field

[0001] The present invention relates to the field of image analysis, and in particular to a visual saliency prediction method based on a generative adversarial network.

Background technique

[0002] The human visual perception system processes external information by selective screening rather than by receiving everything; this selective process is visual saliency. The rapid development of information technology has accelerated the development of imaging equipment, and the amount of image data people encounter every day keeps increasing. Research on visual saliency prediction methods is therefore particularly important. Its more mature applications currently include scene rendering, image compression, active vision, and image quality assessment. However, traditional image analysis techniques process the overall image information globally, and the processing of image information has no prior...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC (8): G06K 9/20; G06K 9/62; G06N 3/08
CPC: G06N 3/084; G06V 10/22; G06F 18/214
Inventor 夏春秋
Owner SHENZHEN WEITESHI TECH