
Image saliency detection method based on an adversarial network

A detection method and saliency technology, applied in image enhancement, image analysis, image data processing, etc., to achieve the effect of improved accuracy

Publication date: 2017-01-04 (inactive)
Applicant: SHENZHEN INST OF FUTURE MEDIA TECH +1

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to detect image saliency with a convolutional neural network trained using adversarial training.


Detailed Description of the Embodiments

[0027] The invention uses adversarial training to regularize the convolutional neural network, thereby improving accuracy when the network is used for prediction. For the specific problem of saliency prediction, the present invention proposes a data-driven regression method: the learning process minimizes a cost function given by the Euclidean distance between the predicted saliency map and the ground truth. To avoid settling in poor local minima, the present invention uses a small batch size of 2; although convergence is slower, the result is better. Stochastic gradient descent combined with a momentum term is used during training, which helps escape local minima and allows the network to converge more quickly. The learning rate is gradually reduced so that the optimal solution is not skipped when the step size is too large during the search.
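For concreteness, the training procedure described in this paragraph can be sketched as follows. This is a minimal illustration rather than the patented implementation: the toy network architecture, the learning rate of 0.01, the momentum of 0.9, and the step-decay schedule are assumptions, and the adversarial (discriminator) component used for regularization is omitted. Only the batch size of 2, the Euclidean-distance cost, SGD with momentum, and learning-rate decay come from the text.

```python
import torch
import torch.nn as nn

# Placeholder saliency network; the patent's actual architecture is not reproduced here.
saliency_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Sigmoid(),
)

# Euclidean-distance cost between the predicted saliency map and the ground truth
# (mean squared error over pixels).
criterion = nn.MSELoss()

# Stochastic gradient descent with a momentum term ("impulse unit" in the
# machine-translated text); the 0.01 / 0.9 values are assumptions.
optimizer = torch.optim.SGD(saliency_net.parameters(), lr=0.01, momentum=0.9)

# Gradually reduce the learning rate so a large step does not skip the optimum;
# this particular decay schedule is an assumption.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

batch_size = 2  # small batch size, as stated in the embodiment

def train_epoch(loader):
    """One pass over (image, ground_truth_map) pairs drawn with batch size 2."""
    for images, gt_maps in loader:
        optimizer.zero_grad()
        pred = saliency_net(images)      # predicted saliency map
        loss = criterion(pred, gt_maps)  # Euclidean-distance cost
        loss.backward()
        optimizer.step()
    scheduler.step()                     # decay the learning rate each epoch
```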


Abstract

The invention discloses an image saliency detection method that uses adversarial training to produce a convolutional neural network model, and belongs to the field of computer vision and image processing. The method comprises the steps of data preprocessing, network structure design, parameter selection, and training with stochastic gradient descent and a momentum term. In data preprocessing, a large amount of collected data and labels are preprocessed. In network structure design, the network architecture and specific kernel functions are designed. Suitable parameters are then selected, including the learning rate, the momentum factor, and the number of images fed into the network at a time. Finally, stochastic gradient descent with momentum is used for training to reduce the possibility of network over-fitting. According to the invention, a saliency map can be accurately acquired.
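As a companion to the training sketch above, the data preprocessing and batching step mentioned in the abstract could take the form below. The dataset layout, the 224x224 resize, the grayscale conversion of the ground-truth maps, and the class and function names are assumptions made for illustration; the patent only states that collected images and labels are preprocessed and fed to the network a few at a time.

```python
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

class SaliencyDataset(Dataset):
    """Pairs of natural images and ground-truth saliency (fixation) maps."""

    def __init__(self, image_paths, map_paths, size=(224, 224)):
        self.image_paths = image_paths
        self.map_paths = map_paths
        # Resize and scale to [0, 1]; the target resolution is an assumption.
        self.img_tf = transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
        ])
        self.map_tf = transforms.Compose([
            transforms.Resize(size),
            transforms.Grayscale(),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert("RGB")
        gt_map = Image.open(self.map_paths[idx])
        return self.img_tf(image), self.map_tf(gt_map)

# Batch size 2, matching the embodiment; shuffling supports stochastic gradient descent.
def make_loader(image_paths, map_paths):
    return DataLoader(SaliencyDataset(image_paths, map_paths),
                      batch_size=2, shuffle=True)
```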

Description

Technical field

[0001] The invention relates to the fields of computer vision and digital image processing, and in particular to an image saliency detection method based on an adversarial network.

Background art

[0002] Saliency prediction is the process of predicting where the human eye fixates in static natural images. The results of image saliency prediction are widely used in computer vision, including automatic image segmentation, object recognition, efficient image thumbnailing, and image retrieval, and saliency prediction is therefore an important image preprocessing step. Traditional methods based on hand-crafted features can be roughly divided into three steps: early feature extraction, feature-difference inference, and feature-difference combination. Such saliency prediction is a bottom-up approach driven by the lowest-level features. With the advent of the era of big data, data-driven learning methods appear increasingly often. The method ...
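To make the three traditional steps described above concrete (early feature extraction, feature-difference inference, feature-difference combination), here is a toy hand-crafted baseline. The particular features, Gaussian scales, and averaging rule are illustrative assumptions and are not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def handcrafted_saliency(image):
    """Toy bottom-up saliency: extract features, take centre-surround
    differences, then combine them into one map."""
    image = image.astype(np.float64) / 255.0
    # 1. Early feature extraction: intensity and two colour-opponency channels.
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    features = [(r + g + b) / 3.0, r - g, b - (r + g) / 2.0]
    # 2. Feature-difference inference: centre-surround contrast per feature,
    #    approximated by the difference of two Gaussian blurs.
    diffs = [np.abs(gaussian_filter(f, 2) - gaussian_filter(f, 8)) for f in features]
    # 3. Feature-difference combination: normalise each map and average them.
    maps = [(d - d.min()) / (d.max() - d.min() + 1e-8) for d in diffs]
    return np.mean(maps, axis=0)
```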


Application Information

IPC(8): G06T7/00
CPC: G06T7/00; G06T7/0002; G06T2207/20081; G06T2207/20084
Inventors: 王好谦, 闫冰, 王兴政, 张永兵, 戴琼海
Owner: SHENZHEN INST OF FUTURE MEDIA TECH