
A Convolutional Neural Network-Based Saliency Detection Method Based on Region- and Pixel-Level Fusion

A convolutional neural network technology and detection method, applied in the field of saliency detection with region- and pixel-level fusion, which solves the problem that existing approaches cannot obtain accurate pixel-level saliency prediction results and achieves good saliency detection performance.

Active Publication Date: 2018-11-02
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

Existing methods, however, cannot obtain accurate pixel-level saliency prediction results.




Embodiment Construction

[0039] The technical solution of the present invention will be further described below in conjunction with the accompanying drawings, but it is not limited thereto. Any modification or equivalent replacement that does not depart from the spirit and scope of the technical solution of the present invention shall fall within the protection scope of the present invention.

[0040] The present invention provides a saliency detection method based on region- and pixel-level fusion with convolutional neural networks. The specific implementation steps are as follows:

[0041] 1. Region-level saliency estimation

[0042] In region-level saliency estimation, the first step is to generate a large number of regions from the input image. The simplest approach is to use superpixels as the regions for saliency estimation, but it is then difficult to determine how many superpixels to segment. If the number of superpixel...
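As a rough illustration of this region-generation step, the sketch below segments an image into superpixel regions with scikit-image's SLIC algorithm and scales the requested superpixel count with image area. This is a minimal stand-in, not the patent's adaptive region generation: the function name `generate_regions` and the pixels-per-region heuristic are assumptions made for illustration.

```python
# Minimal sketch of superpixel-based region generation (illustrative only;
# the patent's adaptive region generation technique is not reproduced here).
from skimage.io import imread
from skimage.segmentation import slic

def generate_regions(image, pixels_per_region=600):
    """Segment an RGB image into superpixel regions.

    The number of superpixels is scaled with image area so that region size
    stays roughly constant (a hypothetical heuristic standing in for the
    adaptive choice of the segmentation granularity).
    """
    h, w = image.shape[:2]
    n_segments = max(50, (h * w) // pixels_per_region)
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    return labels  # label map; each region can then be scored for saliency

if __name__ == "__main__":
    img = imread("input.jpg")  # any static image
    regions = generate_regions(img)
    print("number of regions:", regions.max() + 1)
```

Each labeled region would then be fed to the region-level saliency estimator to obtain one of the two intermediate saliency maps.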



Abstract

The invention discloses a saliency detection method based on region- and pixel-level fusion with convolutional neural networks. The research object of the method is a static image, whose content can be arbitrary, and the research goal is to find the objects that draw the eye's attention and assign them different saliency values. The invention mainly proposes an adaptive region generation technique and designs two CNN network structures, which are used for pixel-level saliency prediction and saliency fusion, respectively. The two CNN models take images as input, use the ground-truth results of the images as supervisory signals for training, and finally output a saliency map of the same size as the input image. The invention can effectively perform region-level saliency estimation and pixel-level saliency prediction to obtain two saliency maps, and finally uses a CNN for saliency fusion to combine the two saliency maps with the original image and obtain the final saliency map.
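To make the fusion stage concrete, the following PyTorch sketch shows a small fully convolutional network that concatenates the original RGB image with the region-level and pixel-level saliency maps (five input channels) and outputs a fused saliency map of the same spatial size, supervised by the ground-truth mask. The class name `FusionNet`, the layer counts, and the channel widths are assumptions for illustration; the patent's actual network structures are not specified here.

```python
# Illustrative sketch of a saliency-fusion CNN (assumed architecture, not the
# patented network): input = RGB image + two saliency maps (5 channels),
# output = fused saliency map with the same height and width.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # per-pixel fused saliency score
        )

    def forward(self, image, region_sal, pixel_sal):
        # image: (N, 3, H, W); region_sal, pixel_sal: (N, 1, H, W)
        x = torch.cat([image, region_sal, pixel_sal], dim=1)
        return torch.sigmoid(self.body(x))  # saliency values in [0, 1]

if __name__ == "__main__":
    net = FusionNet()
    img = torch.rand(1, 3, 224, 224)
    rs, ps = torch.rand(1, 1, 224, 224), torch.rand(1, 1, 224, 224)
    gt = (torch.rand(1, 1, 224, 224) > 0.5).float()  # ground-truth mask
    out = net(img, rs, ps)
    # the ground truth serves as the supervisory signal, e.g. via BCE loss
    loss = nn.functional.binary_cross_entropy(out, gt)
    loss.backward()
    print("fused map shape:", out.shape, "loss:", float(loss))
```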

Description

Technical field

[0001] The invention relates to an image processing method based on deep learning, and in particular to a saliency detection method based on region- and pixel-level fusion with convolutional neural networks.

Background technique

[0002] With the development and rise of deep learning, saliency detection technology based on deep learning has also been developing continuously. Saliency detection can be divided into two categories: bottom-up data-driven models and top-down task-driven models. Bottom-up saliency detection refers to finding an eye-catching object in a given image, where the object can be of any type. Top-down saliency detection usually finds objects of a given category in a given picture and assigns them different saliency values. Currently, bottom-up saliency detection methods are the most studied.

[0003] Existing bottom-up saliency detection methods can be divided into two categories: methods based on manually designed features and methods based on c...


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06T7/11G06K9/62G06K9/46G06N3/04
CPCG06N3/04G06T2207/20084G06T2207/20221G06V10/40G06F18/23
Inventor 邬向前卜巍唐有宝
Owner HARBIN INST OF TECH