
Region-level and pixel-level fusion saliency detection method based on convolutional neural networks (CNN)

A convolutional-neural-network-based detection method applied in the field of region-level and pixel-level fusion saliency detection. It addresses the problem that existing methods cannot obtain accurate pixel-level saliency prediction results, and achieves good saliency detection performance.

Active Publication Date: 2016-11-23
HARBIN INST OF TECH
Cites: 2 · Cited by: 91


Problems solved by technology

However, these methods cannot obtain accurate pixel-level saliency prediction results.




Detailed Description of the Embodiments

[0039] The technical solution of the present invention will be further described below in conjunction with the accompanying drawings, but is not limited thereto. Any modification or equivalent replacement that does not depart from the spirit and scope of the technical solution of the present invention shall fall within the protection scope of the present invention.

[0040] The present invention provides a saliency detection method based on region-level and pixel-level fusion with convolutional neural networks. The specific implementation steps are as follows:

[0041] 1. Region-level saliency estimation

[0042] In the process of region-level saliency estimation, the first step is to generate a large number of regions from the input image. The simplest approach is to use superpixels as the regions for saliency estimation, but it is then difficult to determine how many superpixels to segment. If the number of superpixel...
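The patent's adaptive region generation technique is not fully described in this excerpt. As an illustrative sketch of the superpixel-style region generation discussed above, the following is a minimal grid-seeded clustering in joint (color, position) space, a simplified SLIC-like step written for clarity; the function name, parameters, and the fixed seed grid are my assumptions, not the patented method:

```python
import numpy as np

def naive_superpixels(image, n_seg_per_axis=4, compactness=10.0, n_iter=5):
    """Grid-seeded clustering in joint (color, position) space.

    A simplified, SLIC-like sketch: seeds are placed on a regular grid and
    each pixel is assigned to the nearest seed in a feature space that
    combines RGB color with (scaled) y/x coordinates. Note the difficulty
    mentioned in the text: n_seg_per_axis must be chosen a priori.
    """
    h, w, _ = image.shape
    ys = np.linspace(0, h - 1, n_seg_per_axis)
    xs = np.linspace(0, w - 1, n_seg_per_axis)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    seeds_pos = np.stack([gy.ravel(), gx.ravel()], axis=1)  # (K, 2)
    seeds_col = image[seeds_pos[:, 0].astype(int),
                      seeds_pos[:, 1].astype(int)].astype(float)  # (K, 3)

    yy, xx = np.mgrid[0:h, 0:w]
    pix_pos = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)  # (N, 2)
    pix_col = image.reshape(-1, 3).astype(float)                        # (N, 3)

    # Spatial weight: larger compactness -> more grid-like, compact regions.
    s = compactness / max(h, w)
    for _ in range(n_iter):
        # Squared distances in color space and in image space, combined.
        d_col = ((pix_col[:, None, :] - seeds_col[None, :, :]) ** 2).sum(-1)
        d_pos = ((pix_pos[:, None, :] - seeds_pos[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d_col + s * d_pos, axis=1)
        # Move each seed to the mean of its assigned pixels.
        for k in range(len(seeds_pos)):
            mask = labels == k
            if mask.any():
                seeds_col[k] = pix_col[mask].mean(0)
                seeds_pos[k] = pix_pos[mask].mean(0)
    return labels.reshape(h, w)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
labels = naive_superpixels(img)  # (32, 32) label map with up to 16 regions
```

Each resulting region can then be scored for saliency by a region-level model; the patent's adaptive scheme would instead determine the region granularity from the image content.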



Abstract

The invention discloses a region-level and pixel-level fusion saliency detection method based on convolutional neural networks (CNN). The research object of the method is a static image whose content can be arbitrary, and the research goal is to find the target in the image that draws a viewer's eye and to assign different saliency values to it. An adaptive region generation technique is proposed, and two CNN structures are designed, used for pixel-level saliency prediction and saliency fusion respectively. The two CNN models are trained with the image as input and the ground-truth result of the image as the supervisory signal, and finally output a saliency map of the same size as the input image. By means of the method, region-level saliency estimation and pixel-level saliency prediction can be carried out effectively to obtain two saliency maps; finally, the two saliency maps and the original image are fused through the CNN for saliency fusion to obtain the final saliency map.
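The fusion stage described in the abstract stacks the two saliency maps with the original image and maps them to a final saliency map. The patent's fusion network is a full CNN trained end-to-end; the following is only a minimal numerical sketch of that idea using a single 1x1 convolution (a 5-vector of weights plus a bias) with hand-picked, hypothetical weights in place of learned ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_saliency(s_region, s_pixel, image, weights, bias):
    """Fuse two saliency maps with the original image via a 1x1 convolution.

    Inputs are stacked channel-wise into an (H, W, 5) tensor
    [s_region, s_pixel, R, G, B]; a 1x1 convolution followed by a sigmoid
    produces the final per-pixel saliency in [0, 1].
    """
    rgb = image.astype(float) / 255.0
    features = np.concatenate(
        [s_region[..., None], s_pixel[..., None], rgb], axis=-1)  # (H, W, 5)
    logits = features @ weights + bias                            # (H, W)
    return sigmoid(logits)

rng = np.random.default_rng(0)
h, w = 8, 8
s_region = rng.random((h, w))   # stand-in region-level saliency map
s_pixel = rng.random((h, w))    # stand-in pixel-level saliency map
img = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
# Hypothetical weights; in the patent these would be learned end-to-end
# with the ground-truth saliency map as the supervisory signal.
weights = np.array([2.0, 2.0, 0.1, 0.1, 0.1])
fused = fuse_saliency(s_region, s_pixel, img, weights, bias=-2.0)
```

A real fusion CNN would replace the single 1x1 convolution with several convolutional layers, letting spatial context influence how the two maps are combined at each pixel.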

Description

Technical field

[0001] The invention relates to an image processing method based on deep learning, in particular to a saliency detection method based on convolutional neural network region-level and pixel-level fusion.

Background technique

[0002] With the development and rise of deep learning, saliency detection technology based on deep learning is also developing continuously. Saliency detection can be divided into two categories: bottom-up data-driven models and top-down task-driven models. Bottom-up saliency detection refers to finding an eye-catching object in a given image, which can be any type of thing. Top-down saliency detection methods usually find objects of a given category in a given picture and assign different saliency values. Currently, bottom-up saliency detection methods are the most studied.

[0003] Existing bottom-up saliency detection methods can be divided into two categories: methods based on manually designed features and methods based on c...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06T7/00; G06K9/62; G06K9/46; G06N3/04
CPC: G06N3/04; G06T2207/20084; G06T2207/20221; G06V10/40; G06F18/23
Inventors: 邬向前, 卜巍, 唐有宝
Owner HARBIN INST OF TECH