Image saliency target detection method and system based on multi-depth feature fusion

A feature-fusion and target-detection technology, applied in instruments, biological neural network models, computing, and similar fields. It addresses problems such as overly small targets, unclear target contours, and the resulting loss of accuracy in detection results and subsequent processing, achieving high precision and faster processing while meeting accuracy and real-time requirements.

Active Publication Date: 2020-12-25
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

In autonomous driving, problems such as overly small targets, complex backgrounds, and unclear target outlines may arise, which affect the accuracy of detection results and of subsequent processing operations.



Examples


Embodiment 1

[0039] The salient target detection method of the present invention can be applied to fields such as medical image segmentation, intelligent photography, image retrieval, virtual backgrounds, and intelligent unmanned systems.

[0040] In this embodiment, the multi-depth feature fusion neural network refers to a saliency detection neural network that integrates multi-level, multi-task, and multi-channel deep features.

[0041] In this embodiment, an unmanned driving scene is taken as an example to describe the method of the present invention in detail:

[0042] A method of image saliency target detection based on multi-depth feature fusion, referring to Figure 1, includes:

[0043] Obtain the image information to be detected in the set scene;

[0044] Input the image information into the trained multi-depth feature fusion neural network model;

[0045] The multi-depth feature fusion neural network model uses convolution for feature extraction in the encoding stage, and combines the up-sampling method of convolution and bilinear interpolation in the decoding stage to restore the information of the input image, outputting a feature map with saliency information.
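The decoding stage above combines transposed convolution with bilinear interpolation for up-sampling. As a minimal sketch of the bilinear-interpolation half (pure NumPy, all names hypothetical; a real implementation would use a framework primitive such as `torch.nn.functional.interpolate(mode="bilinear")`):

```python
import numpy as np

def bilinear_upsample(fmap, scale=2):
    """Upsample a 2-D feature map by `scale` using bilinear interpolation.

    Illustrative sketch of the decoding-stage up-sampling; not the
    patent's implementation.
    """
    h, w = fmap.shape
    out_h, out_w = h * scale, w * scale
    # Source coordinates for each output pixel (half-pixel-centered sampling)
    ys = np.clip((np.arange(out_h) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Interpolate horizontally on the two neighbouring rows, then vertically
    top = fmap[np.ix_(y0, x0)] * (1 - wx) + fmap[np.ix_(y0, x1)] * wx
    bot = fmap[np.ix_(y1, x0)] * (1 - wx) + fmap[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

f = np.arange(16, dtype=float).reshape(4, 4)
up = bilinear_upsample(f, scale=2)
print(up.shape)  # (8, 8)
```

Because the interpolation weights sum to one, a constant feature map is preserved exactly, which makes the routine easy to sanity-check.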

Embodiment 2

[0084] In one or more embodiments, a multi-depth feature fusion image salient target detection system is disclosed, including:

[0085] A device for acquiring image information to be detected in a set scene;

[0086] A device for inputting the image information into the trained multi-depth feature fusion neural network model;

[0087] The multi-depth feature fusion neural network model uses convolution for feature extraction in the encoding stage, and combines the up-sampling method of convolution and bilinear interpolation in the decoding stage to restore the information of the input image, outputting a feature map with saliency information;

[0088] A device for learning feature maps of different levels using a multi-level network, and for merging the feature maps of different levels;

[0089] A device for outputting the final salient target detection result.

[0090] It should be noted that the specific working mode of the above-mentioned devices is implemented by the method disclosed in Embodiment 1 and is not repeated here.
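The multi-level fusion step of paragraph [0088] merges feature maps produced at different resolutions. The patent does not specify the fusion rule, so the sketch below assumes a simple scheme: nearest-neighbour upsample the coarse (deep) map to the fine (shallow) map's resolution, then average the two. All names are hypothetical:

```python
import numpy as np

def fuse_levels(shallow, deep, scale):
    """Fuse a fine (shallow) and a coarse (deep) feature map.

    Averaging after nearest-neighbour upsampling is an illustrative
    assumption; real multi-level networks often concatenate channels
    and learn the fusion with a convolution instead.
    """
    # Nearest-neighbour upsampling: repeat each deep pixel scale x scale times
    deep_up = np.kron(deep, np.ones((scale, scale)))
    assert deep_up.shape == shallow.shape
    return (shallow + deep_up) / 2.0

shallow = np.full((8, 8), 0.2)   # fine, low-level feature map
deep = np.full((4, 4), 0.8)      # coarse, high-level feature map
fused = fuse_levels(shallow, deep, scale=2)
print(fused[0, 0])  # 0.5
```

The same pattern extends to more than two levels by fusing pairwise from the deepest map upward.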

Embodiment 3

[0092] In one or more embodiments, a terminal device is disclosed, including a server; the server includes a memory, a processor, and a computer program stored in the memory and runnable on the processor. When executing the program, the processor implements the multi-depth feature fusion image saliency target detection method of Embodiment 1; for brevity, details are not repeated here.


Abstract

The invention discloses a multi-depth feature fusion image saliency target detection method and system. The method comprises the steps of: obtaining to-be-detected image information in a set scene; inputting the image information into a trained multi-depth feature fusion neural network model, wherein the model adopts convolution to perform feature extraction in the coding stage, restores the information of the input image in the decoding stage by combining an up-sampling method of convolution and bilinear interpolation, and outputs a feature map with saliency information; learning feature maps of different levels with a multi-level network and fusing them; and outputting the final saliency target detection result. According to the invention, a multi-depth feature fusion neural network is used to carry out saliency target detection on images in a scene, guaranteeing detection precision while accelerating subsequent processing; a contour detection branch is added, and contour features are used to refine the boundary details of the target to be detected.
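The contour detection branch mentioned in the abstract refines the boundary of the predicted salient region. As an illustrative sketch (not the patent's learned branch), one can extract a one-pixel contour from a binary saliency mask by marking salient pixels that touch the background:

```python
import numpy as np

def contour_map(mask):
    """Extract a one-pixel-wide contour from a binary saliency mask.

    Illustrative only: a pixel is on the contour if it is salient but at
    least one of its 4-neighbours is not (padding counts as background).
    The patent's contour branch is learned, not hand-crafted like this.
    """
    padded = np.pad(mask, 1, constant_values=0)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    # Interior pixels are surrounded by salient pixels on all four sides
    interior = up & down & left & right
    return mask & ~interior

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # a 3x3 salient square
print(contour_map(mask).sum())   # 8 boundary pixels (all but the centre)
```

A learned contour branch would supervise such a boundary map with a contour ground truth, encouraging sharper saliency edges.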

Description

Technical field

[0001] The present invention relates to the technical field of image saliency target detection, in particular to a method and system for image saliency target detection based on multi-depth feature fusion.

Background technique

[0002] The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

[0003] Saliency target detection refers to the use of computers to imitate the human visual attention mechanism, separating the people or things in an image that most attract human visual attention from the background. An image is composed of many pixels; the brightness, color, and other attributes of the pixels differ, and their corresponding salient feature values differ accordingly. Unlike traditional object detection and semantic segmentation tasks, salient object detection focuses only on the part that attracts visual attention, without classifying it, and the general...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/46; G06K9/62; G06K9/00; G06N3/04
CPC: G06V20/56; G06V10/462; G06V2201/07; G06N3/045; G06F18/214; G06F18/253; Y02T10/40
Inventors: 陈振学, 闫星合, 刘成云, 孙露娜, 段树超, 朱凯, 陆梦旭, 李明
Owner SHANDONG UNIV