
Collaborative visual saliency detection method based on double-encoder generative adversarial network

A detection method using generative technology, applied in biological neural network models, neural learning methods, instruments, etc. It addresses the problems that the inter-group semantic consistency of images cannot be mined well, that inter-group salient features learned from labeled samples are not good enough, and that co-saliency label samples are insufficient; it achieves small model parameters, good versatility, and improved efficiency.

Active Publication Date: 2021-04-13
ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY
9 Cites · 4 Cited by
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] Aiming at the technical problems that traditional co-saliency detection methods have insufficient label samples and that inter-group salient features cannot mine the semantic consistency of images within a group well, the present invention proposes a collaborative visual saliency detection method based on a double-encoder generative adversarial network, which jointly trains inter-group salient features and single-image salient features. The method has strong versatility, few model parameters, and high detection accuracy, and alleviates the problem of insufficient label samples.

Method used



Examples


Embodiment Construction

[0031] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the embodiments. It is understood that the described embodiments are merely some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor shall fall within the protection scope of the present invention.

[0032] As shown in figure 1, the collaborative visual saliency detection method based on a double-encoder generative adversarial network proceeds as follows:

[0033] Step 1: Build a generative adversarial network model based on a double encoder: the double-encoder generative adversarial network contains a generator and a discriminator, the generator including two encoders and a decoder; the two encoders comprise a saliency encoder and a group semantic encoder, and the saliency encoder, the decoder, and...
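Step 1 names the components but this excerpt does not specify their layers. The following numpy sketch is a hypothetical stand-in for that structure: the two encoders and the decoder are stubbed with simple array operations, and the discriminator returns a toy realism score; the class and method names are illustrative, not from the patent.

```python
import numpy as np

class Generator:
    """Two encoders plus a decoder, mirroring Step 1 (all features stubbed)."""

    def saliency_encoder(self, img):
        # Single-image saliency feature: deviation from the image mean (stub).
        return img - img.mean()

    def group_semantic_encoder(self, images):
        # Inter-group semantic feature: average over the image group (stub).
        return np.mean(images, axis=0)

    def decoder(self, feature):
        # Decode the fused feature into a saliency map in (0, 1) via sigmoid.
        return 1.0 / (1.0 + np.exp(-feature))

    def forward(self, images):
        inter_group = self.group_semantic_encoder(images)
        # Pixel-level addition of single and inter-group saliency features,
        # then decoding, one map per input image.
        return [self.decoder(self.saliency_encoder(im) + inter_group)
                for im in images]

class Discriminator:
    """Scores a saliency map as real (ground truth) or generated (stub)."""

    def forward(self, saliency_map):
        return float(saliency_map.mean())

gen, disc = Generator(), Discriminator()
group = [np.random.rand(16, 16) for _ in range(4)]
masks = gen.forward(group)            # one co-saliency map per image
scores = [disc.forward(m) for m in masks]
```

In adversarial training the discriminator's scores on generated versus ground-truth maps would drive the generator's loss; that loop is omitted here.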



Abstract

The invention provides a collaborative visual saliency detection method based on a double-encoder generative adversarial network. The method comprises the steps of: constructing and pre-training a double-encoder generative adversarial network model, wherein the pre-trained parameters are used for the generative adversarial network model; inputting the collaborative saliency data into a classification network module as a group of images, extracting multi-scale group-level image semantic category features, and enabling a multi-scale semantic fusion module to fuse the multi-scale group-level image semantic category features into inter-group saliency features; sequentially inputting the grouped input images into a saliency encoder one by one to obtain a single saliency feature; performing pixel-level addition on the single saliency feature and the inter-group saliency feature to obtain a collaborative saliency feature, and inputting the collaborative saliency feature into a decoder for decoding to obtain a detection image; and testing the trained generative adversarial network model on the collaborative saliency data set. The model has few parameters, training and detection are simple to operate, detection precision is high, and efficiency is improved.
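The abstract's multi-scale group-level feature extraction and fusion can be sketched as follows. The classification network and the learned fusion module are not detailed in this excerpt, so this numpy sketch stubs them with average pooling per scale and a plain mean over scales; it illustrates only the multi-scale bookkeeping (pool, group-average, upsample, fuse), and all function names are illustrative.

```python
import numpy as np

def multiscale_group_features(images, scales=(1, 2, 4)):
    """Extract group-level features at several scales: pool each image by
    factor s, average over the whole group, and upsample back to full size."""
    h, w = images[0].shape
    maps = []
    for s in scales:
        # Average-pool each image into (h/s, w/s) blocks.
        pooled = [img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
                  for img in images]
        # Average over the group: a group-level semantic map at this scale.
        group_level = np.mean(pooled, axis=0)
        # Nearest-neighbor upsample back to (h, w) so scales can be fused.
        maps.append(np.kron(group_level, np.ones((s, s))))
    return maps

def fuse(maps):
    """Multi-scale semantic fusion, stubbed as a plain average of the maps."""
    return np.mean(maps, axis=0)

group = [np.random.rand(8, 8) for _ in range(3)]
inter_group_saliency = fuse(multiscale_group_features(group))
```

The resulting `inter_group_saliency` map is what would be added pixel-wise to each image's single saliency feature before decoding.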

Description

Technical field

[0001] The present invention relates to the technical field of co-saliency detection, and more particularly to a collaborative visual saliency detection method based on a double-encoder generative adversarial network.

Background technique

[0002] With the continuous development of the Internet and multimedia, large numbers of images and videos accompany our daily lives, raising the question of how to use existing multimedia technologies to obtain useful information quickly and efficiently. Co-saliency detection is a computer vision technology that mimics the human visual attention mechanism: it finds similar salient targets across a set of images, locating the common salient target in each image of an associated group. Such methods effectively obtain the desired information while filtering redundant information in the images, thereby reducing computer storage requirements and improving computational efficiency.

[0003] There are t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/0002; G06N3/08; G06T2207/20081; G06T2207/20084; G06N3/045; G06F18/241
Inventors: 钱晓亮成曦岳伟超赵艺芳曾黎程塨姚西文吴青娥任航丽刘向龙王芳刘玉翠
Owner: ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY