
Multimodal image fusion classification method and system based on adversarial complementary features

A multimodal image fusion classification technology, applied in neural learning methods, biological neural network models, instruments, etc. It addresses problems such as deep-learning overfitting on small datasets, neglect of the effective mining and fusion of complementary information, and limited fusion classification accuracy, and achieves the effects of strengthening information interaction and mitigating the uneven distribution of small samples.

Active Publication Date: 2022-07-29
SHANDONG JIANZHU UNIV

AI Technical Summary

Problems solved by technology

However, in some multimodal classification tasks with limited data, deep learning is prone to overfitting.
In addition, existing deep-learning-based multimodal fusion methods neglect the effective mining and fusion of complementary information during fusion, which limits the improvement of fusion classification accuracy.

Method used



Examples


Embodiment 1

[0028] The present disclosure provides a multimodal image fusion classification method based on adversarial complementary features which, as shown in Figure 1, includes the following steps:

[0029] Step 1: collect and preprocess multimodal image data, select the modalities to be fused from the multiple modalities, and input the image data of each modality into the neural network model in groups;
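The grouping in Step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `group_modalities` helper, the modality names, and the group size of 4 are all hypothetical choices made for the example.

```python
import numpy as np

def group_modalities(samples, selected, group_size=4):
    """Select the modalities to be fused and batch each modality's
    preprocessed images into fixed-size groups (hypothetical grouping
    scheme; the patent does not specify a group size)."""
    groups = {}
    for name in selected:
        # simple [0, 1] scaling as a stand-in for preprocessing
        imgs = [np.asarray(s[name], dtype=np.float32) / 255.0 for s in samples]
        stacked = np.stack(imgs)
        # split along the batch axis into groups of `group_size`
        groups[name] = [stacked[i:i + group_size]
                        for i in range(0, len(stacked), group_size)]
    return groups

# toy example: 8 samples, two modalities of 16x16 images
samples = [{"rgb": np.zeros((16, 16)), "depth": np.ones((16, 16))} for _ in range(8)]
g = group_modalities(samples, selected=["rgb", "depth"], group_size=4)
```

Each modality then yields a list of equally sized image groups that can be fed to the network in lockstep.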

[0030] Step 2: first perform low-level feature extraction to obtain the image key feature information vectors, then judge whether the first channel fusion can be performed, and at the same time perform the first similarity calculation on the obtained image key feature vectors;
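The patent does not name the similarity measure used in Step 2; a common choice for comparing feature vectors is cosine similarity, sketched below under that assumption. A low score between two modalities' key feature vectors would indicate highly complementary information.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-8):
    """Similarity between two modalities' key feature vectors,
    sketched as cosine similarity (an assumption; the patent does
    not specify the exact measure)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

# toy key-feature vectors from two modalities
v_rgb = np.array([1.0, 0.0, 1.0, 0.0])
v_depth = np.array([0.0, 1.0, 0.0, 1.0])
sim = cosine_similarity(v_rgb, v_depth)  # orthogonal vectors score 0.0
```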

[0031] Step 3: then perform high-level feature extraction on the features obtained from the low-level feature extraction, extract the image key feature information vectors again, judge whether the second channel fusion can be performed, and at the same time perform the second similarity calculation;
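One common realisation of the "channel fusion" performed in Steps 2 and 3 is concatenating the two modalities' feature maps along the channel axis; the sketch below uses that interpretation, which is an assumption rather than the patent's stated rule.

```python
import numpy as np

def channel_fuse(feat_a, feat_b):
    """Fuse two modalities' (C, H, W) feature maps along the channel
    axis by concatenation -- one common reading of "channel fusion";
    the patent's exact fusion rule is not reproduced here."""
    if feat_a.shape[1:] != feat_b.shape[1:]:
        raise ValueError("spatial dimensions must match before fusion")
    return np.concatenate([feat_a, feat_b], axis=0)  # (C_a + C_b, H, W)

# two toy 16-channel feature maps -> one 32-channel fused map
fused = channel_fuse(np.zeros((16, 8, 8)), np.ones((16, 8, 8)))
```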

[0032] Step 4: Pe...

Embodiment 2

[0061] The present disclosure provides a multimodal image fusion classification system based on adversarial complementary features, including:

[0062] A data acquisition and processing module, used to collect and preprocess multimodal image data;

[0063] A feature extraction module, used to select the modalities to be fused from the plurality of modalities, input the image data of each modality into the neural network model in groups, and perform low-level feature extraction to obtain image key feature information vectors; and to load the features obtained in the low-level feature extraction into another convolutional neural network for convolution operations to perform high-level feature extraction, extracting the image key feature information vectors again and obtaining the feature map group of each image group;

[0064] A channel fusion module, used to judge whether the first channel fusion and the second channel fusion can be performed. If yes, use the bn lay...
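The passage above is truncated at "bn lay...". One plausible reading, following common channel-attention practice, is that BatchNorm scale factors (γ) are used as per-channel importance weights during fusion. The sketch below is built entirely on that assumption and is not the patent's stated mechanism.

```python
import numpy as np

def bn_channel_weights(gamma):
    """Turn BatchNorm scale factors into normalised per-channel
    importance weights (assumed reading of the truncated 'bn lay...'
    passage, borrowed from channel-attention literature)."""
    g = np.abs(np.asarray(gamma, dtype=float))
    return g / g.sum()

def reweight_channels(feature_map, gamma):
    """Scale each channel of a (C, H, W) feature map by its weight."""
    w = bn_channel_weights(gamma)
    return feature_map * w[:, None, None]  # broadcast over H and W

# channels with larger |gamma| receive proportionally more weight
weights = bn_channel_weights([2.0, 1.0, 1.0])
```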



Abstract

The invention provides a multimodal image fusion classification method and system based on adversarial complementary features, belonging to the technical field of image classification. The method comprises the steps of: selecting the modalities to be fused from a plurality of modalities; first performing low-level feature extraction to obtain image key feature information vectors; judging whether the first channel fusion and the first similarity calculation can be carried out; then performing high-level feature extraction and judging whether the second channel fusion and the second similarity calculation can be carried out; and finally carrying out clustering and contrastive learning on the feature maps extracted from the low-level and high-level features. Complementary information is effectively mined and fused, complementarity between the features is enhanced, and image fusion classification accuracy is improved.
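The abstract's contrastive-learning step can be illustrated with a generic InfoNCE-style loss, shown below as a stand-in: the patent's exact loss, temperature, and sampling scheme are not reproduced, and `info_nce` is a hypothetical helper for this sketch only.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """Minimal InfoNCE-style contrastive loss over L2-normalised
    vectors: pulls the anchor toward its positive and pushes it away
    from the negatives (generic stand-in for the patent's loss)."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return x / (np.linalg.norm(x) + 1e-8)

    a = norm(anchor)
    # similarity to the positive first, then to each negative
    sims = [a @ norm(positive)] + [a @ norm(n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

# a well-aligned positive yields a lower loss than a mismatched one
good = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimising such a loss over feature maps from the two extraction stages would encourage features of the same class to cluster while keeping the modalities' complementary directions separated.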

Description

Technical Field

[0001] The present disclosure relates to the technical field of image classification, in particular to a multimodal image fusion classification method and system based on adversarial complementary features.

Background

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] Image classification is an important research direction in computer vision, with a wide range of applications in tasks such as object recognition, face recognition, video analysis, and disease diagnosis. Although existing image classification methods can achieve good performance under big-data conditions, they are less effective for classification tasks with few images. In addition, using only single-modal information has certain limitations. For example, in classification tasks using multi-view images, a single view does not fully describe the sce...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V10/764; G06V10/762; G06V10/74; G06V10/80; G06V10/82; G06N3/04; G06N3/08
CPC: G06V10/764; G06V10/806; G06V10/761; G06V10/762; G06N3/084; G06V10/82; G06N3/045
Inventors: 袭肖明; 王可崧; 聂秀山; 尹义龙; 张光
Owner SHANDONG JIANZHU UNIV