Multi-modal ground-based cloud image recognition method based on deep tensor fusion

A ground-based cloud image and multi-modal fusion technology, applied in character and pattern recognition, biological neural network models, instruments, etc.; it addresses problems such as the difficulty of ground-based cloud classification and achieves the effect of improving classification accuracy.

Active Publication Date: 2019-11-29
TIANJIN NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to solve the difficult problem of ground-based cloud classification. To this end, the present invention provides a multi-modal ground-based cloud image recognition method based on deep tensor fusion.

Embodiment Construction

[0038] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in combination with specific embodiments and with reference to the accompanying drawings. It should be understood that these descriptions are exemplary only, and are not intended to limit the scope of the present invention. Also, in the following description, descriptions of well-known structures and techniques are omitted to avoid unnecessarily obscuring the concept of the present invention.

[0039] Figure 1 is a flow chart of a multi-modal ground-based cloud image recognition method based on deep tensor fusion according to an embodiment of the present invention. As shown in Figure 1, the multi-modal ground-based cloud image recognition method based on deep tensor fusion includes:

[0040] Step S1, preprocessing the input ground-based cloud samples to obtain the input of the ...
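
The listing truncates this preprocessing step, so the following is only a minimal sketch under common assumptions: the ground-based cloud image is resized and scaled to [0, 1], and the accompanying multi-modal measurements (e.g. temperature, humidity, pressure, wind speed) are normalized with a placeholder scheme. The function name, target size, and normalization are illustrative and not taken from the patent.

```python
# Hypothetical preprocessing sketch; sizes, ranges, and names are assumptions.
import numpy as np
from PIL import Image

def preprocess_sample(image_path, multimodal_values, target_size=(224, 224)):
    # Load the ground-based cloud image and resize it to the network input size.
    img = Image.open(image_path).convert("RGB").resize(target_size)
    # Scale pixels to [0, 1] and reorder to channels-first (C, H, W).
    visual = np.asarray(img, dtype=np.float32) / 255.0
    visual = visual.transpose(2, 0, 1)
    # Min-max normalize the multi-modal measurement vector (placeholder scheme).
    mm = np.asarray(multimodal_values, dtype=np.float32)
    multimodal = (mm - mm.min()) / (mm.max() - mm.min() + 1e-8)
    return visual, multimodal
```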

Abstract

The embodiment of the invention discloses a multi-modal ground-based cloud image recognition method based on deep tensor fusion. The method comprises the steps of: preprocessing an input ground-based cloud sample to obtain the input of a deep tensor fusion network; feeding the input to the deep tensor fusion network training model and training to obtain a deep tensor fusion network; extracting a fusion feature representation of each input ground-based cloud sample; training a support vector machine classifier to obtain a ground-based cloud classification model; and obtaining the fusion feature representation of a test input ground-based cloud sample and inputting it into the ground-based cloud classification model to obtain a classification result. The method can learn ground-based cloud visual information and multi-modal information jointly; it fuses the visual and multi-modal information at the tensor level, preserves the spatial content of the visual information, fully exploits the complementary information between the two modalities, effectively mines their correlation, extracts more discriminative fusion features, and improves the accuracy of ground-based cloud classification.
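
As a rough illustration of the fusion idea described above, the sketch below fuses a CNN feature tensor with a multi-modal vector by broadcasting the vector over every spatial location, so the spatial content of the visual tensor is preserved, and then pools the fused tensor into a fusion feature. The layer sizes, the additive fusion, and the class count are assumptions for illustration and do not reproduce the patented architecture.

```python
# Minimal PyTorch sketch of tensor-level fusion (illustrative, not the patented network).
import torch
import torch.nn as nn

class TensorFusionSketch(nn.Module):
    def __init__(self, num_multimodal=4, num_classes=7):
        super().__init__()
        # A small CNN stands in for the visual branch (a deeper backbone in practice).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # The multi-modal branch maps the measurement vector to the same channel depth.
        self.multimodal = nn.Sequential(nn.Linear(num_multimodal, 64), nn.ReLU())
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, image, mm):
        v = self.visual(image)                # (B, 64, H', W') visual feature tensor
        m = self.multimodal(mm)               # (B, 64) multi-modal feature vector
        # Tensor-level fusion: broadcast the vector over all spatial positions.
        fused = v + m[:, :, None, None]
        feat = fused.mean(dim=(2, 3))         # pooled fusion feature representation
        return self.classifier(feat), feat
```

The abstract then trains a support vector machine on the extracted fusion features; a scikit-learn SVM could stand in for that step (the library choice and placeholder data below are assumptions):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder fusion features; in practice these come from the network above.
train_features, train_labels = np.random.rand(100, 64), np.random.randint(0, 7, 100)
test_features = np.random.rand(10, 64)

clf = SVC(kernel="rbf")                  # ground-based cloud classification model
clf.fit(train_features, train_labels)
predictions = clf.predict(test_features)
```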

Description

Technical Field

[0001] The invention belongs to the technical fields of pattern classification, meteorological science and artificial intelligence, and in particular relates to a multi-modal ground-based cloud image recognition method based on deep tensor fusion.

Background Art

[0002] Ground-based cloud classification has important implications for understanding weather conditions. Traditional automatic ground-based cloud classification methods mainly extract hand-crafted features of ground-based cloud images, such as texture, structure, and color features. However, these hand-crafted features are difficult to apply to large-scale databases.

[0003] In recent years, convolutional neural networks (CNNs) have achieved remarkable results in wireless sensor networks, computer vision, remote sensing and other fields. These convolutional neural network-based methods can learn features autonomously according to the data distribution. In view of this featur...

Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/62G06K9/42G06N3/04
CPCG06V10/32G06N3/045G06F18/214G06F18/253Y02A90/10
Inventor 刘爽李梅张重
Owner TIANJIN NORMAL UNIVERSITY