
Forest fire detection method based on deep convolutional model with convolution kernels of multiple sizes

A deep-convolution forest fire detection technology, applied in the field of deep convolutional neural network models, that can solve the problems of slow detection speed and poor processing of forest fires, and achieves the effects of alleviating the local-minimum problem, low cost and high precision

Inactive Publication Date: 2018-11-30
Applicant: 南京启德电子科技有限公司

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problems of slow detection speed and poor processing of forest fires, the present invention provides a forest fire detection method based on a deep convolutional neural network model with multi-size convolution kernels, which can save manpower and material resources to a certain extent.



Examples

Detailed Description of Embodiments

[0019] The following describes specific embodiments of the present invention in conjunction with the accompanying drawings.

[0020] Figure 1 is a schematic diagram of the system structure of the forest fire detection method based on the deep convolutional model with multi-size convolution kernels. The system mainly includes two parts: the video surveillance network and the forest fire detection model. The former captures real-time conditions in the forest and transmits the video to the control center; the latter judges whether a fire has occurred, and the trend of the fire, based on the collected data.
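
As a minimal sketch of this two-part structure (the names FrameSource, FireDetector, control_center_loop and the 0.5 threshold below are hypothetical placeholders, not details given in the patent), the control-center side could be organized roughly as follows, with the detection model consuming pairs of consecutive frames:

```python
# Illustrative sketch only: class names and the decision threshold are assumptions
# made for this example, not taken from the patent text.
class FrameSource:
    """Stands in for the video surveillance network: yields frames received at the control center."""
    def frames(self):
        raise NotImplementedError  # e.g. frames read from a network camera stream

class FireDetector:
    """Stands in for the forest fire detection model."""
    def predict(self, prev_frame, frame):
        # Would form an optical flow field from two consecutive frames and run the
        # multi-size-kernel CNN; left abstract in this sketch.
        raise NotImplementedError

def control_center_loop(source, detector, alarm):
    prev = None
    for frame in source.frames():
        if prev is not None:
            fire_prob = detector.predict(prev, frame)
            if fire_prob > 0.5:   # assumed decision threshold
                alarm(fire_prob)  # e.g. notify forest rangers of a suspected fire
        prev = frame
```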

[0021] The monitoring network is mainly realized by network high-definition cameras installed at suitable locations in the forest. The content captured by the cameras needs to cover the entire forest, mainly shooting dynamic substances or objects in the forest such as smoke and flames; the collected video is saved and transmitted to the control center. The video ...
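
For illustration, frames from such a network HD camera could be received at the control center with OpenCV roughly as sketched below; the RTSP URL, credentials and function name are placeholders, not specified by the patent.

```python
# Hypothetical example of reading a network camera stream; the URL is a placeholder.
import cv2

def camera_frames(rtsp_url="rtsp://user:password@camera-address:554/stream"):
    cap = cv2.VideoCapture(rtsp_url)
    if not cap.isOpened():
        raise RuntimeError("could not open the camera stream")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break      # stream ended or the network connection dropped
            yield frame    # each frame is handed on to the detection model
    finally:
        cap.release()
```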



Abstract

The invention discloses a forest fire detection method based on a deep convolutional model with convolution kernels of multiple sizes, and relates to the technical field of deep learning video recognition. The method comprises the steps of collecting video image information; wirelessly transmitting data; forming an optical flow field; building a convolutional neural network model in deep learning; predicting a result; and the like. By training the deep convolutional neural network model with convolution kernels of different sizes, an existing algorithm is improved; in combination with existing hardware conditions, the forest fire judgment speed is increased and the prediction precision of the whole model is improved, which can effectively address the problems that existing forest fires are not discovered in time and cause relatively serious economic losses; therefore, the method has a certain practical value.
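
One common way to realize "convolution kernels of different sizes" (assumed here for illustration, since this text does not give layer sizes or channel counts) is to run parallel convolution branches with different kernel sizes over the same input and concatenate their outputs, for example in PyTorch:

```python
# Illustrative PyTorch sketch; the kernel sizes (1, 3, 5), channel counts and the
# two-channel optical-flow input are assumptions, not values from the patent.
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    """Parallel convolutions with several kernel sizes, concatenated on channels."""
    def __init__(self, in_ch, out_ch_per_branch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))

class FireNet(nn.Module):
    """Tiny classifier over a 2-channel optical-flow input (dx, dy per pixel)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            MultiKernelBlock(2), nn.MaxPool2d(2),
            MultiKernelBlock(48), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(48, 2)  # fire / no fire
        )

    def forward(self, flow):
        return self.head(self.features(flow))
```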

Description

technical field

[0001] The invention belongs to the field of video network identification technology and artificial intelligence, and specifically relates to establishing a deep convolutional neural network model based on multi-size convolution kernels, using the optical flow field as the input of the model to judge the fire situation in forest monitoring video, and to methods for predicting fire trends.

Background technique

[0002] 1. Video recognition technology is widely used due to its good effect and convenient data collection. It mainly includes three stages: collection and transmission of front-end video information, intermediate video detection, and back-end video analysis and processing. The video stream network composed of video recognition technology is a large-scale, unattended, efficient and accurate information collection method, which automatically collects the imaging of optical flow in the air in the forest, abnormal light and air quality through fixed or...
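
Since the optical flow field serves as the model input, a possible way of forming it from two consecutive frames is sketched below; the use of OpenCV's Farneback dense optical flow is an assumption, as the text does not name a specific optical-flow algorithm.

```python
# Assumed implementation choice (Farneback dense optical flow); the patent text
# only states that an optical flow field is formed from the monitoring video.
import cv2
import numpy as np

def optical_flow_field(prev_frame, next_frame):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow has shape (H, W, 2): per-pixel (dx, dy); return channels-first for a CNN.
    return np.transpose(flow, (2, 0, 1)).astype(np.float32)
```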


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/00; G06N 3/04; G08B 17/00; G08B 17/12
CPC: G08B 17/005; G08B 17/125; G06V 20/41; G06N 3/045
Inventors: 何铁军, 曹凯鑫
Owner: 南京启德电子科技有限公司