A video flame detection method based on a two-stream convolutional neural network

A convolutional neural network flame detection technology, applied in the field of video image detection, that improves the detection effect and reduces time complexity

Publication date: 2019-02-22 (status: Inactive)
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a video flame detection method based on a two-stream convolutional neural network. It addresses the problem that existing deep-learning-based flame detection methods do not incorporate video motion information: by using a two-stream convolutional neural network...



Examples


Embodiment Construction

[0050] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0051] Figure 1 is a schematic flowchart of the main steps of the video flame detection method based on the two-stream convolutional neural network. The specific implementation is as follows:

[0052] S1: Data preparation. Input the training data set into the two-stream convolutional neural network for training, and obtain the trained network model;

[0053] Specific steps include:

[0054] S1.1 Data set preparation

[0055] Figure 2 is a schematic diagram of the flame data set used in the video flame detection method based on the two-stream convolutional neural network. The data set consists of 4000 RGB images and 400 dynamic videos collected by the inventors; it serves as the training data set and contains both RGB images and dynamic videos.
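The excerpt does not show how the RGB images and videos are converted into network inputs. In a typical two-stream setup, each training sample pairs a still RGB frame (spatial stream) with a stack of optical-flow fields (temporal stream). The sketch below illustrates that preparation step, assuming OpenCV's Farneback optical flow and a stack of 10 flow frames; the resolution, stack length, and function names are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch: build the two inputs a two-stream network needs from one
# training video, assuming OpenCV (cv2) and a 10-frame optical-flow stack.
import cv2
import numpy as np

def two_stream_sample(video_path, num_flow_frames=10, size=(224, 224)):
    """Return (rgb_frame, flow_stack) for one video clip."""
    cap = cv2.VideoCapture(video_path)
    ok, first = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    rgb_frame = cv2.resize(first, size)
    prev_gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)

    flows = []
    while len(flows) < 2 * num_flow_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.extend([flow[..., 0], flow[..., 1]])  # x and y components
        prev_gray = gray
    cap.release()

    if not flows:
        raise ValueError("video too short to build a flow stack")
    flow_stack = np.stack(flows, axis=0).astype(np.float32)  # (<=2*L, H, W)
    return rgb_frame, flow_stack
```

The RGB frame would feed the spatial stream and the flow stack the temporal stream of the network constructed in S1.2 below.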

[0056] S1.2 Construction of network model

[0057] Figure 3 ...
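The description of the network model is truncated in this excerpt. As a rough illustration of the structure named in the abstract (two convolutional streams plus a spatio-temporal pyramid pooling layer), the following PyTorch sketch pools each stream's feature maps at several grid sizes and concatenates the results before classification. The backbone depth, pyramid levels (1, 2, 4), channel counts, and the 20-channel flow stack are assumptions chosen to match the data-preparation sketch above, not parameters disclosed in the patent.

```python
# Illustrative two-stream classifier with a pyramid pooling layer (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPool2d(nn.Module):
    """Pool feature maps at several grid sizes and concatenate them,
    giving a fixed-length descriptor regardless of input region size."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        feats = [F.adaptive_max_pool2d(x, level).flatten(1) for level in self.levels]
        return torch.cat(feats, dim=1)

def small_backbone(in_channels):
    # A deliberately small stand-in for whatever backbone the patent uses.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    )

class TwoStreamFlameNet(nn.Module):
    def __init__(self, flow_channels=20, num_classes=2, levels=(1, 2, 4)):
        super().__init__()
        self.spatial = small_backbone(3)               # RGB frame
        self.temporal = small_backbone(flow_channels)  # stacked optical flow
        self.pool = PyramidPool2d(levels)
        feat_dim = 128 * sum(l * l for l in levels)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb, flow):
        s = self.pool(self.spatial(rgb))      # spatial-stream descriptor
        t = self.pool(self.temporal(flow))    # temporal-stream descriptor
        return self.classifier(torch.cat([s, t], dim=1))
```

Training (the rest of S1) would then be an ordinary supervised loop: cross-entropy loss over flame / non-flame labels, with each sample's RGB frame and flow stack fed to the two streams.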



Abstract

The invention relates to a video flame detection method based on a two-stream convolutional neural network. The main process comprises the following steps: S1, preparing data, inputting a training data set into the two-stream convolutional neural network for training, and obtaining a trained network model; S2, preprocessing the video to be detected by cascaded motion feature detection and color feature detection to obtain suspected flame regions; S3, inputting the obtained suspected flame regions into the trained two-stream convolutional neural network for classification and identification; S4, outputting the regions identified as flame by the two-stream convolutional neural network as the final detection result. By extracting suspected flame regions, the invention efficiently filters out most of the non-flame regions in the video and reduces time complexity. By using the two-stream convolutional neural network, combining video flame motion information, and adding a spatio-temporal pyramid pooling layer to provide more robust spatio-temporal features, the detection accuracy is improved.
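To make step S2 of the abstract concrete, the sketch below shows one plausible reading of the cascaded motion and color feature detection that produces suspected flame regions: first keep pixels that moved between consecutive frames, then keep only those with a flame-like color, and finally group the surviving pixels into candidate regions. The frame-differencing threshold and the red-dominant color rule are common choices from the flame-detection literature, not thresholds stated in this excerpt; the OpenCV 4.x API is assumed.

```python
# Illustrative suspected-flame-region extraction (cascaded motion + color cues).
import cv2
import numpy as np

def suspected_flame_mask(prev_frame, frame, diff_thresh=15):
    """Binary mask of pixels that both moved and have a flame-like color."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray_cur, gray_prev) > diff_thresh

    pix = frame.astype(np.int32)
    b, g, r = pix[..., 0], pix[..., 1], pix[..., 2]
    colour = (r > 190) & (r >= g) & (g > b)   # bright, red-dominant pixels

    return (motion & colour).astype(np.uint8) * 255

def suspected_regions(mask, min_area=400):
    """Bounding boxes of connected regions large enough to be worth classifying."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Each returned box would then be cropped from the RGB frame and the corresponding flow stack and passed to the trained two-stream network (steps S3 and S4).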

Description

Technical Field

[0001] The invention relates to the field of video image detection, and in particular to a video flame detection method based on a two-stream convolutional network.

Background

[0002] Fire has driven human progress from weakness to strength, giving people light, security and warmth. However, fire has also brought enormous harm and loss to society and to life. The occurrence of fire is unpredictable, which makes flames difficult to prevent and detect; especially in complex environments, detecting fires in time is a thorny problem. Traditional sensor-based flame detection technology suffers from a small detection range, low reliability and slow response. In recent years, flame video image detection technology based on computer vision has overcome the main weaknesses of traditional flame detection, allowing visual flame detection to combine a large number of dynamic and static characteristics of the flame, w...


Application Information

IPC(8): G06K 9/46, G06K 9/00, G06K 9/62, G06N 3/04
CPC: G06V 20/42, G06V 20/46, G06V 10/56, G06N 3/045, G06F 18/214
Inventor: 于乃功, 陈玥, 吕健, 张勃
Owner: BEIJING UNIV OF TECH