
A fall detection method based on video joints and hybrid classifier

A fall detection method based on video joint points and a hybrid classifier, applied in the field of fall detection. It addresses the problems that traditional models are simple but inaccurate, while existing deep-learning models are complex and their detection time is hard to reduce, achieving high accuracy while reducing detection time.

Active Publication Date: 2021-11-02
HANGZHOU DIANZI UNIV


Problems solved by technology

However, traditional video-based fall detection algorithms rely on manually extracted fall features and detect falls with a linear discriminant classifier; the model is simple but its accuracy is low. Existing deep-learning detection algorithms use complex models, making it difficult to reduce detection time.



Examples


Embodiment 2

[0087] As shown in Figure 3, this fall detection method based on video joint points and a hybrid classifier differs from Embodiment 1 as follows:

[0088] In step 2, grayscale processing is performed on each frame of the detected video clip instead of Gaussian denoising.
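The patent does not specify which grayscale conversion is used; a minimal sketch of the preprocessing step, assuming the common ITU-R BT.601 luma weights (an assumption, not taken from the patent), could look like this:

```python
def to_grayscale(frame):
    """Convert an RGB frame (nested lists of (R, G, B) tuples) to grayscale.

    Uses the common ITU-R BT.601 luma weights (0.299, 0.587, 0.114);
    the patent text does not state which conversion it applies.
    """
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in frame
    ]

# Tiny 1x2 "frame": one white pixel, one pure-red pixel.
frame = [[(255, 255, 255), (255, 0, 0)]]
print(to_grayscale(frame))  # white -> 255, red -> 76
```

In practice a library routine (e.g. an OpenCV color conversion) would be used per frame; the point is only that each color frame is reduced to a single-channel intensity image before joint extraction.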

[0089] In step 6, the secondary classifier differs from that of Embodiment 1. The secondary classifier in this embodiment is a multi-scale convolutional neural network (MultiCNN), consisting of a convolutional layer, a pooling layer, and three fully connected layers connected in turn through ReLU activation functions. The convolution uses the 'valid' padding mode. The convolutional layer contains four convolution kernels with scales of 3×3, 5×5, 7×7, and 9×9, and the pooling layer contains four pooling operators of 8×1, 6×1, 4×1, and 2×1; the 3×3, 5×5, 7×7, and 9×9 convolution kernels are paired with the 8×1, 6×1, 4×1, and 2×1 pooling operators, respectively. The stride of th...
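The kernel/pooling pairing is consistent with a design in which all four branches produce feature maps of the same height, so they can be concatenated before the fully connected layers. A sketch of that arithmetic, assuming 'valid' mode with stride 1 for both convolution and pooling (the stride value is truncated in the text above, so stride 1 is an assumption) and a hypothetical input height of 32:

```python
def valid_out(size, window, stride=1):
    """Output length along one axis for a 'valid' convolution or pooling."""
    return (size - window) // stride + 1

H = 32                    # hypothetical input height; not given in the patent
kernels = [3, 5, 7, 9]    # convolution kernel heights from the patent
pools   = [8, 6, 4, 2]    # correspondingly paired pooling-operator heights

branch_heights = [valid_out(valid_out(H, k), p) for k, p in zip(kernels, pools)]
print(branch_heights)  # [23, 23, 23, 23] -- every branch ends at H - 9
```

Each kernel/pool pair removes (k - 1) + (p - 1) = 9 rows in total, which is why the pairing 3↔8, 5↔6, 7↔4, 9↔2 aligns all four multi-scale branches.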



Abstract

The invention discloses a fall detection method based on video joint points and a hybrid classifier. Traditional video-based fall detection algorithms rely on manually extracted fall features and use linear discriminant classifiers to detect falls; the model is simple but the accuracy is low. The present invention proceeds as follows: 1. Extract each frame image of the detected video clip. 2. Obtain the human joint-point data matrix. 3. Establish multiple behavior matrices. 4. Calculate temporal and spatial characteristic parameters. 5. Perform primary classification. 6. Perform secondary classification. The human-skeleton joint-point extraction method adopted in the present invention solves the problem that traditional methods based on the aspect ratio and projected area of the human body cannot accurately estimate human posture. The invention uses a fixed-size sliding window to construct the behavior matrix, which models the time and space axes simultaneously and fully expresses the characteristics of falling behavior.
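The fixed-size sliding window over per-frame joint data can be sketched as follows; the window length, the stride, and the toy per-frame joint vectors are hypothetical, since the abstract does not fix their sizes:

```python
def behavior_matrices(joint_frames, window, step=1):
    """Slice a sequence of per-frame joint vectors into fixed-size
    behavior matrices using a sliding window.

    joint_frames: list of per-frame joint coordinate vectors (rows = time,
                  columns = joint coordinates, i.e. the space axis).
    window: number of consecutive frames per behavior matrix (fixed size).
    step: sliding stride (hypothetical; not stated in this excerpt).
    """
    return [
        joint_frames[i:i + window]
        for i in range(0, len(joint_frames) - window + 1, step)
    ]

# 5 frames, each with a toy 2-value "joint vector" -> windows of 3 frames.
frames = [[t, t * 10] for t in range(5)]
mats = behavior_matrices(frames, window=3)
print(len(mats))   # 3 overlapping behavior matrices
print(mats[0])     # [[0, 0], [1, 10], [2, 20]]
```

Because each matrix keeps frames as rows and joint coordinates as columns, a single behavior matrix carries both the time axis and the space axis, which is what lets the later classifiers read spatio-temporal fall features from it.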

Description

Technical field

[0001] The invention belongs to the technical field of fall detection, and in particular relates to a fall detection method based on video joint points and a hybrid classifier.

Background technique

[0002] Scholars at home and abroad have done extensive research on falls among the elderly. There are three main fall detection approaches: fall detection based on wearable sensors, fall detection based on environmental sensors, and detection based on video images. Wearable-sensor-based fall detection mainly detects falls by collecting data from wearable sensors and applying thresholds; inaccurate threshold settings affect the final detection results. Environmental-sensor-based fall detection mainly judges falls from data obtained by pressure sensors on the ground or by sound detection equipment; excessive environmental noise may make the data abnormal. Video-based fall detection mainly uses ordinary cameras or dep...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/20; G06V20/41; G06N3/045; G06F18/2411
Inventor: 蔡文郁, 郑雪晨, 郭嘉豪
Owner HANGZHOU DIANZI UNIV