A multimodal perception and analysis system for patient behavior based on deep learning

An analysis system based on deep learning technology, applied in the field of multimodal perception and analysis of patient behavior. It addresses problems such as intractable gradient issues and the difficulty of expressing temporal cues and texture information, so as to accurately perceive patient needs, improve the efficiency and level of diagnosis and treatment, and accurately assess patient behavior.

Active Publication Date: 2022-03-29
FUDAN UNIV

AI Technical Summary

Problems solved by technology

[0004] In complex medical scenarios such as the emergency department, ICU, nursing wards, isolation wards, or metabolic chambers, the drawback of traditional deep-learning-based multi-dimensional perception algorithms for patient behavior is that they cannot effectively perceive fine-grained patient behavior, so compliance with prescribed behavior cannot be accurately judged. At the same time, for the analysis and study of patient behavior, most hospitals and medical data centers remain at the stage of manual sample collection and automated single-modality analysis. Although some institutions have begun perceptual analysis of multimodal data, the lack of compatibility processing for multimodal data greatly restricts data-driven research on patient behavior and subsequent medical outcomes.
[0005] Most existing deep learning methods are applied to processing patients' image information. The processing of multimodal data still suffers from defects such as complex network design, slow training, and intractable gradient problems, and therefore cannot exploit multimodal fusion to make single-modality and multi-modality heterogeneous features complement each other along temporal and spatial cues.
For example, in feature extraction from image information, RGB expresses rich texture information but is easily disturbed by light intensity and has difficulty expressing temporal cues; conversely, the temporal cues of human-pose inertial data obtained from a 3D human motion capture system are relatively easy to extract and express, but such data can hardly express texture information.
[0006] The above issues result in inaccurate patient behavior samples, delays in the assessment and treatment of patient behavior, or overdiagnosis and overtreatment. A sketch of how such complementary modalities can be fused follows below.
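To make the texture-versus-timing complementarity concrete, the hypothetical PyTorch sketch below fuses a 2D convolutional branch (texture from a single RGB frame) with a GRU branch (timing from a pose/inertial sequence) through late concatenation. The patent does not disclose its network layout, so every module, size, and name here (RgbPoseLateFusion, the 51-dimensional pose vector, the fused linear head) is an illustrative assumption.

```python
# Illustrative late fusion of RGB texture cues and pose timing cues.
# All layer sizes and names are assumptions; the patent's actual network
# design is not disclosed in this summary.
import torch
import torch.nn as nn

class RgbPoseLateFusion(nn.Module):
    """Fuse texture from one RGB frame with timing from a pose sequence."""

    def __init__(self, num_classes: int = 4, pose_dim: int = 51):
        super().__init__()
        # 2D CNN branch: rich in texture, blind to temporal order.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 16)
        )
        # GRU branch: captures timing, carries no texture.
        self.pose_branch = nn.GRU(pose_dim, 32, batch_first=True)
        self.head = nn.Linear(16 + 32, num_classes)

    def forward(self, rgb: torch.Tensor, pose_seq: torch.Tensor) -> torch.Tensor:
        tex = self.rgb_branch(rgb)           # (B, 16) texture embedding
        _, h = self.pose_branch(pose_seq)    # h: (1, B, 32) final hidden state
        return self.head(torch.cat([tex, h[-1]], dim=1))

# Toy usage: 2 samples, one 64x64 RGB frame plus 30 pose frames of 51 values
# (e.g. 17 joints x 3 axes - an assumed layout).
model = RgbPoseLateFusion()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 30, 51))
```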


Examples


Embodiment 1

[0043] As shown in Figure 1, this embodiment provides a deep-learning-based multimodal perception and analysis system for patient behavior, comprising a data acquisition unit, a patient body posture recognition unit, a patient physiological signal recognition unit, a patient image information recognition unit, a patient voice information recognition unit, a deep fusion unit, and a display module. The data acquisition unit is used to acquire multimodal patient data and is connected to each of the patient body posture recognition unit, the patient physiological signal recognition unit, the patient image information recognition unit, and the patient voice information recognition unit; the deep fusion unit is connected to each of those four recognition units and to the display module. The connection topology is sketched below.
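Since the embodiment names only the units and their connections, the following Python sketch wires hypothetical stand-ins in that exact topology; every class body (and the MultimodalSample container) is an illustrative assumption, not the patented processing.

```python
# A minimal wiring sketch of the units named in [0043]. Only the topology
# (which unit connects to which) comes from the patent; every class body is
# a hypothetical placeholder for the real processing.
from dataclasses import dataclass

@dataclass
class MultimodalSample:
    posture: list     # body-pose / inertial readings
    physiology: list  # e.g. ECG or skin-conductance samples
    image: list       # pixel data
    voice: list       # audio samples

class DataAcquisitionUnit:
    def acquire(self) -> MultimodalSample:
        # Placeholder: a real system would read sensors, cameras, and mics.
        return MultimodalSample([0.1], [0.2], [0.3], [0.4])

class RecognitionUnit:
    """Common stand-in for the four per-modality recognition units."""
    def __init__(self, modality: str):
        self.modality = modality

    def recognize(self, data: list) -> dict:
        return {"modality": self.modality, "features": data}

class DeepFusionUnit:
    def fuse(self, per_modality: list) -> dict:
        # Placeholder for the 2D/3D feature-fusion network.
        return {"behavior": "detected",
                "from": [m["modality"] for m in per_modality]}

# Data acquisition feeds the four recognition units; their outputs feed fusion.
acquisition = DataAcquisitionUnit()
units = [RecognitionUnit(m) for m in ("posture", "physiology", "image", "voice")]
sample = acquisition.acquire()
features = [u.recognize(getattr(sample, u.modality)) for u in units]
print(DeepFusionUnit().fuse(features))  # the display module would render this
```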

[0044] The...


Abstract

The present invention relates to a deep-learning-based multimodal perception and analysis system for patient behavior, comprising a data acquisition unit, a patient body posture recognition unit, a patient physiological signal recognition unit, a patient image information recognition unit, a patient voice information recognition unit, and a deep fusion unit. The collected multimodal data (patient posture, physiological signals, images, and voice) are preprocessed and regions of interest are extracted; the deep fusion unit then adopts a network structure that fuses multimodal two-dimensional and three-dimensional features, in which a 2D deep learning network produces preliminary segmentation results and a 3D deep learning network derives the patient behavior detection results from them. Compared with the prior art, the present invention assesses patient behavior more accurately, locates lesions precisely, significantly improves the prediction accuracy of the patient's pathological trend, and provides a solid foundation for scientific intervention in patient behavior and intelligent optimization of medical procedures.
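As a rough illustration of the 2D-then-3D cascade the abstract describes, the PyTorch sketch below lets a per-frame 2D network produce a preliminary segmentation mask that gates the input of a 3D network, which then outputs behavior logits. Channel counts, kernel sizes, and the mask-gating scheme are assumptions; the patent does not disclose them here.

```python
# Hypothetical 2D-segmentation -> 3D-detection cascade; all dimensions are
# illustrative assumptions, not the patented design.
import torch
import torch.nn as nn

class TwoStageBehaviorNet(nn.Module):
    def __init__(self, num_behaviors: int = 4):
        super().__init__()
        # Stage 1: per-frame 2D network yields a preliminary segmentation mask.
        self.seg2d = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1), nn.Sigmoid(),  # (B*T, 1, H, W)
        )
        # Stage 2: 3D network consumes mask-weighted frames across time.
        self.det3d = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, num_behaviors),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = clip.shape                    # clip: (B, 3, T, H, W)
        frames = clip.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        mask = self.seg2d(frames).reshape(b, t, 1, h, w).permute(0, 2, 1, 3, 4)
        return self.det3d(clip * mask)                # mask broadcasts over channels

logits = TwoStageBehaviorNet()(torch.randn(2, 3, 8, 32, 32))  # 8-frame toy clip
```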

Description

Technical field

[0001] The invention relates to the field of patient behavior analysis, and in particular to a deep-learning-based multimodal perception and analysis system for patient behavior.

Background technique

[0002] With the continuous development of deep learning technology, deep neural networks have achieved great advantages over traditional information processing methods in many single-modality perceptual machine learning tasks. For example, recurrent and recursive neural networks (RNNs) have been applied with great engineering success to medical diagnosis on sequence problems such as patients' medical record text and voice information; models such as AlexNet and ResNet even surpass human performance on tasks involving patient behavior video information.

[0003] Applying deep learning technology to the...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/62; G06N3/04; G06N3/08; G06V40/10; A61B5/318; A61B5/389; A61B5/398; A61B5/0533; A61B5/11; A61B5/00; A61B5/0205; A61B5/055; A61B6/00; A61B6/03; A61B8/00
CPC: G06N3/08; A61B5/1118; A61B5/1116; A61B5/1121; A61B5/02055; A61B5/08; A61B5/055; A61B5/053; A61B5/7203; A61B5/7235; A61B5/726; A61B5/7267; A61B5/7264; A61B5/7275; A61B6/00; A61B6/032; A61B6/52; A61B8/52; G06V40/10; G06V10/44; G06N3/045; G06F18/253; G06F18/254
Inventor: 张立华, 杨鼎康, 翟鹏, 董志岩
Owner: FUDAN UNIV