A multimodal surgical trajectory fast segmentation method based on unsupervised deep learning

An unsupervised deep learning technology, applied in image analysis, character and pattern recognition, biological neural network models, etc., which can solve problems such as the low efficiency of unsupervised methods, over-segmentation of results, and insufficiently salient video features

Active Publication Date: 2019-01-08
CAPITAL NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0007] However, existing unsupervised trajectory segmentation methods still have many defects. First, slow video feature extraction is the main factor limiting surgical trajectory segmentation; in TSC-VGG, for example, video feature extraction accounts for more than 95% of the total segmentation time, which greatly reduces the efficiency of the unsupervised method. Second, the extracted video features are not salient



Examples


Embodiment

[0214] The data set used is the JIGSAWS data set published by Johns Hopkins University, which includes two parts: surgical data and manual annotations. The data set is collected from the da Vinci surgical robot system and is divided into kinematic data and video data, both sampled at 30 Hz. It contains three tasks, needle passing (NP), suturing (SU) and knot tying (KT), performed and annotated by doctors with different skill levels. In the experiments it is found that the kinematic data contains a small amount of short trajectory noise and data jitter, so the kinematic data is smoothed by wavelet transform before the trajectories are segmented.
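The patent only names wavelet-transform smoothing without giving parameters. As a rough illustration, the sketch below denoises each kinematic channel with a discrete wavelet transform and soft thresholding, assuming PyWavelets and NumPy; the wavelet family (db4), decomposition level, and universal-threshold rule are assumptions chosen for the example, not values from the source.

```python
# Hypothetical sketch: wavelet smoothing of kinematic trajectories (not the
# patent's exact parameters). Assumes 30 Hz kinematic data shaped (T, D).
import numpy as np
import pywt

def wavelet_smooth(signal, wavelet="db4", level=3):
    """Denoise one kinematic channel by soft-thresholding detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate and universal threshold from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    smoothed = pywt.waverec(coeffs, wavelet)
    return smoothed[: len(signal)]  # waverec may pad by one sample

def smooth_kinematics(kinematics):
    """Apply channel-wise smoothing to a (T, D) kinematic array."""
    return np.column_stack([wavelet_smooth(kinematics[:, d])
                            for d in range(kinematics.shape[1])])

# Example usage on one demonstration (array shape (T, D)):
# demo_smoothed = smooth_kinematics(demo_kinematics)
```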

[0215] A subset of the JIGSAWS data set is selected for verification, covering the two tasks of needle passing and suturing. Each surgical task contains 11 demonstrations, from 5 experts (E), 3 intermediate-level operators (I), and 3...



Abstract

The invention discloses a multimodal surgical trajectory fast segmentation method based on unsupervised deep learning, belonging to the field of robot-assisted minimally invasive surgery. First, the robot system collects kinematics data and video data during a robot-assisted minimally invasive surgery process. Then wavelet transform is used to smooth and filter the short trajectory noise and data jitter in the kinematics data, while a DCED-Net network structure is used to extract features from the video data. The smoothed kinematics data and the feature-extracted video images are sent to the improved TSC model for clustering, yielding pre-segmentation results for the n surgical demonstration trajectories. Finally, the PMDD merging algorithm merges similar segments in the pre-segmentation result of each trajectory, and the merged result is the final trajectory segmentation result. The invention provides an optimization scheme based on unsupervised deep learning for over-segmentation and related problems, accelerates video feature extraction, improves feature quality, and makes the clustering result more accurate.
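The abstract outlines a four-stage pipeline (wavelet smoothing, DCED-Net video features, improved-TSC clustering, PMDD merging) without code. The following is a minimal structural sketch of that flow only; the callables smooth_kinematics, extract_dced_features, tsc_cluster, and pmdd_merge are hypothetical placeholders for components the patent names but does not specify.

```python
# Hypothetical end-to-end outline of the described pipeline. The stage
# implementations are injected as callables because the patent does not
# provide them; only the data flow is sketched here.
import numpy as np

def segment_demonstration(kinematics, video_frames,
                          smooth_kinematics, extract_dced_features,
                          tsc_cluster, pmdd_merge):
    """Return trajectory segments for one surgical demonstration."""
    # 1) Smooth/filter kinematic noise and jitter (e.g., wavelet transform).
    kin = smooth_kinematics(kinematics)            # (T, D_k)
    # 2) Encode each video frame into a compact feature vector.
    vis = extract_dced_features(video_frames)      # (T, D_v)
    # 3) Cluster the fused multimodal sequence into pre-segments.
    fused = np.hstack([kin, vis])
    pre_segments = tsc_cluster(fused)              # e.g. list of (start, end)
    # 4) Merge adjacent, highly similar pieces to correct over-segmentation.
    return pmdd_merge(pre_segments, fused)
```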

Description

technical field

[0001] The invention belongs to the field of robot-assisted minimally invasive surgery (RMIS) and relates to image feature extraction, deep learning clustering, similarity evaluation, etc.; specifically, it is a method for the fast segmentation of multimodal surgical trajectories based on unsupervised deep learning.

Background technique

[0002] During robot-assisted minimally invasive surgery (RMIS), the surgical trajectory is recorded as a series of robot kinematics data and video data. By segmenting these surgical trajectories, the surgical process is decomposed into several low-complexity sub-trajectories (sub-actions), which can be used for surgeon skill evaluation and learning from demonstration. More importantly, by learning these sub-trajectories, the robot can autonomously perform simple tasks, thereby advancing the automation of robotic surgery. However, due to the complexity of the surgical environment and the differences in the skill levels...


Application Information

IPC(8): G06K 9/00, G06T 7/215, G06N 3/04
CPC: G06T 7/215, G06V 20/41, G06N 3/045
Inventor: 邵振洲, 渠瀛, 谢劼欣, 赵红发, 施智平, 关永, 谈金东, 李贺
Owner CAPITAL NORMAL UNIVERSITY