
A Fast Segmentation Method for Multimodal Surgical Trajectories Based on Unsupervised Deep Learning

An unsupervised deep learning technology, applied in image analysis, character and pattern recognition, biological neural network models, etc. It solves the problems of slow video feature extraction, poor video feature quality, and insignificant video features, with the effect of reducing redundant segmentation points, improving feature quality, and speeding up feature extraction.

Active Publication Date: 2021-06-29
CAPITAL NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0007] However, existing unsupervised trajectory segmentation methods still have many defects. First, slow video feature extraction is the main problem affecting surgical trajectory segmentation; in TSC-VGG, for example, video feature extraction accounts for more than 95% of the total segmentation time, which greatly reduces the efficiency of unsupervised methods. Second, the extracted video features are not discriminative: the quality of the video features extracted by existing methods is poor, and they may even have a negative effect on trajectory segmentation, resulting in poor segmentation stability. Finally, owing to the nature of unsupervised trajectory segmentation itself, the results suffer from over-segmentation, that is, a segment corresponding to a single atomic operation is divided into multiple segments, producing segment "fragments".

Method used



Examples


Embodiment

[0214] The data set used is the JIGSAWS data set published by Johns Hopkins University, which includes two parts: surgical data and manual annotations. The data set is collected from the da Vinci surgical robot system and is divided into kinematic data and video data, both sampled at 30 Hz. It contains three tasks, needle passing (NP), suturing (SU) and knot tying (KT), performed and annotated by surgeons with different skill levels. In the experiments it was found that the kinematic data contain a small amount of segment trajectory noise and jitter, so the kinematic data are smoothed by a wavelet transform before the trajectory is segmented.
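As a concrete illustration of the wavelet smoothing step mentioned in [0214], the sketch below denoises one kinematic channel with PyWavelets. The wavelet family (db4), decomposition level, and universal soft threshold are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_smooth(signal, wavelet="db4", level=3):
    """Smooth a 1-D kinematic channel by soft-thresholding wavelet detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    # waverec may return one extra sample; trim back to the input length.
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Toy usage: a noisy 30 Hz channel standing in for one JIGSAWS kinematic variable.
t = np.arange(0, 10, 1 / 30)
noisy = np.sin(t) + 0.05 * np.random.randn(len(t))
smoothed = wavelet_smooth(noisy)
```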

[0215] A subset of the JIGSAWS data set is selected for verification, covering the two tasks of needle passing and suturing. Each surgical task contains 11 demonstrations, from 5 experts (E), 3 intermediates (I), and 3...



Abstract

The invention discloses a method for rapidly segmenting multimodal surgical trajectories based on unsupervised deep learning, belonging to the field of robot-assisted minimally invasive surgery. For a robot-assisted minimally invasive surgery procedure, the robot system first collects surgical kinematic data and video data, and a network structure is used for feature extraction on the video data. The kinematic data processed by smoothing filtering and the extracted video features are fed into the improved TSC model for clustering, yielding trajectory pre-segmentation results for the n surgical demonstrations. Finally, the PMDD merging algorithm merges similar segments within each pre-segmentation result, and the merged result is the final trajectory segmentation. Aiming at problems such as over-segmentation, the invention proposes an optimization scheme based on unsupervised deep learning that accelerates video feature extraction, improves feature quality, and makes the clustering results more accurate.
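To make the pipeline in the abstract easier to follow, here is a minimal, self-contained sketch of the overall flow: smooth the kinematics, fuse them with per-frame video features, cluster frames into a pre-segmentation, and merge similar neighbouring segments. The moving-average filter, k-means clustering, and cosine-similarity merge are simple stand-ins for the patent's wavelet filtering, improved TSC model, and PMDD merging algorithm, and the threshold values are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.cluster import KMeans

def segment_demonstration(kinematics, video_features, n_clusters=8, sim_threshold=0.9):
    """Sketch of the pipeline: smooth kinematics, fuse modalities, pre-segment, merge."""
    # 1. Smooth each kinematic channel (moving average stands in for wavelet filtering).
    kin = uniform_filter1d(kinematics, size=5, axis=0)

    # 2. Fuse the two modalities frame by frame.
    fused = np.concatenate([kin, video_features], axis=1)

    # 3. Pre-segmentation: frame-wise clustering (stand-in for the improved TSC model);
    #    a segment boundary is placed wherever the cluster label changes.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(fused)
    cuts = [0] + [t for t in range(1, len(labels)) if labels[t] != labels[t - 1]] + [len(labels)]
    segments = list(zip(cuts[:-1], cuts[1:]))

    # 4. Merge adjacent segments whose mean fused features are highly similar
    #    (stand-in for PMDD merging) to reduce over-segmentation.
    merged = [segments[0]]
    for start, end in segments[1:]:
        p_start, p_end = merged[-1]
        a = fused[p_start:p_end].mean(axis=0)
        b = fused[start:end].mean(axis=0)
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if cos > sim_threshold:
            merged[-1] = (p_start, end)  # absorb into the previous segment
        else:
            merged.append((start, end))
    return merged

# Toy usage with random stand-in data (T frames, 12 kinematic and 16 visual dimensions).
T = 600
segments = segment_demonstration(np.random.randn(T, 12), np.random.randn(T, 16))
```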

Description

Technical field

[0001] The invention belongs to the field of robot-assisted minimally invasive surgery (RMIS) and relates to image feature extraction, deep learning clustering, similarity evaluation, etc.; specifically, it is a method for fast segmentation of multimodal surgical trajectories based on unsupervised deep learning.

Background technique

[0002] During robot-assisted minimally invasive surgery (RMIS), the surgical trajectory is recorded as a series of robot kinematic data and video data. By segmenting these surgical trajectories, the surgical process is decomposed into several low-complexity sub-trajectories (sub-actions), which can be used for surgeon skill evaluation and learning from demonstration. More importantly, by learning these sub-trajectories, the robot can perform simple tasks autonomously, thereby advancing the automation of robotic surgery. However, due to the complexity of the surgical environment and the differences in the skill levels...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/00, G06T7/215, G06N3/04
CPC: G06T7/215, G06V20/41, G06N3/045
Inventors: 邵振洲, 渠瀛, 谢劼欣, 赵红发, 施智平, 关永, 谈金东, 李贺
Owner: CAPITAL NORMAL UNIVERSITY