
Training method of surgical action recognition model, medium and equipment

A technology relating to action recognition and training methods, applied in the field of image processing, which can solve problems such as the failure of target detection algorithms, the inability to show the surrounding environment of organs, and the lack of context information, and achieves the effect of handling surgical videos whose features are not obvious.

Pending Publication Date: 2021-11-26
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

First, the tissues and organs of the human body undergo non-rigid deformation, and the differences in boundary, shape and color between two different organs are very small, so methods based on spatial information struggle to extract effective feature information from the image, resulting in poor classifier accuracy.
Second, the scene captured by the endoscopic camera is too close to show the complete organ and its surroundings, so there is little contextual information. Such motion-based detection methods also struggle to make effective use of the temporal and spatial information between consecutive frames of the surgical video, and therefore fail to meet the requirements of the surgical action detection task.
Finally, the movement and orientation of the endoscope at close range make organs look very different from different angles, and these highly variable conditions can also cause traditional object detection algorithms to fail.




Embodiment Construction

[0035] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

[0036] Before describing the various embodiments of the present application in detail, the technical concept of the present application is briefly introduced: existing detection methods based on deep learning rely on sufficient context information, but in a real surgical scene, because the camera is very close to the scene it captures, it is difficult to extract effective context information and the classification accuracy cannot be improved. This application provides a training method for a surgical action recognition model. First, hierarchical feature maps of different scales are extra...
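The final step of the concept above, updating a loss function from the predicted and real target values and then adjusting the model parameters, can be illustrated with a toy gradient-descent loop. This is a minimal sketch, not the patent's model: the linear "prediction network", the MSE loss, the learning rate and the data sizes are all assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the prediction network: a linear map from a fused
# feature vector to a predicted target value (illustrative only).
W = rng.normal(size=4)                        # model parameters
features = rng.normal(size=(32, 4))           # 32 fused feature vectors
true_W = np.array([1.0, -2.0, 0.5, 3.0])
targets = features @ true_W                   # "real target values"

lr = 0.05
for _ in range(500):
    preds = features @ W                                         # predicted target values
    grad = 2.0 * features.T @ (preds - targets) / len(targets)   # gradient of the MSE loss
    W -= lr * grad                                               # adjust model parameters

mse = float(np.mean((features @ W - targets) ** 2))
print(mse < 1e-3)  # loss has been driven close to zero
```

In the patent's setting the gradient step would of course be carried out by backpropagation through the backbone, aggregation and prediction networks rather than this closed-form linear case.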



Abstract

The invention discloses a training method of a surgical action recognition model, a storage medium and equipment. The surgical action recognition model comprises a backbone network, a pyramid feature aggregation network and a prediction network, the pyramid feature aggregation network comprises a feature map collection module and a feature map divergence module, and the training method comprises the following steps: inputting an obtained original surgical action image into the backbone network to obtain a plurality of hierarchical feature maps of different scales; inputting the hierarchical feature map into a pyramid feature aggregation network, and sequentially performing fusion processing through a feature map collection module and a feature map divergence module to obtain a plurality of fused feature maps with different scales; inputting a plurality of fused feature maps of different scales into a prediction network to obtain a prediction target value; and updating a loss function according to the predicted target value and the obtained real target value, and adjusting model parameters of the surgical action recognition model. According to the method, spatial information is fully utilized, more scale features are fused, and a high-precision prediction model is obtained through training.
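The pyramid feature aggregation described above, a collection pass followed by a divergence pass over hierarchical feature maps, can be sketched in numpy. This is a hedged illustration only: element-wise addition stands in for whatever fusion operations the patent actually uses, and the nearest-neighbour resampling, the function names `collect`/`diverge`, and the tensor sizes are all assumptions.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling of a (C, H, W) feature map.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def collect(feats):
    # "Collection" pass: propagate coarse-scale semantics down to finer scales.
    out = [feats[-1]]
    for f in reversed(feats[:-1]):
        out.append(f + upsample2x(out[-1]))
    return out[::-1]  # back to finest-to-coarsest order

def diverge(feats):
    # "Divergence" pass: propagate fine-scale detail back up the pyramid.
    out = [feats[0]]
    for f in feats[1:]:
        out.append(f + downsample2x(out[-1]))
    return out

# Three hierarchical feature maps (C=8) at relative strides 1, 2, 4.
pyramid = [np.random.rand(8, 16 // s, 16 // s) for s in (1, 2, 4)]
fused = diverge(collect(pyramid))
print([f.shape for f in fused])  # → [(8, 16, 16), (8, 8, 8), (8, 4, 4)]
```

Each fused map keeps its original scale, so all levels can be fed to the prediction network, matching the abstract's "plurality of fused feature maps with different scales".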

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a training method for a surgical action recognition model, a computer-readable storage medium, and computer equipment.

Background technique

[0002] A surgical robot system is an intelligent computer-aided system that can assist surgeons in completing operations. In minimally invasive surgery, the results of image-based algorithms enable the auxiliary surgical robot to perform the corresponding surgical actions and assist the attending surgeon in completing the operation. The surgical robot system not only offers the characteristics of minimally invasive surgery (less trauma, quicker recovery, and less pain for the patient), but also, because the intelligent auxiliary robot system combines the patient's image data with the patient's anatomical parts in the actual operation, the operation can be tracked in real time during the o...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/253, G06F18/214
Inventors: 贾富仓, 徐文廷
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI