Digestive endoscopy video scene classification method based on convolutional neural network

A convolutional neural network and digestive endoscopy technology, applicable to biological neural network models, endoscopy, neural architectures, etc. It addresses problems such as single-frame misclassification, the need to trigger intestinal polyp identification automatically, and misjudgment of the scene represented by a single-frame image, achieving the effect of enhanced reliability.

Inactive Publication Date: 2020-12-11
HIGHWISE CO LTD

AI Technical Summary

Problems solved by technology

Gastrointestinal endoscopy videos generally comprise four phases: in vitro preparation, entry into the body, the in-body procedure, and final withdrawal from the body. Automatic scene recognition lets the CAD system start the appropriate functional modules for the current scene: early gastric cancer screening must be activated during upper gastrointestinal endoscopy, intestinal polyp identification must be activated during colonoscopy, and no lesion identification function is needed while the lens is outside the body. Conversely, without automatic scene recognition, the user must enter the current scene information manually, which limits the functionality and usability of the system.
[0005] Image classification conventionally describes an entire single image through manual feature annotation or feature learning, then uses a classifier to learn those features and identify the object category. With the development of deep learning, CNN-based image classification algorithms have improved rapidly in accuracy, and CNN-based image classifiers usually have excellent classification capability on a single image. In video judgment over an image stream, however, even the most accurate classifier will inevitably make misjudgments. A digestive endoscopy video is image data collected frame by frame by the endoscope lens along the time axis. A CNN-based scene classifier can identify the scene represented by each single frame fairly accurately, but because video capture generates a large number of images, misclassification of individual frames during sequential sampling is unavoidable, resulting in wrong judgments of the scene represented by a single-frame image.
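The flicker problem described above can be illustrated with a toy example. The labels, stream, and counts here are hypothetical and not from the patent; the point is only that deciding the scene from each frame alone turns a couple of misclassified frames into several spurious scene transitions:

```python
# A toy per-frame label stream from a (hypothetical) CNN scene classifier:
# the true scene is "in_body" throughout, but single frames are
# occasionally misclassified, so naive per-frame scene output flickers.
frames = ["in_body"] * 10
frames[3] = "outside_body"   # one misclassified frame
frames[7] = "outside_body"   # another

# Deciding the scene from each frame alone yields spurious transitions.
transitions = sum(1 for a, b in zip(frames, frames[1:]) if a != b)
print(transitions)  # 4 spurious scene changes from just 2 bad frames
```

Two bad frames produce four scene changes, since each misclassified frame causes a transition both into and out of the wrong state; this is the instability the sliding-window scheme below is designed to suppress.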




Embodiment Construction

[0046] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0047] Referring to figure 1, the method for classifying scenes of digestive endoscopy videos based on convolutional neural networks according to the present invention solves the problem of automatic scene recognition in digestive endoscopy videos within an AI-based digestive endoscopy CAD system, and includes the following steps:

[0048] (1): Obtain real-time scene images through digestive endoscopy equipment.

[0049] (2): Use the CNN scene classifier to perform preliminary scene classification on the acquired scene images.

[0050] (3): Incorporate the preliminary classification results of the CNN scene classifier into the K sliding window timing signal queue, where K is the length of the timing signal queue.

[0051] (4): Count the proportions of various states in the K sliding window queue.

[0052] (5): Determine the scene state according to the proportion of the current scene in the K sliding window queue.
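Steps (3) through (5) can be sketched as a small state tracker. This is a minimal illustration, not the patent's implementation: the class name, the window length `K = 5`, the switching `threshold`, and the scene labels are all assumptions introduced here for demonstration.

```python
from collections import Counter, deque

class SceneStateTracker:
    """Sliding-window scene-state tracker (a sketch of steps 3-5).

    Hypothetical parameters: K is the timing-signal queue length and
    `threshold` is the proportion a candidate scene must reach in the
    window before the state switches; neither value is specified above.
    """

    def __init__(self, K=30, threshold=0.8, initial_state="outside_body"):
        self.window = deque(maxlen=K)  # step 3: K sliding-window queue
        self.K = K
        self.threshold = threshold
        self.state = initial_state

    def update(self, frame_label):
        # Step 3: push the CNN's preliminary per-frame label into the queue.
        self.window.append(frame_label)
        # Step 4: count the proportion of each state in the queue.
        label, n = Counter(self.window).most_common(1)[0]
        # Step 5: switch state only when the dominant label fills enough
        # of a full window; isolated misclassified frames are ignored.
        if len(self.window) == self.K and n / self.K >= self.threshold \
                and label != self.state:
            self.state = label
        return self.state

tracker = SceneStateTracker(K=5, threshold=0.8, initial_state="outside_body")
# A noisy stream: one misclassified frame ("polyp") does not flip the
# state, but a sustained run of "stomach" frames does.
stream = ["outside_body", "outside_body", "polyp",
          "outside_body", "outside_body",
          "stomach", "stomach", "stomach", "stomach", "stomach"]
states = [tracker.update(lbl) for lbl in stream]
print(states[-1])  # -> "stomach"
```

Note the design trade-off: a larger K or higher threshold gives more stable scene states but delays the transition by roughly the number of frames needed for the new scene to dominate the window.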



Abstract

The invention relates to a digestive endoscopy video scene classification method based on a convolutional neural network. The method comprises the following steps: (1) acquire a real-time scene image through digestive endoscopy equipment; (2) perform preliminary scene classification on the acquired scene image with a CNN scene classifier; (3) push the preliminary classification result of the CNN scene classifier into a K sliding window time-sequence signal queue, where K is the length of the queue; (4) count the proportions of the various states in the K sliding window queue; (5) determine the scene state according to the proportion of the current scene in the K sliding window queue. The CNN scene classifier can efficiently perform preliminary scene classification on images acquired by the endoscope equipment in real time. To ensure the stability of image scene classification over the time-sequence signal, the scene state is determined from the preliminary classification results by a sliding-window statistical scene-state conversion algorithm, which enhances the reliability of both scene classification and scene-state transitions.

Description

Technical field [0001] The invention belongs to the technical field of digestive endoscopy video image processing, and in particular relates to a method for classifying digestive endoscopy video scenes based on a convolutional neural network. Background technique [0002] In recent years, with the emergence of deep learning algorithms, the field of artificial intelligence (AI) has shown revolutionary research progress. Supported by massive data, AI systems based on deep learning algorithms are being used in more and more application fields, with recognition and judgment abilities similar to those of human experts, even exceeding humans in some specialized fields. In the medical field in particular, deep learning algorithms show great potential; for example, in the diagnosis of skin lesion and diabetic retinopathy images, deep-learning-based AI systems already match or exceed the recognition ability of medical experts.

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04; A61B1/00
CPC: A61B1/00009; G06V2201/03; G06N3/045; G06F18/24
Inventor: 曹鱼, 陈齐磊, 刘本渊
Owner: HIGHWISE CO LTD