Multi-visual-field convolutional neural network-based image feature identification method

A convolutional neural network and feature recognition technology, applied in the field of CT image matching and recognition, which addresses problems such as the lack of a relatively complete CNN-based technique in this field.

Active Publication Date: 2017-06-13
BEIJING BAIHUI WEIKANG SCI & TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] However, there is at present no relatively complete CNN-based technology for CT image feature matching and recognition.



Examples


Embodiment Construction

[0074] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0075] Figure 1 is a flowchart of an image feature recognition method based on a multi-visual-field convolutional neural network provided by an embodiment of the present invention. As shown in Figure 1, the method mainly includes the following steps:

[0076] Step 1. Collect CT images with positive and negative labels in the historical database to establish a data set.

[0077] Here, the positive and negative labels can refer to the attributes of the CT images.
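As an illustrative sketch only (not part of the patent disclosure), Step 1 could be implemented along the following lines in Python; the directory layout, the PNG file format, and the helper names load_ct_slice and build_dataset are assumptions introduced for the example.

```python
# Illustrative sketch of Step 1: build a labeled data set from a historical
# CT archive. Directory layout and helper names are hypothetical assumptions.
from pathlib import Path

import numpy as np
from PIL import Image


def load_ct_slice(path):
    """Load one CT slice stored as a grayscale image and scale it to [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0


def build_dataset(root="historical_db"):
    """Collect (image, label) pairs; label 1 = positive tag, 0 = negative tag."""
    samples = []
    for label_name, label in (("positive", 1), ("negative", 0)):
        for path in sorted(Path(root, label_name).glob("*.png")):
            samples.append((load_ct_slice(path), label))
    return samples


if __name__ == "__main__":
    dataset = build_dataset()
    print(f"collected {len(dataset)} labeled CT slices")
```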



Abstract

The invention discloses a multi-visual-field convolutional neural network-based image feature identification method. The method comprises the steps of: collecting CT images with positive and negative tags from a historical database and establishing a data set; determining the position region of a specified feature in each CT image of the data set by means of an image segmentation algorithm and extracting sensitive regions of different pixel sizes; constructing a multi-visual-field convolutional neural network; inputting the extracted sensitive regions of different pixel sizes as samples into the multi-visual-field convolutional neural network and training it to obtain a trained multi-visual-field convolutional neural network; and processing the to-be-identified CT images, inputting their extracted sensitive regions of different pixel sizes into the trained multi-visual-field convolutional neural network for feature identification, and determining the positive or negative tag of each to-be-identified image according to the identification result. According to this scheme, end-to-end image identification is realized while identification accuracy is ensured.
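To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of a multi-visual-field CNN in PyTorch, in which each "visual field" (a sensitive region of a given pixel size) is processed by its own convolutional branch and the branch features are fused for classification. The branch structure, channel counts, patch sizes (32/64/96 pixels), and two-class output are assumptions made for illustration; the patent excerpt does not specify the actual architecture.

```python
# Illustrative multi-visual-field CNN: one convolutional branch per patch size,
# with branch features concatenated before a shared classifier. All sizes and
# layer choices are assumptions for this sketch, not the patented architecture.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """A small convolutional branch for one visual field (one patch size)."""

    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # makes the branch input-size-agnostic
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))


class MultiVisualFieldCNN(nn.Module):
    """Fuses features from sensitive regions of different pixel sizes."""

    def __init__(self, num_fields=3, num_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(Branch() for _ in range(num_fields))
        self.classifier = nn.Linear(64 * num_fields, num_classes)

    def forward(self, patches):
        # patches: list of tensors, one per visual field, shape (N, 1, H_i, W_i)
        fused = torch.cat([b(p) for b, p in zip(self.branches, patches)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiVisualFieldCNN()
    patches = [torch.randn(4, 1, s, s) for s in (32, 64, 96)]  # assumed sizes
    logits = model(patches)  # shape (4, 2): positive/negative scores
    print(logits.shape)
```

In this sketch the adaptive average pooling lets every branch accept its own patch size while producing a fixed-length feature, so the fusion layer stays the same regardless of how the sensitive-region sizes are chosen.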

Description

Technical Field

[0001] The invention relates to the technical field of CT image matching and recognition, and in particular to an image feature recognition method based on a multi-visual-field convolutional neural network.

Background Technique

[0002] At present, most methods for automatic image classification are not end-to-end: before matching and recognition, features must be extracted with predefined filters (such as histograms of oriented gradients or local binary patterns), or image characteristics (such as geometry, texture, and appearance) must be extracted manually. Feature learning, by contrast, produces a high-order representation learned directly from training data. An artificial neural network (ANN) learns features from the original data; however, because of the fully connected, shallow structure of the traditional artificial neural network, it cannot extract high-level features with strong independence, which severely limits its application to actual image recognition.


Application Information

IPC(8): G06K9/62; G06N3/08; G06K9/46
CPC: G06N3/084; G06V10/44; G06V10/751; G06F18/2415
Inventor: 刘达, 刘奎, 侯蓓蓓
Owner: BEIJING BAIHUI WEIKANG SCI & TECH CO LTD