
12715 results about "Classification result" patented technology

Lung nodule detection and classification

A computer-assisted method of detecting and classifying lung nodules within a set of CT images includes performing body contour, airway, lung and esophagus segmentation to identify the regions of the CT images in which to search for potential lung nodules. The lungs are processed to identify the left and right lungs, and each lung is divided into subregions including upper, middle and lower subregions and central, intermediate and peripheral subregions. The computer analyzes each of the lung regions to detect and identify a three-dimensional vessel tree representing the blood vessels at or near the mediastinum. The computer then detects objects that are attached to the lung wall or to the vessel tree to ensure that these objects are not eliminated from consideration as potential nodules. Thereafter, the computer performs a pixel similarity analysis on the appropriate regions within the CT images to detect potential nodules and performs one or more expert analysis techniques using the features of the potential nodules to determine whether each potential nodule is or is not a lung nodule. Thereafter, the computer uses further features, such as spiculation features, growth features, etc., in one or more expert analysis techniques to classify each detected nodule as either benign or malignant. The computer then displays the detection and classification results to the radiologist to assist in interpreting the CT exam for the patient.
Owner:RGT UNIV OF MICHIGAN
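The final expert-analysis stage described above can be sketched as a simple rule-based vote over nodule features. This is a minimal illustration only: the feature names, thresholds and two-vote rule are assumptions, not taken from the patent.

```python
def classify_nodule(features):
    """Toy benign/malignant decision from a dict of nodule features.

    Feature names and thresholds are illustrative placeholders.
    """
    votes = 0
    # Spiculated margins are a classic malignancy indicator.
    if features.get("spiculation", 0.0) > 0.5:
        votes += 1
    # Significant growth between scans raises suspicion.
    if features.get("growth_rate", 0.0) > 0.2:
        votes += 1
    # Larger nodules are more often malignant.
    if features.get("diameter_mm", 0.0) > 8.0:
        votes += 1
    return "malignant" if votes >= 2 else "benign"
```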

A text implication relation recognition method based on multi-granularity information fusion

The present invention provides a text implication relation recognition method that fuses multi-granularity information, proposing a modeling approach that fuses interactions at the character-character, character-word, and word-sentence granularities. The invention first uses a convolutional neural network and a Highway network layer at the character-vector level to build a character-level word vector model, and concatenates it with word vectors pretrained with GloVe. The sentence modeling layer then uses a bidirectional long short-term memory network to model the word vectors fused with character granularity; the sentence matching layer interacts and matches the text pairs using an attention mechanism; finally, an integrated classification layer outputs the category. After the model is established, it is trained and tested to obtain the text implication recognition and classification results on the test samples. This hierarchical structure, combining multi-granularity information from characters, words and sentences, unites the advantages of shallow feature localization and deep feature learning in the model, further improving the accuracy of text implication relation recognition.
Owner:SUN YAT SEN UNIV +1
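The embedding layer above concatenates a character-level vector with a pretrained word vector. A minimal sketch, with a toy hash-count encoder standing in for the CNN + Highway character encoder (the encoding scheme and dimensions are assumptions):

```python
def char_vector(word, dim=8):
    # Toy character-level encoding: bucket each character's code point
    # into a fixed-size count vector. Stands in for the patent's
    # CNN + Highway character encoder.
    vec = [0.0] * dim
    for ch in word:
        vec[ord(ch) % dim] += 1.0
    return vec

def fused_embedding(word, glove_vec):
    # Concatenate ("splice") the character-level vector with the
    # pretrained GloVe word vector, as the abstract's embedding layer does.
    return char_vector(word) + list(glove_vec)
```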

Small sample and zero sample image classification method based on metric learning and meta-learning

The invention relates to the fields of computer vision recognition and transfer learning, and provides a small-sample and zero-sample image classification method based on metric learning and meta-learning, comprising the following steps: constructing a training data set and a target task data set; selecting a support set and a test set from the training data set; inputting the samples of the test set and the support set into a feature extraction network to obtain feature vectors; sequentially inputting the feature vectors of the test set and the support set into a feature attention module and a distance measurement module, calculating the category similarity between test-set and support-set samples, and updating the parameters of each module using a loss function; repeating the above steps until the network parameters of each module converge, completing training; and passing the picture to be tested and the training pictures in the target task data set sequentially through the feature extraction network, the feature attention module and the distance measurement module, outputting the category label with the highest category similarity to obtain the classification result for the picture to be tested.
Owner:SUN YAT SEN UNIV
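The similarity-based label assignment at the end of the pipeline above can be sketched as nearest-class matching over feature vectors. Cosine similarity and the best-match scoring rule here are simplifying assumptions standing in for the patent's distance-measurement module:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest_class(query, support):
    # support: {label: list of support feature vectors}. Score each
    # class by its best similarity to the query and return the argmax.
    best_label, best_sim = None, -2.0
    for label, vecs in support.items():
        sim = max(cosine(query, v) for v in vecs)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```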

Industrial character identification method based on convolution neural network

The invention provides an industrial character identification method based on a convolutional neural network. The method comprises: establishing character data sets and carrying out data enhancement and preprocessing on them; establishing a CNN (Convolutional Neural Network) ensemble model comprising three different individual classifiers; training the model in two steps, first offline training to obtain an offline model, then online training in which the offline model is used for initialization and a production-line-specific character data set is trained to obtain an online model; preprocessing the target image, locating characters and segmenting single-character images; feeding the segmented character images to the trained online model to obtain, from the three classifiers in the CNN ensemble, the probability of each class for every single character image; and making the final decision by voting, thereby obtaining the classification result for the test data. With the method, characters on different production lines can be identified rapidly and efficiently.
Owner:吴晓军
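The final voting step over the three individual classifiers can be sketched as a majority vote over predicted labels (the abstract also mentions per-class probabilities; hard voting is used here as a simplifying assumption):

```python
from collections import Counter

def majority_vote(labels):
    # labels: the class predicted for one character image by each of
    # the three individual classifiers in the ensemble; the most
    # frequent prediction wins.
    return Counter(labels).most_common(1)[0][0]
```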

Bidirectional long short-term memory unit-based behavior identification method for video

The invention discloses a bidirectional long short-term memory unit-based behavior identification method for a video. The method comprises the steps of (1) inputting a video sequence and extracting an RGB (Red, Green and Blue) frame sequence and optical flow images from it; (2) respectively training a deep convolutional network on the RGB images and a deep convolutional network on the optical flow images; (3) extracting multilayer features of the networks, at least the features of the third convolutional layer, the fifth convolutional layer and the seventh fully connected layer, and pooling the convolutional-layer features; (4) training a recurrent neural network constructed with bidirectional long short-term memory units to obtain a probability matrix for each frame of the video; and (5) averaging the probability matrices, fusing the probability matrices of the optical flow and RGB streams, and taking the category with the maximum probability as the final classification result, thereby realizing behavior identification. The method replaces conventional hand-crafted features with multilayer deep learning features; the deep features of different layers represent different information, and combining multilayer features improves classification accuracy. Temporal information is captured by the bidirectional long short-term memory units, richer time-domain structural information is obtained, and the behavior identification effect is improved.
Owner:SUZHOU UNIV
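Step (5) above can be sketched directly: average per-frame class probabilities within each stream, average the two streams, and take the argmax. Equal stream weighting is an assumption:

```python
def fuse_streams(rgb_frame_probs, flow_frame_probs):
    # Each argument: per-frame class-probability vectors from one
    # stream's recurrent network. Average over frames within a stream,
    # average the two streams, then return the argmax class index.
    def mean_over_frames(frames):
        n = len(frames)
        return [sum(col) / n for col in zip(*frames)]

    rgb = mean_over_frames(rgb_frame_probs)
    flow = mean_over_frames(flow_frame_probs)
    fused = [(r + f) / 2 for r, f in zip(rgb, flow)]
    return fused.index(max(fused))
```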

Layered characteristic analysis-based method and apparatus for on-line identification of TCP and UDP flows

The invention relates to a method and apparatus for on-line identification of TCP and UDP flows based on layered characteristic analysis. The method comprises the following steps: in an off-line phase, the common port numbers of the first-layer service types to be identified and the characteristic fields of the second-layer service data flows to be identified are determined through protocol analysis, and a port number and characteristic field database is constructed; meanwhile, a third-layer Bayesian decision tree model is obtained by training with a machine learning method; in the on-line classification phase, service type identification of a flow is completed using the characteristic database and the learned model. In addition, the apparatus provided in the invention comprises a data flow separating module, a characteristic extraction module, a characteristic storage module, a characteristic matching module, an attribute extraction module, a model construction and classification module, and a classification result display module. According to embodiments of the invention, various application-layer services based on TCP and UDP are accurately identified, and the identification process is simple and highly efficient; the method and apparatus are therefore suitable for hardware implementation and applicable to equipment and systems requiring on-line flow identification in high-speed backbone and access networks.
Owner:BEIJING UNIV OF POSTS & TELECOMM
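The three-layer decision cascade above can be sketched as: port lookup first, payload signature match second, statistical model last. The flow representation and the callable standing in for the Bayesian decision tree are assumptions:

```python
def classify_flow(flow, port_map, signatures, model):
    # Layer 1: well-known port lookup.
    if flow["port"] in port_map:
        return port_map[flow["port"]]
    # Layer 2: payload characteristic-field (signature) match.
    for service, sig in signatures.items():
        if sig in flow["payload"]:
            return service
    # Layer 3: fall back to the trained statistical classifier
    # (a Bayesian decision tree in the patent; any callable here).
    return model(flow)
```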

Remote sensing image classification method based on multi-feature fusion

The invention discloses a remote sensing image classification method based on multi-feature fusion, which includes the following steps: A, respectively extracting visual bag-of-words features, color histogram features and textural features of the training set remote sensing images; B, respectively using the visual bag-of-words features, the color histogram features and the textural features of the training set remote sensing images to perform support vector machine training, obtaining three different support vector machine classifiers; and C, respectively extracting visual bag-of-words features, color histogram features and textural features of unknown test samples, using the corresponding support vector machine classifiers obtained in step B to perform category forecasting, obtaining three groups of category forecasting results, and synthesizing the three groups of results by a weighted synthesis method to obtain the final classification result. The method further adopts an improved bag-of-words model for visual bag-of-words feature extraction. Compared with the prior art, the method obtains more accurate classification results.
Owner:HOHAI UNIV
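Step C's weighted synthesis of the three classifiers' outputs can be sketched as a weighted sum of per-class probabilities. The weight values here are illustrative assumptions:

```python
def weighted_fusion(predictions, weights):
    # predictions: one {class: probability} dict per feature-specific
    # SVM (bag of visual words, color histogram, texture); weights are
    # the per-classifier fusion weights.
    scores = {}
    for probs, w in zip(predictions, weights):
        for cls, p in probs.items():
            scores[cls] = scores.get(cls, 0.0) + w * p
    return max(scores, key=scores.get)
```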

Urban rail transit panoramic monitoring video fault detection method based on deep learning

The invention provides an urban rail transit panoramic monitoring video fault detection method based on deep learning. The method comprises a data set construction process, a model training and generation process, and an image classification and recognition process. The data set construction process processes abnormal-sharpness videos, abnormal-color-cast videos and normal videos from urban rail transit panoramic monitoring, and divides them into a training set and a test set. The model training and generation process comprises model training and model testing: training builds a fault video image recognition model based on a convolutional neural network comprising several convolutional layers and several fully connected layers; testing calculates the test accuracy, and if expectation is not met, the fault video image recognition model is optimized. In the image classification and recognition process, a single frame image to be recognized is input into the model, and the fault video image recognition model outputs an image classification result, completing fault image detection for the urban rail transit panoramic monitoring video.
Owner:HUAZHONG NORMAL UNIV +1

Unsupervised domain adaptive image classification method based on conditional generative adversarial network

The invention discloses an unsupervised domain-adaptive image classification method based on a conditional generative adversarial network. The method comprises the following steps: preprocessing an image data set; constructing a cross-domain conditional adversarial image generation network by adopting a cycle-consistent generative adversarial network and applying a constraint loss function; training the constructed conditional adversarial image generation network with the preprocessed image data set; and testing the target images to be classified with the trained network model to obtain the final classification result. The method adopts a conditional adversarial cross-domain image migration algorithm to convert source-domain image samples and target-domain image samples into each other, and applies a consistency loss constraint on the classification predictions of target images before and after conversion. Meanwhile, discriminative classification labels are used in conditional adversarial learning to align the joint distributions of source-domain and target-domain image labels, so that the labeled source-domain images can be used to train on target-domain images, achieving classification of the target images and improving classification precision.
Owner:NANJING NORMAL UNIVERSITY
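The consistency constraint above penalizes disagreement between the class probabilities predicted for a target image before and after cross-domain translation. A minimal sketch; the L1 form of the loss is an assumption:

```python
def consistency_loss(probs_before, probs_after):
    # L1 distance between the class-probability vectors predicted for
    # an image before and after cross-domain translation; training
    # drives this toward zero so predictions stay consistent.
    return sum(abs(a - b) for a, b in zip(probs_before, probs_after))
```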

Vehicle license plate recognition method based on video

The invention provides a video-based vehicle license plate recognition method. Taking vehicle video actually captured by a camera as input, moving vehicles are detected and segmented out; the accurate position of the license plate area is determined by vertical edge extraction on the preprocessed target vehicle image, and the license plate image is segmented out. Color correction, binarization and inclination correction are applied to the license plate image; each character in the located license plate area is segmented as an independent character; feature extraction is performed on each character, and the obtained feature vectors are classified by a classifier trained in advance, the classification result serving as the preliminary recognition result. Stained license plate characters then undergo secondary recognition with a template matching algorithm imitating the visual characteristics of human eyes, yielding the final license plate recognition result. The method reduces hardware cost, improves the management efficiency of intelligent transportation systems, and offers strong anti-jamming performance and robustness, high recognition efficiency and high recognition speed.
Owner:XIAN TONGRUI NEW MATERIAL DEV
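The two-pass recognition above (trained classifier first, template matching for doubtful characters) can be sketched as follows. The confidence threshold and pixel-difference distance are illustrative assumptions:

```python
def template_distance(img, template):
    # Sum of absolute pixel differences between two flattened images.
    return sum(abs(a - b) for a, b in zip(img, template))

def recognize_plate(char_images, classifier, templates, threshold=0.8):
    # classifier: image -> (label, confidence). Characters the trained
    # classifier is unsure about (e.g. stained ones) fall back to
    # template matching against per-character templates.
    chars = []
    for img in char_images:
        label, conf = classifier(img)
        if conf < threshold:
            label = min(templates, key=lambda t: template_distance(img, templates[t]))
        chars.append(label)
    return "".join(chars)
```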

Graph-based semi-supervised hyperspectral remote sensing image classification method

The invention relates to a graph-based semi-supervised hyperspectral remote sensing image classification method. The method comprises the following steps: extracting the features of an input image; randomly sampling M points from the unlabeled samples and, together with L labeled points, constructing a set S, with the remaining points forming a set R; calculating the K nearest neighbors in S of the points in S and R using a class probability distance; constructing two sparse matrices W_SS and W_SR by a linear representation method; and obtaining a label function F*_S by label propagation, then calculating the label prediction function F*_R of the sample points in R to determine the labels of all pixels of the input image. Because the neighbors of the sample points are computed with the class probability distance and accurate classification of hyperspectral images is achieved through semi-supervised propagation, the computational complexity is greatly reduced. In addition, the problem that graph-based semi-supervised learning algorithms cannot handle large-scale data is solved, computational efficiency improves by a factor of at least 20 to 50 per unit time, and the visual quality of the classification result maps is good.
Owner:XIDIAN UNIV
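Predicting a label for an unlabeled point in R from its K neighbors in S can be sketched as a weighted vote over the neighbors' propagated labels. This is a simplified stand-in for computing F*_R from F*_S via W_SR; the dict-of-weights representation is an assumption:

```python
def predict_label(neighbor_weights, labels_S):
    # neighbor_weights: {index in S: weight} for one point in R, i.e.
    # one row of the sparse matrix W_SR restricted to its K neighbors.
    # labels_S: the label of each point in S after label propagation.
    # The class with the largest total weight wins.
    scores = {}
    for idx, w in neighbor_weights.items():
        lbl = labels_S[idx]
        scores[lbl] = scores.get(lbl, 0.0) + w
    return max(scores, key=scores.get)
```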