
955 results about "How to improve classification effect" patented technology

Zero-sample image classification method based on combination of variational autoencoder and adversarial network

Active · CN108875818A · Implement classification · Make up for the problem of missing training samples of unknown categories · Character and pattern recognition · Physical realisation · Classification methods · Sample image
The invention discloses a zero-sample image classification method based on the combination of a variational autoencoder and an adversarial network. Samples of known categories are input during model training, with the category mapping of the training-set samples serving as a guiding condition. The network's parameters are optimized by back-propagating five loss functions: reconstruction loss, generation loss, discrimination loss, divergence loss and classification loss. Pseudo-samples of the corresponding unknown categories are then generated under the guidance of the unknown categories' category mapping, and a classifier trained on these pseudo-samples is tested on samples of the unknown categories. Because high-quality samples beneficial to image classification are generated under the guidance of the category mapping, the lack of training samples for unknown categories in the zero-sample scene is overcome, and zero-sample learning is converted into supervised learning as in traditional machine learning. The classification accuracy of traditional zero-sample learning is thereby improved, the accuracy in generalized zero-sample learning is obviously improved, and an approach of efficiently generating samples to improve classification accuracy is provided for zero-sample learning.
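The pseudo-sample idea above can be illustrated with a minimal numpy sketch. This is not the patented VAE/GAN: the "generator" here is just the class embedding plus Gaussian noise (an assumption standing in for the trained generation model), and a nearest-centroid classifier trained only on pseudo-samples labels unseen-class test points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic embeddings (category mappings) for two unseen classes.
class_embeddings = {"zebra": np.array([1.0, 0.0]), "whale": np.array([0.0, 1.0])}

def generate_pseudo_samples(embedding, n=200, scale=0.1):
    """Stand-in generator: pseudo-samples = embedding + Gaussian noise.
    (The patent trains a VAE/GAN for this; the noise model is an assumption.)"""
    return embedding + scale * rng.normal(size=(n, embedding.size))

# Train a nearest-centroid classifier on pseudo-samples only.
centroids = {c: generate_pseudo_samples(e).mean(axis=0)
             for c, e in class_embeddings.items()}

def classify(x):
    """Assign x to the class with the nearest pseudo-sample centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A test point near the "zebra" embedding is labelled zebra.
label = classify(np.array([0.9, 0.1]))
```

Since no real unseen-class sample was ever used for training, this mirrors the conversion of zero-sample learning into ordinary supervised learning.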
Owner:XI AN JIAOTONG UNIV

Valueless image removing method based on deep convolutional neural networks

The invention relates to a valueless image removing method based on deep convolutional neural networks. The method comprises the steps of: firstly, after performing whitening preprocessing on an image sample set, pre-training a sparse autoencoder to obtain initialization values for the deep convolutional network parameters; secondly, building a multi-layer deep convolutional neural network and optimizing the network parameters layer by layer; and finally, classifying the multi-class problem with the resulting multi-classification softmax model and thereby removing valueless images. Because the sparse autoencoder learns image features automatically, the correct classification rate of the method is increased. The multi-layer deep convolutional neural network is built on the features learned automatically by the sparse autoencoder, the network parameters are optimized layer by layer, each layer's learned features are combinations of the previous layer's features, and the multi-classification softmax model is trained to judge images, so that the removal of valueless images is realized.
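The whitening preprocessing mentioned above can be sketched as ZCA whitening (an assumption — the abstract does not name the whitening variant): after the transform, the sample covariance is close to the identity matrix.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA-whiten the rows of X: zero mean, near-identity covariance."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    U, S, _ = np.linalg.svd(cov)
    # Rotate, rescale each principal direction, rotate back.
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return X @ W

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated toy data
Xw = zca_whiten(X)
cov_w = Xw.T @ Xw / (len(Xw) - 1)  # close to the 4x4 identity
```

Unlike PCA whitening, ZCA rotates back into the original coordinate system, which keeps whitened image patches visually similar to the inputs.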
Owner:NORTHWESTERN POLYTECHNICAL UNIV

System and method for smiling face recognition in video sequence

The invention discloses a system and a method for smiling face recognition in a video sequence. The system comprises a pre-processing module, a feature extraction module and a classification recognition module. In the pre-processing module, video collection, face detection and mouth detection yield a face image region from which optical flow features or PHOG features can be extracted directly. In the feature extraction module, an Optical-PHOG algorithm extracts smiling face features, retaining the information that best distinguishes smiling faces. In the classification recognition module, a random forest classifier learns the decision standard between the smiling-face and non-smiling-face classes from the feature vectors of a large number of training samples produced by the feature extraction module. The feature vector of an image to be recognized is then compared or matched against this classifier, so the image is assigned to the smiling-face or non-smiling-face class and classification recognition is achieved. The system and method thereby improve the accuracy of smiling face recognition.
Owner:WINGTECH COMM

Magnetic resonance image feature extraction and classification method based on deep learning

The invention provides a magnetic resonance image feature extraction and classification method based on deep learning, comprising: S1, taking a magnetic resonance image and performing preprocessing and feature mapping operations on it; S2, constructing a multilayer convolutional neural network comprising an input layer, a plurality of convolutional layers, at least one pooling/downsampling layer and a fully connected layer, wherein the convolutional and pooling/downsampling layers alternate between the input layer and the fully connected layer, and there is one more convolutional layer than pooling/downsampling layers; S3, employing the multilayer convolutional neural network constructed in step S2 to extract features of the magnetic resonance image; and S4, inputting the feature vectors output in step S3 into a Softmax classifier and determining the disease attribute of the magnetic resonance image. Through the nonlinear mapping of the multilayer convolutional neural network, the method automatically obtains highly distinguishable features and feature combinations, and the network structure can be continuously optimized to obtain better classification effects.
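The stated layer arrangement (convolution and pooling/downsampling alternating, with exactly one more convolutional layer than pooling layers) can be sketched as a layer list; the depth chosen here is an arbitrary assumption, not the patent's.

```python
def build_layer_stack(n_pool):
    """Alternate conv and pool between input and FC, ending on a conv layer,
    so conv layers always outnumber pooling layers by exactly one."""
    layers = ["input"]
    for _ in range(n_pool):
        layers += ["conv", "pool"]
    layers += ["conv", "fc", "softmax"]  # the extra conv before the FC head
    return layers

stack = build_layer_stack(n_pool=2)
# stack alternates conv/pool and satisfies the one-more-conv constraint
```

Ending on a convolutional layer before the fully connected head preserves one last round of local feature combination before global classification.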
Owner:WEST CHINA HOSPITAL SICHUAN UNIV

Feature extraction and state recognition of one-dimensional physiological signals based on deep learning

The present invention discloses a feature extraction and state recognition method for one-dimensional physiological signals based on deep learning. The method establishes a deep-learning-based feature extraction and state recognition model, a deep belief network (DBN), for the one-dimensional physiological signal. The DBN adopts a "pre-training + fine-tuning" training process: in the pre-training stage, the first RBM is trained, its trained hidden nodes serve as the input of the second RBM, the second RBM is trained, and so forth; after all RBMs are trained, the BP algorithm fine-tunes the network. Finally, the eigenvector output by the DBN is input into a Softmax classifier, which determines the state of the individual that produced the one-dimensional physiological signal. The method effectively addresses the low classification precision of conventional one-dimensional physiological signal classification, in which feature inputs must be selected manually. Through the nonlinear mapping of the deep belief network, highly separable features and feature combinations are obtained automatically for classification, and a better classification effect can be obtained by continuing to optimize the network structure.
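The greedy "pre-training" schedule above can be sketched as a bottom-up loop in which each RBM is trained on the hidden activations of the one below it. A stub stands in for real contrastive-divergence training, so only the data flow and shapes are illustrated, not the patented training itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_stub(data, n_hidden):
    """Stand-in for contrastive-divergence RBM training: returns a random
    weight matrix of the right shape (the real update rule is omitted)."""
    return rng.normal(scale=0.1, size=(data.shape[1], n_hidden))

def greedy_pretrain(data, layer_sizes):
    """Train RBMs bottom-up; each RBM's hidden activations feed the next."""
    weights, h = [], data
    for n_hidden in layer_sizes:
        W = train_rbm_stub(h, n_hidden)
        weights.append(W)
        h = sigmoid(h @ W)  # output of this RBM is input to the next
    return weights, h

signals = rng.normal(size=(16, 128))            # 16 one-dimensional signals
weights, features = greedy_pretrain(signals, [64, 32, 10])
```

After this stage, the `features` output would be fine-tuned end-to-end with backpropagation and passed to a Softmax classifier, as the abstract describes.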
Owner:SICHUAN UNIV

Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding

Inactive · CN103020647A · Reduce the dimensionality of SIFT features · High simulation · Character and pattern recognition · Singular value decomposition · Data set
The invention discloses an image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding. The method includes the implementation steps: (1) extracting 512-dimension scale-invariant SIFT features from each image in a data set, with an 8-pixel step length over 32×32 pixel blocks; (2) applying a spatial max-pooling method to the SIFT features of each image block to obtain a 168-dimension vector y; (3) randomly selecting several blocks from all 32×32 image blocks in the data set and training a dictionary D with the K-singular value decomposition (K-SVD) method; (4) computing the sparse representation over the dictionary D for the vector y of every block in each image; (5) applying the method of step (2) to all sparse representations of each image to obtain a feature representation of the whole image; and (6) inputting the images' feature representations into a linear SVM (support vector machine) classifier to obtain the classification results. The method captures local image structure, removes the redundancy of low-level image features, and can be used for target identification.
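The spatial max pooling of step (2) can be sketched as follows: descriptors are binned into a spatial grid by their positions, each cell keeps the element-wise maximum, and the cells are concatenated. The grid size and descriptor dimension here are illustrative assumptions, not the patent's 168-dimension layout.

```python
import numpy as np

def spatial_max_pool(features, positions, grid=(2, 2), extent=(32, 32)):
    """Max-pool local descriptors over a spatial grid: for each cell, keep
    the element-wise maximum of the descriptors whose centres fall in it."""
    gy, gx = grid
    pooled = np.zeros((gy, gx, features.shape[1]))
    cell_h, cell_w = extent[0] / gy, extent[1] / gx
    for f, (y, x) in zip(features, positions):
        i = min(int(y // cell_h), gy - 1)
        j = min(int(x // cell_w), gx - 1)
        pooled[i, j] = np.maximum(pooled[i, j], f)
    return pooled.reshape(-1)  # concatenate cells into one vector

rng = np.random.default_rng(2)
feats = rng.random((50, 8))             # 50 local descriptors, 8-dim each
pos = rng.uniform(0, 32, size=(50, 2))  # their (y, x) centres in the block
v = spatial_max_pool(feats, pos)        # 2*2*8 = 32-dim pooled vector
```

Max pooling keeps the strongest response per dimension per cell, which gives some translation invariance within each cell.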
Owner:XIDIAN UNIV

Medical image synthesis and classification method based on a conditional multi-judgment generative adversarial network

The invention discloses a medical image synthesis and classification method based on a conditional multi-discriminator generative adversarial network. The method comprises the following steps: 1, segmenting the lesion area in a computed tomography (CT) image and extracting the lesion regions of interest (ROIs); 2, performing data preprocessing on the lesion ROIs extracted in step 1; 3, designing a conditional multi-discriminator generative adversarial network (CMDGAN for short) model architecture and training it with the images from step 2 to obtain a generation model; 4, performing synthetic data enhancement on the extracted lesion ROIs by using the generation model obtained in step 3; and 5, designing a multi-scale residual network (multiscale ResNet) and training it. The method can generate a high-quality synthetic medical image data set, and the classification network achieves relatively high classification accuracy on test images, so that better auxiliary diagnosis can be provided for medical workers.
Owner:JILIN UNIV

Multispectral remote sensing image terrain classification method based on deep and semi-supervised transfer learning

The invention discloses a multispectral remote sensing image terrain classification method based on deep and semi-supervised transfer learning. A training data set and kNN data are extracted according to the ground truth; the training data set is divided into two parts that are trained separately; a multispectral image to be classified is input, and two classification result images are obtained from the two CNN models; two kNN nearest-neighbour maps are constructed from the training samples; test data are extracted using the two classification result images and classified with the kNN nearest-neighbour algorithm; the classification result images are updated; the training samples and the kNN training samples for co-training are updated; and the two co-trained CNN networks are trained again. The trained models then classify the labelled points of the test data set, so the classes of some pixels in the test data set are obtained and compared with the true class labels. Introducing the k-nearest-neighbour algorithm and sample similarity prevents the co-training from drifting and improves classification accuracy when training samples are insufficient, so the method can be used for target recognition.
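The kNN step used above to cross-check the CNNs' predictions can be sketched as a plain k-nearest-neighbour majority vote (the CNNs and the co-training loop are omitted; the toy data are an assumption).

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

# Two toy spectral classes in a 2-D feature space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
label = knn_predict(X, y, np.array([0.95, 0.95]))  # near the class-1 cluster
```

In the co-training scheme, a pixel would only keep a CNN-assigned pseudo-label when this independent kNN vote agrees, which is what limits label drift.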
Owner:XIDIAN UNIV

Text feature extraction method based on categorical distribution probability

The invention discloses a text feature extraction method based on category distribution probability. The method extracts text feature words by estimating how differently each word of the text to be categorized is distributed across categories. Using each word's per-category word frequency probability, the mean square error of the word's probability distribution across the categories is computed, and a certain number of words with high mean square error values are extracted to form the final feature set. In practical application, the obtained feature set serves as the feature words of a text categorization task to build a vector space model, and a designated classifier is trained to obtain the final category model with which the text to be categorized is classified. The method measures the category distribution of words accurately in a probability-statistics manner and estimates the category value of each word by mean square error, so as to select text features accurately. For the text categorization task, the categorization effect on both balanced and unbalanced corpora is obviously improved.
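The selection criterion can be sketched directly: estimate each word's frequency-based probability per category, score it by the mean squared deviation of those probabilities from their cross-category mean, and keep the top-scoring words. The toy counts below are an assumption for illustration.

```python
import numpy as np

def category_mse_scores(class_word_counts):
    """class_word_counts: (n_classes, n_words) term counts per category.
    Returns one score per word: the mean squared deviation of its
    per-category probability from that probability's mean over categories."""
    counts = np.asarray(class_word_counts, dtype=float)
    probs = counts / counts.sum(axis=1, keepdims=True)  # P(word | category)
    return ((probs - probs.mean(axis=0)) ** 2).mean(axis=0)

# Toy corpus: words 0 and 2 are category-specific, word 1 is spread evenly.
counts = [[80, 10, 10],   # category A
          [10, 10, 80]]   # category B
scores = category_mse_scores(counts)
top = np.argsort(scores)[::-1]  # most discriminative words first
```

The evenly distributed word scores lowest, so it is dropped first — exactly the behaviour the feature set construction relies on.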
Owner:EAST CHINA NORMAL UNIV

Multiple-sparse-representation face recognition method for solving small sample size problem

Provided is a multiple-sparse-representation face recognition method for solving the small sample size problem. The method attacks the small sample size problem in face recognition in two ways: first, the given original training samples produce 'virtual samples' so as to increase the number of training samples; second, on the basis of the produced virtual samples, three nonlinear feature extraction methods, namely kernel principal component analysis, kernel discriminant analysis and kernel locality preserving projection, are adopted to extract features of the samples. Three feature modes are thus obtained, and a sparse-representation model is established for each feature mode, so three sparse-representation models are established for each sample; classification is finally performed according to the representation results. The method produces virtual faces through mirror symmetry, and then L1-norm-based multiple-sparse-representation models are established and classified. Compared with other classification methods, it offers good robustness and classification effect and is especially suitable for classification problems with high data dimensionality and few training samples.
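The virtual-sample generation by mirror symmetry mentioned above amounts to horizontally flipping each face image while keeping its identity label, doubling the training set:

```python
import numpy as np

def add_mirror_virtual_samples(images, labels):
    """Append a horizontally flipped copy of every training image,
    keeping its label, so the training set doubles in size."""
    flipped = [np.fliplr(img) for img in images]
    return images + flipped, labels + labels

# Toy 2x3 "face": flipping reverses each row.
face = np.array([[1, 2, 3],
                 [4, 5, 6]])
imgs, labs = add_mirror_virtual_samples([face], ["subject_1"])
```

This exploits the approximate left-right symmetry of faces: the flipped image is a plausible new view of the same subject at zero acquisition cost.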
Owner:EAST CHINA JIAOTONG UNIVERSITY

Fourier descriptor and gait energy image fusion feature-based gait identification method

The invention relates to a Fourier descriptor and gait energy image fusion feature-based gait identification method. The method comprises the steps of performing grayscale preprocessing on each single frame of image, updating the background in real time with a Gaussian mixture model, and obtaining the foreground through background subtraction; performing binarization and morphological processing on each frame, obtaining the minimum enclosing rectangle of the moving human body, normalizing all frames to the same height, and obtaining the gait cycle and 5 key frames from the cyclic variation of the rectangle's height-width ratio; extracting the low-frequency parts of the Fourier descriptors of the 5 key frames as feature I; centralizing all frames in the cycle to obtain the gait energy image and reducing its dimension through principal component analysis as feature II; and fusing features I and II and performing identification with a support vector machine. The method can judge whether a current human behavior is abnormal; the Gaussian mixture model models the background accurately and achieves relatively good real-time performance; and the fused feature is strongly representative and robust, so the abnormal gait identification rate can be effectively increased.
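The gait-cycle estimate from the bounding box's height-width ratio can be sketched by locating local maxima of that ratio; one cycle spans consecutive maxima. The synthetic ratio signal below is an assumption standing in for a real silhouette sequence.

```python
import numpy as np

def cycle_length_from_ratio(ratio):
    """Estimate the gait cycle (in frames) as the mean spacing between
    local maxima of the bounding-box height/width ratio."""
    peaks = [i for i in range(1, len(ratio) - 1)
             if ratio[i] > ratio[i - 1] and ratio[i] >= ratio[i + 1]]
    return float(np.mean(np.diff(peaks)))

# Synthetic height/width ratio oscillating with a 20-frame period:
# the ratio peaks when the legs are together (narrowest silhouette).
t = np.arange(100)
ratio = 2.0 + 0.3 * np.sin(2 * np.pi * t / 20)
period = cycle_length_from_ratio(ratio)
```

Real ratio signals are noisy, so smoothing the signal before peak picking would normally be needed; it is omitted here for brevity.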
Owner:WUHAN UNIV OF TECH

Traveling vehicle vision detection method combining laser point cloud data

Active · CN110175576A · Avoid the problem of difficult access to spatial geometric information · Realize 3D detection · Image enhancement · Image analysis · Histogram of oriented gradients · Vehicle detection
The invention discloses a traveling vehicle vision detection method combining laser point cloud data, belongs to the field of unmanned driving, and solves the problems of prior-art vehicle detection with a laser radar as its core. The method comprises the following steps: firstly, completing the joint calibration of the laser radar and a camera, and then performing time alignment; calculating an optical flow grey-scale map between two adjacent frames of the calibrated video data, and performing motion segmentation based on the optical flow grey-scale map to obtain a motion region, namely a candidate region; for each frame of image, searching the time-aligned point cloud data within the conical space corresponding to the candidate region for the points belonging to the vehicle, obtaining a three-dimensional bounding box of the moving object; extracting a histogram of oriented gradients feature from the candidate region of each frame of image; extracting features of the point cloud data within the three-dimensional bounding box; and, based on a genetic algorithm, carrying out feature-level fusion of the obtained features and classifying the fused motion regions to obtain the final traveling vehicle detection result. The method is used for visual detection of traveling vehicles.
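The conical-space search step (keeping the lidar points whose image projection falls inside a 2-D candidate region) can be sketched with a pinhole projection. The intrinsic matrix and the toy points are illustrative assumptions, and the joint extrinsic calibration is assumed already applied (points are in camera coordinates).

```python
import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def points_in_box(points_cam, box):
    """Keep 3-D points (already in camera coordinates) whose pinhole
    projection falls inside the 2-D candidate box (u0, v0, u1, v1)."""
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u0, v0, u1, v1 = box
    mask = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
            (uv[:, 1] >= v0) & (uv[:, 1] <= v1) &
            (points_cam[:, 2] > 0))          # discard points behind the camera
    return points_cam[mask]

pts = np.array([[0.0, 0.0, 10.0],    # projects to the image centre (320, 240)
                [5.0, 0.0, 10.0],    # projects to (570, 240), off to the right
                [0.0, 0.0, -5.0]])   # behind the camera
inside = points_in_box(pts, box=(300, 220, 340, 260))
```

The points that survive this filter are the ones fed to the 3-D bounding-box estimation for the moving object.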
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA