6229 results about "Classification result" patented technology

Small sample and zero sample image classification method based on metric learning and meta-learning

The invention relates to the fields of computer vision recognition and transfer learning, and provides a small sample and zero sample image classification method based on metric learning and meta-learning, which comprises the following steps: constructing a training data set and a target task data set; selecting a support set and a test set from the training data set; inputting the samples of the test set and the support set into a feature extraction network to obtain feature vectors; passing the feature vectors of the test set and the support set through a feature attention module and a distance measurement module in turn, calculating the category similarity between the test set samples and the support set samples, and updating the parameters of each module with a loss function; repeating the above steps until the network parameters of all modules converge, thereby completing the training of the modules; and passing the to-be-tested picture and the training pictures in the target task data set through the feature extraction network, the feature attention module and the distance measurement module in turn, and outputting the category label with the highest category similarity to obtain the classification result of the to-be-tested picture.
Owner:SUN YAT SEN UNIV
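
To make the episodic metric-learning step above concrete, the following is a minimal PyTorch sketch of a prototype-style episode with a feature attention module and a distance measurement module. The module structure, the 5-way episode layout, and the use of a negative squared Euclidean distance as the similarity are assumptions for illustration, not details taken from the patent.

```python
# Minimal sketch of one episodic training step (assumptions: prototype-style metric,
# channel-gating attention, negative squared Euclidean distance as similarity).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAttention(nn.Module):
    """Re-weights feature channels before the distance measurement module."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

def episode_step(backbone, attention, support_x, support_y, query_x, query_y, n_way):
    # Feature vectors for support and test (query) samples.
    s_feat = attention(backbone(support_x))          # [n_support, dim]
    q_feat = attention(backbone(query_x))            # [n_query, dim]
    # Class prototypes: mean support feature per category.
    protos = torch.stack([s_feat[support_y == c].mean(0) for c in range(n_way)])
    # Distance measurement module: negative squared Euclidean distance as similarity.
    logits = -torch.cdist(q_feat, protos) ** 2       # [n_query, n_way]
    loss = F.cross_entropy(logits, query_y)          # used to update all modules
    return loss, logits.argmax(dim=1)                # predicted category labels
```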

Industrial character identification method based on a convolutional neural network

The invention provides an industrial character identification method based on a convolutional neural network. The method comprises the steps of establishing character data sets, carrying out data augmentation and preprocessing on the character data sets, and establishing a CNN (Convolutional Neural Network) ensemble model containing three different individual classifiers; training the model in two stages, where the first stage is offline training, which yields an offline training model, and the second stage is online training, in which the offline training model is used for initialization and a dedicated production-line character data set is trained to obtain an online training model; carrying out preprocessing, character positioning and single-character image segmentation on a target image; feeding the segmented character images to the trained online training model to obtain the probability values with which the three classifiers in the CNN ensemble model assign each single character image to each class; and making the final decision by voting, thereby obtaining the classification result of the test data. According to the method, characters on different production lines can be identified rapidly and efficiently.
Owner:WU XIAOJUN
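
The sketch below illustrates the two-stage training and the voting-based fusion described above, assuming three PyTorch CNN classifiers that share the same label set; the fine-tuning schedule and the use of soft (probability-averaging) voting are assumptions for illustration.

```python
# Hypothetical sketch of the online fine-tuning stage and probability-voting fusion.
import torch
import torch.nn.functional as F

def fine_tune_online(model, offline_state_dict, line_loader, optimizer, epochs=5):
    """Online stage: initialise from the offline model, then train on the
    production-line character data set."""
    model.load_state_dict(offline_state_dict)
    model.train()
    for _ in range(epochs):
        for images, labels in line_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

def vote(classifiers, char_images):
    """Soft voting: average the class-probability vectors of the three CNNs."""
    probs = [F.softmax(clf(char_images), dim=1) for clf in classifiers]
    return torch.stack(probs).mean(0).argmax(dim=1)   # final class decision per image
```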

Bidirectional long short-term memory unit-based behavior identification method for video

The invention discloses a bidirectional long short-term memory unit-based behavior identification method for video. The method comprises the steps of (1) inputting a video sequence and extracting an RGB (Red, Green and Blue) frame sequence and optical flow images from it; (2) separately training a deep convolutional network on the RGB images and a deep convolutional network on the optical flow images; (3) extracting multi-layer features from the networks, at least the features of the third convolutional layer, the fifth convolutional layer and the seventh fully connected layer, and pooling the convolutional-layer features; (4) training a recurrent neural network built from bidirectional long short-term memory units to obtain a probability matrix for each frame of the video; and (5) averaging the probability matrices, fusing the probability matrices of the optical flow frames and the RGB frames, and taking the category with the maximum probability as the final classification result, thereby realizing behavior identification. According to the method, conventional hand-crafted features are replaced with multi-layer deep learning features; the deep features of different layers represent different information, and combining multi-layer features improves classification accuracy; and temporal information is captured by the bidirectional long short-term memory units, so that rich time-domain structural information is obtained and the behavior identification effect is improved.
Owner:SUZHOU UNIV
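
As a rough illustration of steps (4)-(5), the sketch below runs a bidirectional LSTM over per-frame features, averages the per-frame probabilities, and fuses the RGB and optical-flow streams. The feature dimensions, hidden size, and the equal fusion weight are assumptions.

```python
# Sketch: BiLSTM head over frame features, frame averaging, and two-stream fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMHead(nn.Module):
    def __init__(self, feat_dim, hidden, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, frame_feats):                  # [batch, T, feat_dim]
        out, _ = self.lstm(frame_feats)
        return F.softmax(self.fc(out), dim=-1)       # per-frame probability matrix

def fuse_and_classify(rgb_head, flow_head, rgb_feats, flow_feats, w=0.5):
    p_rgb = rgb_head(rgb_feats).mean(dim=1)          # average probabilities over frames
    p_flow = flow_head(flow_feats).mean(dim=1)
    fused = w * p_rgb + (1 - w) * p_flow             # fuse optical-flow and RGB streams
    return fused.argmax(dim=1)                       # category with maximum probability
```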

Remote sensing image classification method based on multi-feature fusion

The invention discloses a remote sensing image classification method based on multi-feature fusion, which includes the following steps: A, respectively extracting visual bag-of-words features, color histogram features and texture features of the training set remote sensing images; B, using the visual bag-of-words features, the color histogram features and the texture features of the training set remote sensing images respectively to perform support vector machine training, obtaining three different support vector machine classifiers; and C, respectively extracting the visual bag-of-words features, color histogram features and texture features of unknown test samples, using the corresponding support vector machine classifiers obtained in step B to perform category prediction, obtaining three groups of category prediction results, and combining the three groups of category prediction results by weighted synthesis to obtain the final classification result. The method further adopts an improved bag-of-words model for visual bag-of-words feature extraction. Compared with the prior art, the remote sensing image classification method based on multi-feature fusion obtains more accurate classification results.
Owner:HOHAI UNIV
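
A minimal sketch of steps B-C using scikit-learn SVMs follows: one classifier per feature type and a weighted combination of their class-probability predictions. The kernel choice and the fusion weights are assumptions, not values from the patent.

```python
# Sketch: per-feature SVM training and weighted fusion of category predictions.
import numpy as np
from sklearn.svm import SVC

def train_classifiers(bow_feats, color_feats, texture_feats, labels):
    """One SVM per feature type (bag-of-words, color histogram, texture)."""
    return [SVC(kernel="rbf", probability=True).fit(feats, labels)
            for feats in (bow_feats, color_feats, texture_feats)]

def predict_fused(clfs, bow_x, color_x, texture_x, weights=(0.5, 0.25, 0.25)):
    # Weighted synthesis of the three groups of category prediction results.
    probs = [w * clf.predict_proba(x)
             for clf, x, w in zip(clfs, (bow_x, color_x, texture_x), weights)]
    return np.sum(probs, axis=0).argmax(axis=1)      # final classification result
```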

Unsupervised domain adaptive image classification method based on conditional generative adversarial network

The invention discloses an unsupervised domain adaptive image classification method based on a conditional generative adversarial network. The method comprises the following steps: preprocessing an image data set; constructing a cross-domain conditional adversarial image generation network by adopting a cycle-consistent generative adversarial network and applying constraint loss functions; training the constructed conditional adversarial image generation network with the preprocessed image data set; and testing the to-be-classified target images with the trained network model to obtain the final classification result. According to the method, a conditional adversarial cross-domain image migration algorithm is adopted to convert source domain image samples and target domain image samples into each other, and a consistency loss function constraint is applied to the classification predictions of target images before and after conversion. Meanwhile, discriminative classification labels are used in conditional adversarial learning to align the joint distributions of source domain image labels and target domain image labels, so that the labeled source domain images can be used to train a classifier for the target domain images, classification of the target images is achieved, and classification precision is improved.
Owner:NANJING NORMAL UNIVERSITY
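
The following sketch only illustrates how the loss terms named above might be composed for the source-to-target direction: a cycle-consistency term, a classification-consistency term between an image and its cross-domain translation, and a conditional adversarial term in which the discriminator also sees the class predictions. The module interfaces and loss weights are assumptions.

```python
# Illustrative composition of the generator-side loss terms (assumed interfaces:
# G_st/G_ts are the two translators, D_t a conditional discriminator, C the classifier).
import torch
import torch.nn.functional as F

def generator_loss(G_st, G_ts, D_t, C, x_s, y_s, lam_cyc=10.0, lam_cls=1.0):
    fake_t = G_st(x_s)                     # source -> target translation
    rec_s = G_ts(fake_t)                   # back-translation for cycle consistency
    loss_cyc = F.l1_loss(rec_s, x_s)
    # Consistent class prediction before and after conversion (labels known on source).
    loss_cls = F.cross_entropy(C(x_s), y_s) + F.cross_entropy(C(fake_t), y_s)
    # Conditional adversarial term: discriminator conditioned on the class logits.
    d_out = D_t(fake_t, C(fake_t))
    loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return loss_adv + lam_cyc * loss_cyc + lam_cls * loss_cls
```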

Vehicle license plate recognition method based on video

The invention provides a video-based vehicle license plate recognition method. Taking vehicle video captured by a camera as input, moving vehicles are detected and segmented out; the accurate position of the license plate region is determined by vertical edge extraction on the preprocessed target vehicle image, and the license plate image is segmented out; color correction, binarization and inclination correction are performed on the license plate image; each character in the located license plate region is segmented as an independent character, and features are extracted from each character; the obtained feature vectors are classified by a pre-trained classifier, and the classification result serves as the preliminary recognition result; secondary recognition is then performed on stained license plate characters with a template matching algorithm that imitates the visual characteristics of human eyes, giving the final license plate recognition result. The method reduces hardware cost, improves the management efficiency of an intelligent transportation system, and offers strong anti-jamming performance and robustness, high recognition efficiency and high recognition speed.
Owner:XIAN TONGRUI NEW MATERIAL DEV
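
To illustrate the two-stage character decision, the sketch below keeps the classifier's output when it is confident and otherwise falls back to a normalized cross-correlation template match. The confidence threshold, the `classifier(img)` interface returning a label and a confidence, and the template dictionary are assumptions.

```python
# Sketch: classifier result as preliminary recognition, template matching as the
# secondary recognition for stained characters.
import cv2

def recognise_char(char_img, classifier, templates, conf_threshold=0.8):
    """classifier(img) is assumed to return (label, confidence);
    templates maps a character label to a binarised template image."""
    label, conf = classifier(char_img)
    if conf >= conf_threshold:
        return label                                  # preliminary recognition result
    # Secondary recognition of stained characters by template matching.
    scores = {c: cv2.matchTemplate(char_img, tpl, cv2.TM_CCOEFF_NORMED).max()
              for c, tpl in templates.items()}
    return max(scores, key=scores.get)                # final recognition result
```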

Graph-based semi-supervised hyperspectral remote sensing image classification method

The invention relates to a graph-based semi-supervised hyperspectral remote sensing image classification method. The method comprises the following steps: extracting the features of an input image; randomly sampling M points from the unlabeled samples and, together with L labeled points, constructing a set S, with the remaining points forming a set R; calculating the K nearest neighbors in S of the points in the sets S and R using a class probability distance; constructing two sparse matrices W_SS and W_SR by a linear representation method; using label propagation to obtain a label function F*_S, and calculating the label prediction function F*_R of the sample points in the set R to determine the labels of all the pixel points of the input image. According to the method, the nearest neighbors of the sample points are calculated with the class probability distance, and accurate classification of hyperspectral images is achieved through semi-supervised transduction, so the computational complexity is greatly reduced; in addition, the problem that graph-based semi-supervised learning algorithms cannot handle large-scale data is solved, the computational efficiency is improved by a factor of at least 20 to 50, and the visual quality of the classification result maps is good.
Owner:XIDIAN UNIV
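
A compact sketch of the propagation step follows: given the two affinity matrices W_SS and W_SR built by linear representation, labels are propagated within S in closed form and then extended to R. Dense NumPy arrays and the propagation parameter alpha are assumptions made for brevity.

```python
# Sketch: closed-form label propagation on S, then extension of labels to R.
import numpy as np

def propagate_labels(W_SS, W_SR, Y_S, alpha=0.99):
    """W_SS: (|S|,|S|) affinities inside S; W_SR: (|S|,|R|) affinities from S to R;
    Y_S: (|S|, n_classes) one-hot labels, with zero rows for unlabeled points in S."""
    n = W_SS.shape[0]
    # F*_S = (I - alpha * W_SS)^{-1} Y_S
    F_S = np.linalg.solve(np.eye(n) - alpha * W_SS, Y_S)
    # Label prediction function F*_R through the S-to-R representation weights.
    F_R = W_SR.T @ F_S
    return F_S.argmax(axis=1), F_R.argmax(axis=1)     # labels for all pixel points
```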

Deep learning-based vulnerability detection method and system

The invention discloses a deep learning-based vulnerability detection method and system. The method comprises an offline vulnerability classifier training part and an online vulnerability detection part. The offline vulnerability classifier training part comprises the following steps: extracting library/API function call candidate code sections from training programs; adding type labels to the candidate code sections; converting the candidate code sections into vectors; inputting the vectors into a neural network model for training; and finally outputting a vulnerability classifier. The online vulnerability detection part comprises the following steps: extracting library/API function call candidate code sections from the target program; converting the candidate code sections into vectors; classifying the candidate code sections with the trained vulnerability classifier; and finally outputting the code sections classified as containing vulnerabilities. According to the method and system, vulnerability features for library/API function calls can be generated automatically, without depending on expert knowledge and without being restricted to particular vulnerability types, so that the false positive rate and false negative rate of vulnerability detection in target programs can be remarkably reduced and vulnerability locations can be given.
Owner:HUAZHONG UNIV OF SCI & TECH
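
A hypothetical sketch of the online detection stage is given below: candidate code sections around library/API calls are tokenised, mapped to fixed-length vectors, and passed to the trained classifier. The tokeniser, vocabulary, sequence length, and the label convention (class 1 = vulnerable) are assumptions.

```python
# Sketch: vectorise candidate code sections and classify them with the trained model.
import torch

def detect(target_slices, tokenizer, vocab, classifier, max_len=100):
    vulnerable = []
    for slice_text in target_slices:
        ids = [vocab.get(tok, 0) for tok in tokenizer(slice_text)][:max_len]
        ids += [0] * (max_len - len(ids))              # pad to a fixed-length vector
        x = torch.tensor([ids])
        with torch.no_grad():
            is_vul = classifier(x).argmax(dim=1).item() == 1
        if is_vul:
            vulnerable.append(slice_text)              # code section flagged as vulnerable
    return vulnerable
```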

Electric power user profile establishment and analysis method based on big data technology

The invention discloses an electric power user profile establishment and analysis method based on big data technology. The method comprises the following steps: the historical electricity consumption information, basic attributes, payment information and appeal information of electric power users are acquired; the classification category set of the user profiles is determined, the influence factor set of the classification result is determined, and the mapping relationship between the influence factor set and the category set is determined; the acquired data are randomly split, with one part taken as training samples and the rest taken as prediction samples; normalization, discretization and attribute reduction are carried out on the training samples and the prediction samples, and the corrected influence factor set is determined; the training samples are used for training with ten-fold cross validation as the test mode, an electric power user profile prediction model based on a naive Bayes classifier is established, and data classification mining analysis is carried out on the prediction samples with the prediction model to obtain the electric power user profiles. The method facilitates electric power quantity prediction and management.
Owner:STATE GRID SHANDONG ELECTRIC POWER CO MARKETING SERVICE CENTER (METERING CENTER) +3
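
The sketch below illustrates the modelling step with scikit-learn: a naive Bayes classifier trained on the discretised influence factors and validated by ten-fold cross-validation. The specific estimator (CategoricalNB for integer-coded discretised features) and the array layout are assumptions.

```python
# Sketch: naive Bayes user-profile model with ten-fold cross-validation as test mode.
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

def build_profile_model(X_train, y_train, X_predict):
    model = CategoricalNB()
    # Ten-fold cross-validation on the training samples.
    scores = cross_val_score(model, X_train, y_train, cv=10)
    model.fit(X_train, y_train)
    profiles = model.predict(X_predict)                # predicted user-profile categories
    return model, scores.mean(), profiles
```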

Color image three-dimensional reconstruction method based on stereo matching

The invention relates to a color image three-dimensional reconstruction method based on stereo matching, comprising the following steps: (1) simultaneously capturing an image from suitable angles with each of two color cameras; (2) calibrating the internal and external parameter matrices of the two cameras; (3) carrying out epipolar rectification and image transformation according to the calibration data; (4) computing the matching cost for each pixel in the two rectified images with an adaptive weight window algorithm and obtaining an initial disparity map; (5) marking the reliability coefficient of the initial pixel matching results by matching-cost reliability detection and left-right consistency checking; (6) performing color segmentation on the images with the Mean-Shift algorithm; (7) carrying out global optimization with a selective belief propagation algorithm on the basis of the color segmentation and pixel reliability classification results to obtain the final disparity map; and (8) computing the three-dimensional coordinates of the actual object points from the calibration data and the matching relations, thereby reconstructing the three-dimensional point cloud of the object.
Owner:NANTONG JIEWANJIA TEXTILE CO LTD +1
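
As a simple illustration of step (8), the sketch below recovers 3-D point coordinates from the final disparity map using rectified stereo geometry, where the focal length f, baseline B and principal point (cx, cy) come from the calibration data; the variable names and the handling of invalid pixels are assumptions.

```python
# Sketch: triangulating a 3-D point cloud from a rectified disparity map.
import numpy as np

def disparity_to_points(disparity, f, B, cx, cy):
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # skip unmatched / unreliable pixels
    Z = f * B / disparity[valid]               # depth from disparity
    X = (u[valid] - cx) * Z / f
    Y = (v[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)         # reconstructed 3-D point cloud
```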