708 results for "Training phase" patented technology

Training Phases. Officer candidate training is divided into five distinct phases: In-processing (Phase I), Transition Training (Phase II), Adaptation (Phase III), Decision Making and Execution (Phase IV), and Out-processing (Phase V). Each of the various OCS programs will progress through the training phases.

Pedestrian re-identification method based on multi-attribute and multi-strategy fusion learning

The invention discloses a pedestrian re-identification method based on multi-attribute and multi-strategy fusion learning. In an offline training phase, the method first selects pedestrian attributes that are easy to judge and sufficiently discriminative, trains a pedestrian attribute classifier on an attribute dataset, and uses that classifier to label a pedestrian re-identification dataset with attribute tags; it then combines the attributes with pedestrian identity labels to train a re-identification model using a strategy that fuses pedestrian classification with a novel constrained comparison-verification objective. In an online query phase, the re-identification model extracts features from the query image and from each image in the database, the Euclidean distance between the query feature and each database feature is computed, and the image with the shortest distance is taken as the re-identification result. In terms of performance, the learned features are discriminative and yield high accuracy; in terms of efficiency, the method can quickly retrieve the pedestrian indicated by the query image from a pedestrian image database.
Owner:HUAZHONG UNIV OF SCI & TECH
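
To make the online query phase concrete, the sketch below shows the Euclidean-distance ranking the abstract describes, assuming the features have already been extracted by the trained re-identification model; the feature dimensions and the random stand-in data are illustrative assumptions, not details from the patent.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank database images by Euclidean distance to the query feature.

    query_feat:    (D,) feature of the query image, assumed precomputed
                   by the trained re-identification model
    gallery_feats: (N, D) features of the N database images
    Returns gallery indices, nearest first.
    """
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)  # (N,)
    return np.argsort(dists)  # index 0 is the re-identification result

# Hypothetical usage with random stand-in features:
query = np.random.rand(256).astype(np.float32)
gallery = np.random.rand(1000, 256).astype(np.float32)
best_match = rank_gallery(query, gallery)[0]
```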

Intelligence relation extraction method based on neural network and attention mechanism

CN107239446A (Active). Advantages: strong feature extraction ability; overcomes the heavy workload of manual feature extraction. Classifications: biological neural network models; natural language data processing. Keywords: network model; machine learning.
The invention discloses an intelligence relation extraction method based on a neural network and an attention mechanism, relating to recurrent neural networks, natural language processing, and intelligence analysis combined with an attention mechanism. The method addresses the heavy workload and poor generalization of existing intelligence analysis systems built on manually constructed knowledge bases. It comprises a training phase and an application phase. In the training phase, a user dictionary and training word vectors are first constructed, a training set is built from a historical intelligence database, the corpus is preprocessed, and the neural network model is trained. In the application phase, incoming information is acquired and preprocessed, and the intelligence relation extraction task is completed automatically; expanding the user dictionary and correcting judgments are also supported, and the neural network model can be incrementally retrained with the enlarged training set. The method can discover relationships between pieces of intelligence, providing a basis for integrating event context and for decision making, and has wide practical value.
Owner:CHINA UNIV OF MINING & TECH
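
As a rough illustration of the "recurrent network plus attention" architecture the abstract describes, the following PyTorch sketch classifies a token sequence into relation types. The layer sizes, the single-vector attention scheme, and all identifiers here are assumptions chosen for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn

class AttnRelationExtractor(nn.Module):
    """Bidirectional GRU with attention for relation classification
    (a generic sketch of the architecture described above)."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_relations=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)       # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, token_ids):                      # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))         # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        context = (weights * h).sum(dim=1)             # weighted sum of states
        return self.classifier(context)                # relation logits

# Hypothetical usage:
model = AttnRelationExtractor(vocab_size=20000, num_relations=12)
logits = model(torch.randint(1, 20000, (4, 50)))       # 4 sentences, 50 tokens each
```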

Dynamic gesture recognition method based on hybrid deep learning model

CN106991372A (Active). Advantages: achieves an efficient spatio-temporal representation; easy to identify. Classification: character and pattern recognition. Keywords: frame-based; model parameters.
The invention discloses a dynamic gesture recognition method based on a hybrid deep learning model, comprising a training phase and a test phase. In the training phase, a CNN is first trained on the image set that constitutes the gesture videos, and the trained CNN then extracts spatial features from each frame of the dynamic gesture video sequence, frame by frame. For each gesture video sequence to be recognized, the frame-level features learned by the CNN are organized into a matrix in chronological order, and the matrix is input to an MVRBM to learn gesture spatio-temporal features that fuse spatial and temporal attributes. A discriminative NN is then introduced: the MVRBM serves as a pre-training step for the NN's model parameters, the network weights and biases learned by the MVRBM initialize the weights and biases of the NN, and these are fine-tuned by back-propagation. In the test phase, features of each frame of the dynamic gesture video sequence are extracted frame by frame with the CNN, spliced together, and input into the trained NN for gesture recognition. The technical scheme of the invention achieves an effective spatio-temporal representation of 3D dynamic gesture video sequences.
Owner:BEIJING UNIV OF TECH
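
The core data flow of the training phase (per-frame CNN features stacked into a chronologically ordered matrix for a downstream network) can be sketched as below. The small CNN merely stands in for the trained frame-level network, and the MVRBM pre-training step is omitted entirely, so this is only an assumed illustration of the frames-to-matrix pipeline, not the patented model.

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Toy per-frame feature extractor (stand-in for the trained CNN)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, frames):                 # (T, 3, H, W): one frame per row
        return self.fc(self.conv(frames).flatten(1))

def video_feature_matrix(cnn, video):
    """Organize frame-level features into a matrix in chronological order."""
    with torch.no_grad():
        return cnn(video)                      # (T, feat_dim), rows time-ordered

# Hypothetical usage:
cnn = FrameCNN()
video = torch.rand(30, 3, 64, 64)              # 30-frame gesture clip
M = video_feature_matrix(cnn, video)           # (30, 64) feature matrix
```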

Behavior identification method based on 3D convolution neural network

The invention discloses a behavior identification method based on a 3D convolutional neural network, relating to machine learning, feature matching, pattern recognition, and video image processing. The method is divided into two phases: an offline training phase and an online identification phase. In the offline training phase, sample videos of various behaviors are input and different outputs are computed, each output corresponding to one type of behavior; the parameters of the computation are adjusted according to the error between each output vector and its label vector so that the total output error decreases, and once the errors meet the requirements, each output is labeled with the behavior name of its corresponding sample video. In the online identification phase, a video requiring behavior identification is input and processed by the same computation as in the training phase to obtain an output; this output is matched against the labeled sample vectors, and the name of the best-matching sample label is taken as the behavior name of the input video. The method has low complexity, a small amount of computation, high real-time performance, and high accuracy.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
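
A minimal sketch of what a 3D-convolutional behavior classifier of this kind might look like in PyTorch follows; channel counts, kernel sizes, the class count, and the Euclidean nearest-match rule in the online phase are all illustrative assumptions (the patent only says "most matched"), not its actual configuration.

```python
import torch
import torch.nn as nn

class Behavior3DCNN(nn.Module):
    """3D-convolutional classifier over video clips (illustrative sizes)."""
    def __init__(self, num_behaviors=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.out = nn.Linear(32, num_behaviors)

    def forward(self, clip):                   # clip: (batch, 3, T, H, W)
        return self.out(self.features(clip).flatten(1))

def match_behavior(output_vec, sample_vecs, sample_names):
    """Online phase: match an output against stored labeled sample vectors
    (assumed Euclidean nearest match)."""
    dists = torch.norm(sample_vecs - output_vec, dim=1)
    return sample_names[int(dists.argmin())]

# Hypothetical usage:
net = Behavior3DCNN()
out = net(torch.rand(1, 3, 16, 64, 64))[0]     # one 16-frame clip
name = match_behavior(out, torch.rand(8, 8), ["walk", "run", "jump",
                                              "sit", "stand", "wave",
                                              "bend", "fall"])
```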