13936 results about "Training methods" patented technology

Unsupervised training in natural language call routing

A method of training a natural language call routing system using an unsupervised trainer is provided. The unsupervised trainer is adapted to tune the performance of the call routing system on the basis of feedback and new topic information. The training method comprises: storing audio data from an incoming call, together with a unique identifier for the call, in a waveform database; applying a second, highly accurate speech recognizer to the stored audio from the waveform database to produce a text transcription of the call; forwarding the output of the second recognizer to a training database, which stores the text transcripts with their unique call identifiers as well as topic data; for a call routed by the call router to an agent, entering the call topic determined by the agent into a form and supplying that topic, together with the unique call identifier, to the training database; for a call routed to automated fulfillment, querying the caller about the true topic of the call and adding that topic, together with the unique call identifier, to the training database; and performing topic identification model training and statistical grammar model training on the basis of the topic and transcription information stored in the training database.
Owner:RAYTHEON BBN TECH CORP +1
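
The bookkeeping this abstract describes amounts to pairing offline transcripts with topic labels under a shared call identifier. The following is a minimal sketch of that idea using an in-memory SQLite table; the schema and function names are illustrative assumptions, not taken from the patent.

```python
import sqlite3

# Toy training database keyed by the unique call identifier.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE training_data (call_id TEXT PRIMARY KEY, transcript TEXT, topic TEXT)"
)

def add_transcript(call_id: str, transcript: str) -> None:
    """Store the offline (second-pass, high-accuracy) recognizer output for a call."""
    conn.execute(
        "INSERT INTO training_data (call_id, transcript, topic) VALUES (?, ?, NULL)",
        (call_id, transcript),
    )

def add_topic(call_id: str, topic: str) -> None:
    """Attach a topic label from an agent form or an automated-fulfillment query."""
    conn.execute("UPDATE training_data SET topic = ? WHERE call_id = ?", (topic, call_id))

add_transcript("call-001", "i would like to check my account balance")
add_topic("call-001", "account_balance")  # topic entered by the routing agent
print(conn.execute("SELECT * FROM training_data").fetchall())
```

Topic identification and grammar models would then be retrained periodically from rows where both the transcript and the topic are present.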

A combined deep learning training method based on a privacy protection technology

The invention belongs to the technical field of artificial intelligence and relates to a combined deep learning training method based on privacy protection technology, providing an efficient joint training procedure. Each participant first trains a local model on its private data set to obtain a local gradient, perturbs the gradient with Laplace noise, encrypts it, and sends the encrypted gradient to a cloud server. The cloud server aggregates all received local gradients with the ciphertext parameters of the previous round and broadcasts the resulting ciphertext parameters. Finally, each participant decrypts the received ciphertext parameters and updates its local model for subsequent training. By combining a homomorphic encryption scheme with differential privacy, the method provides secure and efficient deep learning training that preserves the accuracy of the trained model while preventing the server, as well as internal attackers, from inferring model parameters or private training data.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA

Voice identification method using long-short term memory model recurrent neural network

The invention discloses a voice identification method using a long short-term memory (LSTM) recurrent neural network. The method comprises a training stage and an identification stage. The training stage imports voice data and text data to jointly train an acoustic model and a language model, and decodes with an RNN transducer to produce the model parameters. The identification stage converts the voice input into a spectrogram via the Fourier transform, performs directed (beam) search decoding with the LSTM recurrent neural network, and finally generates the identification result. The method adopts recurrent neural networks (RNNs) trained end to end with connectionist temporal classification (CTC). LSTM units perform well and, combined with multi-level representations, prove effective in deep networks. Only one neural network model (an end-to-end model) exists between the voice features (the input end) and the character string (the output end), and the network can be trained directly with an objective function that acts as a proxy for WER, avoiding wasted effort optimizing separate objective functions.
Owner:SHENZHEN WEITESHI TECH
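
A minimal PyTorch sketch of the end-to-end idea: a bidirectional LSTM over spectrogram frames trained with the CTC objective. Layer sizes, label inventory and shapes are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class LSTMCTCModel(nn.Module):
    """Spectrogram frames in, per-frame label log-probabilities out (including the CTC blank)."""
    def __init__(self, n_mels=80, hidden=256, n_labels=29):  # 28 characters + blank
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_labels)

    def forward(self, x):                                # x: (time, batch, n_mels)
        out, _ = self.lstm(x)
        return self.proj(out).log_softmax(dim=-1)        # (time, batch, n_labels)

model = LSTMCTCModel()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(120, 4, 80)                          # 4 utterances, 120 frames each
targets = torch.randint(1, 29, (4, 20))                  # padded character targets (no blanks)
input_lens = torch.full((4,), 120, dtype=torch.long)
target_lens = torch.randint(5, 21, (4,))

log_probs = model(feats)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()                                          # single end-to-end training step
print(loss.item())
```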

Model parameter training method and device based on federated learning, equipment and medium

The invention discloses a model parameter training method, device, equipment and medium based on federated learning. The method comprises the following steps: when a first terminal receives encrypted second data sent by a second terminal, obtaining the corresponding encrypted loss value and encrypted first gradient value; randomly generating a random vector with the same dimension as the encrypted first gradient value, blinding the encrypted first gradient value with the random vector, and sending the blinded encrypted first gradient value and the encrypted loss value to the second terminal; when the decrypted first gradient value and loss value returned by the second terminal are received, detecting from the decrypted loss value whether the model to be trained has converged; and if so, obtaining a second gradient value from the random vector and the decrypted first gradient value, and determining the sample parameters corresponding to the second gradient value as the model parameters. With this method, model training can be carried out using only the data of the two federated parties, without a trusted third party, thereby avoiding application limitations.
Owner:WEBANK (CHINA)
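
The blinding step can be illustrated with a small numpy sketch. The homomorphic encryption is replaced here by identity placeholders purely to keep the example self-contained; in the described protocol party A only ever handles ciphertexts, and the mask hides the true gradient from the decrypting party.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholders for an additively homomorphic scheme whose private key is held by party B.
encrypt = lambda v: v          # stand-in: identity
decrypt = lambda v: v          # stand-in: identity

# Party A: the encrypted first gradient value obtained during joint training.
enc_gradient = encrypt(np.array([0.42, -0.17, 0.08]))

# Party A blinds the ciphertext with a random vector of the same dimension,
# so party B learns nothing about the true gradient when decrypting.
mask = rng.normal(size=enc_gradient.shape)
blinded = enc_gradient + encrypt(mask)      # homomorphic addition in the real scheme

# Party B: decrypts the blinded gradient (and the loss) and returns the plaintext.
decrypted_blinded = decrypt(blinded)

# Party A: removes the mask to recover the true (second) gradient value.
true_gradient = decrypted_blinded - mask
print(true_gradient)
```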

Model parameter training method, terminal, system and medium based on federated learning

The invention discloses a model parameter training method, terminal, system and medium based on federated learning. The method comprises the steps of: determining a feature intersection between a first sample of a first terminal and a second sample of a second terminal, training on the first sample based on the feature intersection to obtain a first mapping model, and sending the first mapping model to the second terminal; receiving a second encrypted mapping model sent by the second terminal, and predicting the missing feature part of the first sample to obtain a first encrypted completed sample; receiving a first encrypted federated learning model parameter sent by a third terminal, training the federated learning model to be trained according to that parameter, and calculating a first encrypted loss value; sending the first encrypted loss value to the third terminal; and, when a training-stop instruction sent by the third terminal is received, taking the first encrypted federated learning model parameter as the final parameter of the federated learning model to be trained. The invention uses transfer learning to expand the feature space of the two federated parties and improve the prediction capability of the federated model.
Owner:WEBANK (CHINA)
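
A minimal numpy sketch of the mapping-and-completion idea: on the intersection samples, fit a mapping from one party's features to the features only the other party observes, then use it to impute the missing feature part and expand the feature space. A least-squares linear map stands in for the patent's mapping model, and the encryption and third-terminal coordination are omitted; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature intersection: samples both parties hold, with party-A features (x_shared)
# and party-B features (x_partner) that party A normally lacks.
x_shared  = rng.normal(size=(100, 5))    # features available to party A
x_partner = rng.normal(size=(100, 3))    # features only party B observes

# Mapping model: least-squares fit predicting partner features from shared ones.
W, *_ = np.linalg.lstsq(x_shared, x_partner, rcond=None)

# Completion: party A's samples outside the intersection get imputed partner
# features, expanding the feature space used by the joint federated model.
x_new_shared = rng.normal(size=(10, 5))
x_completed = np.hstack([x_new_shared, x_new_shared @ W])
print(x_completed.shape)   # (10, 8): 5 original features + 3 imputed features
```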

Multi-task named entity recognition and adversarial training method for the medical field

The invention discloses a multi-task named entity recognition and adversarial training method for the medical field. The method includes the following steps: (1) collecting and processing data sets so that each row consists of a word and a label; (2) using a convolutional neural network to encode word-level character information, obtaining character vectors, and concatenating them with word vectors to form the input feature vectors; (3) constructing a shared layer, and using a bidirectional long short-term memory (BiLSTM) network to model the input feature vectors of each word in a sentence and learn the features common to all tasks; (4) constructing a task layer, and modeling the input feature vectors together with the output of step (3) through another bidirectional LSTM to learn the private features of each task; (5) using conditional random fields to decode labels from the outputs of steps (3) and (4); (6) using the shared-layer information to train an adversarial network that reduces the private features mixed into the shared layer. The method performs multi-task learning on the data sets of multiple disease domains and introduces adversarial training to make the shared-layer and task-layer features more independent, so that multiple named entity recognition tasks in a specific domain can be trained simultaneously, quickly and efficiently.
Owner:ZHEJIANG UNIV
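
A minimal PyTorch sketch of the adversarial component in step (6): shared BiLSTM features pass through a gradient-reversal layer into a task discriminator, pushing the shared layer toward task-invariant features. The character CNN, task-specific BiLSTMs and CRF decoding are omitted, and the module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SharedEncoder(nn.Module):
    def __init__(self, emb_dim=100, hidden=128, n_tasks=3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.discriminator = nn.Linear(2 * hidden, n_tasks)  # which disease domain?

    def forward(self, word_vectors, lambd=1.0):
        shared, _ = self.bilstm(word_vectors)                # (batch, seq, 2*hidden)
        pooled = shared.mean(dim=1)                          # sentence representation
        task_logits = self.discriminator(GradReverse.apply(pooled, lambd))
        return shared, task_logits                           # shared features + adversarial head

enc = SharedEncoder()
x = torch.randn(8, 20, 100)                # 8 sentences of 20 word vectors each
task_ids = torch.randint(0, 3, (8,))       # which data set each sentence came from
_, task_logits = enc(x)
adv_loss = nn.functional.cross_entropy(task_logits, task_ids)
adv_loss.backward()   # gradient reversal drives the shared BiLSTM toward task-invariant features
print(adv_loss.item())
```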

Neural network-based face detection model training method, neural network-based face detection method and corresponding systems

The present invention provides a neural network-based face detection model training method, a face detection method, and the corresponding training and detection systems. The training method includes the following steps: the loss function of the offset network layer for the predicted face frame is calculated from the offset of the predicted face frame relative to a default face frame and the offset of the real face frame relative to the default face frame; the loss function of the confidence network layer is calculated from the confidence of the default face frame; the error of the two loss functions is calculated and fed back to the neural network to adjust its weights; and iterative training is repeated until convergence, yielding a face detection model whose predicted face frames contain faces more accurately. The detection method includes the following steps: a face image to be detected is input into the trained face detection model, which outputs offset information and confidences; the corresponding predicted face frames are calculated from the offset information; and the predicted face frame whose confidence exceeds a preset threshold, or the one with the highest confidence, is selected as the face detection result.
Owner:CHONGQING INST OF GREEN & INTELLIGENT TECH CHINESE ACADEMY OF SCI
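
A minimal PyTorch sketch of a combined loss of this kind: a smooth-L1 term on the predicted box offsets versus the ground-truth offsets (both relative to the default boxes) plus a cross-entropy term on the default-box confidences. The box matching step, network architecture and weighting are omitted, and all names and shapes are illustrative assumptions rather than the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def detection_loss(pred_offsets, pred_conf, gt_offsets, gt_labels, alpha=1.0):
    """Combined loss over default (anchor) face boxes.

    pred_offsets: (n_boxes, 4) predicted offsets w.r.t. the default boxes
    pred_conf:    (n_boxes, 2) face / background confidence logits
    gt_offsets:   (n_boxes, 4) ground-truth offsets w.r.t. the default boxes
    gt_labels:    (n_boxes,)   1 for boxes matched to a real face, 0 otherwise
    """
    positive = gt_labels == 1
    # Offset (localization) loss only over default boxes matched to a face.
    loc_loss = F.smooth_l1_loss(pred_offsets[positive], gt_offsets[positive])
    # Confidence loss over all default boxes.
    conf_loss = F.cross_entropy(pred_conf, gt_labels)
    return conf_loss + alpha * loc_loss

# Toy example with 100 default boxes, 10 of them matched to faces.
pred_offsets = torch.randn(100, 4, requires_grad=True)
pred_conf = torch.randn(100, 2, requires_grad=True)
gt_offsets = torch.randn(100, 4)
gt_labels = torch.zeros(100, dtype=torch.long)
gt_labels[:10] = 1

loss = detection_loss(pred_offsets, pred_conf, gt_offsets, gt_labels)
loss.backward()   # error fed back to adjust the network weights
print(loss.item())
```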