40 results about "Language modelling" patented technology

Model fusion triple representation learning system and method based on deep learning

The invention discloses a model fusion triple representation learning system and method based on deep learning. The method embeds each word through a pre-trained BERT language model to obtain a more contextualized word representation, and uses BERT's masked language modeling task to take a triple as sequence input, which addresses the problem of a single entity carrying multiple senses. While a projection or transformation matrix can give an entity relation different representations in different domains, the adapted BERT can instead take the triple or its description as text input and train them jointly: BERT by itself produces different word vectors for an entity relation appearing in different sentences, so the problem of an entity relation carrying different senses is handled effectively, and the choice of TransE is no longer limited by the model itself. On the contrary, the model stays simple enough to truly reflect the correspondence among triples, while the model complexity is reduced.
Owner: XI AN JIAOTONG UNIV
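
As a rough illustration of the masked-language-modeling step the abstract describes, the sketch below serializes a (head, relation, tail) triple as text, masks one element, and reads contextual token vectors out of a pre-trained BERT. The Hugging Face `transformers` API and the serialization format are assumptions for the sketch; the patent specifies neither.

```python
# Hypothetical sketch: feed a triple to BERT's masked-LM head and recover
# contextualized token vectors. Library choice and triple format are assumed.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Serialize the triple as a sequence; here the tail entity is masked.
head, relation = "Barack Obama", "born in"
text = f"{head} {relation} {tokenizer.mask_token}."

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Contextual embeddings for every token (usable as entity/relation vectors),
# plus a vocabulary distribution at the masked position.
token_vectors = outputs.hidden_states[-1][0]            # (seq_len, hidden)
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
predicted_id = outputs.logits[0, mask_pos].argmax().item()
print(tokenizer.decode([predicted_id]))
```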

Joint simulation system based on Modelica and construction method thereof

The invention discloses a joint simulation system based on Modelica and a construction method thereof. The method comprises the steps of: establishing a subsystem model in each modeling tool, using various modeling tools and languages (for example, a C-language model in Visual Studio, a Simulink model in Simulink, an AMESim model in AMESim, a Fortran model in Visual Studio, and the like); compiling each subsystem model into a dynamic link library that contains a model simulation interface and a model solver; packaging each sub-model as a subsystem Modelica model through Modelica's external-function mechanism; and finally, writing a master control model in the Modelica language, defining a sampling step length for each subsystem module, and scheduling every subsystem with the master-side solver to achieve data synchronization across the system. The technology supports multiple modeling tools and languages, divides system simulation into system scheduling and subsystem computation, and solves each subsystem in an independent process, which increases the solving speed of the system.
Owner: Suzhou Tongyuan Software & Control Technology Co., Ltd. +1
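
The abstract's key mechanism is compiling each subsystem into a dynamic link library that exposes a simulation interface, which the Modelica master then schedules step by step. The sketch below shows what the master-side call pattern could look like from Python via `ctypes`; the DLL name and the `model_init`/`model_step` entry points are hypothetical, since the patent only states that each compiled subsystem exposes a simulation interface and a solver.

```python
# Hypothetical sketch of a master scheduler stepping one compiled subsystem.
import ctypes

lib = ctypes.CDLL("./subsystem_a.dll")      # compiled subsystem model (assumed name)
lib.model_step.restype = ctypes.c_double
lib.model_step.argtypes = [ctypes.c_double, ctypes.c_double]

lib.model_init()                            # assumed initialization entry point

t, t_end, h = 0.0, 10.0, 0.01               # per-subsystem sampling step length
u = 0.0                                     # input coming from another subsystem
while t < t_end:
    y = lib.model_step(t, u)                # advance this subsystem one step
    # ... exchange y/u with the other subsystems here (data synchronization)
    t += h
```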

Encoder-decoder framework pre-training method for neural machine translation

The invention discloses an encoder-decoder framework pre-training method for neural machine translation. The method comprises the steps of: constructing a large number of multi-language, document-level monolingual corpora and adding a special identifier in front of each sentence to mark its language; processing sentence pairs to obtain training data; training on monolingual data of different languages until the pre-training model parameters converge; constructing parallel corpora and initializing the parameters of a neural machine translation model with the pre-training parameters; fine-tuning the initialized neural machine translation model on the parallel corpora to finish the training process; and, in the decoding stage, encoding a source-language sentence with the trained model's encoder and decoding with its decoder to generate the target-language sentence. The pre-training gives the model both language modeling and language generation capability; applying the pre-trained model to neural machine translation increases the convergence rate of the model and improves its robustness.
Owner: Shenyang YaTrans Network Technology Co., Ltd.
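
The first preprocessing step, prepending a language identifier to every monolingual sentence, is easy to picture in code. In the sketch below the `<en>`/`<de>` tag format is an assumption; the patent only says a special identifier marks each sentence's language.

```python
# Illustrative preprocessing: tag each monolingual sentence with its language.
def tag_sentences(sentences, lang):
    """Prefix each sentence with an assumed language identifier token."""
    return [f"<{lang}> {s}" for s in sentences]

en = tag_sentences(["The cat sat on the mat.", "It rained all day."], "en")
de = tag_sentences(["Die Katze saß auf der Matte."], "de")

# Mixed multi-language corpus used to pre-train the encoder-decoder before
# fine-tuning on parallel data.
pretraining_corpus = en + de
print(pretraining_corpus[0])   # "<en> The cat sat on the mat."
```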

SIMSCRIPT language-oriented discrete event simulation graphical modeling method

The invention provides a SIMSCRIPT language-oriented graphical modeling method for discrete event simulation, which comprises the following steps: adding primitives such as entities, routines, and events to a canvas, which serves as the primitive container, by dragging, following modeling techniques such as entity flow graphs and activity cycle diagrams, and representing the interactions among entities, routines, and events with connecting lines. According to the method, primitives can be drawn and managed within canvases; a canvas can be divided into several canvases according to calling and hierarchical relations; all canvases can be stored as project files in a specific format; and the project files automatically generate SIMSCRIPT simulation code according to mapping rules. Because the discrete event simulation program is built on the SIMSCRIPT simulation language through graphical drag-and-drop, the method removes the need to master SIMSCRIPT grammar and write code by hand, makes the modeling process clear and intuitive, simplifies model reuse, lowers the learning cost, broadens the user base, and eases communication between domain experts and modelers.
Owner: Joint Operations College, National Defense University of the Chinese People's Liberation Army
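
The code-generation idea, mapping canvas primitives to SIMSCRIPT source through fixed rules, can be sketched as template expansion. Everything below (the primitive schema, the templates) is illustrative only; the patent's actual mapping rules and project-file format are not disclosed in the abstract.

```python
# Toy sketch of mapping rules: each primitive type maps to a SIMSCRIPT code
# template, and the project (here a list of dicts) is walked to emit source.
TEMPLATES = {
    "entity": "ENTITIES\n    EVERY {name} HAS A {attr}\n",
    "event":  "EVENT {name}\n    '' body generated from canvas links\nEND\n",
}

def generate_simscript(primitives):
    """Emit a SIMSCRIPT fragment for each primitive via its template."""
    return "\n".join(
        TEMPLATES[p["type"]].format(**p["fields"]) for p in primitives
    )

project = [
    {"type": "entity", "fields": {"name": "CUSTOMER", "attr": "ARRIVAL.TIME"}},
    {"type": "event",  "fields": {"name": "ARRIVAL"}},
]
print(generate_simscript(project))
```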

Language modeling architecture search method for translation tasks

The invention discloses a language modeling architecture search method for translation tasks, which comprises the following steps: obtaining and processing training data from the Internet, and modeling and training a representation space of network structures; normalizing the structure parameter values of the meta-structure's topology and operations during training; optimizing the structure parameters and model parameters of the model, tuning both the network structure and the target parameters; deriving a discretized final structure from the weight differences of the tuned topologies and operations, the search result comprising the topology of the meta-structure and the operations used between its nodes; and cyclically unrolling the searched meta-structures through the connections between them to obtain the overall model, which is tuned again on the training data and finally trained until convergence. The method greatly increases the likelihood that the optimal model structure falls within the representation space of the searched structures, thereby improving the effectiveness of the network structure search.
Owner: Shenyang YaTrans Network Technology Co., Ltd.
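
The normalization of structure parameters over candidate topologies and operations reads like differentiable architecture search, so the sketch below mixes candidate operations with softmax-normalized weights and discretizes by argmax afterwards. This DARTS-style reading, and all names in the code, are assumptions rather than the patent's stated implementation.

```python
# Hypothetical sketch of softmax-normalized structure parameters on one edge
# of a meta-structure, in the spirit of differentiable architecture search.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Candidate operations between two nodes of the meta-structure.
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Linear(dim, dim),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
        ])
        # One trainable structure parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)           # normalization step
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def discretize(self):
        # Final structure: keep the operation with the largest weight.
        return self.ops[self.alpha.argmax().item()]

op = MixedOp(dim=16)
y = op(torch.randn(2, 16))                          # mixed forward pass
```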