Eureka AI
Eureka AI is an intelligent assistant for R&D personnel, combining Patent DNA to facilitate innovative research.

24,888 results for "neural network (NN)" patented technology

Method and System for Discovering Ancestors using Genomic and Genealogic Data

Status: Inactive | Publication: US20170213127A1 | Tags: Reduced travel tendency, Reduce in quantity, Data visualisation, Biostatistics, Common ancestry, Genotype
The described invention and its embodiments facilitate, in part, the discovery of 'Most Recent Common Ancestors' (MRCAs) in the family trees of a large number of individuals predicted to be related according to the amount of deoxyribonucleic acid (DNA) they share, as determined from a plurality of third-party genome sequencing and matching systems. This is enabled by a holistic set of distributed software Agents running, in part, a plurality of cooperating machine-learning systems, such as evolutionary algorithms, custom classification algorithms, cluster analysis, and geo-temporal proximity analysis. These in turn enable and rely on a system of knowledge management applied to manually entered and data-mined evidence: hierarchical clusters, quality metrics, fuzzy-logic constraints, and Bayesian-network-inspired inference sharing spanning all available data on personal family trees or system-created virtual trees, the genome-matching results of the Users associated with those trees, and all available historical data influencing the subjects in the trees, represented in the form of a competitive-learning network. Derivative results of this system include, in part, automated clustering and association of phenotypes to genotypes; automated recreation of ancestors' partial genomes from DNA accumulated through triangulation, together with the traits correlated to that DNA; and a system of cognitive computing based on distributed neural networks with mobile Agents mediating activation according to connection weights.
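The abstract centres on locating Most Recent Common Ancestors across family trees. As a minimal illustration of that core operation only (independent of the patent's distributed-Agent and DNA-matching machinery), the sketch below finds the shared ancestor of two individuals that minimises total generational distance; the tree structure and all names are hypothetical.

```python
from collections import deque

def ancestors_with_depth(person, parents):
    """BFS upward from `person`, returning {ancestor: generations back}."""
    depths = {person: 0}
    queue = deque([person])
    while queue:
        p = queue.popleft()
        for parent in parents.get(p, ()):
            if parent not in depths:
                depths[parent] = depths[p] + 1
                queue.append(parent)
    return depths

def mrca(a, b, parents):
    """Return the shared ancestor minimising the total generational distance."""
    da, db = ancestors_with_depth(a, parents), ancestors_with_depth(b, parents)
    common = set(da) & set(db)
    return min(common, key=lambda p: da[p] + db[p]) if common else None

# Hypothetical tree: each person maps to their known parents.
parents = {
    "alice": ["carol", "dan"],
    "bob":   ["carol", "erin"],
    "carol": ["faye"],
}
print(mrca("alice", "bob", parents))  # prints carol
```

In a real system the candidate ancestor set would be constrained by the predicted DNA-sharing relationships rather than searched exhaustively.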

Binocular stereoscopic vision matching method combining depth features

The invention discloses a binocular stereoscopic vision matching method combining depth features. The method comprises: obtaining a depth feature map from the left and right images through a convolutional neural network; taking the depth features as the standard, calculating a truncated similarity measure of pixel depth features and constructing a truncated matching cost function combining colour, gradient, and depth features to obtain a matching cost volume; aggregating the cost volume with a fixed window, a variable window, adaptive-weight aggregation, or a guided-filtering method to obtain an aggregated cost volume; selecting the optimal disparity from the cost volume by winner-takes-all (WTA) to obtain an initial disparity map; then finding occluded regions with a double-peak test, left-right consistency detection, ordering-consistency detection, or an occlusion-constraint algorithm, and assigning each occluded point the disparity value of the closest point in the same row to obtain a disparity map; and filtering the disparity map with a mean or bilateral filter to obtain the final disparity map. The method effectively reduces the mismatching rate of stereo matching, produces smooth results, and preserves image edges, including the edges of small objects.
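The middle steps of the pipeline (truncated matching cost, fixed-window aggregation, WTA disparity selection) can be sketched in plain NumPy. This is an illustrative toy under simplifying assumptions, not the patented method: it uses an absolute-difference cost on raw grayscale intensity rather than learned depth features, and only the fixed-window variant of aggregation.

```python
import numpy as np

def truncated_cost(left, right, max_disp, tau=0.5):
    """Truncated absolute-difference cost volume of shape (H, W, D).

    Pixels with no valid match at disparity d keep the truncation value tau.
    """
    H, W = left.shape
    cost = np.full((H, W, max_disp), tau)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        cost[:, d:, d] = np.minimum(diff, tau)
    return cost

def aggregate_fixed_window(cost, radius=1):
    """Fixed-window aggregation: average cost over a (2r+1) x (2r+1) box."""
    H, W, _ = cost.shape
    pad = np.pad(cost, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    agg = np.zeros_like(cost)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            agg += pad[dy:dy + H, dx:dx + W]
    return agg / (2 * radius + 1) ** 2

def wta_disparity(cost):
    """Winner-takes-all: pick the disparity with minimum aggregated cost."""
    return cost.argmin(axis=2)

# Toy check: a right image that is the left image shifted by 3 pixels
# should recover disparity 3 away from the image border.
rng = np.random.default_rng(0)
left = rng.random((8, 32))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = wta_disparity(aggregate_fixed_window(truncated_cost(left, right, 6)))
print(disp[4, 16])  # prints 3
```

The occlusion-handling and filtering steps would follow, operating on `disp`.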

Text sentiment classification algorithm based on convolutional neural network and attention mechanism

The invention discloses a text sentiment classification algorithm based on a convolutional neural network and an attention mechanism. The algorithm comprises the steps of: 1) establishing a convolutional neural network with multiple convolutions and multiple kinds of pooling, and training it on sentiment-classification text to obtain a first model; 2) establishing a multi-head dot-product attention mechanism into which residual connections and nonlinearity are added, and training it on the sentiment-classification text to obtain a second model; 3) fusing the two models to obtain the sentiment classification of the text. Multiple granularities, convolutions, and kinds of pooling are fused into the convolutional neural network; residual connections and nonlinearity are introduced into the attention mechanism; and attention is calculated several times, yielding two text sentiment classification models. A fusion model is then obtained through a Bagging model-fusion method and used to classify the text. This combines the advantages that the convolutional neural network captures local features well while the attention mechanism captures global information well, producing a more comprehensive text sentiment classification model.
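The core of step 2 — multi-head dot-product attention with a residual connection and a nonlinearity — can be sketched in NumPy. This is a generic illustration, not the patent's exact architecture: the weight shapes, the ReLU as the nonlinearity, and the simple probability-averaging fusion are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Multi-head scaled dot-product self-attention over a sequence x of
    shape (T, d), followed by a residual connection and a ReLU."""
    T, d = x.shape
    dh = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = q[:, s] @ k[:, s].T / np.sqrt(dh)   # (T, T) attention weights
        heads.append(softmax(scores) @ v[:, s])
    out = np.concatenate(heads, axis=-1) @ Wo
    return np.maximum(x + out, 0.0)  # residual connection + nonlinearity

def bagging_fuse(p_cnn, p_attn):
    """Bagging-style fusion: average the two models' class probabilities."""
    return (p_cnn + p_attn) / 2.0

# Shape check with random weights (hypothetical dimensions).
rng = np.random.default_rng(1)
T, d, n_heads = 5, 8, 2
x = rng.normal(size=(T, d))
Wq, Wk, Wv, Wo = (0.1 * rng.normal(size=(d, d)) for _ in range(4))
y = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)
```

Stacking this block and calling it several times corresponds to "attention is calculated several times" in the abstract.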

Infrared target instance segmentation method based on feature fusion and a dense connection network

Status: Pending | Publication: CN109584248A | Tags: Solving the gradient explosion/vanishing gradient problem, Strengthening detection and segmentation capabilities, Image enhancement, Image analysis, Data set, Feature fusion
The invention discloses an infrared target instance segmentation method based on feature fusion and a densely connected network. The method comprises the steps of: collecting and constructing an infrared image data set required for instance segmentation, and obtaining the original known infrared label images; performing image-enhancement preprocessing on the infrared image data set; processing the preprocessed training set to obtain a classification result, a bounding-box regression result, and an instance segmentation mask result map; performing back-propagation in the convolutional neural network with stochastic gradient descent according to the prediction loss function, and updating the parameter values of the convolutional neural network; selecting a fixed number of infrared training images each time, feeding them to the network for processing, and iteratively updating the network parameters until training completes at the maximum number of iterations; and processing the test-set images to obtain the average precision and required time of instance segmentation and the final instance segmentation result map.
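The training procedure described — draw a fixed-size batch, back-propagate the prediction loss with stochastic gradient descent, update parameters, repeat until a maximum iteration count — is the standard mini-batch SGD loop and can be sketched generically. The linear toy model and all names below are hypothetical stand-ins for the patent's convolutional network.

```python
import numpy as np

def sgd_train(params, grad_fn, batches, lr=0.01, max_iters=1000):
    """Mini-batch SGD: cycle through fixed-size batches, computing gradients
    of the prediction loss and updating parameters, up to max_iters."""
    for it in range(max_iters):
        batch = batches[it % len(batches)]
        grads = grad_fn(params, batch)
        for name in params:
            params[name] -= lr * grads[name]
    return params

# Toy demonstration: fit y = 2x by minimising mean squared error.
rng = np.random.default_rng(0)
xs = rng.normal(size=(100, 1))
ys = 2.0 * xs
batches = [(xs[i:i + 10], ys[i:i + 10]) for i in range(0, 100, 10)]

def grad_fn(params, batch):
    xb, yb = batch
    err = xb * params["w"] - yb
    return {"w": 2.0 * np.mean(err * xb)}  # d(MSE)/dw

params = sgd_train({"w": 0.0}, grad_fn, batches, lr=0.1, max_iters=200)
```

In the patented method, `grad_fn` would be the back-propagation pass of the densely connected segmentation network and the loss would combine classification, box-regression, and mask terms.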