cfRNA markers for predicting preterm birth risk
A risk-prediction technology based on cfRNA markers such as RAB27B, applied in the field of biomedicine, which addresses the lack of non-invasive prenatal diagnostic techniques for preterm birth.
Embodiment 1
[0039] Example 1: Screening genetic markers associated with preterm birth
[0040] 1. Samples
[0041] From the preserved samples, 208 cases were screened, comprising 156 normal pregnant women and 52 pregnant women with preterm birth, randomly divided into a training group (156 cases) and a validation group (52 cases). Patient clinical information included age, BMI, pregnancy history, interpregnancy interval, drinking history, smoking history, drug-abuse history, history of late miscarriage and/or preterm birth, history of cervical surgery, transvaginal ultrasound results, fetal and amniotic fluid volume, multiple pregnancy, pregnancy comorbidities or complications, conception by assisted reproductive technology, mode of delivery, full-term delivery, and dystocia.
[0042] 2. Data standardization
[0043] Raw cfRNA expression data were collected...
Embodiment 2
[0049] Example 2: Feature selection process
[0050] 1. Selecting important feature variables with the LASSO algorithm
[0051] The LASSO loss function to be minimized is:
[0052] L(w) = (1/(2n))·Σᵢ (yᵢ − xᵢᵀw)² + λ·Σⱼ |wⱼ|
[0053] The model is fitted with the glmnet package in R. Briefly, different values of λ yield different weight vectors w; the optimal λ is chosen so as to minimize the error rate. This feature-variable selection step filters out 16 cfRNAs.
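The coordinate-descent principle behind the glmnet fit above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation (which uses glmnet in R); the toy data, variable names, and λ value are assumptions for demonstration only.

```python
import random

def soft_threshold(z, lam):
    # Soft-thresholding operator: the proximal map of the L1 penalty.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Minimize (1/2n)*sum((y - Xw)^2) + lam*sum(|w_j|) by cyclic
    coordinate descent over the features."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual leaving feature j out of the fit.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy demo (hypothetical data): y depends only on the first of two features,
# so LASSO should keep feature 0 and shrink feature 1 toward zero.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(40)]
y = [2 * row[0] for row in X]
w = lasso_cd(X, y, lam=0.1)
```

Sweeping λ over a grid and keeping the value with the lowest cross-validated error, as the paragraph describes, is what selects the final feature set.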
[0054] 2. Selecting important feature variables with the Boruta algorithm
[0055] The Boruta algorithm is a wrapper algorithm around random forest. When fitting a random forest model to a data set, it iteratively removes the features found to be unimportant at each iteration. This minimizes the error of the random forest model and ultimately yields a minimal optimal feature subset.
[0056] The BORUTA algorithm runs as follows:
[0057] 1) First, it adds randomness to the given data set by creating shuffled copies of all features (i.e., shadow fe...
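The shadow-feature idea can be sketched as a single Boruta iteration. This is a simplified illustration: the real Boruta uses random-forest importances and repeated statistical tests, whereas this sketch substitutes absolute Pearson correlation as the importance measure to stay dependency-free; the toy data are assumptions.

```python
import random

def importance(col, y):
    # Proxy importance: absolute Pearson correlation with the outcome.
    # (Boruta proper uses random-forest variable importance here.)
    n = len(y)
    mx, my = sum(col) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
    sx = sum((a - mx) ** 2 for a in col) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy)) if sx > 0 and sy > 0 else 0.0

def boruta_step(X, y, rng):
    """One Boruta iteration: shuffle each column to build shadow features,
    then keep the real features whose importance beats the best shadow."""
    p = len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    shadow_max = 0.0
    for c in cols:
        s = c[:]
        rng.shuffle(s)  # shuffling destroys any real association with y
        shadow_max = max(shadow_max, importance(s, y))
    return [j for j, c in enumerate(cols) if importance(c, y) > shadow_max]

# Toy demo (hypothetical data): only feature 0 carries signal.
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(60)]
y = [row[0] + 0.1 * rng.gauss(0, 1) for row in X]
kept = boruta_step(X, y, rng)
```

Repeating this step and discarding features that repeatedly lose to the shadows is what produces the minimal optimal subset described above.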
Embodiment 3
[0064] Example 3: Predicting preterm birth risk in pregnant women with a random forest model
[0065] Random forest estimation process:
[0066] 1) Specify the value of m, i.e., the number of candidate variables randomly drawn at each node of the binary tree; the split variable is still chosen to minimize node impurity. In this model, m = 2;
[0067] 2) Apply the bootstrap resampling method to draw randomly from the original data set, forming K decision trees; the samples not drawn are used to evaluate the prediction of each single decision tree. In this model, K = 300, at which point the model is essentially stable;
[0068] 3) The random forest model randomforest_model, composed of the K decision trees, is used to predict the class of an unknown sample unknown_sample; the function used is predict(randomforest_model, data = unknown_sample_data), and the prediction principle is a simple average;
[0069] 4) The importance function computes the importance of the model variables, determining which variables make the greatest con...
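The bootstrap-and-average procedure in steps 1)–3) can be sketched as follows. This is a minimal illustration, not the patent's R implementation: the trees are reduced to single-split stumps, the per-node sampling of m = 2 variables is omitted for brevity, and the toy data are assumptions.

```python
import random
from statistics import mean

def fit_stump(X, y):
    """One-split regression tree (stump): pick the (feature, threshold)
    split minimizing squared error; each side predicts its mean label."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = mean(left), mean(right)
            err = (sum((yi - lm) ** 2 for yi in left)
                   + sum((yi - rm) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    if best is None:  # degenerate bootstrap sample: no valid split exists
        m = mean(y)
        return lambda row: m
    _, j, t, lm, rm = best
    return lambda row: lm if row[j] <= t else rm

def random_forest(X, y, k, rng):
    """Grow k stumps on bootstrap resamples of the data; predict an
    unknown sample by the simple average of the tree outputs."""
    n = len(X)
    trees = []
    for _ in range(k):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: mean(tree(row) for tree in trees)

# Toy demo (hypothetical data): label 1 = high risk for large feature values.
rng = random.Random(0)
X = [[float(i)] for i in range(10)]
y = [0.0] * 5 + [1.0] * 5
predict = random_forest(X, y, k=50, rng=rng)
```

Because each tree votes with its mean label, the averaged output behaves like a risk score between 0 and 1, which matches the "simple average" prediction principle in step 3).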