Statistics
Statisticians
- Karl Pearson (1857-1936): chi-square, p-value, PCA
- William Sealy Gosset (1876-1937): Student's t
- Ronald Fisher (1890-1962): ANOVA
- Egon Pearson (1895-1980): son of Karl Pearson
- Jerzy Neyman (1894-1981): type 1 error
Statistics for biologists
http://www.nature.com/collections/qghhqm
Transform sample values to their percentiles
https://stackoverflow.com/questions/21219447/calculating-percentile-of-dataset-column
set.seed(1234)
x <- rnorm(10)
x
#  [1] -1.2070657  0.2774292  1.0844412 -2.3456977  0.4291247  0.5060559
#  [7] -0.5747400 -0.5466319 -0.5644520 -0.8900378
ecdf(x)(x)
#  [1] 0.2 0.7 1.0 0.1 0.8 0.9 0.4 0.6 0.5 0.3
rank(x)
#  [1]  2  7 10  1  8  9  4  6  5  3
Box(Box and whisker) plot in R
See
- https://en.wikipedia.org/wiki/Box_plot
- https://owi.usgs.gov/blog/boxplots/ (ggplot2 is used)
- https://flowingdata.com/2008/02/15/how-to-read-and-use-a-box-and-whisker-plot/
- Quartile from Wikipedia. The quartiles returned by R are the same as those from Method 2 described on the Wikipedia page.
An example for a graphical explanation.
> x = c(0, 4, 15, 1, 6, 3, 20, 5, 8, 1, 3)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      0       2       4       6       7      20
> sort(x)
 [1]  0  1  1  3  3  4  5  6  8 15 20
> boxplot(x, col = 'grey')

# https://en.wikipedia.org/wiki/Quartile#Example_1
> summary(c(6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   6.00   25.50   40.00   33.18   42.50   49.00
- The lower and upper edges of the box are determined by the first and third quartiles (2 and 7 in the above example).
- 2 = median(c(0, 1, 1, 3, 3, 4)) = (1+3)/2
- 7 = median(c(4, 5, 6, 8, 15, 20)) = (6+8)/2
- IQR = 7 - 2 = 5
- The thick dark horizontal line is the median (4 in the example).
- Outliers (the empty circles in the plot) are defined as
- observations larger than 3rd quartile + 1.5 * IQR (7+1.5*5=14.5) and
- observations smaller than 1st quartile - 1.5 * IQR (2-1.5*5=-5.5).
- Note that the cutoffs are not shown in the Box plot.
- Whisker (defined using the cutoffs used to define outliers)
- Upper whisker is defined by the largest "data" below 3rd quartile + 1.5 * IQR (8 in this example), and
- Lower whisker is defined by the smallest "data" greater than 1st quartile - 1.5 * IQR (0 in this example).
- See another example below where we can see the whiskers fall on observations.
Note that Wikipedia lists several possible definitions of a whisker. R uses the second method (Tukey boxplot) to define whiskers.
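A quick way to verify these definitions is boxplot.stats(), which returns the five numbers used to draw the plot plus the outliers; a minimal sketch using the vector x from the example above:

x <- c(0, 4, 15, 1, 6, 3, 20, 5, 8, 1, 3)
s <- boxplot.stats(x)
s$stats  # lower whisker, Q1, median, Q3, upper whisker: 0 2 4 7 8
s$out    # points beyond the 1.5*IQR fences: 15 20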
Create boxplots from a list object
Normally we use a vector to create a single boxplot, or a formula on a data frame to create boxplots.
But we can also use split() to create a list and then make boxplots from it, as sketched below.
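A minimal sketch with the built-in chickwts data:

boxplot(weight ~ feed, data = chickwts)         # from a formula
boxplot(split(chickwts$weight, chickwts$feed))  # same plot from a list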
Dot-box plot
- http://civilstat.com/2012/09/the-grammar-of-graphics-notes-on-first-reading/
- http://www.r-graph-gallery.com/89-box-and-scatter-plot-with-ggplot2/
- http://www.sthda.com/english/wiki/ggplot2-box-plot-quick-start-guide-r-software-and-data-visualization
- Graphs in R – Overlaying Data Summaries in Dotplots. Note that for some reason the boxplot will cover the dots when we save the plot to an svg or a png file, so an alternative solution is to change the drawing order (draw the boxplot first, then overlay the points):
par(cex.main = 0.9, cex.lab = 0.8, font.lab = 2, cex.axis = 0.8,
    font.axis = 2, col.axis = "grey50")
boxplot(weight ~ feed, data = chickwts, range = 0, whisklty = 0, staplelty = 0)
par(new = TRUE)
stripchart(weight ~ feed, data = chickwts, xlim = c(0.5, 6.5), vertical = TRUE,
           method = "stack", offset = 0.8, pch = 19,
           main = "Chicken weights after six weeks",
           xlab = "Feed Type", ylab = "Weight (g)")
Other boxplots
stem and leaf plot
stem(). See R Tutorial.
Note that a stem plot is useful when there are outliers.
> stem(x)

  The decimal point is 10 digit(s) to the right of the |

   0 | 00000000000000000000000000000000000000000000000000000000000000000000+419
   1 |
   2 |
   3 |
   4 |
   5 |
   6 |
   7 |
   8 |
   9 |
  10 |
  11 |
  12 | 9

> max(x)
[1] 129243100275
> max(x)/1e10
[1] 12.92431

> stem(y)

  The decimal point is at the |

  0 | 014478
  1 | 0
  2 | 1
  3 | 9
  4 | 8

> y
 [1] 3.8667356428 0.0001762708 0.7993462430 0.4181079732 0.9541728562
 [6] 4.7791262101 0.6899313108 2.1381289177 0.0541736818 0.3868776083

> set.seed(1234)
> z <- rnorm(10)*10
> z
 [1] -12.070657   2.774292  10.844412 -23.456977   4.291247   5.060559
 [7]  -5.747400  -5.466319  -5.644520  -8.900378
> stem(z)

  The decimal point is 1 digit(s) to the right of the |

  -2 | 3
  -1 | 2
  -0 | 9665
   0 | 345
   1 | 1
Box-Cox transformation
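A minimal sketch using MASS::boxcox() on a linear model; the chosen lambda is the one maximizing the profile log-likelihood (the data set and lambda grid here are arbitrary choices):

library(MASS)
fit <- lm(dist ~ speed, data = cars)
bc <- boxcox(fit, lambda = seq(-1, 1, 0.1))  # plots the profile log-likelihood
bc$x[which.max(bc$y)]                        # lambda with the highest likelihood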
the Holy Trinity (LRT, Wald, Score tests)
- https://en.wikipedia.org/wiki/Likelihood_function which includes profile likelihood and partial likelihood
- Review of the likelihood theory
- The “Three Plus One” Likelihood-Based Test Statistics: Unified Geometrical and Graphical Interpretations
- Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.
- Score test is step-up. It is typically used in forward steps to screen covariates not yet included in a model for their ability to improve the model.
- Wald test is step-down. It starts at the full model and evaluates the significance of a variable by comparing the ratio of its estimate to its standard error against an appropriate t distribution (for linear models) or standard normal distribution (for logistic or Cox regression).
- Likelihood ratio tests provide the best control over nuisance parameters by maximizing the likelihood over them under both the H0 and H1 models. In particular, if several coefficients are being tested simultaneously, LRTs for model comparison are preferred over Wald or score tests.
- R packages
- lmtest package, waldtest() and lrtest().
Don't invert that matrix
- http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
- http://civilstat.com/2015/07/dont-invert-that-matrix-why-and-how/
Different matrix decompositions/factorizations
- QR decomposition, qr()
- LU decomposition, lu() from the 'Matrix' package
- Cholesky decomposition, chol()
- Singular value decomposition, svd()
set.seed(1234)
x <- matrix(rnorm(10*2), nr = 10)
cmat <- cov(x); cmat
#            [,1]       [,2]
# [1,]  0.9915928 -0.1862983
# [2,] -0.1862983  1.1392095

# Cholesky decomposition
d1 <- chol(cmat)
t(d1) %*% d1  # equal to cmat
d1            # upper triangle
#           [,1]       [,2]
# [1,] 0.9957875 -0.1870864
# [2,] 0.0000000  1.0508131

# SVD
d2 <- svd(cmat)
d2$u %*% diag(d2$d) %*% t(d2$v)  # equal to cmat
d2$u %*% diag(sqrt(d2$d))
#            [,1]      [,2]
# [1,] -0.6322816 0.7692937
# [2,]  0.9305953 0.5226872
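For completeness, a minimal QR sketch on the same matrix; qr.Q() and qr.R() extract the factors:

qrd <- qr(cmat)
qr.Q(qrd) %*% qr.R(qrd)  # reproduces cmat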
Linear Regression
Regression Models for Data Science in R by Brian Caffo
Comic https://xkcd.com/1725/
Different models (in R)
http://www.quantide.com/raccoon-ch-1-introduction-to-linear-models-with-r/
dummy.coef.lm() in R
Extracts coefficients in terms of the original levels of the factors rather than the coded variables.
model.matrix, design matrix
ExploreModelMatrix: Explore design matrices interactively with R/Shiny
Contrasts in linear regression
- Page 147 of Modern Applied Statistics with S (4th ed)
- https://biologyforfun.wordpress.com/2015/01/13/using-and-interpreting-different-contrasts-in-linear-models-in-r/ This explains the meanings of 'treatment', 'helmert' and 'sum' contrasts.
- A (sort of) Complete Guide to Contrasts in R by Rose Maier
mat
##      constant NLvMH  NvL  MvH
## [1,]        1  -0.5  0.5  0.0
## [2,]        1  -0.5 -0.5  0.0
## [3,]        1   0.5  0.0  0.5
## [4,]        1   0.5  0.0 -0.5

mat <- mat[ , -1]
model7 <- lm(y ~ dose, data = data, contrasts = list(dose = mat))
summary(model7)
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  118.578      1.076 110.187  < 2e-16 ***
## doseNLvMH      3.179      2.152   1.477  0.14215
## doseNvL       -8.723      3.044  -2.866  0.00489 **
## doseMvH       13.232      3.044   4.347 2.84e-05 ***

# double check your contrasts
attributes(model7$qr$qr)$contrasts
## $dose
##      NLvMH  NvL  MvH
## None  -0.5  0.5  0.0
## Low   -0.5 -0.5  0.0
## Med    0.5  0.0  0.5
## High   0.5  0.0 -0.5

library(dplyr)
dose.means <- summarize(group_by(data, dose), y.mean = mean(y))
dose.means
## Source: local data frame [4 x 2]
##
##   dose   y.mean
## 1 None 112.6267
## 2  Low 121.3500
## 3  Med 126.7839
## 4 High 113.5517

# The coefficient estimate for the first contrast (3.18) equals the average of
# the last two groups ((126.78 + 113.55)/2 = 120.17) minus the average of
# the first two groups ((112.63 + 121.35)/2 = 116.99).
Multicollinearity
- Multicollinearity in R
- alias: Find Aliases (Dependencies) In A Model
> op <- options(contrasts = c("contr.helmert", "contr.poly"))
> npk.aov <- aov(yield ~ block + N*P*K, npk)
> alias(npk.aov)
Model :
yield ~ block + N * P * K

Complete :
         (Intercept) block1 block2 block3 block4 block5 N1 P1 K1 N1:P1 N1:K1 P1:K1
N1:P1:K1 0           1      1/3    1/6    -3/10  -1/5   0  0  0  0     0     0

> options(op)
Exposure
https://en.mimi.hu/mathematics/exposure_variable.html
Independent variable = predictor = explanatory = exposure variable
Confounders, confounding
- https://en.wikipedia.org/wiki/Confounding
- A method for controlling complex confounding effects in the detection of adverse drug reactions using electronic health records. It provides a rule to identify a confounder.
- http://anythingbutrbitrary.blogspot.com/2016/01/how-to-create-confounders-with.html (R example)
- Logistic Regression: Confounding and Colinearity
- Identifying a confounder
- Is it possible to have a variable that acts as both an effect modifier and a confounder?
- Which test to use to check if a possible confounder impacts a 0 / 1 result?
- Addressing confounding artifacts in reconstruction of gene co-expression networks Parsana 2019
Causal inference
- https://en.wikipedia.org/wiki/Causal_inference
- Confounding in causal inference: what is it, and what to do about it?
Confidence interval vs prediction interval
Confidence intervals tell you how well you have determined the mean E(Y). Prediction intervals tell you where you can expect to see the next data point sampled. That is, the CI is based on Var(E(Y|X)) while the PI is based on Var(E(Y|X) + e). See the predict() sketch after the links below.
- http://www.graphpad.com/support/faqid/1506/
- http://en.wikipedia.org/wiki/Prediction_interval
- http://robjhyndman.com/hyndsight/intervals/
- https://stat.duke.edu/courses/Fall13/sta101/slides/unit7lec3H.pdf
- https://datascienceplus.com/prediction-interval-the-wider-sister-of-confidence-interval/
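A minimal sketch with predict.lm() on the built-in cars data; the prediction interval is always wider than the confidence interval at the same x:

fit <- lm(dist ~ speed, data = cars)
new <- data.frame(speed = 15)
predict(fit, new, interval = "confidence")  # CI for the mean E(Y | speed = 15)
predict(fit, new, interval = "prediction")  # PI for a new observation at speed = 15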
Heteroskedasticity
Dealing with heteroskedasticity; regression with robust standard errors using R
Linear regression with Map Reduce
https://freakonometrics.hypotheses.org/53269
Quantile regression
Non- and semi-parametric regression
- Semiparametric Regression in R
- https://socialsciences.mcmaster.ca/jfox/Courses/Oxford-2005/R-nonparametric-regression.html
Mean squared error
Simulating the bias-variance tradeoff in R
Splines
- https://en.wikipedia.org/wiki/B-spline
- Cubic and Smoothing Splines in R. bs() is for cubic spline and smooth.spline() is for smoothing spline.
- Can we use B-splines to generate non-linear data?
- How to force passing two data points? (cobs package)
- https://www.rdocumentation.org/packages/cobs/versions/1.3-3/topics/cobs
k-Nearest neighbor regression
- k-NN regression in practice: boundary problem, discontinuities problem.
- Weighted k-NN regression: want weight to be small when distance is large. Common choices - weight = kernel(xi, x)
Kernel regression
- Instead of weighting only the nearest neighbors, weight ALL points. Nadaraya-Watson kernel weighted average (see the ksmooth() sketch after this list):
[math]\displaystyle{ \hat{y}_q = \sum c_{qi} y_i/\sum c_{qi} = \frac{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))*y_i}{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))} }[/math].
- Choice of bandwidth [math]\displaystyle{ \lambda }[/math] for bias, variance trade-off. Small [math]\displaystyle{ \lambda }[/math] is over-fitting. Large [math]\displaystyle{ \lambda }[/math] can get an over-smoothed fit. Cross-validation.
- Kernel regression leads to locally constant fit.
- Issues with high dimensions, data scarcity and computational complexity.
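A minimal Nadaraya-Watson sketch using base R's ksmooth() on the cars data; the bandwidth argument plays the role of [math]\displaystyle{ \lambda }[/math]:

with(cars, {
  plot(speed, dist)
  lines(ksmooth(speed, dist, kernel = "normal", bandwidth = 2), col = "red")    # wiggly, low bias
  lines(ksmooth(speed, dist, kernel = "normal", bandwidth = 10), col = "blue")  # over-smoothed
})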
Principal component analysis
R source code
> stats:::prcomp.default
function (x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL, ...)
{
    x <- as.matrix(x)
    x <- scale(x, center = center, scale = scale.)
    cen <- attr(x, "scaled:center")
    sc <- attr(x, "scaled:scale")
    if (any(sc == 0))
        stop("cannot rescale a constant/zero column to unit variance")
    s <- svd(x, nu = 0)
    s$d <- s$d/sqrt(max(1, nrow(x) - 1))
    if (!is.null(tol)) {
        rank <- sum(s$d > (s$d[1L] * tol))
        if (rank < ncol(x)) {
            s$v <- s$v[, 1L:rank, drop = FALSE]
            s$d <- s$d[1L:rank]
        }
    }
    dimnames(s$v) <- list(colnames(x), paste0("PC", seq_len(ncol(s$v))))
    r <- list(sdev = s$d, rotation = s$v, center = if (is.null(cen)) FALSE else cen,
        scale = if (is.null(sc)) FALSE else sc)
    if (retx)
        r$x <- x %*% s$v
    class(r) <- "prcomp"
    r
}
<bytecode: 0x000000003296c7d8>
<environment: namespace:stats>
R example
http://genomicsclass.github.io/book/pages/pca_svd.html
pc <- prcomp(x)
group <- as.numeric(tab$Tissue)
plot(pc$x[, 1], pc$x[, 2], col = group, main = "PCA", xlab = "PC1", ylab = "PC2")
The meaning of colors can be found by palette().
- black
- red
- green3
- blue
- cyan
- magenta
- yellow
- gray
PCA and SVD
Using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of [math]\displaystyle{ X X^T }[/math] can cause loss of precision.
http://math.stackexchange.com/questions/3869/what-is-the-intuitive-relationship-between-svd-and-pca
AIC/BIC in estimating the number of components
Related to Factor Analysis
- http://www.aaronschlegel.com/factor-analysis-introduction-principal-component-method-r/.
- http://support.minitab.com/en-us/minitab/17/topic-library/modeling-statistics/multivariate/principal-components-and-factor-analysis/differences-between-pca-and-factor-analysis/
In short,
- In Principal Components Analysis, the components are calculated as linear combinations of the original variables. In Factor Analysis, the original variables are defined as linear combinations of the factors.
- In Principal Components Analysis, the goal is to explain as much of the total variance in the variables as possible. The goal in Factor Analysis is to explain the covariances or correlations between the variables.
- Use Principal Components Analysis to reduce the data into a smaller number of components. Use Factor Analysis to understand what constructs underlie the data.
Calculated by Hand
http://strata.uga.edu/software/pdf/pcaTutorial.pdf
Do not scale your matrix
https://privefl.github.io/blog/(Linear-Algebra)-Do-not-scale-your-matrix/
Visualization
- PCA and Visualization
- Scree plots from the FactoMineR package (based on ggplot2)
What does it do if we choose center=FALSE in prcomp()?
In the USArrests data, using center=FALSE gives a better scatter plot of the first 2 PCA components.
x1 = prcomp(USArrests)
x2 = prcomp(USArrests, center = FALSE)
plot(x1$x[, 1], x1$x[, 2])             # looks random
windows(); plot(x2$x[, 1], x2$x[, 2])  # looks good in some sense
Relation to Multidimensional scaling/MDS
With no missing data, classical MDS (Euclidean distance metric) is the same as PCA; see the numerical check below.
Comparisons are here.
Differences are asked/answered on stackexchange.com. The post also answered the question when these two are the same.
isoMDS (Non-metric)
cmdscale (Metric)
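A quick numerical check of the equivalence; the classical MDS coordinates agree with the PCA scores up to column signs:

x <- scale(USArrests, center = TRUE, scale = FALSE)
pc <- prcomp(x)
md <- cmdscale(dist(x), k = 2)  # metric MDS on Euclidean distances
head(cbind(pc$x[, 1:2], md))    # identical up to sign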
Matrix factorization methods
http://joelcadwell.blogspot.com/2015/08/matrix-factorization-comes-in-many.html Review of principal component analysis (PCA), K-means clustering, nonnegative matrix factorization (NMF) and archetypal analysis (AA).
Number of components
Obtaining the number of components from cross validation of principal components regression
Partial Least Squares (PLS)
- https://en.wikipedia.org/wiki/Partial_least_squares_regression. The general underlying model of multivariate PLS is
- [math]\displaystyle{ X = T P^\mathrm{T} + E }[/math]
- [math]\displaystyle{ Y = U Q^\mathrm{T} + F }[/math]
where X is an [math]\displaystyle{ n \times m }[/math] matrix of predictors, Y is an [math]\displaystyle{ n \times p }[/math] matrix of responses; T and U are [math]\displaystyle{ n \times l }[/math] matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); P and Q are, respectively, [math]\displaystyle{ m \times l }[/math] and [math]\displaystyle{ p \times l }[/math] orthogonal loading matrices; and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance between T and U (projection matrices).
- Supervised vs. Unsupervised Learning: Exploring Brexit with PLS and PCA
- pls R package
- plsRcox R package (archived). See here for the installation.
PLS, PCR (principal components regression) and ridge regression tend to behave similarly. Ridge regression may be preferred because it shrinks smoothly, rather than in discrete steps.
High dimension
Partial least squares prediction in high-dimensional regression Cook and Forzani, 2019
Independent component analysis
ICA is another dimensionality reduction method.
ICA vs PCA
ICS vs FA
Correspondence analysis
https://francoishusson.wordpress.com/2017/07/18/multiple-correspondence-analysis-with-factominer/ and the book Exploratory Multivariate Analysis by Example Using R
t-SNE
t-Distributed Stochastic Neighbor Embedding (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.
- https://distill.pub/2016/misread-tsne/
- https://lvdmaaten.github.io/tsne/
- Application to ARCHS4
- Visualization of High Dimensional Data using t-SNE with R
- http://blog.thegrandlocus.com/2018/08/a-tutorial-on-t-sne-1
- Quick and easy t-SNE analysis in R
Visualize the random effects
http://www.quantumforest.com/2012/11/more-sense-of-random-effects/
Calibration
- How to determine calibration accuracy/uncertainty of a linear regression?
- Linear Regression and Calibration Curves
- Regression and calibration Shaun Burke
- calibrate package
- The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models by Kattan and Gerds 2018. The following code demonstrates Figure 2.
# Odds ratio = 1 and calibrated model
set.seed(666)
x = rnorm(1000)
z1 = 1 + 0*x
pr1 = 1/(1+exp(-z1))
y1 = rbinom(1000, 1, pr1)
mean(y1)  # .724, marginal prevalence of the outcome
dat1 <- data.frame(x = x, y = y1)
newdat1 <- data.frame(x = rnorm(1000), y = rbinom(1000, 1, pr1))

# Odds ratio = 1 and severely miscalibrated model
set.seed(666)
x = rnorm(1000)
z2 = -2 + 0*x
pr2 = 1/(1+exp(-z2))
y2 = rbinom(1000, 1, pr2)
mean(y2)  # .12
dat2 <- data.frame(x = x, y = y2)
newdat2 <- data.frame(x = rnorm(1000), y = rbinom(1000, 1, pr2))

library(riskRegression)
lrfit1 <- glm(y ~ x, data = dat1, family = 'binomial')
IPA(lrfit1, newdata = newdat1)
#     Variable     Brier           IPA     IPA.gain
# 1 Null model 0.1984710  0.000000e+00 -0.003160010
# 2 Full model 0.1990982 -3.160010e-03  0.000000000
# 3          x 0.1984800 -4.534668e-05 -0.003114664
1 - 0.1990982/0.1984710
# [1] -0.003160159

lrfit2 <- glm(y ~ x, family = 'binomial')
IPA(lrfit2, newdata = newdat1)
#     Variable     Brier       IPA     IPA.gain
# 1 Null model 0.1984710  0.000000 -1.859333763
# 2 Full model 0.5674948 -1.859334  0.000000000
# 3          x 0.5669200 -1.856437 -0.002896299
1 - 0.5674948/0.1984710
# [1] -1.859334
From the simulated data, we see IPA = -3.16e-3 for a calibrated model and IPA = -1.86 for a severely miscalibrated model.
ROC curve and Brier score
- Binary case:
- Y = true positive rate = sensitivity,
- X = false positive rate = 1-specificity
- Area under the curve (AUC), from Wikipedia: the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
- [math]\displaystyle{ A = \int_{\infty}^{-\infty} \mbox{TPR}(T) \mbox{FPR}'(T) \, dT = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(T'\gt T)f_1(T') f_0(T) \, dT' \, dT = P(X_1 \gt X_0) }[/math]
where [math]\displaystyle{ X_1 }[/math] is the score for a positive instance and [math]\displaystyle{ X_0 }[/math] is the score for a negative instance, and [math]\displaystyle{ f_0 }[/math] and [math]\displaystyle{ f_1 }[/math] are probability densities as defined in previous section.
- Interpretation of the AUC. A small toy example (n=12=4+8) was used to calculate the exact probability [math]\displaystyle{ P(X_1 \gt X_0) }[/math] (all 4*8=32 case-control pairs).
- It is a discrimination measure which tells us how well we can classify patients in two groups: those with and those without the outcome of interest.
- Since the measure is based on ranks, it is not sensitive to systematic errors in the calibration of the quantitative tests.
- The AUC can be defined as the probability that a randomly selected case will have a higher test result than a randomly selected control.
- Plot of sensitivity/specificity (y-axis) vs cutoff points of the biomarker
- The Mann-Whitney U test statistic (or Wilcoxon or Kruskal-Wallis test statistic) is equivalent to the AUC (Mason, 2002)
- The p-value of the Mann-Whitney U test can thus safely be used to test whether the AUC differs significantly from 0.5 (AUC of an uninformative test).
- Calculate AUC by hand. AUC is equal to the probability that a true positive is scored greater than a true negative.
- How to calculate Area Under the Curve (AUC), or the c-statistic, by hand or by R
- Introduction to the ROCR package. Add threshold labels
- http://freakonometrics.hypotheses.org/9066, http://freakonometrics.hypotheses.org/20002
- Illustrated Guide to ROC and AUC
- ROC Curves in Two Lines of R Code
- Gini and AUC. Gini = 2*AUC-1.
- Generally, an AUC value over 0.7 is indicative of a model that can distinguish between the two outcomes well. An AUC of 0.5 tells us that the model is a random classifier, and it cannot distinguish between the two outcomes.
Survival data
'Survival Model Predictive Accuracy and ROC Curves' by Heagerty & Zheng 2005
- Recall Sensitivity = [math]\displaystyle{ P(\hat{p}_i \gt c | Y_i=1) }[/math], Specificity = [math]\displaystyle{ P(\hat{p}_i \le c | Y_i=0) }[/math], where [math]\displaystyle{ Y_i }[/math] is the binary outcome, [math]\displaystyle{ \hat{p}_i }[/math] is a prediction, and [math]\displaystyle{ c }[/math] is the criterion for classifying the prediction as positive ([math]\displaystyle{ \hat{p}_i \gt c }[/math]) or negative ([math]\displaystyle{ \hat{p}_i \le c }[/math]).
- For survival data, we need a fixed time/horizon (t) to classify the data as either a case or a control. Following Heagerty and Zheng's definition (incident/dynamic), Sensitivity(c, t) = [math]\displaystyle{ P(M_i \gt c | T_i = t) }[/math] and Specificity(c, t) = [math]\displaystyle{ P(M_i \le c | T_i \gt t) }[/math], where M is a marker value or [math]\displaystyle{ Z^T \beta }[/math]. Here sensitivity measures the expected fraction of subjects with a marker greater than c among the subpopulation of individuals who die at time t, while specificity measures the fraction of subjects with a marker less than or equal to c among those who survive beyond time t.
- The AUC measures the probability that the marker value for a randomly selected case exceeds the marker value for a randomly selected control
- ROC curves are useful for comparing the discriminatory capacity of different potential biomarkers.
Confusion matrix, Sensitivity/Specificity/Accuracy
                  Predict 1           Predict 0
True 1            TP                  FN            Sens = TP/(TP+FN) = Recall; FNR = FN/(TP+FN)
True 0            FP                  TN            Spec = TN/(FP+TN)
                  PPV = TP/(TP+FP)    NPV = TN/(FN+TN)
                  FDR = FP/(TP+FP)    N = TP + FP + FN + TN
- Sensitivity = TP / (TP + FN) = Recall
- Specificity = TN / (TN + FP)
- Accuracy = (TP + TN) / N
- False discovery rate FDR = FP / (TP + FP)
- False negative rate FNR = FN / (TP + FN)
- Positive predictive value (PPV) = TP / # positive calls = TP / (TP + FP) = 1 - FDR
- Negative predictive value (NPV) = TN / # negative calls = TN / (FN + TN)
- Prevalence = (TP + FN) / N.
- Note that PPV & NPV can also be computed from sensitivity, specificity, and prevalence (see the sketch after this list):
- PPV increases with the prevalence of the disease or condition.
- For example, in the extreme case if the prevalence =1, then PPV is always 1.
- [math]\displaystyle{ \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence}+(1-\text{specificity}) \times (1-\text{prevalence})} }[/math]
- [math]\displaystyle{ \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{(1-\text{sensitivity}) \times \text{prevalence}+\text{specificity} \times (1-\text{prevalence})} }[/math]
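These two formulas translate directly into a small helper function (ppv_npv is a hypothetical name used here for illustration):

ppv_npv <- function(sens, spec, prev) {
  ppv <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))
  npv <- spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
  c(PPV = ppv, NPV = npv)
}
ppv_npv(sens = 0.9, spec = 0.9, prev = 0.01)
# PPV is only about .083 despite good sensitivity/specificity, because the disease is rare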
Precision recall curve
- Precision and recall
- Y-axis: Precision = tp/(tp + fp) = PPV, large is better
- X-axis: Recall = tp/(tp + fn) = Sensitivity, large is better
- The Relationship Between Precision-Recall and ROC Curves. Remember ROC is defined as
- Y-axis: Sensitivity = tp/(tp + fn) = Recall
- X-axis: 1-Specificity = fp/(fp + tn)
Incidence, Prevalence
https://www.health.ny.gov/diseases/chronic/basicstat.htm
Calculate area under curve by hand (using trapezoid), relation to concordance measure and the Wilcoxon–Mann–Whitney test
- https://stats.stackexchange.com/a/146174
- The meaning and use of the area under a receiver operating characteristic (ROC) curve J A Hanley, B J McNeil 1982
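A minimal sketch of the rank identity on a toy dataset like the n = 12 = 4 + 8 example above: the AUC equals the proportion of case-control pairs in which the case scores higher (ties count 1/2), which is the Mann-Whitney statistic divided by the number of pairs.

set.seed(1)
score <- c(rnorm(4, mean = 1), rnorm(8))  # 4 cases, then 8 controls
y <- c(rep(1, 4), rep(0, 8))
pos <- score[y == 1]; neg <- score[y == 0]
mean(outer(pos, neg, ">") + 0.5 * outer(pos, neg, "=="))       # AUC over 4*8 = 32 pairs
wilcox.test(pos, neg)$statistic / (length(pos) * length(neg))  # same value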
genefilter package and rowpAUCs function
- rowpAUCs function in genefilter package. The aim is to find potential biomarkers whose expression level is able to distinguish between two groups.
# source("http://www.bioconductor.org/biocLite.R") # biocLite("genefilter") library(Biobase) # sample.ExpressionSet data data(sample.ExpressionSet) library(genefilter) r2 = rowpAUCs(sample.ExpressionSet, "sex", p=0.1) plot(r2[1]) # first gene, asking specificity = .9 r2 = rowpAUCs(sample.ExpressionSet, "sex", p=1.0) plot(r2[1]) # it won't show pAUC r2 = rowpAUCs(sample.ExpressionSet, "sex", p=.999) plot(r2[1]) # pAUC is very close to AUC now
Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction
http://circ.ahajournals.org/content/115/7/928
Performance evaluation
- Testing for improvement in prediction model performance by Pepe et al 2013.
Some R packages
Comparison of two AUCs
- Statistical Assessments of AUC. This is using the pROC::roc.test function.
NRI (Net reclassification improvement)
Maximum likelihood
Difference of partial likelihood, profile likelihood and marginal likelihood
Generalized Linear Model
Lectures from a course in Simon Fraser University Statistics.
Doing magic and analyzing seasonal time series with GAM (Generalized Additive Model) in R
Link function
Link Functions versus Data Transforms
Quasi Likelihood
Quasi-likelihood is like log-likelihood. The quasi-score function (first derivative of quasi-likelihood function) is the estimating equation.
- Original paper by Peter McCullagh.
- Lecture 20 from SFU.
- U. Washington and another lecture focuses on overdispersion.
- This lecture contains a table of quasi likelihood from common distributions.
Plot
Deviance, stats::deviance() and glmnet::deviance.glmnet() from R
- It is a generalization of the idea of using the sum of squares of residuals (RSS) in ordinary least squares to cases where model-fitting is achieved by maximum likelihood. See What is Deviance? (specifically in CART/rpart) to manually compute deviance and compare it with the returned value of the deviance() function from a linear regression. Summary: deviance() = RSS in linear models.
- https://www.rdocumentation.org/packages/stats/versions/3.4.3/topics/deviance
- Likelihood ratio tests and the deviance http://data.princeton.edu/wws509/notes/a2.pdf#page=6
- Deviance(y,muhat) = 2*(loglik_saturated - loglik_proposed)
- Interpreting Residual and Null Deviance in GLM R
- Null Deviance = 2(LL(Saturated Model) - LL(Null Model)) on df = df_Sat - df_Null. The null deviance shows how well the response variable is predicted by a model that includes only the intercept (grand mean).
- Residual Deviance = 2(LL(Saturated Model) - LL(Proposed Model)) = [math]\displaystyle{ 2(LL(y|y) - LL(\hat{\mu}|y)) }[/math], df = df_Sat - df_Proposed = n-p. This is what deviance() returns.
- Null deviance > Residual deviance. Null deviance df = n-1. Residual deviance df = n-p.
## an example with offsets from Venables & Ripley (2002, p.189)
utils::data(anorexia, package = "MASS")
anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                family = gaussian, data = anorexia)
summary(anorex.1)
# Call:
# glm(formula = Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian,
#     data = anorexia)
#
# Deviance Residuals:
#      Min        1Q    Median        3Q       Max
# -14.1083   -4.2773   -0.5484    5.4838   15.2922
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  49.7711    13.3910   3.717 0.000410 ***
# Prewt        -0.5655     0.1612  -3.509 0.000803 ***
# TreatCont    -4.0971     1.8935  -2.164 0.033999 *
# TreatFT       4.5631     2.1333   2.139 0.036035 *
# ---
# Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#
# (Dispersion parameter for gaussian family taken to be 48.69504)
#
#     Null deviance: 4525.4  on 71  degrees of freedom
# Residual deviance: 3311.3  on 68  degrees of freedom
# AIC: 489.97
#
# Number of Fisher Scoring iterations: 2

deviance(anorex.1)
# [1] 3311.263
- In glmnet package. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Null deviance is defined to be 2*(loglike_sat -loglike(Null)); The NULL model refers to the intercept model, except for the Cox, where it is the 0 model. Hence dev.ratio=1-deviance/nulldev, and this deviance method returns (1-dev.ratio)*nulldev.
x = matrix(rnorm(100*2), 100, 2)
y = rnorm(100)
fit1 = glmnet(x, y)
deviance(fit1)  # one for each lambda
#  [1] 98.83277 98.53893 98.29499 98.09246 97.92432 97.78472 97.66883
#  [8] 97.57261 97.49273 97.41327 97.29855 97.20332 97.12425 97.05861
# ...
# [57] 96.73772 96.73770

fit2 <- glmnet(x, y, lambda = .1)  # fix lambda
deviance(fit2)
# [1] 98.10212

deviance(glm(y ~ x))
# [1] 96.73762
sum(residuals(glm(y ~ x))^2)
# [1] 96.73762
Saturated model
- The saturated model always has n parameters where n is the sample size.
- Logistic Regression : How to obtain a saturated model
Simulate data
Density plot
# plot a Weibull distribution with shape and scale
func <- function(x) dweibull(x, shape = 1, scale = 3.38)
curve(func, .1, 10)

func <- function(x) dweibull(x, shape = 1.1, scale = 3.38)
curve(func, .1, 10)
The shape parameter plays a role on the shape of the density function and the failure rate.
- Shape <=1: density is convex, not a hat shape.
- Shape =1: failure rate (hazard function) is constant. Exponential distribution.
- Shape >1: failure rate increases with time
Simulate data from a specified density
Signal to noise ratio
- https://en.wikipedia.org/wiki/Signal-to-noise_ratio
- https://stats.stackexchange.com/questions/31158/how-to-simulate-signal-noise-ratio
- [math]\displaystyle{ \frac{\sigma^2_{signal}}{\sigma^2_{noise}} = \frac{Var(f(X))}{Var(e)} }[/math] if Y = f(X) + e
- Page 401 of ESLII (https://web.stanford.edu/~hastie/ElemStatLearn//) 12th print.
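A minimal simulation under this definition, targeting SNR = 4 for Y = f(X) + e:

set.seed(1)
n <- 10000
x <- rnorm(n)
fx <- 2 * x                            # the signal f(X); Var(f(X)) is about 4
e <- rnorm(n, sd = sqrt(var(fx) / 4))  # scale the noise to hit SNR = 4
y <- fx + e
var(fx) / var(e)                       # close to 4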
Some examples of signal to noise ratio
- ESLII_print12.pdf: .64, 5, 4
- Yuan and Lin 2006: 1.8, 3
- A framework for estimating and testing qualitative interactions with applications to predictive biomarkers Roth, Biostatistics, 2018
Effect size, Cohen's d and volcano plot
- https://en.wikipedia.org/wiki/Effect_size (See also the estimation by the pooled sd)
- [math]\displaystyle{ \theta = \frac{\mu_1 - \mu_2} \sigma, }[/math]
- Effect size, sample size and power from Learning statistics with R: A tutorial for psychology students and other beginners.
- t-statistic and Cohen's d for the case of mean difference between two independent groups
- Cohen’s D for Experimental Planning
- Volcano plot
- Y-axis: -log(p)
- X-axis: log2 fold change OR effect size (Cohen's D). An example from RNA-Seq data.
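A minimal sketch of Cohen's d with the pooled standard deviation, as in the definition above:

set.seed(1)
g1 <- rnorm(30, mean = 1)
g2 <- rnorm(30, mean = 0)
n1 <- length(g1); n2 <- length(g2)
sp <- sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))  # pooled sd
(mean(g1) - mean(g2)) / sp                                             # Cohen's d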
Multiple comparisons
- If you perform experiments over and over, you're bound to find something. So the significance level must be adjusted down when performing multiple hypothesis tests.
- http://www.gs.washington.edu/academics/courses/akey/56008/lecture/lecture10.pdf
- Book 'Multiple Comparison Using R' by Bretz, Hothorn and Westfall, 2011.
- Plot a histogram of p-values, a post from varianceexplained.org. The anti-conservative histogram (a spike near zero with a roughly flat tail on the right) is what we typically see in e.g. microarray gene expression data.
- Comparison of different ways of multiple-comparison in R.
For example, suppose 550 out of 10,000 genes are significant at the .05 level:
- P-value < .05 ==> Expect .05*10,000=500 false positives
- False discovery rate < .05 ==> Expect .05*550 =27.5 false positives
- Family wise error rate < .05 ==> The probability of at least one false positive is < .05
According to Lifetime Risk of Developing or Dying From Cancer, there is a 39.7% risk of developing a cancer for a male during his lifetime (in other words, 1 out of every 2.52 men in the US will develop some kind of cancer during his lifetime) and 37.6% for a female. So the probability of getting at least one cancer patient in a 3-generation family (assuming three males and three females) is 1 - .603^3 * .624^3 ≈ 0.95.
False Discovery Rate
- https://en.wikipedia.org/wiki/False_discovery_rate
- Paper Definition by Benjamini and Hochberg in JRSS B 1995.
- A comic
- P-value vs false discovery rate vs family wise error rate. See 10 statistics tip or Statistics for Genomics (140.688) from Jeff Leek. Suppose 550 out of 10,000 genes are significant at .05 level
- P-value < .05 implies expecting .05*10000 = 500 false positives
- False discovery rate < .05 implies expecting .05*550 = 27.5 false positives
- Family wise error rate (P (# of false positives ≥ 1)) < .05. See Understanding Family-Wise Error Rate
- Statistical significance for genomewide studies by Storey and Tibshirani.
- What’s the probability that a significant p-value indicates a true effect?
- http://onetipperday.sterding.com/2015/12/my-note-on-multiple-testing.html
- A practical guide to methods controlling false discoveries in computational biology by Korthauer, et al 2018, BMC Genome Biology 2019
- onlineFDR: an R package to control the false discovery rate for growing data repositories
Suppose [math]\displaystyle{ p_1 \leq p_2 \leq ... \leq p_n }[/math]. Then
- [math]\displaystyle{ \text{FDR}_i = \text{min}(1, n* p_i/i) }[/math].
So if the number of tests ([math]\displaystyle{ n }[/math]) is large and/or the original p value ([math]\displaystyle{ p_i }[/math]) is large, then FDR can hit the value 1.
However, the simple formula above does not guarantee the monotonicity property of the FDR, so the calculation in R is more complicated. See How Does R Calculate the False Discovery Rate.
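The monotonicity fix is a cumulative minimum taken from the largest p-value downward; a minimal check against p.adjust():

set.seed(1)
p <- sort(runif(10)^2)                                # some p-values, sorted
n <- length(p)
fdr <- rev(cummin(rev(pmin(1, n * p / seq_len(n)))))  # BH with monotonicity enforced
all.equal(fdr, p.adjust(p, method = "BH"))            # TRUE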
Below are the histograms of p-values and FDR (BH-adjusted) values from a real dataset (Pomeroy in BRB-ArrayTools).
Next is a scatterplot with histograms on the margins from a null dataset.
q-value
q-value is defined as the minimum FDR that can be attained when calling that feature significant (i.e., expected proportion of false positives incurred when calling that feature significant).
If gene X has a q-value of 0.013 it means that 1.3% of genes that show p-values at least as small as gene X are false positives.
SAM/Significance Analysis of Microarrays
The percentile option is used to define the number of falsely called genes based on 'B' permutations. If we use the 90th percentile, the number of significant genes will be smaller than if we use the 50th percentile (median).
In the BRCA dataset, using the 90th percentile gives 29 genes vs 183 genes using the median.
Multivariate permutation test
In the BRCA dataset, using 80% confidence gives 116 genes vs 237 genes with 50% confidence (assuming the maximum proportion of false discoveries is 10%). The method is published in EL Korn, JF Troendle, LM McShane and R Simon, Controlling the number of false discoveries: application to high-dimensional genomic data, Journal of Statistical Planning and Inference, vol 124, 379-398 (2004).
String Permutations Algorithm
Solving the Empirical Bayes Normal Means Problem with Correlated Noise Sun 2018
The package cashr and the source code of the paper
Bayes
Bayes factor
Empirical Bayes method
- http://en.wikipedia.org/wiki/Empirical_Bayes_method
- Introduction to Empirical Bayes: Examples from Baseball Statistics
Naive Bayes classifier
Understanding Naïve Bayes Classifier Using R
MCMC
Speeding up Metropolis-Hastings with Rcpp
offset() function
- An offset is a term to be added to a linear predictor, such as in a generalised linear model, with known coefficient 1 rather than an estimated coefficient.
- https://www.rdocumentation.org/packages/stats/versions/3.5.0/topics/offset
Offset in Poisson regression
- http://rfunction.com/archives/223
- https://stats.stackexchange.com/questions/11182/when-to-use-an-offset-in-a-poisson-regression
- We need to model rates instead of counts
- More generally, you use offsets because the units of observation are different in some dimension (different populations, different geographic sizes) and the outcome is proportional to that dimension.
An example from here
Y  <- c(15, 7, 36, 4, 16, 12, 41, 15)
N  <- c(4949, 3534, 12210, 344, 6178, 4883, 11256, 7125)
x1 <- c(-0.1, 0, 0.2, 0, 1, 1.1, 1.1, 1)
x2 <- c(2.2, 1.5, 4.5, 7.2, 4.5, 3.2, 9.1, 5.2)

glm(Y ~ offset(log(N)) + (x1 + x2), family = poisson)  # two variables
# Coefficients:
# (Intercept)           x1           x2
#      -6.172       -0.380        0.109
#
# Degrees of Freedom: 7 Total (i.e. Null);  5 Residual
# Null Deviance:     10.56
# Residual Deviance: 4.559    AIC: 46.69

glm(Y ~ offset(log(N)) + I(x1 + x2), family = poisson)  # one variable
# Coefficients:
# (Intercept)   I(x1 + x2)
#    -6.12652      0.04746
#
# Degrees of Freedom: 7 Total (i.e. Null);  6 Residual
# Null Deviance:     10.56
# Residual Deviance: 8.001    AIC: 48.13
Offset in Cox regression
An example from biospear::PCAlasso()
coxph(Surv(time, status) ~ offset(off.All), data = data)
# Call:  coxph(formula = Surv(time, status) ~ offset(off.All), data = data)
#
# Null model
#   log likelihood= -2391.736
#   n= 500

# versus without using offset()
coxph(Surv(time, status) ~ off.All, data = data)
# Call:
# coxph(formula = Surv(time, status) ~ off.All, data = data)
#
#          coef exp(coef) se(coef)    z    p
# off.All 0.485     1.624    0.658 0.74 0.46
#
# Likelihood ratio test=0.54  on 1 df, p=0.5
# n= 500, number of events= 438

coxph(Surv(time, status) ~ off.All, data = data)$loglik
# [1] -2391.702 -2391.430   # initial coef estimate, final coef
Offset in linear regression
- https://www.rdocumentation.org/packages/stats/versions/3.5.1/topics/lm
- https://stackoverflow.com/questions/16920628/use-of-offset-in-lm-regression-r
Overdispersion
https://en.wikipedia.org/wiki/Overdispersion
Var(Y) = phi * E(Y). If phi > 1, we have overdispersion relative to the Poisson; if phi < 1, underdispersion (rare). See the sketch below.
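A quick way to estimate phi is the quasipoisson family; a minimal sketch on counts simulated with extra-Poisson variation:

set.seed(1)
x <- rnorm(200)
mu <- exp(1 + 0.5 * x)
y <- rnbinom(200, mu = mu, size = 2)  # overdispersed relative to Poisson
fit <- glm(y ~ x, family = quasipoisson)
summary(fit)$dispersion               # estimated phi, well above 1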
Heterogeneity
The Poisson model fit is not good when residual deviance/df >> 1. The lack of fit may be due to missing data, missing covariates, or overdispersion.
Subjects within each covariate combination still differ greatly.
- https://onlinecourses.science.psu.edu/stat504/node/169.
- https://onlinecourses.science.psu.edu/stat504/node/162
Consider Quasi-Poisson or negative binomial.
Test of overdispersion or underdispersion in Poisson models
Negative Binomial
The mean of the Poisson distribution can itself be thought of as a random variable drawn from the gamma distribution thereby introducing an additional free parameter.
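This mixture representation is easy to check by simulation (a minimal sketch): drawing the Poisson mean from a gamma distribution reproduces the negative binomial.

set.seed(1)
n <- 1e5; size <- 3; mu <- 2
lambda <- rgamma(n, shape = size, rate = size / mu)  # E(lambda) = mu
y1 <- rpois(n, lambda)                               # Poisson-gamma mixture
y2 <- rnbinom(n, size = size, mu = mu)               # direct negative binomial
c(mean(y1), mean(y2))  # both near mu
c(var(y1), var(y2))    # both near mu + mu^2/size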
Binomial
- Generating and modeling over-dispersed binomial data
- simstudy package. The final data sets can represent data from randomized control trials, repeated measure (longitudinal) designs, and cluster randomized trials. Missingness can be generated using various mechanisms (MCAR, MAR, NMAR).
Count data
Zero counts
Bias
Bias in Small-Sample Inference With Count-Data Models Blackburn 2019
Survival data analysis
Logistic regression
Simulate binary data from the logistic model
set.seed(666)
x1 = rnorm(1000)  # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2       # linear combination with a bias
pr = 1/(1 + exp(-z))      # pass through an inv-logit function
y = rbinom(1000, 1, pr)   # Bernoulli response variable

# now feed it to glm:
df = data.frame(y = y, x1 = x1, x2 = x2)
glm(y ~ x1 + x2, data = df, family = "binomial")
Building a Logistic Regression model from scratch
https://www.analyticsvidhya.com/blog/2015/10/basics-logistic-regression
Odds ratio
Calculate the odds ratio from the coefficient estimates; see this post.
require(MASS)
N <- 100  # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y  <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)

# dichotomize Y and do logistic regression
Yfac   <- cut(Y, breaks = c(-Inf, median(Y), Inf), labels = c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family = binomial(link = "logit"))
exp(cbind(coef(glmFit), confint(glmFit)))
Medical applications
Subgroup analysis
Other related keywords: recursive partitioning, randomized clinical trials (RCT)
- Thinking about different ways to analyze sub-groups in an RCT
- Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials I Lipkovich, A Dmitrienko - Statistics in medicine, 2017
- Personalized medicine:Four perspectives of tailored medicine SJ Ruberg, L Shen - Statistics in Biopharmaceutical Research, 2015
- Berger, J. O., Wang, X., and Shen, L. (2014), “A Bayesian Approach to Subgroup Identification,” Journal of Biopharmaceutical Statistics, 24, 110–129.
- Change over time is not "treatment response"
Interaction analysis
- Goal: assessing the predictiveness of biomarkers by testing their interaction (strength) with the treatment.
- Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials Kang et al 2018
- http://www.stat.purdue.edu/~ghobbs/STAT_512/Lecture_Notes/ANOVA/Topic_27.pdf#page=15. For survival data, the y-axis is the survival time, B1=treatment, B2=control, and the X-axis is the treatment-effect modifying score. But as seen on page 16, the effects may not be separated.
- Identification of biomarker-by-treatment interactions in randomized clinical trials with survival outcomes and high-dimensional spaces N Ternès, F Rotolo, G Heinze, S Michiels - Biometrical Journal, 2017
- Designing a study to evaluate the benefit of a biomarker for selecting patient treatment Janes 2015
- A visualization method measuring the performance of biomarkers for guiding treatment decisions Yang et al 2015. Predictiveness curves were used a lot.
- Combining Biomarkers to Optimize Patient Treatment Recommendations Kang et al 2014. Several simulations are conducted.
- An approach to evaluating and comparing biomarkers for patient treatment selection Janes et al 2014
- A Framework for Evaluating Markers Used to Select Patient Treatment Janes et al 2014
- Tian, L., Alizaden, A. A., Gentles, A. J., and Tibshirani, R. (2014) “A Simple Method for Detecting Interactions Between a Treatment and a Large Number of Covariates,” and the book chapter.
- Statistical Methods for Evaluating and Comparing Biomarkers for Patient Treatment Selection Janes et al 2013
- Assessing Treatment-Selection Markers using a Potential Outcomes Framework Huang et al 2012
- Methods for Evaluating Prediction Performance of Biomarkers and Tests Pepe et al 2012
- Measuring the performance of markers for guiding treatment decisions by Janes, et al 2011.
cf <- c(2, 1, .5, 0)
f1 <- function(x) { z <- cf[1] + cf[3] + (cf[2]+cf[4])*x; 1/(1 + exp(-z)) }
f0 <- function(x) { z <- cf[1] + cf[2]*x;                 1/(1 + exp(-z)) }

par(mfrow = c(1, 3))
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker A/D Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .4, c("control", "treatment"), col = c("black", "red"), lwd = 2)

cf <- c(.1, 1, -.1, .5)
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker G Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .4, c("control", "treatment"), col = c("black", "red"), lwd = 2)
abline(v = -cf[3]/cf[4], lty = 2)

cf <- c(1, -1, 1, 2)
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker B Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .85, c("control", "treatment"), col = c("black", "red"), lwd = 2)
abline(v = -cf[3]/cf[4], lty = 2)
File:PredcurveLogit.svg - An Approach to Evaluating and Comparing Biomarkers for Patient Treatment Selection The International Journal of Biostatistics by Janes, 2014. Y-axis is risk given marker, not P(T > t0|X). Good details.
- Gunter, L., Zhu, J., and Murphy, S. (2011), “Variable Selection for Qualitative Interactions in Personalized Medicine While Controlling the Family-Wise Error Rate,” Journal of Biopharmaceutical Statistics, 21, 1063–1078.
Statistical Learning
- Elements of Statistical Learning Book homepage
- From Linear Models to Machine Learning by Norman Matloff
- 10 Free Must-Read Books for Machine Learning and Data Science
- 10 Statistical Techniques Data Scientists Need to Master
- Linear regression
- Classification: Logistic Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis
- Resampling methods: Bootstrapping and Cross-Validation
- Subset selection: Best-Subset Selection, Forward Stepwise Selection, Backward Stepwise Selection, Hybrid Methods
- Shrinkage/regularization: Ridge regression, Lasso
- Dimension reduction: Principal Components Regression, Partial least squares
- Nonlinear models: Piecewise function, Spline, generalized additive model
- Tree-based methods: Bagging, Boosting, Random Forest
- Support vector machine
- Unsupervised learning: PCA, k-means, Hierarchical
- 15 Types of Regression you should know
LDA (Fisher's linear discriminant), QDA
- https://en.wikipedia.org/wiki/Linear_discriminant_analysis
- How to perform Logistic Regression, LDA, & QDA in R
- Discriminant Analysis: Statistics All The Way
- Multiclass linear discriminant analysis with ultrahigh‐dimensional features Li 2019
Bagging
Chapter 8 of the book.
- Bootstrap mean is approximately a posterior average.
- Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
- [math]\displaystyle{ \hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x). }[/math]
Where Bagging Might Work Better Than Boosting
CLASSIFICATION FROM SCRATCH, BAGGING AND FORESTS 10/8
Boosting
- Ch8.2 Bagging, Random Forests and Boosting of An Introduction to Statistical Learning and the code.
- An Attempt To Understand Boosting Algorithm
- gbm package. An implementation of extensions to Freund and Schapire's AdaBoost algorithm and Friedman's gradient boosting machine. Includes regression methods for least squares, absolute loss, t-distribution loss, quantile regression, logistic, multinomial logistic, Poisson, Cox proportional hazards partial likelihood, AdaBoost exponential loss, Huberized hinge loss, and Learning to Rank measures (LambdaMart).
- https://www.biostat.wisc.edu/~kendzior/STAT877/illustration.pdf
- http://www.is.uni-freiburg.de/ressourcen/business-analytics/10_ensemblelearning.pdf and exercise
- Classification from scratch
AdaBoost
AdaBoost.M1 by Freund and Schapire (1997):
The error rate on the training sample is [math]\displaystyle{ \bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)), }[/math]
Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers [math]\displaystyle{ G_m(x), m=1,2,\dots,M. }[/math]
The predictions from all of them are combined through a weighted majority vote to produce the final prediction: [math]\displaystyle{ G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)]. }[/math] Here [math]\displaystyle{ \alpha_1,\alpha_2,\dots,\alpha_M }[/math] are computed by the boosting algorithm and weight the contribution of each respective [math]\displaystyle{ G_m(x) }[/math]. Their effect is to give higher influence to the more accurate classifiers in the sequence.
Dropout regularization
DART: Dropout Regularization in Boosting Ensembles
Gradient boosting
- https://en.wikipedia.org/wiki/Gradient_boosting
- Machine Learning Basics - Gradient Boosting & XGBoost
- Gradient Boosting Essentials in R Using XGBOOST
Gradient descent
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function (Wikipedia).
- An Introduction to Gradient Descent and Linear Regression Easy to understand based on simple linear regression. Code is provided too.
- Applying gradient descent – primer / refresher
- An overview of Gradient descent optimization algorithms
- A Complete Tutorial on Ridge and Lasso Regression in Python
- How to choose the learning rate?
- Machine learning from Andrew Ng
- http://scikit-learn.org/stable/modules/sgd.html
- R packages
The error function from a simple linear regression looks like
- [math]\displaystyle{ \begin{align} Err(m,b) &= \frac{1}{n}\sum_{i=1}^n (y_i - (m x_i + b))^2. \end{align} }[/math]
We compute the gradient first for each parameters.
- [math]\displaystyle{ \begin{align} \frac{\partial Err}{\partial m} &= \frac{2}{n} \sum_{i=1}^n -x_i(y_i - (m x_i + b)), \\ \frac{\partial Err}{\partial b} &= \frac{2}{n} \sum_{i=1}^n -(y_i - (m x_i + b)) \end{align} }[/math]
The gradient descent algorithm uses an iterative method to update the estimates using a tuning parameter called learning rate.
new_m <- m_current - learningRate * m_gradient
new_b <- b_current - learningRate * b_gradient
After each iteration, the derivative gets closer to zero. Coding in R for the simple linear regression (see also the sketch below).
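A minimal sketch of those update rules on simulated data; the learning rate and iteration count are arbitrary choices here:

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)
m <- 0; b <- 0; rate <- 0.1
for (i in 1:500) {
  grad_m <- mean(-2 * x * (y - (m * x + b)))  # dErr/dm
  grad_b <- mean(-2 * (y - (m * x + b)))      # dErr/db
  m <- m - rate * grad_m
  b <- b - rate * grad_b
}
c(b, m)          # close to the OLS solution
coef(lm(y ~ x))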
Gradient descent vs Newton's method
- What is the difference between Gradient Descent and Newton's Gradient Descent?
- Newton's Method vs Gradient Descent Method in tacking saddle points in Non-Convex Optimization
- Gradient Descent vs Newton Method
Classification and Regression Trees (CART)
Construction of the tree classifier
- Node proportion
- [math]\displaystyle{ p(1|t) + \dots + p(6|t) =1 }[/math], where [math]\displaystyle{ p(j|t) }[/math] defines the node proportions (the class proportion of class j in node t; here we assume there are 6 classes).
- Impurity of node t
- [math]\displaystyle{ i(t) }[/math] is a nonnegative function [math]\displaystyle{ \phi }[/math] of [math]\displaystyle{ p(1|t), \dots, p(6|t) }[/math] such that [math]\displaystyle{ \phi(1/6,1/6,\dots,1/6) }[/math] is the maximum and [math]\displaystyle{ \phi(1,0,\dots,0) = \phi(0,1,0,\dots,0) = \dots = \phi(0,0,0,0,0,1) = 0 }[/math]. That is, the node impurity is largest when all classes are equally mixed together in it, and smallest when the node contains only one class.
- Entropy impurity (the Gini index, [math]\displaystyle{ i(t) = 1 - \sum_j p(j|t)^2 }[/math], is another common choice)
- [math]\displaystyle{ i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t). }[/math]
- Goodness of the split s on node t
- [math]\displaystyle{ \Delta i(s, t) = i(t) - p_L i(t_L) - p_R i(t_R), }[/math] where [math]\displaystyle{ p_L }[/math] is the proportion of the cases in t that go into the left node [math]\displaystyle{ t_L }[/math] and [math]\displaystyle{ p_R }[/math] the proportion that go into the right node [math]\displaystyle{ t_R }[/math].
A tree was grown in the following way: At the root node [math]\displaystyle{ t_1 }[/math], a search was made through all candidate splits to find that split [math]\displaystyle{ s^* }[/math] which gave the largest decrease in impurity;
- [math]\displaystyle{ \Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1). }[/math]
- Class character of a terminal node was determined by the plurality rule. Specifically, if [math]\displaystyle{ p(j_0|t)=\max_j p(j|t) }[/math], then t was designated as a class [math]\displaystyle{ j_0 }[/math] terminal node.
R packages
Partially additive (generalized) linear model trees
- https://eeecon.uibk.ac.at/~zeileis/news/palmtree/
- https://cran.r-project.org/web/packages/palmtree/index.html
Supervised Classification, Logistic and Multinomial
Variable selection
Review
Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.
Variable selection and variable importance plot
Variable selection and cross-validation
- http://freakonometrics.hypotheses.org/19925
- http://ellisp.github.io/blog/2016/06/05/bootstrap-cv-strategies/
Mallow Cp
Mallows's Cp addresses the issue of overfitting. The Cp statistic calculated on a sample of data estimates the mean squared prediction error (MSPE); a small R sketch follows the links below.
- [math]\displaystyle{ E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2, }[/math]
The Cp statistic is defined as
- [math]\displaystyle{ C_p={SSE_p \over S^2} - N + 2P. }[/math]
- https://en.wikipedia.org/wiki/Mallows%27s_Cp
- Used in Yuan & Lin (2006) group lasso. The degrees of freedom are estimated by the bootstrap or perturbation methods. Their paper mentioned the performance is comparable with that of 5-fold CV but is computationally much faster.
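A minimal sketch of the Cp formula on mtcars, with S^2 taken from the full model (the submodel here is an arbitrary choice):

full <- lm(mpg ~ ., data = mtcars)
sub <- lm(mpg ~ wt + cyl, data = mtcars)
s2 <- summary(full)$sigma^2      # S^2 from the full model
sse <- sum(residuals(sub)^2)     # SSE_p of the submodel
p <- length(coef(sub))           # number of parameters P
sse / s2 - nrow(mtcars) + 2 * p  # Mallows's Cp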
Variable selection for mode regression
http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Chen & Zhou, Journal of applied statistics ,June 2017
Neural network
- Build your own neural network in R
- (Video) 10.2: Neural Networks: Perceptron Part 1 - The Nature of Code from the Coding Train. The book THE NATURE OF CODE by DANIEL SHIFFMAN
- CLASSIFICATION FROM SCRATCH, NEURAL NETS. The ROCR package was used to produce the ROC curve.
Support vector machine (SVM)
- Improve SVM tuning through parallelism by using the foreach and doParallel packages.
Quadratic Discriminant Analysis (qda), KNN
Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN
Regularization
Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
Ridge regression
- What is ridge regression?
- Why does ridge estimate become better than OLS by adding a constant to the diagonal? The estimates become more stable if the covariates are highly correlated.
- (In ridge regression) the matrix we need to invert no longer has determinant near zero, so the solution does not lead to uncomfortably large variance in the estimated parameters. And that’s a good thing. See this post.
- Multicolinearity and ridge regression: results on type I errors, power and heteroscedasticity
Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.
Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to a constraint [math]\displaystyle{ \sum|\beta_j|^2 \le t }[/math]. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator [math]\displaystyle{ \hat{\beta} = (X^TX + \lambda I)^{-1} X^T y }[/math] where [math]\displaystyle{ \lambda=\lambda(t) }[/math], a function of t, the variance is smaller than that of the OLS estimator.
The solution exists if [math]\displaystyle{ \lambda \gt 0 }[/math] even if [math]\displaystyle{ n \lt p }[/math].
Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.
Geometrically (contour plot of the cost function), the L1 penalty (the sum of absolute values of coefficients) tends to produce some exactly-zero coefficients, because the solution can hit a corner of the constraint region. For example, in the 2D case (X-axis=[math]\displaystyle{ \beta_0 }[/math], Y-axis=[math]\displaystyle{ \beta_1 }[/math]), the L1 penalty region [math]\displaystyle{ |\beta_0| + |\beta_1| \le t }[/math] is a diamond whereas the L2 penalty region ([math]\displaystyle{ \beta_0^2 + \beta_1^2 \le t }[/math]) is a circle.
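A minimal sketch of the closed-form ridge solution (no intercept; X standardized; the true coefficients and lambda are arbitrary choices):

set.seed(1)
n <- 50; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- X %*% c(2, -1, 0, 0, 1) + rnorm(n)
lambda <- 1
solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)  # ridge estimate, shrunk toward 0
solve(t(X) %*% X, t(X) %*% y)                     # OLS for comparison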
Lasso/glmnet, adaptive lasso and FAQs
- https://en.wikipedia.org/wiki/Lasso_(statistics). It has a discussion of the case when two covariates are highly correlated. For example, if gene [math]\displaystyle{ j }[/math] and gene [math]\displaystyle{ k }[/math] are identical, then the values of [math]\displaystyle{ \beta _{j} }[/math] and [math]\displaystyle{ \beta _{k} }[/math] that minimize the lasso objective function are not uniquely determined. Elastic Net has been designed to address this shortcoming.
- The tendency of strongly correlated covariates to receive similar regression coefficients is referred to as the grouping effect. From the Wikipedia page: "one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does. In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso)".
- Glmnet Vignette. It tries to minimize [math]\displaystyle{ RSS(\beta) + \lambda [(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1] }[/math]. The elastic-net penalty is controlled by [math]\displaystyle{ \alpha }[/math], which bridges the gap between lasso ([math]\displaystyle{ \alpha = 1 }[/math]) and ridge ([math]\displaystyle{ \alpha = 0 }[/math]). Following is a CV curve (adaptive lasso) using the example from glmnet(). Two vertical lines are indicated: the left one is lambda.min (which gives the minimum mean cross-validated error) and the right one is lambda.1se (the most regularized model such that the error is within one standard error of the minimum). For the tuning parameter [math]\displaystyle{ \lambda }[/math]:
- The larger the [math]\displaystyle{ \lambda }[/math], the more coefficients are shrunk to zero (think of the coefficient path plots) and thus the simpler (more regularized) the model.
- If [math]\displaystyle{ \lambda }[/math] is zero, the problem reduces to ordinary regression; as [math]\displaystyle{ \lambda }[/math] goes to infinity, all coefficients become zero.
- In terms of the bias-variance tradeoff, the larger the [math]\displaystyle{ \lambda }[/math], the higher the bias and the lower the variance of the coefficient estimators.
File:Glmnetplot.svg File:Glmnet path.svg File:Glmnet l1norm.svg
set.seed(1010)
n=1000; p=100
nzc=trunc(p/10)
x=matrix(rnorm(n*p),n,p)
beta=rnorm(nzc)
fx= x[,seq(nzc)] %*% beta
eps=rnorm(n)*5
y=drop(fx+eps)
px=exp(fx)
px=px/(1+px)
ly=rbinom(n=length(px),prob=px,size=1)

## Full lasso
set.seed(999)
cv.full <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE)
plot(cv.full)  # cross-validation curve and two lambda's
plot(glmnet(x, ly, family='binomial', alpha=1), xvar="lambda", label=TRUE) # coefficient path plot
plot(glmnet(x, ly, family='binomial', alpha=1))  # L1 norm plot
log(cv.full$lambda.min) # -4.546394
log(cv.full$lambda.1se) # -3.61605
sum(coef(cv.full, s=cv.full$lambda.min) != 0) # 44

## Ridge Regression to create the Adaptive Weights Vector
set.seed(999)
cv.ridge <- cv.glmnet(x, ly, family='binomial', alpha=0, parallel=TRUE)
wt <- 1/abs(matrix(coef(cv.ridge, s=cv.ridge$lambda.min)[, 1][2:(ncol(x)+1)]))^1
## Using gamma = 1, exclude intercept

## Adaptive Lasso using the 'penalty.factor' argument
set.seed(999)
cv.lasso <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE, penalty.factor=wt)
# default type.measure="deviance" for logistic regression
plot(cv.lasso)
log(cv.lasso$lambda.min) # -2.995375
log(cv.lasso$lambda.1se) # -0.7625655
sum(coef(cv.lasso, s=cv.lasso$lambda.min) != 0) # 34
- A list of potential lambdas: see the Linear Regression case. The λ sequence is determined by lambda.max and lambda.min.ratio. The latter (default is ifelse(nobs<nvars,0.01,0.0001)) is the ratio of the smallest value of the generated λ sequence (say lambda.min) to lambda.max. The program then generates nlambda values evenly spaced on the log scale from lambda.max down to lambda.min. lambda.max is not user-specified but is easily computed from the input x and y; it is the smallest value of lambda for which all the coefficients are zero.
- Choosing hyper-parameters (α and λ) in penalized regression by Florian Privé
- lambda.min vs lambda.1se
- lambda.1se is the largest value of λ whose CV error is within one standard error of the minimum. In other words, using lambda.1se as the selected value of λ results in a model that is slightly simpler than the best model (lambda.min) but which cannot be distinguished from the best model in terms of error, given the uncertainty in the k-fold CV estimate of the error of the best model.
- lambda.min is the value of λ at the lowest CV error. The error at this value of λ is the average of the errors over the k folds, so this estimate of the error is itself uncertain.
- https://www.rdocumentation.org/packages/glmnet/versions/2.0-10/topics/glmnet
- glmnetUtils: quality of life enhancements for elastic net regression with glmnet
- Mixing parameter: alpha=1 gives the lasso penalty, alpha=0 the ridge penalty, and anything between 0 and 1 gives the elastic net.
- Ridge regression uses the Euclidean distance/L2 norm as the penalty. It won't remove any variables.
- Lasso uses L1-norm as the penalty. Some of the coefficients may be shrunk exactly to zero.
- In ridge regression and lasso, what is lambda?
- Lambda is a penalty coefficient. Large lambda will shrink the coefficients.
- cv.glmnet()$lambda.1se gives the most regularized model such that error is within one standard error of the minimum
- cv.glmnet() has a logical parameter parallel which is useful if a cluster or cores have been previously allocated
- Ridge regression and the LASSO
- Standard error/Confidence interval
- Standard Errors in GLMNET and Confidence intervals for Ridge regression
- Why are SEs not meaningful and usually not provided in penalized regression?
- Hint: standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods.
- Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias.
- The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part.
- Any bootstrap-based calculations can only give an assessment of the variance of the estimates.
- Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
- Hottest glmnet questions from stackexchange.
- Standard errors for lasso prediction. There might not be a consensus on a statistically valid method of calculating standard errors for the lasso predictions.
- Code for Adaptive-Lasso for Cox's proportional hazards model by Zhang & Lu (2007). This can compute the SE of estimates. The weights are originally based on the maximizers of the log partial likelihood. However, the beta may not be estimable in cases such as high-dimensional gene data, or the beta may be unstable if strong collinearity exists among covariates. In such cases, robust estimators such as ridge regression estimators would be used to determine the weights.
- LASSO vs Least angle regression
- https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf
- Least Angle Regression, Forward Stagewise and the Lasso
- https://www.quora.com/What-is-Least-Angle-Regression-and-when-should-it-be-used
- A simple explanation of the Lasso and Least Angle Regression
- https://stats.stackexchange.com/questions/4663/least-angle-regression-vs-lasso
- https://cran.r-project.org/web/packages/lars/index.html
- Oracle property and adaptive lasso
- Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties, Fan & Li (2001) JASA
- Adaptive Lasso: What it is and how to implement in R. Adaptive lasso seeks to minimize [math]\displaystyle{ RSS(\beta) + \lambda \sum_1^p \hat{\omega}_j |\beta_j| }[/math] where [math]\displaystyle{ \lambda }[/math] is the tuning parameter, [math]\displaystyle{ \hat{\omega}_j = \frac{1}{(|\hat{\beta}_j^{ini}|)^\gamma} }[/math] is the adaptive weights vector and [math]\displaystyle{ \hat{\beta}_j^{ini} }[/math] is an initial estimate of the coefficients obtained through ridge regression. Adaptive Lasso ends up penalizing more those coefficients with lower initial estimates. [math]\displaystyle{ \gamma }[/math] is a positive constant for adjustment of the adaptive weight vector, and the authors suggest the possible values of 0.5, 1 and 2.
- When n goes to infinity, [math]\displaystyle{ \hat{\omega}_j |\beta_j| }[/math] converges to [math]\displaystyle{ I(\beta_j \neq 0) }[/math]. So the adaptive Lasso procedure can be regarded as an automatic implementation of best-subset selection in some asymptotic sense.
- What is the oracle property of an estimator? An oracle estimator must be consistent in 1) variable selection and 2) consistent parameter estimation.
- (Linear regression) The adaptive lasso and its oracle properties Zou (2006, JASA)
- (Cox model) Adaptive-LASSO for Cox's proportional hazard model by Zhang and Lu (2007, Biometrika)
- When the LASSO fails???. Adaptive lasso is used to demonstrate its usefulness.
- A deep dive into glmnet: penalty.factor, standardize, offset
- Lambda sequence is not affected by the "penalty.factor"
- How "penalty.factor" used by the objective function may need to be corrected
- Some issues:
- With a group of highly correlated features, the lasso tends to select among them arbitrarily.
- Empirically, ridge often has better predictive performance than the lasso, but the lasso leads to sparser solutions.
- Elastic-net (Zou & Hastie '05) aims to address these issues: hybrid between Lasso and ridge regression, uses L1 and L2 penalties.
- Gradient-Free Optimization for GLMNET Parameters
- Gsslasso Cox: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information, Tang et al BMC Bioinformatics 2019
Lasso logistic regression
https://freakonometrics.hypotheses.org/52894
Lagrange Multipliers
A Simple Explanation of Why Lagrange Multipliers Works
How to solve lasso/convex optimization
- Convex Optimization by Boyd S, Vandenberghe L, Cambridge 2004. It is cited by Zhang & Lu (2007). The interior point algorithm can be used to solve the optimization problem in adaptive lasso.
- Review of gradient descent:
- Finding maximum: [math]\displaystyle{ w^{(t+1)} = w^{(t)} + \eta \frac{dg(w)}{dw} }[/math], where [math]\displaystyle{ \eta }[/math] is stepsize.
- Finding minimum: [math]\displaystyle{ w^{(t+1)} = w^{(t)} - \eta \frac{dg(w)}{dw} }[/math].
- What is the difference between Gradient Descent and Newton's Gradient Descent? Newton's method requires [math]\displaystyle{ g''(w) }[/math], more smoothness of g(.).
- Finding the minimum for multiple variables (gradient descent): [math]\displaystyle{ w^{(t+1)} = w^{(t)} - \eta \nabla g(w^{(t)}) }[/math]. For the least squares problem, [math]\displaystyle{ g(w) = RSS(w) }[/math].
- Finding the minimum for multiple variables in the least squares problem (minimize [math]\displaystyle{ RSS(w) }[/math]): [math]\displaystyle{ \text{partial}(j) = -2\sum_i h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = w_j^{(t)} - \eta \; \text{partial}(j) }[/math]
- Finding the minimum for multiple variables in the ridge regression problem (minimize [math]\displaystyle{ RSS(w)+\lambda ||w||_2^2=(y-Hw)'(y-Hw)+\lambda w'w }[/math]): [math]\displaystyle{ \text{partial}(j) = -2\sum_i h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = (1-2\eta \lambda) w_j^{(t)} - \eta \; \text{partial}(j) }[/math]. Compared to the closed-form approach [math]\displaystyle{ \hat{w} = (H'H + \lambda I)^{-1}H'y }[/math]: 1. the inverse exists even when N < D as long as [math]\displaystyle{ \lambda \gt 0 }[/math], and 2. the complexity of the inversion is [math]\displaystyle{ O(D^3) }[/math], where D is the dimension of the covariates.
- Cyclical coordinate descent was used (vignette) in the glmnet package. See also coordinate descent. The reason we call it 'descent' is because we want to 'minimize' an objective function. A small R sketch of coordinate descent with soft-thresholding is given after this list.
- [math]\displaystyle{ \hat{w}_j = \min_w g(\hat{w}_1, \cdots, \hat{w}_{j-1},w, \hat{w}_{j+1}, \cdots, \hat{w}_D) }[/math]
- See paper on JSS 2010. The Cox PHM case also uses the cyclical coordinate descent method; see the paper on JSS 2011.
- Coursera's Machine learning course 2: Regression at 1:42. Soft-thresholding the coefficients is the key for the L1 penalty. The range for the thresholding is controlled by [math]\displaystyle{ \lambda }[/math]. Note that to view the videos and all materials in Coursera we can enroll to audit the course without starting a trial.
- No step size is required as in gradient descent.
- Implementing LASSO Regression with Coordinate Descent, Sub-Gradient of the L1 Penalty and Soft Thresholding in Python
- Coordinate descent in the least squares problem: [math]\displaystyle{ \frac{\partial}{\partial w_j} RSS(w)= -2 \rho_j + 2 w_j }[/math]; i.e. [math]\displaystyle{ \hat{w}_j = \rho_j }[/math].
- Coordinate descent in the Lasso problem (for normalized features): [math]\displaystyle{ \hat{w}_j = \begin{cases} \rho_j + \lambda/2, & \text{if }\rho_j \lt -\lambda/2 \\ 0, & \text{if } -\lambda/2 \le \rho_j \le \lambda/2\\ \rho_j- \lambda/2, & \text{if }\rho_j \gt \lambda/2 \end{cases} }[/math]
- Choosing [math]\displaystyle{ \lambda }[/math] via cross validation tends to favor less sparse solutions and thus a smaller [math]\displaystyle{ \lambda }[/math] than the optimal choice for feature selection. See "Machine learning: a probabilistic perspective", Murphy 2012.
- Classical: Least angle regression (LARS) Efron et al 2004.
- Alternating Direction Method of Multipliers (ADMM). Boyd, 2011. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends in Machine Learning. Vol. 3, No. 1, 2010, pp. 1–122.
- If some variables in design matrix are correlated, then LASSO is convex or not?
- Tibshirani. Regression shrinkage and selection via the lasso (free). JRSS B 1996.
- Convex Optimization in R by Koenker & Mizera 2014.
- Pathwise coordinate optimization by Friedman et al 2007.
- Statistical learning with sparsity: the Lasso and generalizations T. Hastie, R. Tibshirani, and M. Wainwright, 2015 (book)
- Element of Statistical Learning (book)
- https://youtu.be/A5I1G1MfUmA StatsLearning Lect8h 110913
- Fu's (1998) shooting algorithm for Lasso (mentioned in the history of coordinate descent) and Zhang & Lu's (2007) modified shooting algorithm for adaptive Lasso.
- Machine Learning: a Probabilistic Perspective Choosing [math]\displaystyle{ \lambda }[/math] via cross validation tends to favor less sparse solutions and thus smaller [math]\displaystyle{ \lambda }[/math] than optimal choice for feature selection.
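To make the soft-thresholding update above concrete, here is a minimal sketch (not glmnet's actual implementation) of cyclical coordinate descent for the lasso objective [math]\displaystyle{ RSS(w) + \lambda ||w||_1 }[/math] with normalized features; the data and the value of lambda are made up.
soft <- function(rho, thr) sign(rho) * pmax(abs(rho) - thr, 0)  # soft-thresholding operator

lasso_cd <- function(X, y, lambda, iters = 100) {
  X <- scale(X, center = TRUE, scale = FALSE)
  X <- sweep(X, 2, sqrt(colSums(X^2)), "/")   # normalize so sum(x_j^2) = 1
  y <- y - mean(y)
  w <- rep(0, ncol(X))
  for (it in 1:iters) {
    for (j in 1:ncol(X)) {
      rho <- sum(X[, j] * (y - X[, -j, drop = FALSE] %*% w[-j]))  # rho_j from the partial residual
      w[j] <- soft(rho, lambda / 2)           # the thresholding rule displayed above
    }
  }
  w
}

set.seed(1)
X <- matrix(rnorm(100 * 10), 100, 10)
y <- 3 * X[, 1] - 2 * X[, 2] + rnorm(100)
round(lasso_cd(X, y, lambda = 5), 3)
# only the first two coefficients (the true signals) remain nonzero;
# the noise coefficients are thresholded to exactly zero (values are on the normalized-X scale)
Note that, unlike the gradient-descent updates listed earlier, no step size is required.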
Quadratic programming
- https://en.wikipedia.org/wiki/Quadratic_programming
- https://en.wikipedia.org/wiki/Lasso_(statistics)
- CRAN Task View: Optimization and Mathematical Programming
- quadprog package and solve.QP() function
- Solving Quadratic Progams with R’s quadprog package
- More on Quadratic Programming in R
- https://optimization.mccormick.northwestern.edu/index.php/Quadratic_programming
- Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects where the algorithm from Lee 2016 was used.
1. Elastic net
2. Group lasso
- Yuan and Lin 2006 JRSSB
- https://cran.r-project.org/web/packages/gglasso/, http://royr2.github.io/2014/04/15/GroupLasso.html
- https://cran.r-project.org/web/packages/grpreg/
- https://cran.r-project.org/web/packages/grplasso/ by Lukas Meier (paper), used in the biospear package for survival data
- https://cran.r-project.org/web/packages/SGL/index.html, http://royr2.github.io/2014/05/20/SparseGroupLasso.html, http://web.stanford.edu/~hastie/Papers/SGLpaper.pdf
Other Lasso
- pcLasso
- A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems Qian et al 2019 and the snpnet package
Comparison by plotting
If we are running a simulation, we can use the DALEX package to visualize the fitted results from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.
UMAP
- https://arxiv.org/abs/1802.03426
- https://www.biorxiv.org/content/early/2018/04/10/298430
- https://cran.r-project.org/web/packages/umap/index.html
Imbalanced Classification
- Practical Guide to deal with Imbalanced Classification Problems in R
- The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets
- Roc animation
Deep Learning
- CS294-129 Designing, Visualizing and Understanding Deep Neural Networks from berkeley.
- https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm
- Deep Learning from first principles in Python, R and Octave – Part 5
Tensor Flow (tensorflow package)
- https://tensorflow.rstudio.com/
- Machine Learning with R and TensorFlow (Video)
- Machine Learning Crash Course with TensorFlow APIs
- Predicting cancer outcomes from histology and genomics using convolutional networks Pooya Mobadersany et al, PNAS 2018
Biological applications
Machine learning resources
- These Machine Learning Courses Will Prepare a Career Path for You
- 101 Machine Learning Algorithms for Data Science with Cheat Sheets
Randomization inference
- Google: randomization inference in r
- Randomization Inference for Outcomes with Clumping at Zero, The American Statistician 2018
- Randomization inference vs. bootstrapping for p-values
Bootstrap
- Bootstrap from Wikipedia.
- This contains an overview of different methods for computing bootstrap confidence intervals.
- boot.ci() from the 'boot' package provides a short explanation for different methods for computing bootstrap confidence intervals.
- Bootstrapping made easy and tidy with slipper
- bootstrap package. "An Introduction to the Bootstrap" by B. Efron and R. Tibshirani, 1993
- boot package. Functions and datasets for bootstrapping from the book Bootstrap Methods and Their Application by A. C. Davison and D. V. Hinkley (1997, CUP). A short course material can be found here.The main functions are boot() and boot.ci().
- https://www.rdocumentation.org/packages/boot/versions/1.3-20
- R in Action Nonparametric bootstrapping
library(boot)

# Compute the bootstrapped 95% confidence interval for R-squared in the linear regression
rsq <- function(data, indices, formula) {
  d <- data[indices,]  # allows boot to select the sample
  fit <- lm(formula, data=d)
  return(summary(fit)$r.square)
}  # 'formula' is optional, depending on the problem

# bootstrapping with 1000 replications
set.seed(1234)
bootobject <- boot(data=mtcars, statistic=rsq, R=1000, formula=mpg~wt+disp)
plot(bootobject)  # or plot(bootobject, index = 1) if we have multiple statistics
ci <- boot.ci(bootobject, conf = .95, type=c("perc", "bca"))
# default type is "all" which contains c("norm","basic", "stud", "perc", "bca").
# 'bca' (Bias Corrected and Accelerated) by Efron 1987 uses
# percentiles but adjusted to account for bias and skewness.
# Level     Percentile            BCa
# 95%   ( 0.6838,  0.8833 )   ( 0.6344,  0.8549 )
# Calculations and Intervals on Original Scale
# Some BCa intervals may be unstable

ci$bca[4:5]
# [1] 0.6343589 0.8549305

# the mean is not the same
mean(c(0.6838, 0.8833))
# [1] 0.78355
mean(c(0.6344, 0.8549))
# [1] 0.74465
summary(lm(mpg~wt+disp, data = mtcars))$r.square
# [1] 0.7809306
- Resampling Methods in R: The boot Package by Canty
- An introduction to bootstrap with applications with R by Davison and Kuonen.
- http://people.tamu.edu/~alawing/materials/ESSM689/Btutorial.pdf
- http://statweb.stanford.edu/~tibs/sta305files/FoxOnBootingRegInR.pdf
- http://www.stat.wisc.edu/~larget/stat302/chap3.pdf
- https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf. Variance, se, bias, confidence interval (basic, percentile), hypothesis testing, parametric & non-parametric bootstrap, bootstrapping regression models.
- http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/references/R-bootstrap.pdf No package is used
- http://web.as.uky.edu/statistics/users/pbreheny/621/F10/notes/9-21.pdf Bootstrap confidence interval
- http://www-stat.wharton.upenn.edu/~stine/research/spida_2005.pdf
- Optimism corrected bootstrapping (Harrell et al 1996)
- Adjusting for optimism/overfitting in measures of predictive ability using bootstrapping
- Part 1: Optimism corrected bootstrapping: a problematic method
- Part 2: Optimism corrected bootstrapping is definitely bias, further evidence
- Part 3: Two more implementations of optimism corrected bootstrapping show shocking bias
- Part 4: Why does bias occur in optimism corrected bootstrapping?
- Part 5: Code corrections to optimism corrected bootstrapping series
Nonparametric bootstrap
This is the most common bootstrap method
The upstrap Crainiceanu & Crainiceanu, Biostatistics 2018
Parametric bootstrap
- A parametric bootstrap resamples from a known distribution function whose parameters are estimated from your sample (a minimal sketch follows this list)
- http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/notes/notebootstrap.pdf#page=3 No package is used
- A parametric or non-parametric bootstrap?
- https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf#page=11
- simulatorZ Bioc package
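A minimal sketch of the parametric bootstrap under a normal model (all numbers are made up): estimate the parameters from the sample, then resample from the fitted distribution rather than from the data themselves.
set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)           # observed sample
mu.hat <- mean(x); sd.hat <- sd(x)          # fitted normal parameters
boot.stat <- replicate(2000,
  mean(rnorm(length(x), mu.hat, sd.hat)))   # resample from the *fitted* N(mu.hat, sd.hat)
quantile(boot.stat, c(.025, .975))          # percentile CI for the mean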
Cross Validation
R packages:
- rsample (released July 2017)
- CrossValidate (released July 2017)
Difference between CV & bootstrapping
Differences between cross validation and bootstrapping to estimate the prediction error
- CV tends to be less biased but K-fold CV has fairly large variance.
- Bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic).
- The 632 and 632+ rules methods have been adapted to deal with the bootstrap bias
- Repeated CV runs K-fold CV several times (with different splits) and averages the results; this reduces the variance relative to a single K-fold run
.632 and .632+ bootstrap
- 0.632 bootstrap: Efron's paper Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation in 1983.
- 0.632+ bootstrap: The CV estimate of prediction error is nearly unbiased but can be highly variable. See Improvements on Cross-Validation: The .632+ Bootstrap Method by Efron and Tibshirani, JASA 1997.
- Chap 17.7 from "An Introduction to the Bootstrap" by Efron and Tibshirani. Chapman & Hall.
- Chap 7.4 (resubstitution error [math]\displaystyle{ \overline{err} }[/math]) and chap 7.11 ([math]\displaystyle{ Err_{boot(1)} }[/math], the leave-one-out bootstrap estimate of prediction error) from "The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman. Springer.
- What is the .632 bootstrap?
- [math]\displaystyle{ Err_{.632} = 0.368 \overline{err} + 0.632 Err_{boot(1)} }[/math] (a numerical sketch is given after this list)
- Bootstrap, 0.632 Bootstrap, 0.632+ Bootstrap from Encyclopedia of Systems Biology by Springer.
- bootpred() from bootstrap function.
- The .632 bootstrap estimate can be extended to statistics other than prediction error. See the paper Issues in developing multivariable molecular signatures for guiding clinical care decisions by Sachs. Source code. Let [math]\displaystyle{ \phi }[/math] be a performance metric, [math]\displaystyle{ S_b }[/math] a bootstrap sample of size n from [math]\displaystyle{ S }[/math], and [math]\displaystyle{ S_{-b} }[/math] the subset of [math]\displaystyle{ S }[/math] that is disjoint from [math]\displaystyle{ S_b }[/math] (the test set).
- [math]\displaystyle{ \hat{E}^*[\phi_{\mathcal{F}}(S)] = .368 \hat{E}[\phi_{f}(S)] + 0.632 \hat{E}[\phi_{f_b}(S_{-b})] }[/math]
- where [math]\displaystyle{ \hat{E}[\phi_{f}(S)] }[/math] is the naive estimate of [math]\displaystyle{ \phi_f }[/math] using the entire dataset.
- For survival data
- ROC632 package, Overview, and the paper Time Dependent ROC Curves for the Estimation of True Prognostic Capacity of Microarray Data by Foucher 2012.
- Efron-Type Measures of Prediction Error for Survival Analysis Gerds 2007.
- Assessment of survival prediction models based on microarray data Schumacher 2007. Brier score.
- Evaluating Random Forests for Survival Analysis using Prediction Error Curves Mogensen, 2012. pec package
- Assessment of performance of survival prediction models for cancer prognosis Chen 2012. Concordance, ROC... But bootstrap was not used.
- Comparison of Cox Model Methods in A Low-dimensional Setting with Few Events 2016. Concordance, calibration slopes RMSE are considered.
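Returning to the .632 formula above, here is a minimal sketch for squared prediction error with lm() on made-up data (bootpred() in the bootstrap package computes this properly):
set.seed(1)
n <- 50
x <- rnorm(n); y <- 1 + 2*x + rnorm(n)
dat <- data.frame(x, y)
err.bar <- mean((y - fitted(lm(y ~ x, dat)))^2)   # resubstitution (training) error
B <- 200
oob.err <- matrix(NA, B, n)
for (b in 1:B) {
  idx <- sample(n, replace = TRUE)
  oob <- setdiff(1:n, idx)                        # cases not in the bootstrap sample
  fb  <- lm(y ~ x, dat[idx, ])
  oob.err[b, oob] <- (y[oob] - predict(fb, dat[oob, , drop = FALSE]))^2
}
err.boot1 <- mean(colMeans(oob.err, na.rm = TRUE), na.rm = TRUE)  # leave-one-out bootstrap error
0.368 * err.bar + 0.632 * err.boot1               # the .632 estimate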
Create partitions
set.seed(), sample.split(), createDataPartition(), and createFolds() functions.
k-fold cross validation with modelr and broom
Nested resampling
- Nested Resampling with rsample
- https://stats.stackexchange.com/questions/292179/whats-the-meaning-of-nested-resampling
Nested resampling is needed when we want to tune a model using a grid search. The default settings of a model are unlikely to be optimal for every data set, so an inner CV is performed to find the best parameter set of the learner for each outer fold (a base-R sketch is given below).
See a diagram at https://i.stack.imgur.com/vh1sZ.png
In BRB-ArrayTools -> class prediction with multiple methods, the alpha (significant level of threshold used for gene selection, 2nd option in individual genes) can be viewed as a tuning parameter for the development of a classifier.
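Below is a minimal base-R sketch of nested resampling on made-up data: the inner CV chooses the tuning parameter (here, the polynomial degree), and the outer CV estimates the error of the whole tuning-plus-fitting procedure.
set.seed(1)
n <- 100
x <- runif(n, -2, 2); y <- sin(x) + rnorm(n, sd = .3)
dat <- data.frame(x, y)
outer <- sample(rep(1:5, length.out = n))          # outer folds
outer.err <- numeric(5)
for (k in 1:5) {
  train <- dat[outer != k, ]; test <- dat[outer == k, ]
  inner <- sample(rep(1:5, length.out = nrow(train)))  # inner folds within the training set
  deg.err <- sapply(1:5, function(d) {             # inner-CV error for each candidate degree
    mean(sapply(1:5, function(j) {
      fit <- lm(y ~ poly(x, d), data = train[inner != j, ])
      mean((train$y[inner == j] - predict(fit, train[inner == j, ]))^2)
    }))
  })
  best.d <- which.min(deg.err)                     # tuned on the inner folds only
  fit <- lm(y ~ poly(x, best.d), data = train)
  outer.err[k] <- mean((test$y - predict(fit, test))^2)
}
mean(outer.err)   # honest estimate of the tuned model's prediction error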
Pre-validation
- Pre-validation and inference in microarrays Tibshirani and Efron, Statistical Applications in Genetics and Molecular Biology, 2002.
- http://www.stat.columbia.edu/~tzheng/teaching/genetics/papers/tib_efron.pdf#page=5. In each CV fold, we compute the estimate of the response. This estimate of the response then serves as a new predictor (the pre-validated predictor) in the final model fit.
- P1101 of Sachs 2016. With pre-validation, instead of computing the statistic [math]\displaystyle{ \phi }[/math] for each of the held-out subsets ([math]\displaystyle{ S_{-b} }[/math] for the bootstrap or [math]\displaystyle{ S_{k} }[/math] for cross-validation), the fitted signature [math]\displaystyle{ \hat{f}(X_i) }[/math] is estimated for [math]\displaystyle{ X_i \in S_{-b} }[/math] where [math]\displaystyle{ \hat{f} }[/math] is estimated using [math]\displaystyle{ S_{b} }[/math]. This process is repeated to obtain a set of pre-validated signature estimates [math]\displaystyle{ \hat{f} }[/math]. Then an association measure [math]\displaystyle{ \phi }[/math] can be calculated using the pre-validated signature estimates and the true outcomes [math]\displaystyle{ Y_i, i = 1, \ldots, n }[/math].
- In CV, left-out samples = hold-out cases = test set
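A minimal sketch of pre-validation on made-up data (5 folds; the "signature" is a logistic model of five covariates): the held-out linear predictor is the pre-validated predictor, which is then tested for association with the outcome.
set.seed(1)
n <- 100; p <- 5
x <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(x[, 1]))
fold <- sample(rep(1:5, length.out = n))
prevalid <- numeric(n)
for (k in 1:5) {
  train <- fold != k
  fit <- glm(y[train] ~ x[train, ], family = binomial)     # signature built without fold k
  prevalid[!train] <- cbind(1, x[!train, ]) %*% coef(fit)  # held-out signature values
}
summary(glm(y ~ prevalid, family = binomial))  # association using the pre-validated predictor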
Clustering
See Clustering.
Mixed Effect Model
- Paper by Laird and Ware 1982
- John Fox's Linear Mixed Models Appendix to An R and S-PLUS Companion to Applied Regression. Very clear. It provides 2 typical examples (hierarchical data and longitudinal data) of using the mixed effects model. It also uses Trellis plots to examine the data.
- Chapter 10 Random and Mixed Effects from Modern Applied Statistics with S by Venables and Ripley.
- (Book) lme4: Mixed-effects modeling with R by Douglas Bates.
- (Book) Mixed-effects modeling in S and S-Plus by José Pinheiro and Douglas Bates.
- Simulation and power analysis of generalized linear mixed models
- Linear mixed-effect models in R by poissonisfish
- Dealing with correlation in designed field experiments: part II
Model selection criteria
- Assessing the Accuracy of our models (R Squared, Adjusted R Squared, RMSE, MAE, AIC)
- Comparing additive and multiplicative regressions using AIC in R
- Model Selection and Regression t-Statistics Derryberry 2019
Akaike information criterion/AIC
- [math]\displaystyle{ \mathrm{AIC} \, = \, 2k - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.
- Smaller is better
- Akaike proposed to approximate the expectation of the cross-validated log likelihood [math]\displaystyle{ E_{test}E_{train} [log L(x_{test}| \hat{\beta}_{train})] }[/math] by [math]\displaystyle{ log L(x_{train} | \hat{\beta}_{train})-k }[/math].
- Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
- AIC can be used to compare two models even if they are not hierarchically nested.
- AIC() from the stats package.
BIC
- [math]\displaystyle{ \mathrm{BIC} \, = \, \ln(n) \cdot k - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.
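A quick check of the AIC and BIC formulas above against stats::AIC() and stats::BIC() for a simple linear model; note k counts all estimated parameters, including the residual variance.
fit <- lm(mpg ~ wt, data = mtcars)
k <- length(coef(fit)) + 1          # 2 coefficients + the residual variance
ll <- as.numeric(logLik(fit))
2*k - 2*ll                          # manual AIC
AIC(fit)                            # same value
log(nrow(mtcars))*k - 2*ll          # manual BIC
BIC(fit)                            # same value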
Overfitting
How to judge if a supervised machine learning model is overfitting or not?
AIC vs AUC
What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?
Roughly speaking:
- AIC is telling you how good your model fits for a specific mis-classification cost.
- AUC is telling you how good your model would work, on average, across all mis-classification costs.
Frank Harrell: AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized R2 and AIC.
Variable selection and model estimation
Proper variable selection: Use only training data or full data?
- training observations to perform all aspects of model-fitting—including variable selection
- make use of the full data set in order to obtain more accurate coefficient estimates (This statement is arguable)
Entropy
Definition
The information content (surprisal) of an outcome with probability p is -log2(p); the entropy of a distribution is the expected surprisal, [math]\displaystyle{ -\sum_i p_i \log_2(p_i) }[/math]. Higher entropy means the outcome of an event is less predictable.
Some examples:
- Fair 2-sided die (a fair coin): Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
- Fair 6-sided die: Entropy = -6*1/6*log2(1/6) = 2.58.
- Weighted 6-sided die: consider pi=.1 for i=1,...,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (less unpredictable than a fair 6-sided die).
Use
When entropy is applied to variable selection, we want to select the class variable that gives the largest entropy difference between the entropy without any class variable (computed from the response only) and the entropy with that class variable (the weighted sum of the entropies within each level of the variable), because that variable is the most discriminative and gives the largest information gain. For example,
- entropy (without any class)=.94,
- entropy(var 1) = .69,
- entropy(var 2)=.91,
- entropy(var 3)=.725.
We will choose variable 1 since it gives the largest gain (.94 - .69 = .25) compared to the other variables (.94 - .91 = .03, .94 - .725 = .215).
Why is picking the attribute with the most information gain beneficial? It reduces entropy the most. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.
Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 yields an entropy of .9 and cut point c2 an entropy of .1. We should choose c2.
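A minimal sketch of entropy and information gain on a toy binary response and a made-up binary class variable (the .94 below matches the example above):
entropy <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }

y <- c(rep("yes", 9), rep("no", 5))               # response: 9 yes, 5 no
entropy(table(y) / length(y))                     # about .94, as in the example

x <- c(rep("a", 7), rep("b", 7))                  # a candidate class variable
# conditional entropy: weighted average of the entropy within each level of x
cond <- sapply(split(y, x), function(g) entropy(table(g) / length(g)))
weights <- table(x) / length(x)
sum(weights * cond)                               # entropy given the split (about .43)
entropy(table(y)/length(y)) - sum(weights * cond) # information gain (about .51)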
Related
In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See wikipedia page about decision tree learning.
Ensembles
- Combining classifiers. Pro: better classification performance. Con: time consuming.
- Comic http://flowingdata.com/2017/09/05/xkcd-ensemble-model/
- Common Ensemble Models can be Biased
Bagging
Draw N bootstrap samples and summarize the results (averaging for regression problems, majority vote for classification problems). Bagging decreases variance without changing the bias, so it does not help much with underfit or high-bias models. A minimal sketch follows.
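A minimal bagging sketch for regression on made-up data, with rpart trees as the base learner: fit a tree to each bootstrap resample and average the predictions.
library(rpart)
set.seed(1)
n <- 200
x <- runif(n, -3, 3); y <- sin(x) + rnorm(n, sd = .4)
dat <- data.frame(x, y)
B <- 100
preds <- replicate(B, {
  idx <- sample(n, replace = TRUE)        # bootstrap resample
  fit <- rpart(y ~ x, data = dat[idx, ])
  predict(fit, dat)                       # predict on the original data
})
bagged <- rowMeans(preds)                 # averaging = the bagged prediction
mean((y - bagged)^2)                      # training MSE of the ensemble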
Random forest
Variable importance: if you scramble the values of a variable and the accuracy of your tree does not change much, then the variable is not very important.
Why is it useful to compute variable importance? It makes the model's predictions easier to interpret (it does not improve prediction performance).
Random forest is easy to run in parallel and is suitable for small n, large p problems.
Random forest versus logistic regression: a large-scale benchmark experiment by Raphael Couronné, BMC Bioinformatics 2018
Arborist: Parallelized, Extensible Random Forests
On what to permute in test-based approaches for variable importance measures in Random Forests
Boosting
Instead of selecting data points randomly with the bootstrap, boosting favors the misclassified points.
Algorithm:
- Initialize the weights
- Repeat
- resample with respect to weights
- retrain the model
- recompute weights
Since boosting is iterative while bagging can run in parallel, bagging has an advantage over boosting when the data are very large.
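A minimal AdaBoost-style sketch of the loop above with decision stumps on made-up data (labels in {-1, 1}); note this variant reweights points explicitly rather than resampling them.
set.seed(1)
n <- 200
x <- runif(n)
y <- ifelse(x > 0.5, 1, -1)
flip <- sample(n, 20); y[flip] <- -y[flip]       # add some label noise

stump <- function(x, y, w) {                     # best weighted one-split rule
  best <- list(err = Inf)
  for (cut in unique(x)) for (s in c(-1, 1)) {
    pred <- ifelse(x > cut, s, -s)
    err  <- sum(w[pred != y])                    # weighted misclassification
    if (err < best$err) best <- list(cut = cut, s = s, err = err)
  }
  best
}

M <- 20; w <- rep(1/n, n)                        # initialize the weights
alpha <- numeric(M); fits <- vector("list", M)
for (m in 1:M) {                                 # repeat: retrain, recompute weights
  f <- stump(x, y, w)
  pred <- ifelse(x > f$cut, f$s, -f$s)
  e <- max(f$err, 1e-10)
  alpha[m] <- 0.5 * log((1 - e)/e)               # weight of this weak learner
  w <- w * exp(-alpha[m] * y * pred); w <- w/sum(w)  # upweight the misclassified points
  fits[[m]] <- f
}
score <- rowSums(sapply(1:M, function(m)
  alpha[m] * ifelse(x > fits[[m]]$cut, fits[[m]]$s, -fits[[m]]$s)))
mean(sign(score) == y)                           # training accuracy of the ensemble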
Time series
Ensemble learning for time series forecasting in R
p-values
p-values
- Prob(data at least as extreme as that observed | H0)
- https://en.wikipedia.org/wiki/P-value
- Statistical Inference in the 21st Century: A World Beyond p < 0.05 The American Statistician, 2019
- THE ASA SAYS NO TO P-VALUES The problem is that with large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. We have the opposite problem with small samples: The power of the test is low, and we will announce that there is “no significant effect” when in fact we may have too little data to know whether the effect is important.
- It’s not the p-values’ fault
- Exploring P-values with Simulations in R from Stable Markets.
- p-value and effect size. http://journals.sagepub.com/doi/full/10.1177/1745691614553988
Distribution of p values in medical abstracts
- http://www.ncbi.nlm.nih.gov/pubmed/26608725
- An R package with several million published p-values in tidy data sets by Jeff Leek.
nominal p-value and Empirical p-values
- Nominal p-values are based on asymptotic null distributions
- Empirical p-values are computed from simulations/permutations
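To contrast the two, here is a minimal sketch on made-up data: the nominal p-value comes from the asymptotic t null distribution, the empirical p-value from permutations of the group labels.
set.seed(1)
x <- rnorm(20); y <- rnorm(20, mean = 0.5)
obs <- mean(x) - mean(y)                 # observed mean difference
t.test(x, y)$p.value                     # nominal p-value (t null distribution)
z <- c(x, y)
perm <- replicate(5000, { idx <- sample(40, 20); mean(z[idx]) - mean(z[-idx]) })
mean(abs(perm) >= abs(obs))              # empirical (permutation) p-value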
(nominal) alpha level
Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.
- http://www.translationdirectory.com/glossaries/glossary033.htm
- http://courses.washington.edu/p209s07/lecturenotes/Week%205_Monday%20overheads.pdf
Normality assumption
Violating the normality assumption may be the lesser of two evils
T-statistic
Let [math]\displaystyle{ \scriptstyle\hat\beta }[/math] be an estimator of parameter β in some statistical model. Then a t-statistic for this parameter is any quantity of the form
- [math]\displaystyle{ t_{\hat{\beta}} = \frac{\hat\beta - \beta_0}{\mathrm{s.e.}(\hat\beta)}, }[/math]
where β0 is a non-random, known constant, and [math]\displaystyle{ \scriptstyle s.e.(\hat\beta) }[/math] is the standard error of the estimator [math]\displaystyle{ \scriptstyle\hat\beta }[/math].
Two sample test assuming equal variance
The t statistic (df = [math]\displaystyle{ n_1 + n_2 - 2 }[/math]) to test whether the means are different can be calculated as follows:
- [math]\displaystyle{ t = \frac{\bar {X}_1 - \bar{X}_2}{s_{X_1X_2} \cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}} }[/math]
where
- [math]\displaystyle{ s_{X_1X_2} = \sqrt{\frac{(n_1-1)s_{X_1}^2+(n_2-1)s_{X_2}^2}{n_1+n_2-2}}. }[/math]
[math]\displaystyle{ s_{X_1X_2} }[/math] is an estimator of the common/pooled standard deviation of the two samples. The square-root of a pooled variance estimator is known as a pooled standard deviation.
- Pooled variance from Wikipedia
- The pooled sample variance is an unbiased estimator of the common variance if Xi and Yi follow the normal distribution.
- (From minitab) The pooled standard deviation is the average spread of all data points about their group mean (not the overall mean). It is a weighted average of each group's standard deviation. The weighting gives larger groups a proportionally greater effect on the overall estimate.
- Type I error rates in two-sample t-test by simulation
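A quick numerical check of the pooled-variance t statistic above against t.test(var.equal = TRUE), with made-up data:
set.seed(1)
x1 <- rnorm(10); x2 <- rnorm(12, mean = 1)
n1 <- length(x1); n2 <- length(x2)
sp <- sqrt(((n1-1)*var(x1) + (n2-1)*var(x2)) / (n1+n2-2))  # pooled SD
(mean(x1) - mean(x2)) / (sp * sqrt(1/n1 + 1/n2))           # manual t statistic
t.test(x1, x2, var.equal = TRUE)$statistic                 # same value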
Two sample test assuming unequal variance
The t statistic (Behrens-Welch test statistic) to test whether the population means are different is calculated as:
- [math]\displaystyle{ t = {\overline{X}_1 - \overline{X}_2 \over s_{\overline{X}_1 - \overline{X}_2}} }[/math]
where
- [math]\displaystyle{ s_{\overline{X}_1 - \overline{X}_2} = \sqrt{{s_1^2 \over n_1} + {s_2^2 \over n_2}}. }[/math]
Here [math]\displaystyle{ s_1^2 }[/math] and [math]\displaystyle{ s_2^2 }[/math] are the unbiased estimators of the variances of the two samples.
The degrees of freedom is evaluated using the Satterthwaite's approximation
- [math]\displaystyle{ df = { ({s_1^2 \over n_1} + {s_2^2 \over n_2})^2 \over {({s_1^2 \over n_1})^2 \over n_1-1} + {({s_2^2 \over n_2})^2 \over n_2-1} }. }[/math]
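A quick numerical check of the Welch statistic and the Satterthwaite degrees of freedom above against t.test() (unequal variances are R's default), with made-up data:
set.seed(1)
x1 <- rnorm(10, 0, 1); x2 <- rnorm(15, 1, 3)
v1 <- var(x1)/length(x1); v2 <- var(x2)/length(x2)
(mean(x1) - mean(x2)) / sqrt(v1 + v2)                          # manual t statistic
(v1 + v2)^2 / (v1^2/(length(x1)-1) + v2^2/(length(x2)-1))      # Satterthwaite df
t.test(x1, x2)$statistic                                       # same t
t.test(x1, x2)$parameter                                       # same df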
Paired test
Have you ever asked yourself, "how should I approach the classic pre-post analysis?"
Z-value/Z-score
If the population parameters are known, then rather than computing the t-statistic, one can compute the z-score.
Nonparametric test: Wilcoxon rank sum test
Sensitive to differences in location
Nonparametric test: Kolmogorov-Smirnov test
Sensitive to difference in shape and location of the distribution functions of two groups
Limma: Empirical Bayes method
- Some Bioconductor packages: limma, RnBeads, IMA, minfi packages.
- The moderated t-statistic used in Limma is defined in Limma's user guide.
- Diagram of usage ?makeContrasts, ?contrasts.fit, ?eBayes
          lmFit             contrasts.fit          eBayes          topTable
x --------------> fit --------------------> fit2 --------> fit2 --------->
                   ^                          ^
                   | model.matrix             | makeContrasts
class -----------> design -----------------> contrasts
- Examples of contrasts (search contrasts.fit and/or model.matrix from the user guide)
# Ex 1 (Single channel design):
design <- model.matrix(~ 0+factor(c(1,1,1,2,2,3,3,3)))  # number of samples x number of groups
colnames(design) <- c("group1", "group2", "group3")
fit <- lmFit(eset, design)
contrast.matrix <- makeContrasts(group2-group1, group3-group2, group3-group1,
                                 levels=design)  # number of groups x number of contrasts
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)
topTable(fit2, coef=1, adjust="BH")
topTable(fit2, coef=1, sort = "none", n = Inf, adjust="BH")$adj.P.Val

# Ex 2 (Common reference design):
targets <- readTargets("runxtargets.txt")
design <- modelMatrix(targets, ref="EGFP")
contrast.matrix <- makeContrasts(AML1,CBFb,AML1.CBFb,AML1.CBFb-AML1,AML1.CBFb-CBFb,
                                 levels=design)
fit <- lmFit(MA, design)
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)

# Ex 3 (Direct two-color design):
design <- modelMatrix(targets, ref="CD4")
contrast.matrix <- cbind("CD8-CD4"=c(1,0),"DN-CD4"=c(0,1),"CD8-DN"=c(1,-1))
rownames(contrast.matrix) <- colnames(design)
fit <- lmFit(eset, design)
fit2 <- contrasts.fit(fit, contrast.matrix)

# Ex 4 (Single channel + Two groups):
fit <- lmFit(eset, design)
cont.matrix <- makeContrasts(MUvsWT=MU-WT, levels=design)
fit2 <- contrasts.fit(fit, cont.matrix)
fit2 <- eBayes(fit2)

# Ex 5 (Single channel + Several groups):
f <- factor(targets$Target, levels=c("RNA1","RNA2","RNA3"))
design <- model.matrix(~0+f)
colnames(design) <- c("RNA1","RNA2","RNA3")
fit <- lmFit(eset, design)
contrast.matrix <- makeContrasts(RNA2-RNA1, RNA3-RNA2, RNA3-RNA1, levels=design)
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)

# Ex 6 (Single channel + Interaction models 2x2 Factorial Designs):
cont.matrix <- makeContrasts(
  SvsUinWT=WT.S-WT.U,
  SvsUinMu=Mu.S-Mu.U,
  Diff=(Mu.S-Mu.U)-(WT.S-WT.U), levels=design)
fit2 <- contrasts.fit(fit, cont.matrix)
fit2 <- eBayes(fit2)
- Example from user guide 17.3 (Mammary progenitor cell populations)
setwd("~/Downloads/IlluminaCaseStudy") url <- c("http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/probe%20profile.txt.gz", "http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/control%20probe%20profile.txt.gz", "http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/Targets.txt") for(i in url) system(paste("wget ", i)) system("gunzip probe%20profile.txt.gz") system("gunzip control%20probe%20profile.txt.gz") source("http://www.bioconductor.org/biocLite.R") biocLite("limma") biocLite("statmod") library(limma) targets <- readTargets() targets x <- read.ilmn(files="probe profile.txt",ctrlfiles="control probe profile.txt", other.columns="Detection") options(digits=3) head(x$E) boxplot(log2(x$E),range=0,ylab="log2 intensity") y <- neqc(x) dim(y) expressed <- rowSums(y$other$Detection < 0.05) >= 3 y <- y[expressed,] dim(y) # 24691 12 plotMDS(y,labels=targets$CellType) ct <- factor(targets$CellType) design <- model.matrix(~0+ct) colnames(design) <- levels(ct) dupcor <- duplicateCorrelation(y,design,block=targets$Donor) # need statmod dupcor$consensus.correlation fit <- lmFit(y, design, block=targets$Donor, correlation=dupcor$consensus.correlation) contrasts <- makeContrasts(ML-MS, LP-MS, ML-LP, levels=design) fit2 <- contrasts.fit(fit, contrasts) fit2 <- eBayes(fit2, trend=TRUE) summary(decideTests(fit2, method="global")) topTable(fit2, coef=1) # Top ten differentially expressed probes between ML and MS # SYMBOL TargetID logFC AveExpr t P.Value adj.P.Val B # ILMN_1766707 IL17B <NA> -4.19 5.94 -29.0 2.51e-12 5.19e-08 18.1 # ILMN_1706051 PLD5 <NA> -4.00 5.67 -27.8 4.20e-12 5.19e-08 17.7 # ... tT <- topTable(fit2, coef=1, number = Inf) dim(tT) # [1] 24691 8
- Three groups comparison (What is the difference of A vs Other AND A vs (B+C)/2?). Contrasts comparing one factor to multiple others
library(limma)
set.seed(1234)
n <- 100
testexpr <- matrix(rnorm(n * 10, 5, 1), nc= 10)
testexpr[, 6:7] <- testexpr[, 6:7] + 7  # mean is 12

design1 <- model.matrix(~ 0 + as.factor(c(rep(1,5),2,2,3,3,3)))
design2 <- matrix(c(rep(1,5),rep(0,5),rep(0,5),rep(1,5)), ncol=2)
colnames(design1) <- LETTERS[1:3]
colnames(design2) <- c("A", "Other")

fit1 <- lmFit(testexpr, design1)
contrasts.matrix1 <- makeContrasts("AvsOther"=A-(B+C)/2, levels = design1)
fit1 <- eBayes(contrasts.fit(fit1, contrasts=contrasts.matrix1))

fit2 <- lmFit(testexpr, design2)
contrasts.matrix2 <- makeContrasts("AvsOther"=A-Other, levels = design2)
fit2 <- eBayes(contrasts.fit(fit2, contrasts=contrasts.matrix2))

t1 <- topTable(fit1, coef=1, number = Inf)
t2 <- topTable(fit2, coef=1, number = Inf)
rbind(head(t1, 3), tail(t1, 3))
#        logFC  AveExpr         t      P.Value    adj.P.Val         B
# 92 -5.293932 5.810926 -8.200138 1.147084e-15 1.147084e-13 26.335702
# 81 -5.045682 5.949507 -7.815607 2.009706e-14 1.004853e-12 23.334600
# 37 -4.720906 6.182821 -7.312539 7.186627e-13 2.395542e-11 19.625964
# 27 -2.127055 6.854324 -3.294744 1.034742e-03 1.055859e-03 -1.141991
# 86 -1.938148 7.153142 -3.002133 2.776390e-03 2.804434e-03 -2.039869
# 75 -1.876490 6.516004 -2.906626 3.768951e-03 3.768951e-03 -2.314869
rbind(head(t2, 3), tail(t2, 3))
#        logFC  AveExpr          t    P.Value adj.P.Val         B
# 92 -4.518551 5.810926 -2.5022436 0.01253944 0.2367295 -4.587080
# 81 -4.500503 5.949507 -2.4922492 0.01289503 0.2367295 -4.587156
# 37 -4.111158 6.182821 -2.2766414 0.02307100 0.2367295 -4.588728
# 27 -1.496546 6.854324 -0.8287440 0.40749644 0.4158127 -4.595601
# 86 -1.341607 7.153142 -0.7429435 0.45773401 0.4623576 -4.595807
# 75 -1.171366 6.516004 -0.6486690 0.51673851 0.5167385 -4.596008

var(as.numeric(testexpr[, 6:10]))
# [1] 12.38074
var(as.numeric(testexpr[, 6:7]))
# [1] 0.8501378
var(as.numeric(testexpr[, 8:10]))
# [1] 0.9640699
As we can see, the p-values returned from the first contrast are very small (large mean difference but small variance) whereas the p-values returned from the 2nd contrast are large (still a large mean difference but a very large variance). The variance of the "Other" group can be calculated from a mixture distribution (pdf = .4 N(12, 1) + .6 N(5, 1); VarY = E(Y^2) - (EY)^2 where E(Y^2) = .4 (VarX1 + (EX1)^2) + .6 (VarX2 + (EX2)^2) = 73.6 and EY = .4 * 12 + .6 * 5 = 7.8, so VarY = 73.6 - 7.8^2 = 12.76).
- Correct assumptions of using limma moderated t-test and the paper Should We Abandon the t-Test in the Analysis of Gene Expression Microarray Data: A Comparison of Variance Modeling Strategies.
- Evaluation: statistical power (figure 3, 4, 5), false-positive rate (table 2), execution time and ease of use (table 3)
- Limma presents several advantages
- RVM inflates the expected number of false-positives when sample size is small. On the other hand, RVM is very close to Limma, judging from either their formulas (p3 of the supporting info) or the hierarchical clustering (figure 2) of two examples.
- Slides
- Use Limma to run ordinary T tests
library(magrittr)  # for %>%

# where 'fit' is the output from lmFit() or contrasts.fit().
unmod.t <- fit$coefficients/fit$stdev.unscaled/fit$sigma
pval <- 2*pt(-abs(unmod.t), fit$df.residual)

# Following the above example
t.test(testexpr[1, 1:5], testexpr[1, 6:10], var.equal = T)
#         Two Sample t-test
#
# data:  testexpr[1, 1:5] and testexpr[1, 6:10]
# t = -1.2404, df = 8, p-value = 0.25
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  -7.987791  2.400082
# sample estimates:
# mean of x mean of y
#  4.577183  7.371037

fit2$coefficients[1] / (fit2$stdev.unscaled[1] * fit2$sigma[1])  # Ordinary t-statistic
# [1] -1.240416
fit2$coefficients[1] / (fit2$stdev.unscaled[1] * sqrt(fit2$s2.post[1]))  # moderated t-statistic
# [1] -1.547156
topTable(fit2, coef=1, sort.by = "none")[1,]
#       logFC AveExpr         t   P.Value adj.P.Val         B
# 1 -2.793855 5.974110 -1.547156 0.1222210 0.2367295 -4.592992

# Square root of the pooled variance
fit2$sigma[1]
# [1] 3.561284
(((5-1)*var(testexpr[1, 1:5]) + (5-1)*var(testexpr[1, 6:10]))/(5+5-2)) %>% sqrt()
# [1] 3.561284
- Comparison of ordinary T-statistic, RVM T-statistic and Limma/eBayes moderated T-statistic.
  | Test statistic for gene g | Variance estimate
---|---|---
Ordinary T-test | [math]\displaystyle{ \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{Pooled}\sqrt{1/n_1 + 1/n_2}} }[/math] | [math]\displaystyle{ (S_g^{Pooled})^2 = \frac{(n_1-1)S_{g1}^2 + (n_2-1)S_{g2}^2}{n_1+n_2-2} }[/math]
RVM | [math]\displaystyle{ \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{RVM}\sqrt{1/n_1 + 1/n_2}} }[/math] | [math]\displaystyle{ (S_g^{RVM})^2 = \frac{(n_1+n_2-2)S_{g}^2 + 2a(ab)^{-1}}{n_1+n_2-2+2a} }[/math]
Limma | [math]\displaystyle{ \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{Limma}\sqrt{1/n_1 + 1/n_2}} }[/math] | [math]\displaystyle{ (S_g^{Limma})^2 = \frac{d_0 S_0^2 + d_g S_g^2}{d_0 + d_g} }[/math]
- In Limma,
- [math]\displaystyle{ \sigma_g^2 }[/math] is assumed to follow a scaled inverse chi-square prior distribution with scale [math]\displaystyle{ S_0^2 }[/math] and [math]\displaystyle{ d_0 }[/math] degrees of freedom
- [math]\displaystyle{ d_0 }[/math] (fit$df.prior) and [math]\displaystyle{ d_g }[/math] are, respectively, prior and residual/empirical degrees of freedom.
- [math]\displaystyle{ S_0^2 }[/math] (fit$s2.prior) is the prior variance estimate and [math]\displaystyle{ S_g^2 }[/math] is the gene-wise pooled sample variance.
- [math]\displaystyle{ (S_g^{Limma})^2 }[/math] can be obtained from fit$s2.post.
- Empirical Bayes estimation of normal means, accounting for uncertainty in estimated standard errors Lu 2019
ANOVA
- Practical Regression and Anova using R by Julian J. Faraway, 2002
- A simple ANOVA
- Repeated measures ANOVA in R Exercises
- Mixed models for ANOVA designs with one observation per unit of observation and cell of the design
- afex package, afex_plot(): Publication-Ready Plots for Factorial Designs
- Experiment designs for Agriculture
Common tests are linear models
https://lindeloev.github.io/tests-as-linear/
Post-hoc test
Determine which levels have significantly different means.
- http://jamesmarquezportfolio.com/one_way_anova_with_post_hocs_in_r.html
- pairwise.t.test() for one-way ANOVA
- Post-hoc Pairwise Comparisons of Two-way ANOVA using TukeyHSD().
- post-hoc tests: pairwise.t.test versus TukeyHSD test
TukeyHSD (Honestly Significant Difference), diagnostic checking
https://datascienceplus.com/one-way-anova-in-r/, Tukey HSD for Post-Hoc Analysis (detailed explanation including the type 1 error problem in multiple testings)
- TukeyHSD for the pairwise tests
- You can’t just perform a series of t tests, because that would greatly increase your likelihood of a Type I error.
- Compute something analogous to a t score for each pair of means, but don't compare it to the Student's t distribution. Instead, use a new distribution called the studentized range (from Wikipedia) or q distribution.
- Suppose that we take a sample of size n from each of k populations with the same normal distribution N(μ, σ), that [math]\displaystyle{ \bar{y}_{min} }[/math] is the smallest of these sample means, that [math]\displaystyle{ \bar{y}_{max} }[/math] is the largest of these sample means, and that [math]\displaystyle{ S^2 }[/math] is the pooled sample variance from these samples. Then the following random variable has a Studentized range distribution: [math]\displaystyle{ q = \frac{\overline{y}_{\max} - \overline{y}_{\min}}{S/\sqrt{n}} }[/math]
- One-Way ANOVA Test in R from sthda.com.
res.aov <- aov(weight ~ group, data = PlantGrowth)
summary(res.aov)
#             Df Sum Sq Mean Sq F value Pr(>F)
# group        2  3.766  1.8832   4.846 0.0159 *
# Residuals   27 10.492  0.3886

TukeyHSD(res.aov)
#   Tukey multiple comparisons of means
#     95% family-wise confidence level
#
# Fit: aov(formula = weight ~ group, data = PlantGrowth)
#
# $group
#             diff        lwr       upr     p adj
# trt1-ctrl -0.371 -1.0622161 0.3202161 0.3908711
# trt2-ctrl  0.494 -0.1972161 1.1852161 0.1979960
# trt2-trt1  0.865  0.1737839 1.5562161 0.0120064

# Extra:
# Check your data
my_data <- PlantGrowth
levels(my_data$group)
set.seed(1234)
dplyr::sample_n(my_data, 10)

# compute the summary statistics by group
library(dplyr)
group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE)
  )
- Or we can use Benjamini-Hochberg method for p-value adjustment in pairwise comparisons
# pairwise.t.test() is in the stats package (multcomp is not required)
pairwise.t.test(my_data$weight, my_data$group, p.adjust.method = "BH")
#      ctrl  trt1
# trt1 0.194 -
# trt2 0.132 0.013
#
# P value adjustment method: BH
- Shapiro-Wilk test for normality
# Extract the residuals
aov_residuals <- residuals(object = res.aov)
# Run Shapiro-Wilk test
shapiro.test(x = aov_residuals)
- Bartlett test and Levene test for the homogeneity of variances across the groups
Repeated measure
- How to do Repeated Measures ANOVAs in R
- Cross-over Repeated Measure Designs
- Cross-over study design with a major constraint
Combining insignificant factor levels
COMBINING AUTOMATICALLY FACTOR LEVELS IN R
Omnibus tests
- https://en.wikipedia.org/wiki/Omnibus_test
- Understanding the definition of omnibus tests. Tests are referred to as omnibus if, after rejecting the null hypothesis, you do not know where the differences assessed by the statistical test lie. In the case of F tests, the test is omnibus when there is more than one df in the numerator (3 or more groups).
Goodness of fit
Chi-square tests
Fitting distribution
Contingency Tables
Odds ratio and Risk ratio
The ratio of the odds of an event occurring in one group to the odds of it occurring in another group
        drawn | not drawn |
-------------------------------------
white |   A   |     B     | Wh
-------------------------------------
black |   C   |     D     | Bk
- Odds Ratio = (A / C) / (B / D) = (AD) / (BC)
- Risk Ratio = (A / Wh) / (C / Bk)
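A quick numerical check of the two definitions above, with made-up counts:
A <- 30; B <- 70    # white: drawn / not drawn
C <- 10; D <- 90    # black: drawn / not drawn
(A/C) / (B/D)                 # odds ratio = AD/BC = 3.857
(A/(A+B)) / (C/(C+D))         # risk ratio = 3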
Hypergeometric, One-tailed Fisher exact test
- https://www.bioconductor.org/help/course-materials/2009/SeattleApr09/gsea/ (Are interesting features over-represented? or are selected genes more often in the GO category than expected by chance?)
- https://en.wikipedia.org/wiki/Hypergeometric_distribution. In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing k or more successes from the population in n total draws. In a test for under-representation, the p-value is the probability of randomly drawing k or fewer successes.
- http://stats.stackexchange.com/questions/62235/one-tailed-fishers-exact-test-and-the-hypergeometric-distribution
- Two sided hypergeometric test
- https://www.biostars.org/p/90662/ When computing the p-value (tail probability), consider using 1 - Prob(observed - 1) instead of 1 - Prob(observed) for a discrete distribution.
- https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Hypergeometric.html p(x) = choose(m, x) choose(n, k-x) / choose(m+n, k).
        drawn | not drawn |
-------------------------------------
white |   x   |           | m
-------------------------------------
black |  k-x  |           | n
-------------------------------------
      |   k   |           | m+n
For example, k=100, m=100, m+n=1000,
> 1 - phyper(10, 100, 10^3-100, 100, log.p=F) [1] 0.4160339 > a <- dhyper(0:10, 100, 10^3-100, 100) > cumsum(rev(a)) [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116 [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101 9.027724e-99 4.175767e-96 [15] 1.644702e-93 5.572070e-91 1.638079e-88 4.210963e-86 9.530281e-84 1.910424e-81 3.410345e-79 [22] 5.447786e-77 7.821658e-75 1.013356e-72 1.189000e-70 1.267638e-68 1.231736e-66 1.093852e-64 [29] 8.900857e-63 6.652193e-61 4.576232e-59 2.903632e-57 1.702481e-55 9.240350e-54 4.650130e-52 [36] 2.173043e-50 9.442985e-49 3.820823e-47 1.441257e-45 5.074077e-44 1.669028e-42 5.134399e-41 [43] 1.478542e-39 3.989016e-38 1.009089e-36 2.395206e-35 5.338260e-34 1.117816e-32 2.200410e-31 [50] 4.074043e-30 7.098105e-29 1.164233e-27 1.798390e-26 2.617103e-25 3.589044e-24 4.639451e-23 [57] 5.654244e-22 6.497925e-21 7.042397e-20 7.198582e-19 6.940175e-18 6.310859e-17 5.412268e-16 [64] 4.377256e-15 3.338067e-14 2.399811e-13 1.626091e-12 1.038184e-11 6.243346e-11 3.535115e-10 [71] 1.883810e-09 9.442711e-09 4.449741e-08 1.970041e-07 8.188671e-07 3.193112e-06 1.167109e-05 [78] 3.994913e-05 1.279299e-04 3.828641e-04 1.069633e-03 2.786293e-03 6.759071e-03 1.525017e-02 [85] 3.196401e-02 6.216690e-02 1.120899e-01 1.872547e-01 2.898395e-01 4.160339e-01 5.550192e-01 [92] 6.909666e-01 8.079129e-01 8.953150e-01 9.511926e-01 9.811343e-01 9.942110e-01 9.986807e-01 [99] 9.998018e-01 9.999853e-01 1.000000e+00 # Density plot plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')
Moreover,
1 - phyper(q=10, m, n, k) = 1 - sum_{x=0}^{10} dhyper(x, m, n, k) = 1 - sum(a[1:11])  # R's index starts from 1.
Another example is the data from the functional annotation tool in DAVID.
               | gene list | not gene list |
-------------------------------------------------------
pathway        |   3 (q)   |               |   40 (m)
-------------------------------------------------------
not in pathway |    297    |               | 29960 (n)
-------------------------------------------------------
               |  300 (k)  |               | 30000
The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.
Fisher's exact test
Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) # alternative = "two.sided" by default Fisher's Exact Test for Count Data data: matrix(c(3, 40, 297, 29960), nr = 2) p-value = 0.008853 alternative hypothesis: true odds ratio is not equal to 1 95 percent confidence interval: 1.488738 23.966741 sample estimates: odds ratio 7.564602 > fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater") Fisher's Exact Test for Count Data data: matrix(c(3, 40, 297, 29960), nr = 2) p-value = 0.008853 alternative hypothesis: true odds ratio is greater than 1 95 percent confidence interval: 1.973 Inf sample estimates: odds ratio 7.564602 > fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less") Fisher's Exact Test for Count Data data: matrix(c(3, 40, 297, 29960), nr = 2) p-value = 0.9991 alternative hypothesis: true odds ratio is less than 1 95 percent confidence interval: 0.00000 20.90259 sample estimates: odds ratio 7.564602
From the documentation of fisher.test
Usage:
fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
            control = list(), or = 1, alternative = "two.sided",
            conf.int = TRUE, conf.level = 0.95,
            simulate.p.value = FALSE, B = 2000)
- For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
- For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
- The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
- Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.
Chi-square independence test
Exploring the underlying theory of the chi-square test through simulation - part 2
GSEA
Determines whether an a priori defined set of genes shows statistically significant, concordant differences between two biological states
- https://www.bioconductor.org/help/course-materials/2015/SeattleApr2015/E_GeneSetEnrichment.html
- http://software.broadinstitute.org/gsea/index.jsp
- Statistical power of gene-set enrichment analysis is a function of gene set correlation structure by SWANSON 2017
- Towards a gold standard for benchmarking gene set enrichment analysis, GSEABenchmarkeR package
Two categories of GSEA procedures:
- Competitive: compare genes in the test set relative to all other genes.
- Self-contained: whether the gene-set is more DE than one would expect under the null of no association between the two phenotype conditions (without reference to other genes in the genome). For example, the method by Jiang & Gentleman, Bioinformatics 2007
Confidence vs Credibility Intervals
http://freakonometrics.hypotheses.org/18117
Power analysis/Sample Size determination
- Sample size determination from Wikipedia
- Power and Sample Size Determination http://www.stat.wisc.edu/~st571-1/10-power-2.pdf#page=12
- http://biostat.mc.vanderbilt.edu/wiki/pub/Main/AnesShortCourse/HypothesisTestingPart1.pdf#page=40
- Power analysis and sample size calculation for Agriculture (pwr, lmSupport, simr packages are used)
- Why Within-Subject Designs Require Fewer Participants than Between-Subject Designs
Power analysis for default Bayesian t-tests
http://daniellakens.blogspot.com/2016/01/power-analysis-for-default-bayesian-t.html
Using simulation for power analysis: an example based on a stepped wedge study design
https://www.rdatagen.net/post/using-simulation-for-power-analysis-an-example/
Power analysis and sample size calculation for Agriculture
http://r-video-tutorial.blogspot.com/2017/07/power-analysis-and-sample-size.html
Power calculation for proportions (shiny app)
https://juliasilge.shinyapps.io/power-app/
Derive the formula/manual calculation
- One-sample one-sided test, one-sample two-sided test
- Two-sample two-sided t-test ([math]\displaystyle{ n }[/math] is the sample size in each group)
- [math]\displaystyle{ \begin{align} Power & = P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} \gt Z_{\alpha /2}) + P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} \lt -Z_{\alpha /2}) \\ & \approx P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} \gt Z_{\alpha /2}) \\ & = P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2 - \Delta}{\sqrt{2 * \sigma^2/n}} \gt Z_{\alpha /2} - \frac{\Delta}{\sqrt{2 * \sigma^2/n}}) \\ & = \Phi(-(Z_{\alpha /2} - \frac{\Delta}{\sqrt{2 * \sigma^2/n}})) \\ & = 1 - \beta =\Phi(Z_\beta) \end{align} }[/math]
Therefore
- [math]\displaystyle{ \begin{align} Z_{\beta} &= - Z_{\alpha/2} + \frac{\Delta}{\sqrt{2 * \sigma^2/n}} \\ Z_{\beta} + Z_{\alpha/2} & = \frac{\Delta}{\sqrt{2 * \sigma^2/n}} \\ 2 * (Z_{\beta} + Z_{\alpha/2})^2 * \sigma^2/\Delta^2 & = n \\ n & = 2 * (Z_{\beta} + Z_{\alpha/2})^2 * \sigma^2/\Delta^2 \end{align} }[/math]
# alpha = .05, delta = 200, n = 79.5, sigma = 450
1 - pnorm(1.96 - 200*sqrt(79.5)/(sqrt(2)*450)) +
  pnorm(-1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 0.8
pnorm(-1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 9.58e-07
1 - pnorm(1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 0.8
Calculating required sample size in R and SAS
The pwr package is used. For a two-sided test, the formula for the sample size is
- [math]\displaystyle{ n_{\mbox{each group}} = \frac{2 * (Z_{\alpha/2} + Z_\beta)^2 * \sigma^2}{\Delta^2} = \frac{2 * (Z_{\alpha/2} + Z_\beta)^2}{d^2} }[/math]
where [math]\displaystyle{ Z_\alpha }[/math] is the value of the standard normal distribution which cuts off an upper tail probability of [math]\displaystyle{ \alpha }[/math], [math]\displaystyle{ \Delta }[/math] is the difference sought, [math]\displaystyle{ \sigma }[/math] is the presumed standard deviation of the outcome, [math]\displaystyle{ \alpha }[/math] is the type I error, [math]\displaystyle{ \beta }[/math] is the type II error, and (Cohen's) d is the effect size, i.e. the difference between the means divided by the pooled standard deviation.
# An example from http://www.stat.columbia.edu/~gelman/stuff_for_blog/c13.pdf#page=3
# Method 1.
require(pwr)
pwr.t.test(d=200/450, power=.8, sig.level=.05,
           type="two.sample", alternative="two.sided")
#
#      Two-sample t test power calculation
#
#               n = 80.4
#               d = 0.444
#       sig.level = 0.05
#           power = 0.8
#     alternative = two.sided
#
# NOTE: n is number in *each* group

# Method 2.
2*(qnorm(.975) + qnorm(.8))^2*450^2/(200^2)
# [1] 79.5
2*(1.96 + .84)^2*450^2 / (200^2)
# [1] 79.4
The stats::power.t.test() function can also be used.
power.t.test(n = 79.5, delta = 200, sd = 450, sig.level = .05,
             type = "two.sample", alternative = "two.sided")
#
#      Two-sample t test power calculation
#
#               n = 79.5
#           delta = 200
#              sd = 450
#       sig.level = 0.05
#           power = 0.795
#     alternative = two.sided
#
# NOTE: n is number in *each* group
CRAN Task View: Design of Experiments
- powerAnalysis w/o vignette
- powerbydesign w/o vignette
- easypower w/ vignette
- pwr w/ vignette, https://www.statmethods.net/stats/power.html. The reference is Cohen's book.
- powerlmm Power Analysis for Longitudinal Multilevel/Linear Mixed-Effects Models.
- ssize.fdr w/o vignette
- samplesize w/o vignette
- ssizeRNA w/ vignette
- power.t.test(), power.anova.test(), power.prop.test() from stats package
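For two proportions, stats::power.prop.test() follows the same pattern as power.t.test(); a minimal sketch with arbitrary proportions:
power.prop.test(p1 = 0.5, p2 = 0.65, power = 0.8, sig.level = 0.05)
# NOTE: n is number in *each* group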
Russ Lenth's Java applets
https://homepage.divms.uiowa.edu/~rlenth/Power/index.html
Bootstrap method
The upstrap, Crainiceanu & Crainiceanu, Biostatistics 2018
Multiple Testing Case
Optimal Sample Size for Multiple Testing: The Case of Gene Expression Microarrays
Common covariance/correlation structures
See psu.edu. Assume covariance [math]\displaystyle{ \Sigma = (\sigma_{ij})_{p\times p} }[/math]
- Diagonal structure: [math]\displaystyle{ \sigma_{ij} = 0 }[/math] if [math]\displaystyle{ i \neq j }[/math].
- Compound symmetry: [math]\displaystyle{ \sigma_{ij} = \rho }[/math] if [math]\displaystyle{ i \neq j }[/math].
- First-order autoregressive AR(1) structure: [math]\displaystyle{ \sigma_{ij} = \rho^{|i - j|} }[/math].
rho <- .8
p <- 5
blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
- Banded matrix: [math]\displaystyle{ \sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0 }[/math] and [math]\displaystyle{ \sigma_{ij}=0 }[/math] for [math]\displaystyle{ |i-j| \ge 3 }[/math].
- Spatial Power
- Unstructured Covariance
- Toeplitz structure
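Banded and general Toeplitz correlation matrices can be built with stats::toeplitz(); a minimal sketch (the band values below are arbitrary choices for illustration):
p <- 5
bandedMat <- toeplitz(c(1, 0.5, 0.25, rep(0, p - 3)))  # nonzero only on the first two off-diagonals
toepMat   <- toeplitz(c(1, 0.6, 0.3, 0.1, 0.05))       # sigma_ij depends only on |i - j|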
To create a block-diagonal covariance matrix (the same block repeated along the diagonal), use the Kronecker product operator "%x%". See kronecker().
n.blocks <- 3  # number of diagonal blocks (not defined above; chosen for illustration)
covMat <- diag(n.blocks) %x% blockMat  # blockMat repeated n.blocks times on the diagonal
Counter/Special Examples
Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y. And if you know Y, you know the absolute value of X.
The covariance of X and Y is
Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3) = 0,
because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.
This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.
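A quick numerical check of this example (a sketch; with a finite sample the correlation is only approximately zero):
set.seed(1)
x <- rnorm(1e6)  # symmetric around 0
y <- x^2         # completely determined by x
cor(x, y)        # approximately 0: no linear association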
Spearman vs Pearson correlation
Pearson measures the strength of a linear relationship, while Spearman measures the strength of a monotonic relationship. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation
x <- 1:100
y <- exp(x)
cor(x, y, method='spearman')  # 1
cor(x, y, method='pearson')   # .25
Spearman vs Wilcoxon
According to this post:
- The Wilcoxon test is used to compare a non-normal continuous variable across the levels of a categorical variable.
- Spearman's rho is used to compare two continuous (including ordinal) variables when one or both are not normally distributed.
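A minimal simulated sketch of the two situations (the group sizes and exponential distributions are arbitrary choices):
set.seed(42)
g <- rep(c("A", "B"), each = 30)                # categorical group
v <- rexp(60, rate = ifelse(g == "A", 1, 0.5))  # skewed (non-normal) continuous outcome
wilcox.test(v ~ g)                              # categorical vs continuous

u <- rexp(60)
w <- u + rexp(60)                               # two non-normal continuous variables
cor.test(u, w, method = "spearman")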
Spearman vs Kendall correlation
- Kendall's tau coefficient (named after the Greek letter τ) is a statistic used to measure the ordinal association between two measured quantities.
- Kendall Tau or Spearman's rho?
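On the same data the two coefficients usually agree in sign, but Kendall's tau tends to be smaller in magnitude than Spearman's rho; a minimal simulated sketch:
set.seed(7)
x <- rnorm(50)
y <- x + rnorm(50)
cor(x, y, method = "spearman")
cor(x, y, method = "kendall")  # typically smaller in magnitude than rho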
Anscombe quartet
Four datasets share nearly the same summary statistics: the same mean of x, the same mean of y, the same variance of x, (almost) the same variance of y, the same correlation between x and y, and the same fitted linear regression, yet they look completely different when plotted.
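R ships the quartet as datasets::anscombe (columns x1-x4 and y1-y4); a quick sketch verifying the shared summaries:
data(anscombe)
sapply(1:4, function(i) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  c(mean.x = mean(x), mean.y = mean(y),
    var.x  = var(x),  var.y  = var(y),
    cor    = cor(x, y),
    slope  = unname(coef(lm(y ~ x))[2]))
})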
The real meaning of spurious correlations
https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/
library(ggplot2)
library(dplyr)   # for the pipe operator %>%

set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) + theme_bw() +
  labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
  theme_bw() + geom_smooth(method = "lm") +
  scale_color_gradientn(colours = c("red", "white", "blue")) +
  labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
  theme_bw() + geom_smooth(method = "lm") +
  scale_color_gradientn(colours = c("red", "white", "blue")) +
  labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")
Time series
Structural change
Structural Changes in Global Warming
AR(1) processes and random walks
Spurious correlations and random walks
Measurement Error model
- Errors-in-variables models (also known as measurement error models)
- Simulation-Selection-Extrapolation: Estimation in High-Dimensional Errors-in-Variables Models, Nghiem 2019
Dictionary
- Prognosis is the probability that an event or diagnosis will result in a particular outcome.
- For example, in the paper Developing and Validating Continuous Genomic Signatures in Randomized Clinical Trials for Predictive Medicine by Matsui 2012, a prognostic score of 0.1 (0.9) represents a good (poor) prognosis.
- Prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis. See Survival rate in wikipedia.
Data
Eleven quick tips for finding research data
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006038
Books
- Methods in Biostatistics with R ($)
- Modern Statistics for Modern Biology (free)
- Principles of Applied Statistics, by David Cox & Christl Donnelly
- Statistics by David Freedman, Robert Pisani, Roger Purves
- Wiley Online Library -> Statistics (Access by NIH Library)
- Computer Age Statistical Inference: Algorithms, Evidence and Data Science by Efron and Hastie 2016
Social
JSM
- 2019
- JSM 2019 and the post.
- An R Users Guide to JSM 2019