Statistics


Statisticians

The most important statistical ideas of the past 50 years

What are the most important statistical ideas of the past 50 years?, JASA 2021

Some Advice

Data

Rules for initial data analysis

Ten simple rules for initial data analysis

Exploratory Analysis (EDA)

Kurtosis

Kurtosis in R-What do you understand by Kurtosis?

Phi coefficient

How to Calculate Phi Coefficient in R. The phi coefficient measures the degree of association between two binary variables.
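A quick way to see this in R: for two 0/1 variables the phi coefficient equals the Pearson correlation, so cor() suffices. A minimal sketch with simulated binary data:

set.seed(1)
x <- rbinom(100, 1, 0.4)
y <- rbinom(100, 1, plogis(-1 + 2*x))  # y is associated with x
cor(x, y)                              # the phi coefficient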

Coefficient of variation (CV)

Motivating the coefficient of variation (CV) for beginners:

  • Boss: Measure it 5 times.
  • You: 8, 8, 9, 6, and 8
  • B: SD=1. Make it three times more precise!
  • Y: 0.20 0.20 0.23 0.15 0.20 meters. SD=0.03!
  • B: All you did was change to meters! Report the CV instead!
  • Y: Damn it.
R> sd(c(8, 8, 9, 6, 8))
[1] 1.095445
R> sd(c(8, 8, 9, 6, 8)*2.54/100)
[1] 0.02782431
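The point of the joke: the CV (SD divided by the mean) is unit-free, so it does not change when the measurements are rescaled. A quick check on the same numbers:

R> x <- c(8, 8, 9, 6, 8)                   # inches
R> sd(x) / mean(x)                         # CV in inches
[1] 0.1404417
R> sd(x*2.54/100) / mean(x*2.54/100)       # CV in meters -- identical
[1] 0.1404417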

Agreement

Common pitfalls in statistical analysis: Measures of agreement 2017

Cohen's Kappa statistic (2-class)

Fleiss Kappa statistic (more than two raters)

  • https://en.wikipedia.org/wiki/Fleiss%27_kappa
  • Fleiss kappa (more than two raters) to test interrater reliability or to evaluate the repeatability and stability of models (robustness). This was used by Cancer prognosis prediction of Zheng 2020. "In our case, each trained model is designed to be a rater to assign the affiliation of each variable (gene or pathway). We conducted 20 replications of fivefold cross validation. As such, we had 100 trained models, or 100 raters in total, among which the agreement was measured by the Fleiss kappa..."
  • Fleiss’ Kappa in R: For Multiple Categorical Variables. irr::kappam.fleiss() was used; see the sketch after this list.
  • Kappa statistic vs ICC
    • ICC and Kappa totally disagree
    • Measures of Interrater Agreement by Mandrekar 2011. "In certain clinical studies, agreement between the raters is assessed for a clinical outcome that is measured on a continuous scale. In such instances, intraclass correlation is calculated as a measure of agreement between the raters. Intraclass correlation is equivalent to weighted kappa under certain conditions, see the study by Fleiss and Cohen6, 7 for details."
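A minimal sketch of kappam.fleiss(), assuming the irr package and its bundled diagnoses data (30 subjects rated by 6 raters):

library(irr)
data(diagnoses)          # one row per subject, one column per rater
kappam.fleiss(diagnoses)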

ICC: intra-class correlation

See ICC

Compare two sets of p-values

https://stats.stackexchange.com/q/155407

Computing different kinds of correlations

correlation package

Association is not causation

Association is not causation

Predictive power score

Transform sample values to their percentiles

R> x <- c(1,2,3,4,4.5,6,7)
R> Fn <- ecdf(x)
R> Fn     # a *function*
Empirical CDF 
Call: ecdf(x)
 x[1:7] =      1,      2,      3,  ...,      6,      7
R> Fn(x)  # returns the percentiles for x
[1] 0.1428571 0.2857143 0.4285714 0.5714286 0.7142857 0.8571429 1.0000000
R> diff(Fn(x))
[1] 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571
R> quantile(x, Fn(x))
14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100% 
 1.857143  2.714286  3.571429  4.214286  4.928571  6.142857  7.000000 
R> quantile(x, Fn(x), type = 1) 
14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100% 
      1.0       2.0       3.0       4.0       4.5       6.0       7.0 

Standardization

Feature standardization considered harmful

Eleven quick tips for finding research data

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006038

An archive of 1000+ datasets distributed with R

https://vincentarelbundock.github.io/Rdatasets/

Box(Box, whisker & outlier)

An example for a graphical explanation. File:Boxplot.svg, File:Geom boxplot.png

> x=c(0,4,15, 1, 6, 3, 20, 5, 8, 1, 3)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0       2       4       6       7      20 
> sort(x)
 [1]  0  1  1  3  3  4  5  6  8 15 20
> y <- boxplot(x, col = 'grey')
> t(y$stats)
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    2    4    7    8
# the extreme of the lower whisker, the lower hinge, the median, 
# the upper hinge and the extreme of the upper whisker

# https://en.wikipedia.org/wiki/Quartile#Example_1
> summary(c(6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   6.00   25.50   40.00   33.18   42.50   49.00
  • The lower and upper edges of the box (also called the lower/upper hinges) are determined by the first and third quartiles (2 and 7 in the above example).
    • 2 = median(c(0, 1, 1, 3, 3, 4)) = (1+3)/2
    • 7 = median(c(4, 5, 6, 8, 15, 20)) = (6+8)/2
    • IQR = 7 - 2 = 5
  • The thick dark horizon line is the median (4 in the example).
  • Outliers (the empty circles in the plot) are defined as
    • observations larger than 3rd quartile + 1.5 * IQR (7+1.5*5=14.5), or
    • observations smaller than 1st quartile - 1.5 * IQR (2-1.5*5=-5.5).
    • Note that the cutoffs are not shown in the Box plot.
  • Whisker (defined using the cutoffs used to define outliers)
    • Upper whisker is defined by the largest "data" below 3rd quartile + 1.5 * IQR (8 in this example). Note Upper whisker is NOT defined as 3rd quartile + 1.5 * IQR.
    • Lower whisker is defined by the smallest "data" greater than 1st quartile - 1.5 * IQR (0 in this example). Note lower whisker is NOT defined as 1st quartile - 1.5 * IQR.
    • See another example below where we can see the whiskers fall on observations.

Note that Wikipedia lists several possible definitions of a whisker. R uses the 2nd method (Tukey boxplot) to define whiskers.

Create boxplots from a list object

Normally we use a vector to create a single boxplot, or a formula on a data frame to create boxplots by group.

But we can also use split() to create a list and then make boxplots.
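A minimal sketch using a built-in data set: boxplot() accepts a list, so split() gives one box per group (equivalent to the formula interface boxplot(mpg ~ cyl, data = mtcars)).

boxplot(split(mtcars$mpg, mtcars$cyl), col = 'grey', xlab = 'cyl', ylab = 'mpg')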

Dot-box plot

File:Boxdot.svg

geom_boxplot

Note that geom_boxplot() does not draw the horizontal caps (crossbars) at the ends of the whiskers. See How to generate a boxplot graph with whisker by ggplot or this. A trick is to add stat_boxplot(geom = "errorbar").

Without jitter

ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot() +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8, 
                                 hjust=0.8, size=6),  
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "") 

With jitter

ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot(outlier.shape=NA) + #avoid plotting outliers twice
  geom_jitter(position=position_jitter(width=.2, height=0)) +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8, 
                                 hjust=0.8, size=6),  
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "") 

Why geom_boxplot identify more outliers than base boxplot?

What do hjust and vjust do when making a plot using ggplot? The values of hjust and vjust are only defined between 0 and 1: 0 means left-justified, 1 means right-justified.

Other boxplots

File:Lotsboxplot.png

Annotated boxplot

https://stackoverflow.com/a/38032281

stem and leaf plot

stem(). See R Tutorial.

Note that stem plot is useful when there are outliers.

> stem(x)

  The decimal point is 10 digit(s) to the right of the |

   0 | 00000000000000000000000000000000000000000000000000000000000000000000+419
   1 |
   2 |
   3 |
   4 |
   5 |
   6 |
   7 |
   8 |
   9 |
  10 |
  11 |
  12 | 9

> max(x)
[1] 129243100275
> max(x)/1e10
[1] 12.92431

> stem(y)

  The decimal point is at the |

  0 | 014478
  1 | 0
  2 | 1
  3 | 9
  4 | 8

> y
 [1] 3.8667356428 0.0001762708 0.7993462430 0.4181079732 0.9541728562
 [6] 4.7791262101 0.6899313108 2.1381289177 0.0541736818 0.3868776083

> set.seed(1234)
> z <- rnorm(10)*10
> z
 [1] -12.070657   2.774292  10.844412 -23.456977   4.291247   5.060559
 [7]  -5.747400  -5.466319  -5.644520  -8.900378
> stem(z)

  The decimal point is 1 digit(s) to the right of the |

  -2 | 3
  -1 | 2
  -0 | 9665
   0 | 345
   1 | 1

Box-Cox transformation

the Holy Trinity (LRT, Wald, Score tests)

Don't invert that matrix

Different matrix decompositions/factorizations

set.seed(1234)
x <- matrix(rnorm(10*2), nr= 10)
cmat <- cov(x); cmat
# [,1]       [,2]
# [1,]  0.9915928 -0.1862983
# [2,] -0.1862983  1.1392095

# cholesky decom
d1 <- chol(cmat)
t(d1) %*% d1  # equal to cmat
d1  # upper triangle
# [,1]       [,2]
# [1,] 0.9957875 -0.1870864
# [2,] 0.0000000  1.0508131

# svd
d2 <- svd(cmat)
d2$u %*% diag(d2$d) %*% t(d2$v) # equal to cmat
d2$u %*% diag(sqrt(d2$d))
# [,1]      [,2]
# [1,] -0.6322816 0.7692937
# [2,]  0.9305953 0.5226872

Model Estimation with R

Model Estimation by Example Demonstrations with R. Michael Clark

Regression

Regression

Non- and semi-parametric regression

Mean squared error

Splines

k-Nearest neighbor regression

  • class::knn()
  • k-NN regression in practice: boundary problem, discontinuities problem.
  • Weighted k-NN regression: want weight to be small when distance is large. Common choices - weight = kernel(xi, x)

Kernel regression

  • Instead of weighting NN, weight ALL points. Nadaraya-Watson kernel weighted average:

[math]\displaystyle{ \hat{y}_q = \sum c_{qi} y_i/\sum c_{qi} = \frac{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))*y_i}{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))} }[/math].

  • Choice of bandwidth [math]\displaystyle{ \lambda }[/math] for bias, variance trade-off. Small [math]\displaystyle{ \lambda }[/math] is over-fitting. Large [math]\displaystyle{ \lambda }[/math] can get an over-smoothed fit. Cross-validation.
  • Kernel regression leads to locally constant fit.
  • Issues with high dimensions, data scarcity and computational complexity.

Principal component analysis

See PCA.

Partial Least Squares (PLS)

[math]\displaystyle{ X = T P^\mathrm{T} + E }[/math]
[math]\displaystyle{ Y = U Q^\mathrm{T} + F }[/math]
where X is an [math]\displaystyle{ n \times m }[/math] matrix of predictors, Y is an [math]\displaystyle{ n \times p }[/math] matrix of responses; T and U are [math]\displaystyle{ n \times l }[/math] matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); P and Q are, respectively, [math]\displaystyle{ m \times l }[/math] and [math]\displaystyle{ p \times l }[/math] orthogonal loading matrices; and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance between T and U (projection matrices).
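A small sketch with the pls package (simulated data; plsr(), scores() and loadings() return the quantities named above):

library(pls)
set.seed(1)
X <- matrix(rnorm(100*5), 100, 5)
Y <- X[,1] + 0.5*X[,2] + rnorm(100)
fit <- plsr(Y ~ X, ncomp = 2, validation = "CV")
summary(fit)         # explained variance in X and Y per component
scores(fit)[1:3, ]   # T, the X scores
loadings(fit)        # P, the X loadings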

High dimension

dimRed package

dimRed package

Feature selection

Goodness-of-fit

Independent component analysis

ICA is another dimensionality reduction method.

ICA vs PCA

ICS vs FA

Robust independent component analysis

robustica: customizable robust independent component analysis 2022

Canonical correlation analysis

Non-negative CCA

Correspondence analysis

Non-negative matrix factorization

Optimization and expansion of non-negative matrix factorization

Nonlinear dimension reduction

The Specious Art of Single-Cell Genomics by Chari 2021

t-SNE

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.

Perplexity parameter

  • Balance attention between local and global aspects of the dataset
  • A guess about the number of close neighbors
  • In a real setting it is important to try different values
  • Must be lower than the number of input records
  • Interactive t-SNE online: in addition to perplexity, there are also learning-rate and maximum-iteration parameters.

Classifying digits with t-SNE: MNIST data

Below is an example from datacamp Advanced Dimensionality Reduction in R.

The mnist_sample data set is very small (200 x 785). Here (Exploring handwritten digit classification: a tidy analysis of the MNIST dataset) is a larger data set with 60k records (60000 x 785).

  1. Generating t-SNE features
    library(readr)
    library(dplyr)
    
    # 104MB
    mnist_raw <- read_csv("https://pjreddie.com/media/files/mnist_train.csv", col_names = FALSE)
    mnist_10k <- mnist_raw[1:10000, ]
    colnames(mnist_10k) <- c("label", paste0("pixel", 0:783))
    
    library(ggplot2)
    library(Rtsne)
    
    tsne <- Rtsne(mnist_10k[, -1], perplexity = 5)
    tsne_plot <- data.frame(tsne_x= tsne$Y[1:5000,1],
                            tsne_y = tsne$Y[1:5000,2],
                            digit = as.factor(mnist_10k[1:5000,]$label))
    # visualize obtained embedding
    ggplot(tsne_plot, aes(x= tsne_x, y = tsne_y, color = digit)) +
      ggtitle("MNIST embedding of the first 5K digits") +
      geom_text(aes(label = digit)) + theme(legend.position= "none")
    
  2. Computing centroids
    library(data.table)
    # Get t-SNE coordinates
    centroids <- as.data.table(tsne$Y[1:5000,])
    setnames(centroids, c("X", "Y"))
    centroids[, label := as.factor(mnist_10k[1:5000,]$label)]
    # Compute centroids
    centroids[, mean_X := mean(X), by = label]
    centroids[, mean_Y := mean(Y), by = label]
    centroids <- unique(centroids, by = "label")
    # visualize centroids
    ggplot(centroids, aes(x= mean_X, y = mean_Y, color = label)) +
      ggtitle("Centroids coordinates") + geom_text(aes(label = label)) +
      theme(legend.position = "none")
    
  3. Classifying new digits
    # Get new examples of digits 4 and 9
    distances <- as.data.table(tsne$Y[5001:10000,])
    setnames(distances, c("X" , "Y"))
    distances[, label := mnist_10k[5001:10000,]$label]
    distances <- distances[label == 4 | label == 9]
    # Compute the distance to the centroids
    distances[, dist_4 := sqrt((X - centroids[label==4,]$mean_X)^2 + 
                               (Y - centroids[label==4,]$mean_Y)^2)]
    dim(distances)
    # [1] 928   4
    distances[1:3, ]
    #            X        Y label   dist_4
    # 1: -15.90171 27.62270     4 1.494578
    # 2: -33.66668 35.69753     9 8.195562
    # 3: -16.55037 18.64792     9 8.128860
    
    # Plot distance to each centroid
    ggplot(distances, aes(x=dist_4, fill = as.factor(label))) + 
      geom_histogram(binwidth=5, alpha=.5, position="identity", show.legend = F)
    

Fashion MNIST data

  • fashion_mnist is only 500x785
  • keras has 60k x 785. Miniconda is required when we want to use the package.

tSNE vs PCA

Two groups example

suppressPackageStartupMessages({
  library(splatter)
  library(scater)
})

sim.groups <- splatSimulate(group.prob = c(0.5, 0.5), method = "groups",
                            verbose = FALSE)
sim.groups <- logNormCounts(sim.groups)
sim.groups <- runPCA(sim.groups)
plotPCA(sim.groups, colour_by = "Group") # 2 groups separated in PC1

sim.groups <- runTSNE(sim.groups)
plotTSNE(sim.groups, colour_by = "Group") # 2 groups separated in TSNE2

UMAP

GECO

GECO: gene expression clustering optimization app for non-linear data visualization of patterns

Visualize the random effects

http://www.quantumforest.com/2012/11/more-sense-of-random-effects/

Calibration

  • Search by image: graphical explanation of calibration problem
  • Calibration: the Achilles heel of predictive analytics Calster 2019
  • https://www.itl.nist.gov/div898/handbook/pmd/section1/pmd133.htm Calibration and calibration curve.
    • Y=voltage (observed), X=temperature (true/ideal). The calibration curve for a thermocouple is often constructed by comparing the thermocouple output (observed) to relatively precise thermometer data (true).
    • when a new temperature is measured with the thermocouple, the voltage is converted to temperature terms by plugging the observed voltage into the regression equation and solving for temperature.
    • It is important to note that the thermocouple measurements, made on the secondary measurement scale, are treated as the response variable and the more precise thermometer results, on the primary scale, are treated as the predictor variable because this best satisfies the underlying assumptions (Y=observed, X=true) of the analysis.
    • Calibration interval
    • In almost all calibration applications the ultimate quantity of interest is the true value of the primary-scale measurement method associated with a measurement made on the secondary scale.
    • It seems the x-axis and y-axis have similar ranges in many applications.
  • An Exercise in the Real World of Design and Analysis, Denby, Landwehr, and Mallows 2001. Inverse regression
  • How to determine calibration accuracy/uncertainty of a linear regression?
  • Linear Regression and Calibration Curves
  • Regression and calibration Shaun Burke
  • calibrate package
  • investr: An R Package for Inverse Estimation. Paper
  • The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models by Kattan and Gerds 2018. The following code demonstrates Figure 2.
    # Odds ratio =1 and calibrated model
    set.seed(666)
    x = rnorm(1000)           
    z1 = 1 + 0*x        
    pr1 = 1/(1+exp(-z1))
    y1 = rbinom(1000,1,pr1)  
    mean(y1) # .724, marginal prevalence of the outcome
    dat1 <- data.frame(x=x, y=y1)
    newdat1 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr1))
    
    # Odds ratio =1 and severely miscalibrated model
    set.seed(666)
    x = rnorm(1000)           
    z2 =  -2 + 0*x        
    pr2 = 1/(1+exp(-z2))  
    y2 = rbinom(1000,1,pr2)  
    mean(y2) # .12
    dat2 <- data.frame(x=x, y=y2)
    newdat2 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr2))
    
    library(riskRegression)
    lrfit1 <- glm(y ~ x, data = dat1, family = 'binomial')
    IPA(lrfit1, newdata = newdat1)
    #     Variable     Brier           IPA     IPA.gain
    # 1 Null model 0.1984710  0.000000e+00 -0.003160010
    # 2 Full model 0.1990982 -3.160010e-03  0.000000000
    # 3          x 0.1984800 -4.534668e-05 -0.003114664
    1 - 0.1990982/0.1984710
    # [1] -0.003160159
    
    lrfit2 <- glm(y ~ x, data = dat2, family = 'binomial')
    IPA(lrfit2, newdata = newdat1)
    #     Variable     Brier       IPA     IPA.gain
    # 1 Null model 0.1984710  0.000000 -1.859333763
    # 2 Full model 0.5674948 -1.859334  0.000000000
    # 3          x 0.5669200 -1.856437 -0.002896299
    1 - 0.5674948/0.1984710
    # [1] -1.859334
    From the simulated data, we see IPA = -3.16e-3 for a calibrated model and IPA = -1.86 for a severely miscalibrated model.

ROC curve

See ROC.

NRI (Net reclassification improvement)

Maximum likelihood

Difference of partial likelihood, profile likelihood and marginal likelihood

EM Algorithm

Mixture model

mixComp: Estimation of the Order of Mixture Distributions

MLE

Maximum Likelihood Distilled

Efficiency of an estimator

What does it mean by more “efficient” estimator

Inference

infer package

Generalized Linear Model

Link function

Link Functions versus Data Transforms

Extract coefficients, z, p-values

Use coef(summary(glmObject))

> coef(summary(glm.D93))
                 Estimate Std. Error       z value     Pr(>|z|)
(Intercept)  3.044522e+00  0.1708987  1.781478e+01 5.426767e-71
outcome2    -4.542553e-01  0.2021708 -2.246889e+00 2.464711e-02
outcome3    -2.929871e-01  0.1927423 -1.520097e+00 1.284865e-01
treatment2   1.337909e-15  0.2000000  6.689547e-15 1.000000e+00
treatment3   1.421085e-15  0.2000000  7.105427e-15 1.000000e+00

Quasi Likelihood

Quasi-likelihood is like log-likelihood. The quasi-score function (first derivative of quasi-likelihood function) is the estimating equation.
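A minimal sketch in R, assuming the warpbreaks data as an overdispersed count example: family = quasipoisson gives the same coefficient estimates as Poisson but scales the standard errors by the estimated dispersion.

fit_pois  <- glm(breaks ~ wool + tension, data = warpbreaks, family = poisson)
fit_quasi <- glm(breaks ~ wool + tension, data = warpbreaks, family = quasipoisson)
summary(fit_quasi)$dispersion           # estimated dispersion (> 1 here)
cbind(coef(fit_pois), coef(fit_quasi))  # identical point estimates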

IRLS

Plot

https://strengejacke.wordpress.com/2015/02/05/sjplot-package-and-related-online-manuals-updated-rstats-ggplot/

Deviance, stats::deviance() and glmnet::deviance.glmnet() from R

## an example with offsets from Venables & Ripley (2002, p.189)
utils::data(anorexia, package = "MASS")

anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                family = gaussian, data = anorexia)
summary(anorex.1)

# Call:
#   glm(formula = Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, 
#       data = anorexia)
# 
# Deviance Residuals: 
#   Min        1Q    Median        3Q       Max  
# -14.1083   -4.2773   -0.5484    5.4838   15.2922  
# 
# Coefficients:
#   Estimate Std. Error t value Pr(>|t|)    
# (Intercept)  49.7711    13.3910   3.717 0.000410 ***
#   Prewt        -0.5655     0.1612  -3.509 0.000803 ***
#   TreatCont    -4.0971     1.8935  -2.164 0.033999 *  
#   TreatFT       4.5631     2.1333   2.139 0.036035 *  
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for gaussian family taken to be 48.69504)
# 
# Null deviance: 4525.4  on 71  degrees of freedom
# Residual deviance: 3311.3  on 68  degrees of freedom
# AIC: 489.97
# 
# Number of Fisher Scoring iterations: 2

deviance(anorex.1)
# [1] 3311.263
  • In glmnet package. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Null deviance is defined to be 2*(loglike_sat -loglike(Null)); The NULL model refers to the intercept model, except for the Cox, where it is the 0 model. Hence dev.ratio=1-deviance/nulldev, and this deviance method returns (1-dev.ratio)*nulldev.
x=matrix(rnorm(100*2),100,2)
y=rnorm(100)
fit1=glmnet(x,y) 
deviance(fit1)  # one for each lambda
#  [1] 98.83277 98.53893 98.29499 98.09246 97.92432 97.78472 97.66883
#  [8] 97.57261 97.49273 97.41327 97.29855 97.20332 97.12425 97.05861
# ...
# [57] 96.73772 96.73770
fit2 <- glmnet(x, y, lambda=.1) # fix lambda
deviance(fit2)
# [1] 98.10212
deviance(glm(y ~ x))
# [1] 96.73762
sum(residuals(glm(y ~ x))^2)
# [1] 96.73762

Saturated model

Testing

Generalized Additive Models

Simulate data

Density plot

# plot a Weibull distribution with shape and scale
func <- function(x) dweibull(x, shape = 1, scale = 3.38)
curve(func, .1, 10)

func <- function(x) dweibull(x, shape = 1.1, scale = 3.38)
curve(func, .1, 10)

The shape parameter plays a role on the shape of the density function and the failure rate.

  • Shape <=1: density is convex, not a hat shape.
  • Shape =1: failure rate (hazard function) is constant. Exponential distribution.
  • Shape >1: failure rate increases with time

Simulate data from a specified density

Permuted block randomization

Permuted block randomization using simstudy

Correlated data

Clustered data with marginal correlations

Generating clustered data with marginal correlations

Signal to noise ratio/SNR

[math]\displaystyle{ SNR = \frac{\sigma^2_{signal}}{\sigma^2_{noise}} = \frac{Var(f(X))}{Var(e)} }[/math] if Y = f(X) + e
  • The SNR is related to the correlation of Y and f(X). Assume X and e are independent ([math]\displaystyle{ X \perp e }[/math]):
[math]\displaystyle{ \begin{align} Cor(Y, f(X)) &= Cor(f(X)+e, f(X)) \\ &= \frac{Cov(f(X)+e, f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\ &= \frac{Var(f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\ &= \frac{\sqrt{Var(f(X))}}{\sqrt{Var(f(X)) + Var(e)}} = \frac{\sqrt{SNR}}{\sqrt{SNR + 1}} \\ &= \frac{1}{\sqrt{1 + Var(e)/Var(f(X))}} = \frac{1}{\sqrt{1 + SNR^{-1}}} \end{align} }[/math]
File:SnrVScor.png
Or [math]\displaystyle{ SNR = \frac{Cor^2}{1-Cor^2} }[/math]
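A quick simulation check of the Cor-SNR relation above (assuming f(X)=X and independent noise):

set.seed(1)
x <- rnorm(1e5)
e <- rnorm(1e5, sd = 2)     # SNR = Var(x)/Var(e) = 1/4
y <- x + e
cor(y, x)                   # close to sqrt(SNR/(SNR+1))
sqrt(.25/1.25)              # 0.4472136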

Some examples of signal to noise ratio

Effect size, Cohen's d and volcano plot

[math]\displaystyle{ \theta = \frac{\mu_1 - \mu_2} \sigma, }[/math]

Treatment/control

  • simdata() from biospear package
  • data.gen() from ROCSI package. The response contains continuous, binary and survival outcomes. The inputs include the prevalence of predictive biomarkers, the effect size (beta) for the prognostic biomarker, etc.

Cauchy distribution has no expectation

https://en.wikipedia.org/wiki/Cauchy_distribution

replicate(10, mean(rcauchy(10000)))

Dirichlet distribution

  • Dirichlet distribution
    • It is a multivariate generalization of the beta distribution
    • The Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution.
  • dirmult::rdirichlet()
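A minimal sketch with dirmult::rdirichlet(): each draw is a probability vector, so every row sums to 1.

library(dirmult)
set.seed(1)
p <- rdirichlet(n = 5, alpha = c(1, 2, 3))
rowSums(p)   # all equal to 1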

Multiple comparisons

Take an example, Suppose 550 out of 10,000 genes are significant at .05 level

  1. P-value < .05 ==> Expect .05*10,000=500 false positives
  2. False discovery rate < .05 ==> Expect .05*550 =27.5 false positives
  3. Family wise error rate < .05 ==> The probability of at least 1 false positive <.05

According to Lifetime Risk of Developing or Dying From Cancer, there is a 39.7% risk of developing a cancer for a male during his lifetime (in other words, 1 out of every 2.52 men in the US will develop some kind of cancer during his lifetime) and 37.6% for a female. So the probability of getting at least one cancer patient in a family with three men and three women (say, a three-generation family) is 1 - (1-.397)^3 * (1-.376)^3 ≈ 0.95.
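A quick check of the arithmetic in R:

1 - (1 - 0.397)^3 * (1 - 0.376)^3
# [1] 0.9467272   (about 0.95)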

Flexible method

?GSEABenchmarkeR::runDE. Unadjusted (too few DE genes), FDR, and Bonferroni (too many DE genes) are applied depending on the proportion of DE genes.

Family-Wise Error Rate (FWER)

Bonferroni

False Discovery Rate/FDR

Suppose [math]\displaystyle{ p_1 \leq p_2 \leq ... \leq p_n }[/math]. Then

[math]\displaystyle{ \text{FDR}_i = \text{min}(1, n* p_i/i) }[/math].

So if the number of tests ([math]\displaystyle{ n }[/math]) is large and/or the original p value ([math]\displaystyle{ p_i }[/math]) is large, then FDR can hit the value 1.

However, the simple formula above does not guarantee the monotonicity property of the FDR. So the calculation in R is more complicated. See How Does R Calculate the False Discovery Rate.
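A short sketch showing how the BH adjustment in p.adjust() matches the formula above once monotonicity is enforced with cummin():

set.seed(1)
p <- sort(runif(10)^2)                                  # toy p-values, sorted
n <- length(p)
fdr <- rev(cummin(rev(pmin(1, n * p / seq_len(n)))))    # min(1, n*p_i/i), made monotone
all.equal(fdr, p.adjust(p, method = "BH"))
# [1] TRUE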

Below is the histograms of p-values and FDR (BH adjusted) from a real data (Pomeroy in BRB-ArrayTools).

File:Hist bh.svg

And the next is a scatterplot w/ histograms on the margins from a null data.

File:Scatterhist.svg

q-value

q-value is defined as the minimum FDR that can be attained when calling that feature significant (i.e., expected proportion of false positives incurred when calling that feature significant).

If gene X has a q-value of 0.013 it means that 1.3% of genes that show p-values at least as small as gene X are false positives.

Another view: q-value = FDR adjusted p-value. A p-value of 5% means that 5% of all tests will result in false positives. A q-value of 5% means that 5% of significant results will result in false positives. here.

Double dipping

Double dipping

SAM/Significance Analysis of Microarrays

The percentile option is used to define the number of falsely called genes based on 'B' permutations. If we use the 90-th percentile, the number of significant genes will be less than if we use the 50-th percentile/median.

In BRCA dataset, using the 90-th percentile will get 29 genes vs 183 genes if we use median.

Required number of permutations for a permutation-based p-value

Multivariate permutation test

In BRCA dataset, using 80% confidence gives 116 genes vs 237 genes if we use 50% confidence (assuming maximum proportion of false discoveries is 10%). The method is published on EL Korn, JF Troendle, LM McShane and R Simon, Controlling the number of false discoveries: Application to high dimensional genomic data, Journal of Statistical Planning and Inference, vol 124, 379-398 (2004).

The role of the p-value in the multitesting problem

https://www.tandfonline.com/doi/full/10.1080/02664763.2019.1682128

String Permutations Algorithm

https://youtu.be/nYFd7VHKyWQ

combinat package

Find all Permutations

coin package: Resampling

Resampling Statistics

Empirical Bayes Normal Means Problem with Correlated Noise

Solving the Empirical Bayes Normal Means Problem with Correlated Noise Sun 2018

The package cashr and the source code of the paper

Bayes

Bayes factor

Empirical Bayes method

Naive Bayes classifier

Understanding Naïve Bayes Classifier Using R

MCMC

Speeding up Metropolis-Hastings with Rcpp

offset() function

Offset in Poisson regression

  1. We need to model rates instead of counts
  2. More generally, you use offsets because the units of observation are different in some dimension (different populations, different geographic sizes) and the outcome is proportional to that dimension.

An example from here

Y  <- c(15,  7, 36,  4, 16, 12, 41, 15)
N  <- c(4949, 3534, 12210, 344, 6178, 4883, 11256, 7125)
x1 <- c(-0.1, 0, 0.2, 0, 1, 1.1, 1.1, 1)
x2 <- c(2.2, 1.5, 4.5, 7.2, 4.5, 3.2, 9.1, 5.2)

glm(Y ~ offset(log(N)) + (x1 + x2), family=poisson) # two variables
# Coefficients:
# (Intercept)           x1           x2
#     -6.172       -0.380        0.109
#
# Degrees of Freedom: 7 Total (i.e. Null);  5 Residual
# Null Deviance:	    10.56
# Residual Deviance: 4.559 	AIC: 46.69
glm(Y ~ offset(log(N)) + I(x1+x2), family=poisson)  # one variable
# Coefficients:
# (Intercept)   I(x1 + x2)
#   -6.12652      0.04746
#
# Degrees of Freedom: 7 Total (i.e. Null);  6 Residual
# Null Deviance:	    10.56
# Residual Deviance: 8.001 	AIC: 48.13

Offset in Cox regression

An example from biospear::PCAlasso()

coxph(Surv(time, status) ~ offset(off.All), data = data)
# Call:  coxph(formula = Surv(time, status) ~ offset(off.All), data = data)
#
# Null model
#   log likelihood= -2391.736 
#   n= 500 

# versus without using offset()
coxph(Surv(time, status) ~ off.All, data = data)
# Call:
# coxph(formula = Surv(time, status) ~ off.All, data = data)
#
#          coef exp(coef) se(coef)    z    p
# off.All 0.485     1.624    0.658 0.74 0.46
#
# Likelihood ratio test=0.54  on 1 df, p=0.5
# n= 500, number of events= 438 
coxph(Surv(time, status) ~ off.All, data = data)$loglik
# [1] -2391.702 -2391.430    # log-likelihood at the initial (beta=0) and final coefficient estimates

Offset in linear regression

Overdispersion

https://en.wikipedia.org/wiki/Overdispersion

Var(Y) = phi * E(Y). If phi > 1, then it is overdispersion relative to Poisson. If phi <1, we have under-dispersion (rare).

Heterogeneity

The Poisson model fit is not good; residual deviance/df >> 1. The lack of fit may be due to missing data or covariates, or to overdispersion.

Subjects within each covariate combination still differ greatly.

Consider Quasi-Poisson or negative binomial.

Test of overdispersion or underdispersion in Poisson models

https://stats.stackexchange.com/questions/66586/is-there-a-test-to-determine-whether-glm-overdispersion-is-significant
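A quick informal check (using the warpbreaks Poisson fit as an assumed example): the Pearson chi-square statistic divided by the residual degrees of freedom estimates the dispersion; values well above 1 suggest overdispersion. AER::dispersiontest() offers a formal test.

fit <- glm(breaks ~ wool + tension, data = warpbreaks, family = poisson)
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)  # estimated dispersion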

Poisson

Negative Binomial

The mean of the Poisson distribution can itself be thought of as a random variable drawn from the gamma distribution thereby introducing an additional free parameter.

Binomial

Count data

Zero counts

Bias

Bias in Small-Sample Inference With Count-Data Models Blackburn 2019

Survival data analysis

See Survival data analysis

Logistic regression

Simulate binary data from the logistic model

https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression

set.seed(666)
x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
 
#now feed it to glm:
df = data.frame(y=y,x1=x1,x2=x2)
glm( y~x1+x2,data=df,family="binomial")

Building a Logistic Regression model from scratch

https://www.analyticsvidhya.com/blog/2015/10/basics-logistic-regression

Algorithm didn’t converge & probabilities 0/1

Prediction

Odds ratio

  • https://en.wikipedia.org/wiki/Odds_ratio. It seems a larger OR does not imply a smaller Fisher's exact p-value. See an example on Fig 4 here.
  • Odds ratio = exp(coefficient). For example, if the coefficient for a predictor variable in your logistic regression model is 0.5, the odds ratio for that variable would be: exp(0.5) = 1.64. This means that, for every unit increase in the predictor variable, the odds of the binary outcome occurring increase by a factor of 1.64. A larger odds ratio indicates a stronger association between the predictor variable and the binary outcome, while a smaller odds ratio indicates a weaker association.
  • Why is the odds ratio exp(coefficient) in logistic regression? Because the model is linear on the logit (log-odds) scale: logit(p) = log(p/(1-p)), where p is the probability of the binary outcome occurring. A one-unit increase in a predictor adds its coefficient to the log odds, which multiplies the odds by exp(coefficient).
  • Clinical example: Imagine that you are conducting a study to investigate the association between body mass index (BMI) and the risk of developing type 2 diabetes. Fit a logistic regression using BMI as the covariate. Calculate the odds ratio for the BMI variable: exp(coefficient) = 1.64. This means that, for every unit increase in BMI, the odds of a patient developing type 2 diabetes increase by a factor of 1.64.
  • Probability vs. odds: Probability and odds can differ from each other in many ways. For example, probability (of an event) typically appears as a percentage, while you can express odds as a fraction or ratio (the ratio of the number of ways the event can occur to the number of ways it cannot occur). Another difference is that probability uses a range that only exists between the numbers zero and one, while odds use a range that has no limits.
  • Calculate the odds ratio from the coefficient estimates; see this post.
    require(MASS)
    N  <- 100               # generate some data
    X1 <- rnorm(N, 175, 7)
    X2 <- rnorm(N,  30, 8)
    X3 <- abs(rnorm(N, 60, 30))
    Y  <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
    
    # dichotomize Y and do logistic regression
    Yfac   <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
    glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
    
    exp(cbind(coef(glmFit), confint(glmFit)))  
    

AUC

A small introduction to the ROCR package

       predict.glm()             ROCR::prediction()     ROCR::performance()
glmobj ------------> predictTest -----------------> ROCPPred ---------> AUC
newdata                labels
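A minimal sketch of the pipeline above using ROCR's bundled ROCR.simple data (predicted probabilities plus 0/1 labels):

library(ROCR)
data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
performance(pred, "auc")@y.values[[1]]   # AUC
plot(performance(pred, "tpr", "fpr"))    # ROC curve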

Gompertz function

Medical applications

RCT

The design effect of a cluster randomized trial with baseline measurements

Subgroup analysis

Other related keywords: recursive partitioning, randomized clinical trials (RCT)

Interaction analysis

Statistical Learning

LDA (Fisher's linear discriminant), QDA

Bagging

Chapter 8 of the book.

  • Bootstrap mean is approximately a posterior average.
  • Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
[math]\displaystyle{ \hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x). }[/math]

Where Bagging Might Work Better Than Boosting

CLASSIFICATION FROM SCRATCH, BAGGING AND FORESTS 10/8

Boosting

AdaBoost

AdaBoost.M1 by Freund and Schapire (1997):

The error rate on the training sample is [math]\displaystyle{ \bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)), }[/math]

Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers [math]\displaystyle{ G_m(x), m=1,2,\dots,M. }[/math]

The predictions from all of them are combined through a weighted majority vote to produce the final prediction: [math]\displaystyle{ G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)]. }[/math] Here [math]\displaystyle{ \alpha_1,\alpha_2,\dots,\alpha_M }[/math] are computed by the boosting algorithm and weight the contribution of each respective [math]\displaystyle{ G_m(x) }[/math]. Their effect is to give higher influence to the more accurate classifiers in the sequence.

Dropout regularization

DART: Dropout Regularization in Boosting Ensembles

Gradient boosting

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.

  • Gradient Descent in R by Econometric Sense. Example of using the trivial cost function 1.2 * (x-2)^2 + 3.2. R code is provided and visualization of steps is interesting! The unknown parameter is the learning rate.
    repeat until convergence {
      Xn+1 = Xn - α∇F(Xn) 
    }
    

    Where ∇F(x) would be the derivative for the cost function at hand and α is the learning rate.

The error function from a simple linear regression looks like

[math]\displaystyle{ \begin{align} Err(m,b) &= \frac{1}{n}\sum_{i=1}^n (y_i - (m x_i + b))^2. \end{align} }[/math]

We compute the gradient first for each parameters.

[math]\displaystyle{ \begin{align} \frac{\partial Err}{\partial m} &= \frac{2}{n} \sum_{i=1}^n -x_i(y_i - (m x_i + b)), \\ \frac{\partial Err}{\partial b} &= \frac{2}{n} \sum_{i=1}^n -(y_i - (m x_i + b)) \end{align} }[/math]

The gradient descent algorithm uses an iterative method to update the estimates using a tuning parameter called learning rate.

new_m = m_current - (learningRate * m_gradient) 
new_b = b_current - (learningRate * b_gradient) 

After each iteration, the gradient moves closer to zero. Coding in R for the simple linear regression.
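A minimal R sketch of the updates above on simulated data (the learning rate and iteration count are arbitrary choices):

set.seed(1)
x <- runif(100)
y <- 2 + 3*x + rnorm(100, sd = .5)
m <- 0; b <- 0; alpha <- 0.1                  # initial values and learning rate
for (i in 1:10000) {
  e <- y - (m*x + b)                          # current residuals
  m <- m - alpha * (-2 * mean(x * e))         # gradient step for the slope
  b <- b - alpha * (-2 * mean(e))             # gradient step for the intercept
}
c(b, m)
coef(lm(y ~ x))                               # essentially the same answer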

Gradient descent vs Newton's method

Classification and Regression Trees (CART)

Construction of the tree classifier

  • Node proportion
[math]\displaystyle{ p(1|t) + \dots + p(6|t) =1 }[/math] where [math]\displaystyle{ p(j|t) }[/math] denotes the node proportion (the proportion of class j cases in node t). Here we assume there are 6 classes.
  • Impurity of node t
[math]\displaystyle{ i(t) }[/math] is a nonnegative function [math]\displaystyle{ \phi }[/math] of [math]\displaystyle{ p(1|t), \dots, p(6|t) }[/math] such that [math]\displaystyle{ \phi(1/6,1/6,\dots,1/6) }[/math] is the maximum and [math]\displaystyle{ \phi(1,0,\dots,0)=0, \phi(0,1,0,\dots,0)=0, \dots, \phi(0,0,0,0,0,1)=0 }[/math]. That is, the node impurity is largest when all classes are equally mixed together in it, and smallest when the node contains only one class.
  • Entropy impurity (a common choice of [math]\displaystyle{ \phi }[/math]; the Gini index [math]\displaystyle{ 1 - \sum_j p(j|t)^2 }[/math] is another)
[math]\displaystyle{ i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t). }[/math]
  • Goodness of the split s on node t
[math]\displaystyle{ \Delta i(s, t) = i(t) - p_L i(t_L) - p_R i(t_R), }[/math] where [math]\displaystyle{ p_L }[/math] is the proportion of the cases in t that go into the left node [math]\displaystyle{ t_L }[/math] and [math]\displaystyle{ p_R }[/math] the proportion that go into the right node [math]\displaystyle{ t_R }[/math].

A tree was grown in the following way: At the root node [math]\displaystyle{ t_1 }[/math], a search was made through all candidate splits to find that split [math]\displaystyle{ s^* }[/math] which gave the largest decrease in impurity;

[math]\displaystyle{ \Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1). }[/math]
  • Class character of a terminal node was determined by the plurality rule. Specifically, if [math]\displaystyle{ p(j_0|t)=\max_j p(j|t) }[/math], then t was designated as a class [math]\displaystyle{ j_0 }[/math] terminal node.

R packages

Partially additive (generalized) linear model trees

Supervised Classification, Logistic and Multinomial

Variable selection

Review

Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.

Variable selection and variable importance plot

Variable selection and cross-validation

Mallow Cp

Mallows's Cp addresses the issue of overfitting. The Cp statistic calculated on a sample of data estimates the mean squared prediction error (MSPE).

[math]\displaystyle{ E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2, }[/math]

The Cp statistic is defined as

[math]\displaystyle{ C_p={SSE_p \over S^2} - N + 2P. }[/math]

Variable selection for mode regression

http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Chen & Zhou, Journal of applied statistics ,June 2017

lmSubsets

lmSubsets: Exact variable-subset selection in linear regression. 2020

Permutation method

BASIC XAI with DALEX — Part 2: Permutation-based variable importance

Neural network

Support vector machine (SVM)

Quadratic Discriminant Analysis (qda), KNN

Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN

KNN

KNN Algorithm Machine Learning

Regularization

Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting

Regularization: Ridge, Lasso and Elastic Net from datacamp.com. Bias and variance trade-off in parameter estimates was used to lead to the discussion.

Regularized least squares

https://en.wikipedia.org/wiki/Regularized_least_squares. Ridge/lasso/elastic net regressions are special cases.

Ridge regression

Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.

ridge regression with glmnet

Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to a constraint [math]\displaystyle{ \sum|\beta_j|^2 \le t }[/math]. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator [math]\displaystyle{ \hat{\beta} = (X^TX + \lambda I)^{-1} X^T y }[/math] where [math]\displaystyle{ \lambda=\lambda(t) }[/math], a function of t, the variance is smaller than that of the OLS estimator.

The solution exists if [math]\displaystyle{ \lambda \gt 0 }[/math] even if [math]\displaystyle{ n \lt p }[/math].

Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.

Geometrically (contour plot of the cost function), the L1 penalty (the sum of absolute values of coefficients) makes it likely that some coefficients are exactly zero, because the solution can hit a corner of the diamond-shaped constraint region. For example, in the 2D case (X-axis=[math]\displaystyle{ \beta_0 }[/math], Y-axis=[math]\displaystyle{ \beta_1 }[/math]), the shape of the L1 penalty [math]\displaystyle{ |\beta_0| + |\beta_1| }[/math] is a diamond whereas the shape of the L2 penalty ([math]\displaystyle{ \beta_0^2 + \beta_1^2 }[/math]) is a circle.
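A small glmnet sketch on simulated data contrasting the two coefficient paths (alpha = 0 is ridge, alpha = 1 is lasso):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100*10), 100, 10)
y <- x[,1] - 2*x[,2] + rnorm(100)
fit_ridge <- glmnet(x, y, alpha = 0)
fit_lasso <- glmnet(x, y, alpha = 1)
par(mfrow = c(1, 2))
plot(fit_ridge, xvar = "lambda")   # coefficients shrink but stay nonzero
plot(fit_lasso, xvar = "lambda")   # some coefficients become exactly zero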

Lasso/glmnet, adaptive lasso and FAQs

glmnet

Lasso logistic regression

https://freakonometrics.hypotheses.org/52894

Lagrange Multipliers

A Simple Explanation of Why Lagrange Multipliers Works

How to solve lasso/convex optimization

Quadratic programming

Constrained optimization

Jaya Package. Jaya Algorithm is a gradient-free optimization algorithm. It can be used for Maximization or Minimization of a function for solving both constrained and unconstrained optimization problems. It does not contain any hyperparameters.

Highly correlated covariates

1. Elastic net

2. Group lasso

Grouped data

Other Lasso

Comparison by plotting

If we are running simulation, we can use the DALEX package to visualize the fitting result from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.

Prediction

Prediction, Estimation, and Attribution Efron 2020

Postprediction inference/Inference based on predicted outcomes

Methods for correcting inference based on outcomes predicted by machine learning Wang 2020. postpi package.

SHAP/SHapley Additive exPlanation: feature importance for each class

Imbalanced/unbalanced Classification

See ROC.

Deep Learning

Tensor Flow (tensorflow package)

Biological applications

Machine learning resources

The Bias-Variance Trade-Off & "DOUBLE DESCENT" in the test error

https://twitter.com/daniela_witten/status/1292293102103748609 and an easy to read Thread Reader.

  • (Thread #17) The key point is with 20 DF, n=p, and there's exactly ONE least squares fit that has zero training error. And that fit happens to have oodles of wiggles.....
  • (Thread #18) but as we increase the DF so that p>n, there are TONS of interpolating least squares fits. The MINIMUM NORM least squares fit is the "least wiggly" of those zillions of fits. And the "least wiggly" among them is even less wiggly than the fit when p=n !!!
  • (Thread #19) "double descent" is happening b/c DF isn't really the right quantity for the x-axis: like, the fact that we are choosing the minimum norm least squares fit actually means that the spline with 36 DF is **less** flexible than the spline with 20 DF.
  • (Thread #20) if had used a ridge penalty when fitting the spline (instead of least squares)? Well then we wouldn't have interpolated training set, we wouldn't have seen double descent, AND we would have gotten better test error (for the right value of the tuning parameter!)
  • (Thread #21) When we use (stochastic) gradient descent to fit a neural net, we are actually picking out the minimum norm solution!! So the spline example is a pretty good analogy for what is happening when we see double descent for neural nets.

Survival data

Deep learning for survival outcomes Steingrimsson, 2020

Randomization inference

Randomization test

What is a Randomization Test?

Model selection criteria

All models are wrong

All models are wrong from George Box.

MSE

Akaike information criterion/AIC

[math]\displaystyle{ \mathrm{AIC} \, = \, 2k - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.
  • Smaller is better
  • Akaike proposed to approximate the expectation of the cross-validated log likelihood [math]\displaystyle{ E_{test}E_{train} [log L(x_{test}| \hat{\beta}_{train})] }[/math] by [math]\displaystyle{ log L(x_{train} | \hat{\beta}_{train})-k }[/math].
  • Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
  • AIC can be used to compare two models even if they are not hierarchically nested.
  • AIC() from the stats package; see the sketch after this list.
  • Model Selection in R (AIC Vs BIC). broom::glance() was used.
  • Generally resampling based measures such as cross-validation should be preferred over theoretical measures such as Aikake's Information Criteria. Understanding the Bias-Variance Tradeoff & Accurately Measuring Model Prediction Error.
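A small sketch of AIC() on two non-nested models fit to the same data (mtcars is just a convenient example):

fit1 <- lm(mpg ~ wt + hp, data = mtcars)
fit2 <- lm(mpg ~ disp + qsec, data = mtcars)
AIC(fit1, fit2)   # smaller is better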

BIC

[math]\displaystyle{ \mathrm{BIC} \, = \, k \ln(n) - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.

Overfitting

AIC vs AUC

What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?

Roughly speaking:

  • AIC is telling you how good your model fits for a specific mis-classification cost.
  • AUC is telling you how good your model would work, on average, across all mis-classification costs.

Frank Harrell: AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized R² and AIC.

Variable selection and model estimation

Proper variable selection: Use only training data or full data?

  • training observations to perform all aspects of model-fitting—including variable selection
  • make use of the full data set in order to obtain more accurate coefficient estimates (This statement is arguable)

Cross-Validation

References:

R packages:

Bias–variance tradeoff

PRESS statistic (LOOCV) in regression

The PRESS statistic (predicted residual error sum of squares) [math]\displaystyle{ \sum_i (y_i - \hat{y}_{i,-i})^2 }[/math] provides another way to find the optimal model in regression. See the formula for the ridge regression case.
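For ordinary linear models the leave-one-out residuals have a closed form, so PRESS can be computed without refitting (a minimal sketch on mtcars):

fit <- lm(mpg ~ wt + hp, data = mtcars)
sum((residuals(fit) / (1 - hatvalues(fit)))^2)   # PRESS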

LOOCV vs 10-fold CV in classification

  • Background: Variance of mean for correlated data. If the variables have equal variance σ² and the average correlation of distinct variables is ρ, then the variance of their mean is
[math]\displaystyle{ \operatorname{Var}\left(\overline{X}\right) = \frac{\sigma^2}{n} + \frac{n - 1}{n}\rho\sigma^2. }[/math]
This implies that the variance of the mean increases with the average of the correlations.

Monte carlo cross-validation

This method creates multiple random splits of the dataset into training and validation data. See Wikipedia.

  • It is not creating replicates of CV samples.
  • As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation.
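A minimal sketch of Monte Carlo cross-validation (random 70/30 splits of mtcars, averaging the test error over repeats):

set.seed(1)
n <- nrow(mtcars)
err <- replicate(200, {
  idx <- sample(n, size = round(0.7 * n))                   # a fresh random split each time
  fit <- lm(mpg ~ wt + hp, data = mtcars[idx, ])
  mean((mtcars$mpg[-idx] - predict(fit, mtcars[-idx, ]))^2)
})
mean(err)   # Monte Carlo CV estimate of prediction error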

Difference between CV & bootstrapping

Differences between cross validation and bootstrapping to estimate the prediction error

  • CV tends to be less biased but K-fold CV has fairly large variance.
  • Bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic).
  • The 632 and 632+ rules methods have been adapted to deal with the bootstrap bias
  • Repeated CV does K-fold several times and averages the results similar to regular K-fold

.632 and .632+ bootstrap

[math]\displaystyle{ Err_{.632} = 0.368 \overline{err} + 0.632 Err_{boot(1)} }[/math]
[math]\displaystyle{ \hat{E}^*[\phi_{\mathcal{F}}(S)] = .368 \hat{E}[\phi_{f}(S)] + 0.632 \hat{E}[\phi_{f_b}(S_{-b})] }[/math]
where [math]\displaystyle{ \hat{E}[\phi_{f}(S)] }[/math] is the naive estimate of [math]\displaystyle{ \phi_f }[/math] using the entire dataset.

Create partitions for cross-validation

n <- 42; nfold <- 5  # unequal partition
folds <- split(sample(1:n), rep(1:nfold, length = n))  # a list
sapply(folds, length)

cv.glmnet()

sample(rep(seq(nfolds), length = N))  # a vector
set.seed(1); sample(rep(seq(3), length = 20)) 
# [1] 1 1 1 2 1 1 2 2 2 3 3 2 3 1 3 3 3 1 2 2

Another way is to use replace=TRUE in sample(), although the resulting fold sizes are not balanced, unlike the previous method.

sample(1:nfolds, N, replace=TRUE) # a vector
set.seed(1); sample(1:3, 20, replace=TRUE)
# [1] 1 3 1 2 1 3 3 2 2 3 3 1 1 1 2 2 2 2 3 1
table(.Last.value)
# .Last.value
# 1 2 3 
# 7 7 6 

Another simple example. Split the data into 70% training data and 30% testing data

mysplit <- sample(c(rep(0, 0.7 * nrow(df)), rep(1, nrow(df) - 0.7 * nrow(df))))
train <- df[mysplit == 0, ] 
test <- df[mysplit == 1, ]  

Create training/testing data

Nested resampling

Nested resampling is needed when we want to tune a model using a grid search. The default settings of a model are likely not optimal for every data set, so an inner CV has to be performed to find the best parameter set of a learner for each fold.

See a diagram at https://i.stack.imgur.com/vh1sZ.png

In BRB-ArrayTools -> class prediction with multiple methods, the alpha (significant level of threshold used for gene selection, 2nd option in individual genes) can be viewed as a tuning parameter for the development of a classifier.

Pre-validation/pre-validated predictor

  • Pre-validation and inference in microarrays Tibshirani and Efron, Statistical Applications in Genetics and Molecular Biology, 2002.
  • See glmnet vignette
  • http://www.stat.columbia.edu/~tzheng/teaching/genetics/papers/tib_efron.pdf#page=5. In each CV, we compute the estimate of the response. This estimate of the response will serve as a new predictor (pre-validated 'predictor' ) in the final fitting model.
  • P1101 of Sachs 2016. With pre-validation, instead of computing the statistic [math]\displaystyle{ \phi }[/math] for each of the held-out subsets ([math]\displaystyle{ S_{-b} }[/math] for the bootstrap or [math]\displaystyle{ S_{k} }[/math] for cross-validation), the fitted signature [math]\displaystyle{ \hat{f}(X_i) }[/math] is estimated for [math]\displaystyle{ X_i \in S_{-b} }[/math] where [math]\displaystyle{ \hat{f} }[/math] is estimated using [math]\displaystyle{ S_{b} }[/math]. This process is repeated to obtain a set of pre-validated 'signature' estimates [math]\displaystyle{ \hat{f} }[/math]. Then an association measure [math]\displaystyle{ \phi }[/math] can be calculated using the pre-validated signature estimates and the true outcomes [math]\displaystyle{ Y_i, i = 1, \ldots, n }[/math].
  • Another description from the paper The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection. The prevalidation method is a variant of cross-validation. We then use [math]\displaystyle{ (y_i, \hat{\eta}_i) }[/math] to compute the measures described above. The cross-validated linear predictor for each patient is derived independently of the observed response of the patient, and hence the “prevalidated” dataset can essentially be treated as a “new dataset.” Therefore, this procedure provides valid assessment of the predictive performance of the model. To get stable results, we run 10× 10-fold cross-validation for real data analysis.
  • In CV, left-out samples = hold-out cases = test set

Custom cross validation

Cross validation vs regularization

When Cross-Validation is More Powerful than Regularization

Cross-validation with confidence (CVC)

JASA 2019 by Jing Lei, pdf, code

Correlation data

Cross-Validation for Correlated Data Rabinowicz, JASA 2020

Bias in Error Estimation

Bias due to unsupervised preprocessing

On the cross-validation bias due to unsupervised preprocessing 2022. Below I follow the practice from Biowulf to install Mamba. In this example, the 'project1' subfolder (2.0 GB) is located in '~/conda/envs' directory.

$ which python3
/usr/bin/python3

$ wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
$ bash Mambaforge-Linux-x86_64.sh -p /home/brb/conda -b
$ source ~/conda/etc/profile.d/conda.sh && source ~/conda/etc/profile.d/mamba.sh
$ mkdir -p ~/bin
$ cat <<'__EOF__' > ~/bin/myconda
__conda_setup="$('/home/$USER/conda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/$USER/conda/etc/profile.d/conda.sh" ]; then
        . "/home/$USER/conda/etc/profile.d/conda.sh"
    else
        export PATH="/home/$USER/conda/bin:$PATH"
    fi
fi
unset __conda_setup

if [ -f "/home/$USER/conda/etc/profile.d/mamba.sh" ]; then
    . "/home/$USER/conda/etc/profile.d/mamba.sh"
fi
__EOF__
$ source ~/bin/myconda

$ export MAMBA_NO_BANNER=1
$ mamba create -n project1 python=3.7 numpy scipy scikit-learn mkl-service mkl_random pandas matplotlib
$ mamba activate project1
$ which python  # /home/brb/conda/envs/project1/bin/python

$ git clone https://github.com/mosco/unsupervised-preprocessing.git
$ cd unsupervised-preprocessing/
$ python    # Ctrl+d to quit
$ mamba deactivate

Pitfalls of applying machine learning in genomics

Navigating the pitfalls of applying machine learning in genomics 2022

Bootstrap

See Bootstrap

Clustering

See Clustering.

Cross-sectional analysis

  • https://en.wikipedia.org/wiki/Cross-sectional_study. The opposite of cross-sectional analysis is longitudinal analysis.
  • Cross-sectional analysis refers to a type of research method in which data is collected at a single point in time from a group of individuals, organizations, or other units of analysis. This approach contrasts with longitudinal studies, which follow the same group of individuals or units over an extended period of time.
    • In a cross-sectional analysis, researchers typically collect data from a sample of individuals or units that are representative of the population of interest. This data can then be used to examine patterns, relationships, or differences among the units at a specific point in time.
    • Cross-sectional analysis is commonly used in fields such as sociology, psychology, public health, and economics to study topics such as demographics, health behaviors, income inequality, and social attitudes. While cross-sectional analysis can provide valuable insights into the characteristics of a population at a given point in time, it cannot establish causality or determine changes over time.

Mixed Effect Model

  • Linear mixed effects models (video) by Clapham. Output for y ~ x + (x|group) model.
    y ~ x + (1|group)  # random intercepts, same slope for groups
    
    y ~ x + (x|group)  # random intercepts & slopes for groups
    
    y ~ color + (color|green/gray) # nested random effects
    
    y ~ color + (color|green) + (color|gray) # crossed random effects
    
  • linear mixed effects models in R lme4
  • Using Random Effects in Models by rcompanion
    library(nlme)
    lme(y ~ 1, random = ~1 | Random) # one-way random model
    
    lme(y ~ Fix, random = ~1 | Random) # two-way mixed effect model
    
    # https://stackoverflow.com/a/36415354
    library(lme4)
    fit <- lmer(mins ~ Fix1 + Fix2 + (1|Random1) + (1|Random2) +
                    (1|Year/Month), REML=FALSE)
    

Repeated measure

Trajectories

  • An example: a model that there are random intercepts and slopes for each subject: [math]\displaystyle{ y_{ij} = \beta_0 + \beta_1 x_{ij} + b_{0i} + b_{1i} x_{ij} + e_{ij} }[/math]
    • [math]\displaystyle{ y_{ij} }[/math] is the reaction time for subject i at time j
    • [math]\displaystyle{ \beta_0, \beta_1 }[/math] are fixed intercept and slope
    • [math]\displaystyle{ x_{ij} }[/math] is the number of days since the start of the study for subject i at time j
    • [math]\displaystyle{ b_{0i}, b_{1i} }[/math] are the random intercept & slope for subject i
    • [math]\displaystyle{ e_{ij} }[/math] is the residual error for subject i at time j
    > library(lme4)
    > data(sleepstudy) # from lme4
    
    > # Fit a mixed effects model with random intercepts and slopes for each subject
    > model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
    > 
    > # Plot the fitted model
    > plot(model, type = "l")
    
    > summary(model)
    Linear mixed model fit by REML ['lmerMod']
    Formula: Reaction ~ Days + (Days | Subject)
       Data: sleepstudy
    
    REML criterion at convergence: 1743.6
    
    Scaled residuals: 
        Min      1Q  Median      3Q     Max 
    -3.9536 -0.4634  0.0231  0.4634  5.1793 
    
    Random effects:
     Groups   Name        Variance Std.Dev. Corr
     Subject  (Intercept) 612.10   24.741       
              Days         35.07    5.922   0.07
     Residual             654.94   25.592       
    Number of obs: 180, groups:  Subject, 18
    
    Fixed effects:
                Estimate Std. Error t value
    (Intercept)  251.405      6.825  36.838
    Days          10.467      1.546   6.771
    
    Correlation of Fixed Effects:
         (Intr)
    Days -0.138
    • The fixed effect estimate for Days is 10.467, which means that on average, reaction times increase by about 10.47 milliseconds per additional day of sleep deprivation.
    • The random intercept variance is 612.10, which means that there is considerable variation in the baseline reaction times between subjects.
    • The random slope variance is 35.07, which means that there is considerable variation in the rate of change of reaction times over time between subjects.
    • It seems no p-values are returned. See Three ways to get parameter-specific p-values from lmer & How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
    • To exclude the random slope term, we can use (1 | Subject)
    • To exclude the random intercept term, we can use (0 + Days | Subject)
    • To exclude the fixed intercept term, we can use Reaction ~ -1 + Days + (Days | Subject)
    library(nlme)
    model2 <- lme(Reaction ~ Days, random = ~ Days | Subject, data = sleepstudy)
    
    > summary(model2)
    Linear mixed-effects model fit by REML
      Data: sleepstudy 
           AIC      BIC    logLik
      1755.628 1774.719 -871.8141
    
    Random effects:
     Formula: ~Days | Subject
     Structure: General positive-definite, Log-Cholesky parametrization
                StdDev    Corr  
    (Intercept) 24.740241 (Intr)
    Days         5.922103 0.066 
    Residual    25.591843       
    
    Fixed effects:  Reaction ~ Days 
                    Value Std.Error  DF  t-value
    (Intercept) 251.40510  6.824516 161 36.83853
    Days         10.46729  1.545783 161  6.77151
                p-value
    (Intercept)       0
    Days              0
     Correlation: 
         (Intr)
    Days -0.138
    
    Standardized Within-Group Residuals:
            Min          Q1         Med          Q3 
    -3.95355735 -0.46339976  0.02311783  0.46339621 
            Max 
     5.17925089 
    
    Number of Observations: 180
    Number of Groups: 18
    • To exclude the random slope term, we can use random = ~1 | Subject.
    • To exclude the random intercept, we can use random = ~Days - 1 | Subject.

    Graphical display of the per-subject trajectories using facet_wrap(); a ggplot2 sketch follows the figure.

    Sleepstudy.png
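    A minimal ggplot2 sketch of the per-subject trajectories with the subject-specific fitted lines from the lmer model above (the use of fitted() and facet_wrap() here is an illustration, not necessarily how the figure was produced):

    library(lme4)
    library(ggplot2)
    data(sleepstudy)

    model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
    sleepstudy$fit <- fitted(model)   # subject-specific fitted values (include the random effects)

    ggplot(sleepstudy, aes(Days, Reaction)) +
      geom_point() +
      geom_line(aes(y = fit), color = "blue") +
      facet_wrap(~ Subject) +
      theme_bw()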

  • lmerTest - Tests in Linear Mixed Effects Models. The ranova() function in the lmerTest package is used to compute an ANOVA-like table with tests of random-effect terms in a mixed-effects model. Paper in JSS 2017.
    # Fixed effect
    > anova(model) 
    Type III Analysis of Variance Table with Satterthwaite's method
         Sum Sq Mean Sq NumDF DenDF F value    Pr(>F)    
    Days  30031   30031     1    17  45.853 3.264e-06 ***
    ---
    Signif. codes:  
    0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    
    > anova(model2) 
                numDF denDF   F-value p-value
    (Intercept)     1   161 1454.0766  <.0001
    Days            1   161   45.8534  <.0001
    
    # Random effect
    > ranova(model)
    ANOVA-like table for random-effects: Single term deletions
    
    Model:
    Reaction ~ Days + (Days | Subject)
                             npar  logLik    AIC    LRT Df
    <none>                      6 -871.81 1755.6          
    Days in (Days | Subject)    4 -893.23 1794.5 42.837  2
                             Pr(>Chisq)    
    <none>                                 
    Days in (Days | Subject)   4.99e-10 ***
    ---
    Signif. codes:  
    0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    

variancePartition

variancePartition Quantify and interpret drivers of variation in multilevel gene expression experiments

Entropy

[math]\displaystyle{ \begin{align} Entropy &= \sum_x p(x) \log(1/p(x)) = \sum \text{Surprise} \times P(\text{Surprise}) \end{align} }[/math]

Definition

The surprise (self-information) of an event with probability p is -log2(p); entropy is the expected surprise over all outcomes, -sum(p*log2(p)). Higher entropy represents higher unpredictability.

Some examples:

  • Fair 2-sided die (coin): Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
  • Fair 6-sided die: Entropy = -6*(1/6)*log2(1/6) = 2.58
  • Weighted 6-sided die: Consider pi=.1 for i=1,..,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (more predictable than a fair 6-sided die).
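A quick R check of the dice examples above (base-2 entropy of a probability vector):

entropy2 <- function(p) -sum(p * log2(p))   # Shannon entropy in bits

entropy2(c(.5, .5))          # fair 2-sided die: 1
entropy2(rep(1/6, 6))        # fair 6-sided die: 2.58
entropy2(c(rep(.1, 5), .5))  # weighted die: 2.16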

Use

When entropy is applied to variable selection, we want to select the variable that gives the largest entropy difference between the entropy computed without any split (using the response only) and the entropy after splitting on that variable (the weighted sum of the entropies within each level of the variable), because this variable is the most discriminative and gives the largest information gain. For example,

  • entropy (without any class)=.94,
  • entropy(var 1) = .69,
  • entropy(var 2)=.91,
  • entropy(var 3)=.725.

We will choose variable 1 since it gives the largest gain (.94 - .69) compared to the other variables (.94 -.91, .94 -.725).

Why is picking the attribute with the most information gain beneficial? It reduces entropy, which increases predictability. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.

Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 gives an entropy of .9 and another cut point c2 gives an entropy of .1. We should choose c2. A small sketch of this computation follows.
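A minimal sketch, using made-up data, of computing the information gain of a candidate cut point for a continuous variable (weighted child entropies subtracted from the parent entropy):

entropy2 <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }

info_gain <- function(y, x, cut) {
  split  <- x <= cut
  parent <- entropy2(prop.table(table(y)))             # entropy before the split
  left   <- entropy2(prop.table(table(y[split])))
  right  <- entropy2(prop.table(table(y[!split])))
  parent - (mean(split) * left + mean(!split) * right) # gain = parent - weighted child entropy
}

set.seed(1)
x <- c(rnorm(20, 0), rnorm(20, 3))   # made-up continuous predictor
y <- rep(c("A", "B"), each = 20)     # made-up class labels
info_gain(y, x, cut = 1.5)   # good cut point -> large gain
info_gain(y, x, cut = -1)    # poor cut point -> small gain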

Related

In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See wikipedia page about decision tree learning.

Ensembles

Bagging

Draw N bootstrap samples and summarize the results (averaging for regression problems, majority vote for classification problems). Bagging decreases variance without changing bias; it does not help much with underfitting or high-bias models.
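A minimal bagging sketch for a regression problem (averaging trees fit to bootstrap samples); rpart and the built-in mtcars data are used only for illustration:

library(rpart)

bagged_predict <- function(formula, data, newdata, B = 100) {
  preds <- replicate(B, {
    idx <- sample(nrow(data), replace = TRUE)   # draw a bootstrap sample
    fit <- rpart(formula, data = data[idx, ])
    predict(fit, newdata)
  })
  rowMeans(preds)                               # average the B predictions
}

set.seed(1)
bagged_predict(mpg ~ ., mtcars, mtcars[1:3, ])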

Random forest

Boosting

Instead of selecting data points randomly with the bootstrap, boosting favors the misclassified points.

Algorithm:

  • Initialize the weights
  • Repeat
    • resample with respect to weights
    • retrain the model
    • recompute weights

Since boosting is inherently sequential (each iteration depends on the previous one) while bagging can be run in parallel, bagging has an advantage over boosting when the data are very large.
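A rough sketch of the resample/retrain/reweight loop above, in the style of AdaBoost (rpart stumps on simulated data; the weight-update details are simplified):

library(rpart)

set.seed(1)
n   <- 200
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- factor(ifelse(dat$x1 + dat$x2 > 0, 1, -1))

w <- rep(1/n, n)                                     # initialize the weights
for (m in 1:10) {                                    # repeat
  fit   <- rpart(y ~ x1 + x2, dat, weights = w,      # retrain on the weighted data
                 control = rpart.control(maxdepth = 1))
  pred  <- predict(fit, dat, type = "class")
  err   <- sum(w * (pred != dat$y)) / sum(w)         # weighted misclassification rate
  alpha <- 0.5 * log((1 - err) / err)
  w     <- w * exp(alpha * (pred != dat$y))          # upweight the misclassified points
  w     <- w / sum(w)                                # recompute (renormalize) the weights
}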

Time series

p-values

p-values

Distribution of p values in medical abstracts

nominal p-value and Empirical p-values

(nominal) alpha level

Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.

Normality assumption

Violating the normality assumption may be the lesser of two evils

Second-Generation p-Values

An Introduction to Second-Generation p-Values Blume et al, 2020

Small p-value due to very large sample size

T-statistic

See T-statistic.

ANOVA

See ANOVA.

Goodness of fit

Chi-square tests

Fitting distribution

Fitting distributions with R

Normality distribution check

Anderson-Darling Test in R (Quick Normality Check)

Kolmogorov-Smirnov test

Contingency Tables

How to Measure Contingency-Coefficient (Association Strength). gplots::balloonplot() and corrplot::corrplot() .

What statistical test should I do

What statistical test should I do?

Measure the association in a contingency table

  • Phi coefficient: The Phi coefficient is a measure of association that is used for 2x2 contingency tables. It ranges from -1 to 1, with 0 indicating no association and values close to -1 or 1 indicating a strong association. The formula for Phi coefficient is: Phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)), where a, b, c, and d are the frequency counts in the four cells of the contingency table.
  • Cramer's V: Cramer's V is a measure of association that is used for contingency tables of any size. It ranges from 0 to 1, with 0 indicating no association and values close to 1 indicating a strong association. The formula for Cramer's V is: V = sqrt(Chi-Square / (n*(min(r,c)-1))), where Chi-Square is the Chi-Square statistic, n is the total sample size, and r and c are the number of rows and columns in the contingency table.
  • Odds ratio: The odds ratio is a measure of association that is commonly used in medical research and epidemiology. It compares the odds of an event occurring in one group compared to another group. The odds ratio can be calculated as: OR = (a/b) / (c/d), where a, b, c, and d are the frequency counts in the four cells of the contingency table. An odds ratio of 1 indicates no association, while values greater than 1 indicate a positive association and values less than 1 indicate a negative association.
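A small R check of the first two formulas on a made-up 2x2 table (for a 2x2 table, Cramer's V equals |Phi|):

tab <- matrix(c(20, 5,
                10, 25), nrow = 2, byrow = TRUE)  # made-up counts: rows a,b / c,d

phi <- (tab[1,1]*tab[2,2] - tab[1,2]*tab[2,1]) /
       sqrt(prod(rowSums(tab)) * prod(colSums(tab)))
phi   # ~ 0.51

chi2 <- unname(chisq.test(tab, correct = FALSE)$statistic)
V <- sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))  # Cramer's V
V     # same as |phi| here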

Odds ratio and Risk ratio

  • Odds ratio and Risk ratio/relative risk.
    • In practice the odds ratio is commonly used for case-control studies, as the relative risk cannot be estimated.
    • Relative risk is used in the statistical analysis of the data of ecological, cohort, medical and intervention studies, to estimate the strength of the association between exposures (treatments or risk factors) and outcomes.
  • Odds Ratio Interpretation Quick Guide
  • The odds ratio is often used to evaluate the strength of the association between two binary variables and to compare the risk of an event occurring between two groups.
    • An odds ratio greater than 1 indicates that the event is more likely to occur in the first group, while an odds ratio less than 1 indicates that the event is more likely to occur in the second group.
    • In general, a larger odds ratio indicates a stronger association between the two variables, while a smaller odds ratio indicates a weaker association.
  • The ratio of the odds of an event occurring in one group to the odds of it occurring in another group
                            Treatment  | Control   
    -------------------------------------------------
    Event occurs         |   A         |   B       
    -------------------------------------------------
    Event does not occur |   C         |   D       
    -------------------------------------------------
    Odds                 |   A/C       |   B/D
    -------------------------------------------------
    Risk                 |   A/(A+C)   |   B/(B+D)
    
    • Odds Ratio = (A / C) / (B / D) = (A*D) / (B*C)
    • Risk Ratio = (A / (A+C)) / (B / (B+D)) (a small R check is sketched after this list)
  • Real example. In a study published in the Journal of the American Medical Association, researchers investigated the association between the use of nonsteroidal anti-inflammatory drugs (NSAIDs) and the risk of developing gastrointestinal bleeding. Suppose odds ratio = 2.5 and risk ratio is 1.5. The interpretation of the results in this study is as follows:
    • The odds ratio of 2.5 indicates that the odds of gastrointestinal bleeding are 2.5 times higher in the group of patients taking NSAIDs compared to the group of patients not taking NSAIDs.
    • The risk ratio of 1.5 indicates that the risk of gastrointestinal bleeding is 1.5 times higher in the group of patients taking NSAIDs compared to the group of patients not taking NSAIDs.
    • In this example, both the odds ratio and the risk ratio indicate an association between NSAID use and the risk of gastrointestinal bleeding. However, the risk ratio is lower than the odds ratio; the two measures only agree closely when the outcome is rare, so the gap here suggests that gastrointestinal bleeding is not rare in this study population.
  • What is the main difference in the interpretation of odds ratio and risk ratio?
    • Odds are a measure of the probability of an event occurring, expressed as the ratio of the number of ways the event can occur to the number of ways it cannot occur. For example, if the probability of an event occurring is 0.5 (or 50%), the odds of the event occurring would be 1:1 (or 1 to 1).
    • Risk is a measure of the probability of an event occurring, expressed as the ratio of the number of individuals who experience the event to the total number of individuals. For example, if 10 out of 100 people experience an event, the risk of the event occurring would be 10%.
    • The main difference between the two measures is that the odds ratio is more sensitive to changes in the frequency of the event, while the risk ratio is more sensitive to changes in the overall prevalence of the event.
    • This means that the odds ratio is more useful for comparing the odds of an event occurring between two groups when the event is relatively rare, while the risk ratio is more useful for comparing the risk of an event occurring between two groups when the event is more common.
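A small R check of the odds-ratio and risk-ratio formulas using made-up counts laid out as in the table above:

#                 Treatment  Control
# Event occurs        A=20     B=10
# Event does not      C=80     D=90
A <- 20; B <- 10; C <- 80; D <- 90

(A / C) / (B / D)              # odds ratio = (A*D)/(B*C) = 2.25
(A / (A + C)) / (B / (B + D))  # risk ratio = 0.20/0.10   = 2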

Hypergeometric, One-tailed Fisher exact test

         drawn   | not drawn | 
-------------------------------------
white |   x      |           | m
-------------------------------------
black |  k-x     |           | n
-------------------------------------
      |   k      |           | m+n

For example, k=100, m=100, m+n=1000,

> 1 - phyper(10, 100, 10^3-100, 100, log.p=F)
[1] 0.4160339
> a <- dhyper(0:10, 100, 10^3-100, 100)
> cumsum(rev(a))
  [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116
  [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101  9.027724e-99  4.175767e-96
 [15]  1.644702e-93  5.572070e-91  1.638079e-88  4.210963e-86  9.530281e-84  1.910424e-81  3.410345e-79
 [22]  5.447786e-77  7.821658e-75  1.013356e-72  1.189000e-70  1.267638e-68  1.231736e-66  1.093852e-64
 [29]  8.900857e-63  6.652193e-61  4.576232e-59  2.903632e-57  1.702481e-55  9.240350e-54  4.650130e-52
 [36]  2.173043e-50  9.442985e-49  3.820823e-47  1.441257e-45  5.074077e-44  1.669028e-42  5.134399e-41
 [43]  1.478542e-39  3.989016e-38  1.009089e-36  2.395206e-35  5.338260e-34  1.117816e-32  2.200410e-31
 [50]  4.074043e-30  7.098105e-29  1.164233e-27  1.798390e-26  2.617103e-25  3.589044e-24  4.639451e-23
 [57]  5.654244e-22  6.497925e-21  7.042397e-20  7.198582e-19  6.940175e-18  6.310859e-17  5.412268e-16
 [64]  4.377256e-15  3.338067e-14  2.399811e-13  1.626091e-12  1.038184e-11  6.243346e-11  3.535115e-10
 [71]  1.883810e-09  9.442711e-09  4.449741e-08  1.970041e-07  8.188671e-07  3.193112e-06  1.167109e-05
 [78]  3.994913e-05  1.279299e-04  3.828641e-04  1.069633e-03  2.786293e-03  6.759071e-03  1.525017e-02
 [85]  3.196401e-02  6.216690e-02  1.120899e-01  1.872547e-01  2.898395e-01  4.160339e-01  5.550192e-01
 [92]  6.909666e-01  8.079129e-01  8.953150e-01  9.511926e-01  9.811343e-01  9.942110e-01  9.986807e-01
 [99]  9.998018e-01  9.999853e-01  1.000000e+00

# Density plot
plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')

File:Dhyper.svg

Moreover,

  1 - phyper(q=10, m, n, k) 
= 1 - sum_{x=0}^{x=10} dhyper(x, m, n, k)
= 1 - sum(a[1:11]) # R's index starts from 1.

Another example is the data from the functional annotation tool in DAVID.

               | gene list | not gene list | 
-------------------------------------------------------
pathway        |   3  (q)  |               | 40 (m)
-------------------------------------------------------
not in pathway |  297      |               | 29960 (n)
-------------------------------------------------------
               |  300 (k)  |               | 30000

The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.

Fisher's exact test

Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) #  alternative = "two.sided" by default

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.488738 23.966741
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 1.973   Inf
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.9991
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio
  7.564602

Fisher's exact test in R: independence test for a small sample

From the documentation of fisher.test

Usage:
     fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
                 control = list(), or = 1, alternative = "two.sided",
                 conf.int = TRUE, conf.level = 0.95,
                 simulate.p.value = FALSE, B = 2000)
  • For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
  • For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
  • The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
  • Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.

Chi-square independence test

  • https://en.wikipedia.org/wiki/Chi-squared_test.
    • Chi-Square = Σ[(O - E)^2 / E]
    • We can see expected_{ij} = n_{i.}*n_{.j}/n_{..}
    • The Chi-Square test statistic follows a Chi-Square distribution with degrees of freedom equal to (r-1) x (c-1)
    • The Chi-Square test is generally a two-sided test, meaning that it tests for a significant difference between the observed and expected frequencies in both directions (i.e., either a greater than or less than difference).
  • Chi-square test of independence by hand
> chisq.test(matrix(c(14,0,4,10), nr=2), correct=FALSE)

	Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 10), nr = 2)
X-squared = 15.556, df = 1, p-value = 8.012e-05

# How about the case if expected=0 for some elements?
> chisq.test(matrix(c(14,0,4,0), nr=2), correct=FALSE)

	Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 0), nr = 2)
X-squared = NaN, df = 1, p-value = NA

Warning message:
In chisq.test(matrix(c(14, 0, 4, 0), nr = 2), correct = FALSE) :
  Chi-squared approximation may be incorrect
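To make the formula in the bullets above concrete, the statistic for the first 2x2 table can be reproduced by hand:

O <- matrix(c(14, 0, 4, 10), nrow = 2)
E <- outer(rowSums(O), colSums(O)) / sum(O)  # expected_{ij} = n_{i.} * n_{.j} / n_{..}
sum((O - E)^2 / E)                           # 15.556, matching chisq.test() above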

Exploring the underlying theory of the chi-square test through simulation - part 2

The results of Fisher's exact test and the chi-square test can be quite different. Note that in the example below chisq.test() is given two vectors, so R cross-tabulates them as factors (hence df = 9 and the small-cell warning) rather than testing the 2x4 table Job directly.

# https://myweb.uiowa.edu/pbreheny/7210/f15/notes/9-24.pdf#page=4
R> Job <- matrix(c(16,48,67,21,0,19,53,88), nr=2, byrow=T)
R> dimnames(Job) <- list(A=letters[1:2],B=letters[1:4])
R> fisher.test(Job)

	Fisher's Exact Test for Count Data

data:  Job
p-value < 2.2e-16
alternative hypothesis: two.sided

R> chisq.test(c(16,48,67,21), c(0,19,53,88))

	Pearson's Chi-squared test

data:  c(16, 48, 67, 21) and c(0, 19, 53, 88)
X-squared = 12, df = 9, p-value = 0.2133

Warning message:
In chisq.test(c(16, 48, 67, 21), c(0, 19, 53, 88)) :
  Chi-squared approximation may be incorrect

GSEA

See GSEA.

McNemar’s test on paired nominal data

https://en.wikipedia.org/wiki/McNemar%27s_test

R

Contingency Tables In R. Two-Way Tables, Mosaic plots, Proportions of the Contingency Tables, Rows and Columns Totals, Statistical Tests, Three-Way Tables, Cochran-Mantel-Haenszel (CMH) Methods.

Case control study

Confidence vs Credibility Intervals

http://freakonometrics.hypotheses.org/18117

T-distribution vs normal distribution

Power analysis/Sample Size determination

See Power.

Common covariance/correlation structures

See psu.edu. Assume covariance [math]\displaystyle{ \Sigma = (\sigma_{ij})_{p\times p} }[/math]

  • Diagonal structure: [math]\displaystyle{ \sigma_{ij} = 0 }[/math] if [math]\displaystyle{ i \neq j }[/math].
  • Compound symmetry: [math]\displaystyle{ \sigma_{ij} = \rho }[/math] if [math]\displaystyle{ i \neq j }[/math].
  • First-order autoregressive AR(1) structure: [math]\displaystyle{ \sigma_{ij} = \rho^{|i - j|} }[/math].
    rho <- .8
    p <- 5
    blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
  • Banded matrix: [math]\displaystyle{ \sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0 }[/math] and [math]\displaystyle{ \sigma_{ij}=0 }[/math] for [math]\displaystyle{ |i-j| \ge 3 }[/math].
  • Spatial Power
  • Unstructured Covariance
  • Toeplitz structure
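Analogous to the AR(1) snippet above, a compound-symmetry correlation matrix can be built directly:

rho <- .5
p <- 5
csMat <- matrix(rho, p, p)
diag(csMat) <- 1   # compound symmetry: 1 on the diagonal, rho off the diagonal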

To create a block-diagonal correlation matrix (copies of blockMat along the diagonal, zeros elsewhere), use the Kronecker product operator "%x%". See kronecker().

n.blocks <- 3   # number of diagonal blocks (illustrative value)
covMat <- diag(n.blocks) %x% blockMat

Counter/Special Examples

Math myths

Zero correlation does not imply independence

Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y. And if you know Y, you know the absolute value of X.

The covariance of X and Y is

  Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3)
           = 0, 

because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.

This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.
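A quick simulation confirming the zero correlation:

set.seed(1)
x <- rnorm(1e6)
y <- x^2
cor(x, y)   # close to 0, even though y is a deterministic function of x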

Significant p value but no correlation

Post where p-value = 1.18e-06 but cor=0.067. p-value does not say anything about the size of r.

Spearman vs Pearson correlation

Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation

Testing uses Student's t-distribution: cor.test() (t-distribution with n-2 d.f.). The normality assumption is used in the test; for estimation, non-normality affects unbiasedness and efficiency. See Sensitivity to the data distribution.

x=(1:100);  
y=exp(x);                        
cor(x,y, method='spearman') # 1
cor(x,y, method='pearson')  # .25

Spearman vs Wilcoxon

By this post

  • The Wilcoxon rank-sum test is used to compare a non-normal continuous variable across the levels of a categorical variable
  • Spearman's rho is used to compare two continuous (including ordinal) variables when one or both are not normally distributed

Spearman vs Kendall correlation

Anscombe quartet

The four datasets have nearly identical summary statistics: the same mean of X, the same mean of Y, the same variance of X, (almost) the same variance of Y, the same correlation between X and Y, and the same fitted linear regression line.

File:Anscombe quartet 3.svg
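R ships with the anscombe data set, so the (nearly) identical summary statistics are easy to verify:

data(anscombe)
sapply(anscombe, mean)   # x1-x4 all have mean 9; y1-y4 all have mean ~7.5
sapply(anscombe, var)    # x1-x4 all have variance 11; y1-y4 ~4.13
sapply(1:4, function(i)
  cor(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]]))  # all ~0.816
coef(lm(y1 ~ x1, data = anscombe))   # ~3.0 + 0.5*x, same for the other pairs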

The real meaning of spurious correlations

https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/

library(ggplot2)
library(dplyr)   # for the %>% pipe used below
 
set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) + 
theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
 theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) + 
theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")

Time series

Structural change

Structural Changes in Global Warming

AR(1) processes and random walks

Spurious correlations and random walks

Measurement Error model

Polya Urn Model

The Pólya Urn Model: A simple Simulation of “The Rich get Richer”

Dictionary

Statistical guidance

Statistical guidance to authors at top-ranked scientific journals: A cross-disciplinary assessment

Books, learning material

Social

JSM

Following

COPSS

考普斯會長獎 COPSS

美國國家科學院 United States National Academy of Sciences/NAS

美國國家科學院