Statistics

* [https://en.wikipedia.org/wiki/Egon_Pearson Egon Pearson] (1895-1980): son of Karl Pearson
* [https://en.wikipedia.org/wiki/Jerzy_Neyman Jerzy Neyman] (1894-1981): type 1 error
* [https://www.youtube.com/playlist?list=PLt_pNkbycxqahVksaNnjz3M6759xHIZ-r Ten Statistical Ideas that Changed the World]


= Statistics for biologists =
== The most important statistical ideas of the past 50 years ==
http://www.nature.com/collections/qghhqm
[https://arxiv.org/pdf/2012.00174.pdf What are the most important statistical ideas of the past 50 years?], [https://www.tandfonline.com/doi/full/10.1080/01621459.2021.1938081 JASA 2021]


= Some Advice =
* [http://www.nature.com/collections/qghhqm Statistics for biologists]
* [https://www.bmj.com/content/379/bmj-2022-072883 On the 12th Day of Christmas, a Statistician Sent to Me . . .], [https://tinyurl.com/yzpv2uu6 The abridged 1-page print version].

= Data =


== Rules for initial data analysis ==
See [https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009819 Ten simple rules for initial data analysis].
== Types of probabilities ==
See this [https://twitter.com/5_utr/status/1688730481171279872?s=20 illustration].

== Exploratory Analysis (EDA) ==
* [https://soroosj.netlify.app/2020/09/26/penguins-cluster/ Kmeans Clustering of Penguins]
* [https://cran.r-project.org/web/packages/skimr/index.html skimr] package
** [https://github.com/agstn/dataxray dataxray] package - An interactive table interface (of skimr) for data summaries. [https://www.r-bloggers.com/2023/01/cut-your-eda-time-into-5-minutes-with-exploratory-dataxray-analysis-edxa/ Cut your EDA time into 5 minutes with Exploratory DataXray Analysis (EDXA)]
* [https://medium.com/@jchen001/20-useful-r-packages-you-may-not-know-about-54d57fe604f3 20 Useful R Packages You May Not Know Of]
* [https://twitter.com/ItaiYanai/status/1612627199332433922 12 guidelines for data exploration and analysis with the right attitude for discovery]
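
For a quick one-call overview of a data frame, a minimal sketch with the skimr package listed above:
<syntaxhighlight lang='rsplus'>
# per-variable summary: type, missingness, quantiles, and a tiny histogram
library(skimr)
skim(iris)
</syntaxhighlight>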


== Kurtosis ==
[https://finnstats.com/index.php/2021/06/08/kurtosis-in-r/ Kurtosis in R-What do you understand by Kurtosis?]
== Phi coefficient ==
<ul>
<li>[https://en.wikipedia.org/wiki/Phi_coefficient Phi coefficient]. Its value is in [-1, 1]. A value of zero means that the binary variables are not positively or negatively associated.
* [https://finnstats.com/index.php/2021/07/24/how-to-calculate-phi-coefficient-in-r/ How to Calculate Phi Coefficient in R]. It is a measurement of the degree of association between two binary variables.
<li>[https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V Cramér’s V]. Its value is in [0, 1]. A value of zero indicates that there is no association between the two variables. This means that knowing the value of one variable does not help predict the value of the other variable.
* [https://www.statology.org/interpret-cramers-v/ How to Interpret Cramer’s V (With Examples)]
<pre>
library(vcd)
cramersV <- assocstats(table(x, y))$cramer
</pre>
</ul>
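
A base-R sketch of the phi coefficient for two (hypothetical) binary variables; for a 2x2 table with cells a, b, c, d, phi = (ad - bc)/sqrt((a+b)(c+d)(a+c)(b+d)), which equals the Pearson correlation of the two 0/1 variables:
<syntaxhighlight lang='rsplus'>
x <- c(1, 1, 0, 0, 1, 0, 1, 0, 1, 1)   # made-up 0/1 data
y <- c(1, 0, 0, 0, 1, 0, 1, 1, 1, 0)
tab <- table(x, y)
a <- tab[1, 1]; b <- tab[1, 2]; c <- tab[2, 1]; d <- tab[2, 2]
(a*d - b*c) / sqrt((a+b) * (c+d) * (a+c) * (b+d))  # 0.4082483
cor(x, y)                                          # same value
</syntaxhighlight>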
 
== Coefficient of variation (CV) ==
Motivating the coefficient of variation (CV) for beginners:
 
* Boss: Measure it 5 times.
* You: 8, 8, 9, 6, and 8 inches.
* B: SD=1. Make it three times more precise!
* Y: 0.20 0.20 0.23 0.15 0.20 meters. SD=0.03!
* B: All you did was change to meters! Report the CV instead!
* Y: Damn it.
<pre>
R> sd(c(8, 8, 9, 6, 8))
[1] 1.095445
R> sd(c(8, 8, 9, 6, 8)*2.54/100)
[1] 0.02782431
</pre>
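The point of the dialogue: the SD changes with the measurement units, while CV = SD/mean is unit-free. A quick check:
<syntaxhighlight lang='rsplus'>
x <- c(8, 8, 9, 6, 8)
sd(x) / mean(x)                    # 0.1404417
sd(x*2.54/100) / mean(x*2.54/100)  # identical after converting to meters
</syntaxhighlight>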
 
== Agreement ==


=== Pitfalls ===
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5654219/ Common pitfalls in statistical analysis: Measures of agreement] 2017


=== Cohen's Kappa statistic (2-class) ===
* [https://en.wikipedia.org/wiki/Cohen%27s_kappa Cohen's kappa]. Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories.
* [https://stats.stackexchange.com/a/437418 Fleiss kappa vs Cohen kappa].
* Cohen’s kappa is calculated based on the '''confusion matrix'''. However, in contrast to calculating overall accuracy, Cohen’s kappa takes '''imbalance''' in class distribution into account and can therefore be more complex to interpret.
** [https://towardsdatascience.com/cohens-kappa-what-it-is-when-to-use-it-and-how-to-avoid-its-pitfalls-e42447962bbc Cohen’s Kappa: What it is, when to use it, and how to avoid its pitfalls]
** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7019105/ Normalization Methods on Single-Cell RNA-seq Data: An Empirical Survey] Lytal 2020
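
A minimal sketch computing Cohen's kappa by hand from a hypothetical 2x2 confusion matrix; kappa = (po - pe)/(1 - pe), where po is the observed agreement and pe the agreement expected by chance:
<syntaxhighlight lang='rsplus'>
tab <- matrix(c(20,  5,
                10, 15), nrow = 2, byrow = TRUE)  # rater 1 in rows, rater 2 in columns
n  <- sum(tab)
po <- sum(diag(tab)) / n                          # observed agreement = 0.7
pe <- sum(rowSums(tab) * colSums(tab)) / n^2      # chance agreement = 0.5
(po - pe) / (1 - pe)                              # kappa = 0.4
</syntaxhighlight>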
=== Fleiss Kappa statistic (more than two raters) ===
* https://en.wikipedia.org/wiki/Fleiss%27_kappa
* Fleiss kappa (more than two raters) to test interrater reliability or to evaluate the repeatability and stability of models ('''robustness'''). This was used by [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03791-0 Cancer prognosis prediction] of Zheng 2020. '' "In our case, each trained model is designed to be a rater to assign the affiliation of each variable (gene or pathway). We conducted 20 replications of fivefold cross validation. As such, we had 100 trained models, or 100 raters in total, among which the agreement was measured by the Fleiss kappa..." ''
* [https://www.datanovia.com/en/lessons/fleiss-kappa-in-r-for-multiple-categorical-variables/ Fleiss’ Kappa in R: For Multiple Categorical Variables]. '''irr::kappam.fleiss()''' was used.
* Kappa statistic vs ICC
** [https://stats.stackexchange.com/a/64997 ICC and Kappa totally disagree]
** [https://www.sciencedirect.com/science/article/pii/S1556086415318876 Measures of Interrater Agreement] by Mandrekar 2011. '' "In certain clinical studies, agreement between the raters is assessed for a clinical outcome that is measured on a continuous scale. In such instances, intraclass correlation is calculated as a measure of agreement between the raters. Intraclass correlation is equivalent to weighted kappa under certain conditions, see the study by Fleiss and Cohen6, 7 for details." ''
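
A small sketch with irr::kappam.fleiss() mentioned above (hypothetical ratings; rows are subjects, columns are raters):
<syntaxhighlight lang='rsplus'>
library(irr)  # assuming the irr package is installed
ratings <- data.frame(r1 = c("a", "b", "a", "a", "b"),
                      r2 = c("a", "b", "b", "a", "b"),
                      r3 = c("a", "b", "a", "a", "a"))
kappam.fleiss(ratings)  # Fleiss' kappa across the three raters
</syntaxhighlight>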


=== ICC: intra-class correlation ===
See [[ICC|ICC]]

=== Compare two sets of p-values ===
https://stats.stackexchange.com/q/155407


== Computing different kinds of correlations ==
[https://github.com/easystats/correlation correlation] package

=== Partial correlation ===
[https://en.wikipedia.org/wiki/Partial_correlation Partial correlation]
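
A base-R sketch of the definition (the partial correlation of x and y given z is the correlation of the residuals after regressing each on z; simulated data):
<syntaxhighlight lang='rsplus'>
set.seed(1)
z <- rnorm(100)
x <- z + rnorm(100)
y <- z + rnorm(100)
cor(x, y)                                # inflated by the common cause z
cor(resid(lm(x ~ z)), resid(lm(y ~ z)))  # partial correlation given z, near 0
</syntaxhighlight>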
== Association is not causation ==
* [https://rafalab.github.io/dsbook/association-is-not-causation.html Association is not causation]
* [https://www.statology.org/correlation-does-not-imply-causation-examples/ Correlation Does Not Imply Causation: 5 Real-World Examples]
* Reasons Why Correlation Does Not Imply Causation
** Third-Variable Problem: There may be an unseen third variable that is influencing both correlated variables. For example, ice cream sales and drowning incidents might be correlated because both increase during the summer, but neither causes the other.
** Reverse Causation: The direction of cause and effect might be opposite to what we assume. For example, one might assume that stress causes poor health (which it can), but it’s also possible that poor health increases stress.
** Coincidence: Sometimes, correlations occur purely by chance, especially if the sample size is large or if many variables are tested.
** Complex Interactions: The relationship between variables can be influenced by a complex interplay of multiple factors that correlation alone cannot unpack.
* Examples
** Example of Correlation without Causation: There is a correlation between the number of fire trucks at a fire scene and the amount of damage caused by the fire. However, this does not mean that the fire trucks cause the damage; rather, larger fires both require more fire trucks and cause more damage.
** Example of Potential Misinterpretation: Studies might find a correlation between coffee consumption and heart disease. Without further investigation, one might mistakenly conclude that drinking coffee causes heart disease. However, it could be that people who drink a lot of coffee are more likely to smoke, and smoking is the actual cause of heart disease.

== Predictive power score ==
* https://cran.r-project.org/web/packages/ppsr/index.html
* [https://paulvanderlaken.com/2021/03/02/ppsr-live-on-cran/ ppsr live on CRAN!]

== Transform sample values to their percentiles ==
<ul>
<li>[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/ecdf.html ecdf()]. ecdf(x)(x) transforms a sample into percentile ranks; it equals rank(x)/length(x) when there are no ties. See also https://stackoverflow.com/questions/21219447/calculating-percentile-of-dataset-column
{{Pre}}
set.seed(1234)
x <- rnorm(10)
x
# [1] -1.2070657  0.2774292  1.0844412 -2.3456977  0.4291247  0.5060559
# [7] -0.5747400 -0.5466319 -0.5644520 -0.8900378
ecdf(x)(x)
# [1] 0.2 0.7 1.0 0.1 0.8 0.9 0.4 0.6 0.5 0.3

rank(x)
# [1]  2  7 10  1  8  9  4  6  5  3
</pre>
<li>[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/quantile.html quantile()]
* An [https://github.com/cran/TreatmentSelection/blob/master/R/evaluate.trtsel.R  example] from the TreatmentSelection package where "type = 1" was used.
{{Pre}}
R> x <- c(1,2,3,4,4.5,6,7)
R> Fn <- ecdf(x)
R> Fn    # a *function*
Empirical CDF
Call: ecdf(x)
 x[1:7] =      1,      2,      3,  ...,      6,      7
R> Fn(x)  # returns the percentiles for x
[1] 0.1428571 0.2857143 0.4285714 0.5714286 0.7142857 0.8571429 1.0000000
R> diff(Fn(x))
[1] 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571
R> quantile(x, Fn(x))
14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100%
 1.857143  2.714286  3.571429  4.214286  4.928571  6.142857  7.000000
R> quantile(x, Fn(x), type = 1)
14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100%
      1.0       2.0       3.0       4.0       4.5       6.0       7.0

R> x <- c(2, 6, 8, 10, 20)
R> Fn <- ecdf(x)
R> Fn(x)
[1] 0.2 0.4 0.6 0.8 1.0
</pre>
<li>[https://www.thoughtco.com/what-is-a-percentile-3126238 Definition of a Percentile in Statistics and How to Calculate It]
<li>https://en.wikipedia.org/wiki/Percentile
<li>[https://www.statology.org/percentile-vs-quartile-vs-quantile/ Percentile vs. Quartile vs. Quantile: What’s the Difference?]
* Percentiles: Range from 0 to 100.
* Quartiles: Range from 0 to 4.
* Quantiles: Range from any value to any other value.
</ul>


== Standardization ==
[https://davidlindelof.com/feature-standardization-considered-harmful/ Feature standardization considered harmful]

== Eleven quick tips for finding research data ==
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006038

== An archive of 1000+ datasets distributed with R ==
https://vincentarelbundock.github.io/Rdatasets/

== Data and global ==
* Age Structure from [https://ourworldindata.org/age-structure Our World in Data]. '''Our World in Data''' is a non-profit organization that provides free and open access to data and insights on how the world is changing across 115 topics.
= Box(Box, whisker & outlier) =
* https://en.wikipedia.org/wiki/Box_plot, [https://en.wikipedia.org/wiki/Box_plot#/media/File:Boxplot_vs_PDF.svg Boxplot and a probability density function (pdf) of a Normal Population] for a good annotation.
* https://owi.usgs.gov/blog/boxplots/ (ggplot2 is used, graph-assisting explanation)
* https://flowingdata.com/2008/02/15/how-to-read-and-use-a-box-and-whisker-plot/
* [https://en.wikipedia.org/wiki/Quartile Quartile] from Wikipedia. The quartiles returned from R are the same as the method defined by Method 2 described in Wikipedia.
* [https://www.rforecology.com/post/2022-04-06-how-to-make-a-boxplot-in-r/ How to make a boxplot in R]. The '''whiskers''' of a box and whisker plot are the dotted lines outside of the grey box. These end at the minimum and maximum values of your data set, '''excluding outliers'''.

An example for a graphical explanation. [[:File:Boxplot.svg]], [[:File:Geom boxplot.png]]
{{Pre}}
> x=c(0,4,15, 1, 6, 3, 20, 5, 8, 1, 3)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      0       2       4       6       7      20
> sort(x)
 [1]  0  1  1  3  3  4  5  6  8 15 20
> y <- boxplot(x, col = 'grey')
> t(y$stats)
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    2    4    7    8
# the extreme of the lower whisker, the lower hinge, the median,
# the upper hinge and the extreme of the upper whisker

# https://en.wikipedia.org/wiki/Quartile#Example_1
> summary(c(6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   6.00   25.50   40.00   33.18   42.50   49.00
</pre>

* The lower and upper edges of the box (also called the lower/upper '''hinge''') are determined by the first and 3rd '''quartiles''' (2 and 7 in the above example).
** 2 = median(c(0,  1,  1,  3,  3,  4)) = (1+3)/2
** 7 = median(c(4,  5,  6,  8, 15, 20)) = (6+8)/2
** IQR = 7 - 2 = 5
* The thick dark horizontal line is the '''median''' (4 in the example).
* '''Outliers''' (the empty circles in the plot) are defined as
** observations larger than 3rd quartile + 1.5 * IQR (7+1.5*5=14.5) or
** smaller than 1st quartile - 1.5 * IQR (2-1.5*5=-5.5).
** Note that ''the cutoffs are not shown in the Box plot''.
* Whisker (defined using the cutoffs used to define outliers)
** '''Upper whisker''' is defined by '''the largest "data" below 3rd quartile + 1.5 * IQR''' (8 in this example). Note the upper whisker is NOT defined as 3rd quartile + 1.5 * IQR.
** '''Lower whisker''' is defined by '''the smallest "data" greater than 1st quartile - 1.5 * IQR''' (0 in this example). Note the lower whisker is NOT defined as 1st quartile - 1.5 * IQR.
** See another example below where we can see the whiskers fall on observations.

Note the [http://en.wikipedia.org/wiki/Box_plot wikipedia] lists several possible definitions of a whisker. R uses the 2nd method (Tukey boxplot) to define whiskers.


== Create boxplots from a list object ==
Normally we use a vector to create a single boxplot or a formula on a data to create boxplots.


But we can also use [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/split split()] to create a list and then make boxplots.
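For example (a sketch with the built-in chickwts data):
<syntaxhighlight lang='rsplus'>
# boxplot() accepts a list; split() turns a vector into a list by group
boxplot(split(chickwts$weight, chickwts$feed))
# the same plot via the formula interface:
# boxplot(weight ~ feed, data = chickwts)
</syntaxhighlight>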


== Dot-box plot ==
* http://civilstat.com/2012/09/the-grammar-of-graphics-notes-on-first-reading/
* http://www.r-graph-gallery.com/89-box-and-scatter-plot-with-ggplot2/
* http://www.sthda.com/english/wiki/ggplot2-box-plot-quick-start-guide-r-software-and-data-visualization
* [https://designdatadecisions.wordpress.com/2015/06/09/graphs-in-r-overlaying-data-summaries-in-dotplots/ Graphs in R – Overlaying Data Summaries in Dotplots]. Note that for some reason, the boxplot will cover the dots when we save the plot to an svg or a png file. So an alternative solution is to change the order <syntaxhighlight lang='rsplus'>
par(cex.main=0.9,cex.lab=0.8,font.lab=2,cex.axis=0.8,font.axis=2,col.axis="grey50")
boxplot(weight ~ feed, data = chickwts, range=0, whisklty = 0, staplelty = 0)
par(new = TRUE)
stripchart(weight ~ feed, data = chickwts, xlim=c(0.5,6.5), vertical=TRUE, method="stack", offset=0.8, pch=19,
main = "Chicken weights after six weeks", xlab = "Feed Type", ylab = "Weight (g)")
</syntaxhighlight>
 
[[:File:Boxdot.svg]]


== geom_boxplot ==
Note the geom_boxplot() does not create crossbars. See
[https://community.rstudio.com/t/how-to-generate-a-boxplot-graph-with-whisker-by-ggplot/15619/4 How to generate a boxplot graph with whisker by ggplot] or [https://stackoverflow.com/a/13003038 this]. A trick is to add the '''stat_boxplot'''() function.

Without jitter
<pre>
ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot() +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8,
                                 hjust=0.8, size=6),
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "")
</pre>

With jitter
<pre>
ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot(outlier.shape=NA) + # avoid plotting outliers twice
  geom_jitter(position=position_jitter(width=.2, height=0)) +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8,
                                 hjust=0.8, size=6),
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "")
</pre>

[https://stackoverflow.com/a/21794246 Why geom_boxplot identify more outliers than base boxplot?]

[https://stackoverflow.com/a/7267364 What do hjust and vjust do when making a plot using ggplot?] The value of hjust and vjust are only defined between 0 and 1: 0 means left-justified, 1 means right-justified.


== Other boxplots ==
[[:File:Lotsboxplot.png]]

== Annotated boxplot ==
https://stackoverflow.com/a/38032281


= stem and leaf plot =
[https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/stem.html stem()]. See [http://www.r-tutor.com/elementary-statistics/quantitative-data/stem-and-leaf-plot R Tutorial].

Note that the stem plot is useful when there are outliers.
{{Pre}}
> stem(x)

  The decimal point is 10 digit(s) to the right of the |

   0 | 00000000000000000000000000000000000000000000000000000000000000000000+419
   1 |
   2 |
   3 |
   4 |
   5 |
   6 |
   7 |
   8 |
   9 |
  10 |
  11 |
  12 | 9

> max(x)
[1] 129243100275
> max(x)/1e10
[1] 12.92431

> stem(y)

  The decimal point is at the |

  0 | 014478
  1 | 0
  2 | 1
  3 | 9
  4 | 8

> y
 [1] 3.8667356428 0.0001762708 0.7993462430 0.4181079732 0.9541728562
 [6] 4.7791262101 0.6899313108 2.1381289177 0.0541736818 0.3868776083

> set.seed(1234)
> z <- rnorm(10)*10
> z
 [1] -12.070657   2.774292  10.844412 -23.456977   4.291247   5.060559
 [7]  -5.747400  -5.466319  -5.644520  -8.900378
> stem(z)

  The decimal point is 1 digit(s) to the right of the |

  -2 | 3
  -1 | 2
  -0 | 9665
   0 | 345
   1 | 1
</pre>

= Linear Regression =
[https://leanpub.com/regmods Regression Models for Data Science in R] by Brian Caffo

Comic https://xkcd.com/1725/

== Different models (in R) ==
http://www.quantide.com/raccoon-ch-1-introduction-to-linear-models-with-r/

== dummy.coef.lm() in R ==
Extracts coefficients in terms of the original levels of the coefficients rather than the coded variables.

== model.matrix, design matrix ==
[https://github.com/csoneson/ExploreModelMatrix ExploreModelMatrix]: Explore design matrices interactively with R/Shiny

== Contrasts in linear regression ==
* Page 147 of Modern Applied Statistics with S (4th ed)
* https://biologyforfun.wordpress.com/2015/01/13/using-and-interpreting-different-contrasts-in-linear-models-in-r/ This explains the meanings of 'treatment', 'helmert' and 'sum' contrasts.
* [http://rstudio-pubs-static.s3.amazonaws.com/65059_586f394d8eb84f84b1baaf56ffb6b47f.html A (sort of) Complete Guide to Contrasts in R] by Rose Maier <syntaxhighlight lang='rsplus'>
mat

##      constant NLvMH  NvL  MvH
## [1,]        1  -0.5  0.5  0.0
## [2,]        1  -0.5 -0.5  0.0
## [3,]        1   0.5  0.0  0.5
## [4,]        1   0.5  0.0 -0.5
mat <- mat[ , -1]

model7 <- lm(y ~ dose, data=data, contrasts=list(dose=mat) )
summary(model7)

## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  118.578      1.076 110.187  < 2e-16 ***
## doseNLvMH      3.179      2.152   1.477  0.14215
## doseNvL       -8.723      3.044  -2.866  0.00489 **
## doseMvH       13.232      3.044   4.347 2.84e-05 ***

# double check your contrasts
attributes(model7$qr$qr)$contrasts
## $dose
##      NLvMH  NvL  MvH
## None  -0.5  0.5  0.0
## Low   -0.5 -0.5  0.0
## Med    0.5  0.0  0.5
## High   0.5  0.0 -0.5

library(dplyr)
dose.means <- summarize(group_by(data, dose), y.mean=mean(y))
dose.means
## Source: local data frame [4 x 2]
##
##   dose   y.mean
## 1 None 112.6267
## 2  Low 121.3500
## 3  Med 126.7839
## 4 High 113.5517

# The coefficient estimate for the first contrast (3.18) equals the average of
# the last two groups ((126.78 + 113.55)/2 = 120.17) minus the average of
# the first two groups ((112.63 + 121.35)/2 = 116.99).
</syntaxhighlight>


== Multicollinearity ==
* [https://datascienceplus.com/multicollinearity-in-r/ Multicollinearity in R]
* [https://www.rdocumentation.org/packages/stats/versions/3.5.1/topics/alias alias]: Find Aliases (Dependencies) In A Model
<syntaxhighlight lang='rsplus'>
> op <- options(contrasts = c("contr.helmert", "contr.poly"))
> npk.aov <- aov(yield ~ block + N*P*K, npk)
> alias(npk.aov)
Model :
yield ~ block + N * P * K

Complete :
         (Intercept) block1 block2 block3 block4 block5 N1 P1 K1 N1:P1 N1:K1 P1:K1
N1:P1:K1  0           1      1/3    1/6   -3/10  -1/5   0  0  0  0     0     0

> options(op)
</syntaxhighlight>
== Exposure ==
https://en.mimi.hu/mathematics/exposure_variable.html

Independent variable = predictor = explanatory = exposure variable

== Confounders, confounding ==
* https://en.wikipedia.org/wiki/Confounding
** [https://academic.oup.com/jamia/article/21/2/308/723853 A method for controlling complex confounding effects in the detection of adverse drug reactions using electronic health records]. It provides a rule to identify a confounder.
* http://anythingbutrbitrary.blogspot.com/2016/01/how-to-create-confounders-with.html (R example)
* [http://www.cantab.net/users/filimon/cursoFCDEF/will/logistic_confound.pdf Logistic Regression: Confounding and Colinearity]
* [https://stats.stackexchange.com/questions/192591/identifying-a-confounder?rq=1 Identifying a confounder]
* [https://stats.stackexchange.com/questions/38326/is-it-possible-to-have-a-variable-that-acts-as-both-an-effect-modifier-and-a-con Is it possible to have a variable that acts as both an effect modifier and a confounder?]
* [https://stats.stackexchange.com/questions/34644/which-test-to-use-to-check-if-a-possible-confounder-impacts-a-0-1-result Which test to use to check if a possible confounder impacts a 0 / 1 result?]
* [https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1700-9 Addressing confounding artifacts in reconstruction of gene co-expression networks] Parsana 2019

== Causal inference ==
* https://en.wikipedia.org/wiki/Causal_inference
* [http://www.rebeccabarter.com/blog/2017-07-05-confounding/ Confounding in causal inference: what is it, and what to do about it?]

== Confidence interval vs prediction interval ==
Confidence intervals tell you how well you have determined the mean E(Y). Prediction intervals tell you where you can expect to see the next data point sampled. That is, a CI is computed using Var(E(Y|X)) and a PI is computed using Var(E(Y|X) + e).

* http://www.graphpad.com/support/faqid/1506/
* http://en.wikipedia.org/wiki/Prediction_interval
* http://robjhyndman.com/hyndsight/intervals/
* https://stat.duke.edu/courses/Fall13/sta101/slides/unit7lec3H.pdf
* https://datascienceplus.com/prediction-interval-the-wider-sister-of-confidence-interval/
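
A minimal sketch with lm() and predict() showing the prediction interval is the wider of the two:
<syntaxhighlight lang='rsplus'>
fit <- lm(dist ~ speed, data = cars)
new <- data.frame(speed = 21)
predict(fit, new, interval = "confidence")  # uncertainty about E(Y|X) only
predict(fit, new, interval = "prediction")  # also includes the error term e
</syntaxhighlight>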


== Heteroskedasticity ==
[http://www.brodrigues.co/blog/2018-07-08-rob_stderr/ Dealing with heteroskedasticity; regression with robust standard errors using R]

== Linear regression with Map Reduce ==
https://freakonometrics.hypotheses.org/53269

= Box-Cox transformation =
* [https://en.wikipedia.org/wiki/Power_transform#Box%E2%80%93Cox_transformation Power transformation]
* [http://denishaine.wordpress.com/2013/03/11/veterinary-epidemiologic-research-linear-regression-part-3-box-cox-and-matrix-representation/ Finding transformation for normal distribution]

= CLT/Central limit theorem =
[https://en.wikipedia.org/wiki/Central_limit_theorem Central limit theorem]

== Delta method ==
[[Delta|Delta]]


== Sample median, x-percentiles ==
<ul>
<li>[https://stats.stackexchange.com/questions/45124/central-limit-theorem-for-sample-medians Central limit theorem for sample medians]

<li>For the q-th sample quantile in sufficiently large samples, we get that it will approximately have a normal distribution with mean the <math>q</math>th population quantile <math>x_q</math> and variance <math>q(1-q)/(n f_X(x_q)^2)</math>.
Hence for the '''median''' (<math>q=1/2</math>), the variance in sufficiently large samples will be approximately <math>1/(4n f_X(m)^2)</math>.

<li>For example, for an exponential distribution with a rate parameter <math>\lambda >0</math>, the pdf is <math>f(x)=\lambda \exp(-\lambda x)</math>. The population median <math>m</math> is the value such that <math>F(m)=.5</math>, so <math>m=\log(2)/\lambda</math>. For large n, the '''sample median''' <math>\tilde{X}</math> will be approximately normally distributed around the population median <math>m</math>, but with the asymptotic variance given by <math>Var(\tilde{X}) \approx \frac{1}{4nf(m)^2} </math> where <math>f(m)</math> is the PDF evaluated at the median <math>m=\log(2)/\lambda</math>. For the exponential distribution with rate <math>\lambda</math>, we have <math>f(m) = \lambda e^{-\lambda m} = \lambda/2</math>. Substituting this into the expression for the variance we have <math>Var(\tilde{X}) \approx \frac{1}{n\lambda^2} </math>.

<li>For a normal distribution with mean <math>\mu</math> and variance <math>\sigma^2</math>, the '''sample median''' has a limiting distribution of normal with mean <math>\mu</math> and variance <math> \frac{1}{4nf(m)^2} = \frac{\pi \sigma^2}{2n} </math>.

<li>Some references:
* "Mathematical Statistics" by Jun Shao
* "Probability and Statistics" by DeGroot and Schervish
* "Order Statistics" by H.A. David and H.N. Nagaraja
</ul>
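
A quick simulation check of the exponential case above (a sketch; with rate <math>\lambda = 2</math> the asymptotic variance is <math>1/(n\lambda^2)</math>):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n <- 1000; lambda <- 2
med <- replicate(5000, median(rexp(n, rate = lambda)))
var(med)            # simulated variance of the sample median
1 / (n * lambda^2)  # asymptotic approximation: 0.00025
</syntaxhighlight>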


= the Holy Trinity (LRT, Wald, Score tests) =
* https://en.wikipedia.org/wiki/Likelihood_function which includes '''profile likelihood''' and '''partial likelihood'''
* [http://data.princeton.edu/wws509/notes/a1.pdf Review of the likelihood theory]
* [http://www.tandfonline.com/doi/full/10.1080/00031305.2014.955212#abstract?ai=rv&mi=3be122&af=R The “Three Plus One” Likelihood-Based Test Statistics: Unified Geometrical and Graphical Interpretations]
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.
** [https://en.wikipedia.org/wiki/Score_test '''Score test'''] is step-up. The score test is typically used in forward steps to screen covariates currently not included in a model for their ability to improve the model.
** [https://en.wikipedia.org/wiki/Wald_test '''Wald test'''] is step-down. The Wald test starts at the full model. It evaluates the significance of a variable by comparing the ratio of its estimate and its standard error with an appropriate '''t distribution (for linear models)''' or '''standard normal distribution (for logistic or Cox regression)'''.
** [https://en.wikipedia.org/wiki/Likelihood-ratio_test '''Likelihood ratio tests'''] provide the best control over nuisance parameters by maximizing the likelihood over them both in the H0 model and the H1 model. In particular, if several coefficients are being tested simultaneously, LRTs for model comparison are preferred over Wald or score tests.
* R packages
** [https://cran.r-project.org/web/packages/lmtest/ lmtest] package, [https://www.rdocumentation.org/packages/lmtest/versions/0.9-37/topics/waldtest waldtest()] and [https://www.rdocumentation.org/packages/lmtest/versions/0.9-37/topics/lrtest lrtest()]. [https://finnstats.com/index.php/2021/11/24/likelihood-ratio-test-in-r/ Likelihood Ratio Test in R with Example]
** [https://cran.r-project.org/web/packages/aod/index.html aod] package. [https://www.statology.org/wald-test-in-r/ How to Perform a Wald Test in R]
** [https://cran.r-project.org/web/packages/survey/index.html survey] package, regTermTest()
** [https://cran.r-project.org/web/packages/nlWaldTest/index.html nlWaldTest] package.
* [https://stats.stackexchange.com/a/503720 Likelihood ratio test multiplying by 2]. Hint: Approximate the log-likelihood for the '''true value of the parameter''' using the Taylor expansion around the '''MLE'''.
* Wald statistic relationship to Z-statistic: The Wald statistic is essentially the square of the Z-statistic. However, '''there is a key difference in the denominator of these statistics: the Z-statistic uses the null standard error (calculated using the hypothesized value), while the Wald statistic uses the standard error evaluated at the maximum likelihood estimate'''.
** [https://stats.stackexchange.com/questions/60074/wald-test-for-logistic-regression Wald test for logistic regression]
** [https://stats.stackexchange.com/questions/152630/wald-test-and-z-test Wald Test and Z Test]
** [https://stats.stackexchange.com/questions/609613/what-is-the-difference-between-z-value-and-the-wald-statistic-in-the-summary-fun What is the difference between z-value and the Wald statistic in the summary function of the Cox Proportional Hazards model of the “survival” package?]
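
A minimal sketch of the LRT and Wald test on nested linear models, assuming the lmtest package listed above:
<syntaxhighlight lang='rsplus'>
library(lmtest)
full    <- lm(dist ~ speed + I(speed^2), data = cars)
reduced <- lm(dist ~ speed, data = cars)
lrtest(reduced, full)   # likelihood ratio test of the extra term
waldtest(reduced, full) # Wald test of the same comparison
</syntaxhighlight>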


= Don't invert that matrix =
* http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
* http://civilstat.com/2015/07/dont-invert-that-matrix-why-and-how/

== Different matrix decompositions/factorizations ==
* [https://en.wikipedia.org/wiki/QR_decomposition QR decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/qr qr()]
* [https://en.wikipedia.org/wiki/LU_decomposition LU decomposition], [https://www.rdocumentation.org/packages/Matrix/versions/1.2-14/topics/lu lu()] from the 'Matrix' package
* [https://en.wikipedia.org/wiki/Cholesky_decomposition Cholesky decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/chol chol()]
* [https://en.wikipedia.org/wiki/Singular-value_decomposition Singular value decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/svd svd()]

{{Pre}}
set.seed(1234)
x <- matrix(rnorm(10*2), nr= 10)
cmat <- cov(x); cmat
#            [,1]       [,2]
# [1,]  0.9915928 -0.1862983
# [2,] -0.1862983  1.1392095

# Cholesky decomposition
d1 <- chol(cmat)
t(d1) %*% d1  # equal to cmat
d1  # upper triangle
#           [,1]       [,2]
# [1,] 0.9957875 -0.1870864
# [2,] 0.0000000  1.0508131

# SVD
d2 <- svd(cmat)
d2$u %*% diag(d2$d) %*% t(d2$v) # equal to cmat
d2$u %*% diag(sqrt(d2$d))
#            [,1]      [,2]
# [1,] -0.6322816 0.7692937
# [2,]  0.9305953 0.5226872
</pre>

= Principal component analysis =

== R source code ==
<pre>
> stats:::prcomp.default
function (x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL,
    ...)
{
    x <- as.matrix(x)
    x <- scale(x, center = center, scale = scale.)
    cen <- attr(x, "scaled:center")
    sc <- attr(x, "scaled:scale")
    if (any(sc == 0))
        stop("cannot rescale a constant/zero column to unit variance")
    s <- svd(x, nu = 0)
    s$d <- s$d/sqrt(max(1, nrow(x) - 1))
    if (!is.null(tol)) {
        rank <- sum(s$d > (s$d[1L] * tol))
        if (rank < ncol(x)) {
            s$v <- s$v[, 1L:rank, drop = FALSE]
            s$d <- s$d[1L:rank]
        }
    }
    dimnames(s$v) <- list(colnames(x), paste0("PC", seq_len(ncol(s$v))))
    r <- list(sdev = s$d, rotation = s$v, center = if (is.null(cen)) FALSE else cen,
        scale = if (is.null(sc)) FALSE else sc)
    if (retx)
        r$x <- x %*% s$v
    class(r) <- "prcomp"
    r
}
<bytecode: 0x000000003296c7d8>
<environment: namespace:stats>
</pre>

== R example ==
http://genomicsclass.github.io/book/pages/pca_svd.html
<syntaxhighlight lang='rsplus'>
pc <- prcomp(x)
group <- as.numeric(tab$Tissue)
plot(pc$x[, 1], pc$x[, 2], col = group, main = "PCA", xlab = "PC1", ylab = "PC2")
</syntaxhighlight>
The meaning of colors can be found by '''palette()'''.
# black
# red
# green3
# blue
# cyan
# magenta
# yellow
# gray

== PCA and SVD ==
Using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of <math>X X^T</math> can cause loss of precision.

http://math.stackexchange.com/questions/3869/what-is-the-intuitive-relationship-between-svd-and-pca

=== AIC/BIC in estimating the number of components ===
[https://projecteuclid.org/euclid.aos/1525313075 Consistency of AIC and BIC in estimating the number of significant components in high-dimensional principal component analysis]

== Related to Factor Analysis ==
* http://www.aaronschlegel.com/factor-analysis-introduction-principal-component-method-r/
* http://support.minitab.com/en-us/minitab/17/topic-library/modeling-statistics/multivariate/principal-components-and-factor-analysis/differences-between-pca-and-factor-analysis/

In short,
# In Principal Components Analysis, the components are calculated as linear combinations of the original variables. In Factor Analysis, the original variables are defined as linear combinations of the factors.
# In Principal Components Analysis, the goal is to explain as much of the total variance in the variables as possible. The goal in Factor Analysis is to explain the covariances or correlations between the variables.
# Use Principal Components Analysis to reduce the data into a smaller number of components. Use Factor Analysis to understand what constructs underlie the data.

== Calculated by Hand ==
http://strata.uga.edu/software/pdf/pcaTutorial.pdf

== Do not scale your matrix ==
https://privefl.github.io/blog/(Linear-Algebra)-Do-not-scale-your-matrix/

== Visualization ==
* [http://oracledmt.blogspot.com/2007/06/way-cooler-pca-and-visualization-linear.html PCA and Visualization]
* Scree plots from the [http://www.sthda.com/english/wiki/factominer-and-factoextra-principal-component-analysis-visualization-r-software-and-data-mining FactoMineR] package (based on ggplot2)

== What does it do if we choose center=FALSE in prcomp()? ==
In the USArrests data, center=FALSE gives a better scatter plot of the first 2 PCA components.
<pre>
x1 = prcomp(USArrests)
x2 = prcomp(USArrests, center=F)
plot(x1$x[,1], x1$x[,2]) # looks random
windows(); plot(x2$x[,1], x2$x[,2]) # looks good in some sense
</pre>


== Relation to [http://en.wikipedia.org/wiki/Multidimensional_scaling Multidimensional scaling/MDS] ==
With no missing data, classical MDS (Euclidean distance metric) is the same as PCA.

Comparisons are [http://www.sequentix.de/gelquest/help/principal_coordinates_analysis.htm here].

Differences are asked/answered on [http://stats.stackexchange.com/questions/14002/whats-the-difference-between-principal-components-analysis-and-multidimensional stackexchange.com]. The post also answered the question when these two are the same.

[https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/isoMDS.html isoMDS] (Non-metric)

[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/cmdscale.html cmdscale] (Metric)

== Matrix factorization methods ==
http://joelcadwell.blogspot.com/2015/08/matrix-factorization-comes-in-many.html Review of principal component analysis (PCA), K-means clustering, nonnegative matrix factorization (NMF) and archetypal analysis (AA).

== Number of components ==
[https://statisticaloddsandends.wordpress.com/2018/10/15/obtaining-the-number-of-components-from-cross-validation-of-principal-components-regression/ Obtaining the number of components from cross validation of principal components regression]

= Model Estimation with R =
[https://m-clark.github.io/models-by-example/ Model Estimation by Example] Demonstrations with R. Michael Clark

= Regression =
[[Regression|Regression]]

= Non- and semi-parametric regression =
* [https://mathewanalytics.com/2018/03/05/semiparametric-regression-in-r/ Semiparametric Regression in R]
* https://socialsciences.mcmaster.ca/jfox/Courses/Oxford-2005/R-nonparametric-regression.html

== Mean squared error ==
* [https://www.statworx.com/de/blog/simulating-the-bias-variance-tradeoff-in-r/ Simulating the bias-variance tradeoff in R]
* [https://alemorales.info/post/variance-estimators/ Estimating variance: should I use n or n - 1? The answer is not what you think]

== Splines ==
* https://en.wikipedia.org/wiki/B-spline
* [https://www.r-bloggers.com/cubic-and-smoothing-splines-in-r/ Cubic and Smoothing Splines in R]. '''bs()''' is for cubic spline and '''smooth.spline()''' is for smoothing spline.
* [https://www.rdatagen.net/post/generating-non-linear-data-using-b-splines/ Can we use B-splines to generate non-linear data?]
* [https://stats.stackexchange.com/questions/29400/spline-fitting-in-r-how-to-force-passing-two-data-points How to force passing two data points?] ([https://cran.r-project.org/web/packages/cobs/index.html cobs] package)
* https://www.rdocumentation.org/packages/cobs/versions/1.3-3/topics/cobs

== k-Nearest neighbor regression ==
* [https://www.rdocumentation.org/packages/class/versions/7.3-21/topics/knn class::knn()]
* k-NN regression in practice: boundary problem, discontinuities problem.
* Weighted k-NN regression: want the weight to be small when the distance is large. Common choices: weight = kernel(x_i, x).

== Kernel regression ==
* Instead of weighting the nearest neighbors, weight ALL points. Nadaraya-Watson kernel weighted average:
:<math>\hat{y}_q = \sum c_{qi} y_i/\sum c_{qi} = \frac{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))*y_i}{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))} </math>
* Choice of bandwidth <math>\lambda</math> for the bias-variance trade-off. A small <math>\lambda</math> over-fits; a large <math>\lambda</math> can give an over-smoothed fit. Choose by '''cross-validation'''.
* Kernel regression leads to a locally constant fit.
* Issues with high dimensions, data scarcity and computational complexity.
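
A base-R sketch of Nadaraya-Watson smoothing with ksmooth(), illustrating the bandwidth trade-off noted above:
<syntaxhighlight lang='rsplus'>
plot(dist ~ speed, data = cars)
lines(ksmooth(cars$speed, cars$dist, kernel = "normal", bandwidth = 2),
      col = "red")   # small bandwidth: wiggly, closer to over-fitting
lines(ksmooth(cars$speed, cars$dist, kernel = "normal", bandwidth = 10),
      col = "blue")  # large bandwidth: over-smoothed
</syntaxhighlight>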
 
 
= Partial Least Squares (PLS) =
* [https://twitter.com/slavov_n/status/1642570040737402881 Accounting for measurement errors with total least squares]. Demonstrates the bias of the PLS.
* https://en.wikipedia.org/wiki/Partial_least_squares_regression. The general underlying model of multivariate PLS is
:<math>X = T P^\mathrm{T} + E</math>
:<math>Y = U Q^\mathrm{T} + F</math>
:where {{mvar|X}} is an <math>n \times m</math> matrix of predictors, {{mvar|Y}} is an <math>n \times p</math> matrix of responses; {{mvar|T}} and {{mvar|U}} are <math>n \times l</math> matrices that are, respectively, '''projections''' of {{mvar|X}} (the X '''score''', ''component'' or '''factor matrix''') and projections of {{mvar|Y}} (the ''Y scores''); {{mvar|P}} and {{mvar|Q}} are, respectively, <math>m \times l</math> and <math>p \times l</math> orthogonal '''loading matrices'''; and matrices {{mvar|E}} and {{mvar|F}} are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of {{mvar|X}} and {{mvar|Y}} are made so as to maximise the '''covariance''' between {{mvar|T}} and {{mvar|U}} (projection matrices).
* [https://www.gokhanciflikli.com/post/learning-brexit/ Supervised vs. Unsupervised Learning: Exploring Brexit with PLS and PCA]
* [https://cran.r-project.org/web/packages/pls/index.html pls] R package
* [https://cran.r-project.org/web/packages/plsRcox/index.html plsRcox] R package (archived). See [[R#install_a_tar.gz_.28e.g._an_archived_package.29_from_a_local_directory|here]] for the installation.
* [https://web.stanford.edu/~hastie/ElemStatLearn//printings/ESLII_print12.pdf#page=101 PLS, PCR (principal components regression) and ridge regression tend to behave similarly]. Ridge regression may be preferred because it shrinks smoothly, rather than in discrete steps.
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3310-7 So you think you can PLS-DA?]. Compares PLS with PCA.
* [https://cran.r-project.org/web/packages/plsRglm/index.html plsRglm] package - Partial Least Squares Regression for Generalized Linear Models
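
A minimal sketch with the pls package listed above (the yarn data ships with the package):
<syntaxhighlight lang='rsplus'>
library(pls)
data(yarn)
fit <- plsr(density ~ NIR, ncomp = 6, data = yarn, validation = "CV")
summary(fit)      # cross-validated RMSEP per number of components
plot(RMSEP(fit))  # pick the number of components at the error minimum
</syntaxhighlight>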
= High dimension =
* [https://projecteuclid.org/euclid.aos/1547197242 Partial least squares prediction in high-dimensional regression] Cook and Forzani, 2019
* [https://arxiv.org/pdf/1912.06667v1.pdf High dimensional precision medicine from patient-derived xenografts] JASA 2020

== dimRed package ==
[https://cran.r-project.org/web/packages/dimRed/index.html dimRed] package
 
== Feature selection ==
* https://en.wikipedia.org/wiki/Feature_selection
* [https://seth-dobson.github.io/a-feature-preprocessing-workflow/ A Feature Preprocessing Workflow]
* [https://doi.org/10.1080/01621459.2020.1783274 Model-Free Feature Screening and FDR Control With Knockoff Features] and [https://arxiv.org/pdf/1908.06597v2.pdf pdf]. The proposed method is based on the '''projection correlation''' which measures the dependence between two random vectors.
 
== Goodness-of-fit ==
* [https://onlinelibrary.wiley.com/doi/10.1002/sim.8968 A simple yet powerful test for assessing goodness‐of‐fit of high‐dimensional linear models] Zhang 2021
* [https://www.tandfonline.com/doi/full/10.1080/02664763.2021.2017413 Pearson's goodness-of-fit tests for sparse distributions] Chang 2021


= [https://en.wikipedia.org/wiki/Independent_component_analysis Independent component analysis] =

== ICS vs FA ==
== Robust independent component analysis ==
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-022-05043-9 robustica: customizable robust independent component analysis] 2022
= Canonical correlation analysis =
* https://en.wikipedia.org/wiki/Canonical_correlation. If we have two vectors ''X''&nbsp;=&nbsp;(''X''<sub>1</sub>,&nbsp;...,&nbsp;''X''<sub>''n''</sub>) and ''Y''&nbsp;=&nbsp;(''Y''<sub>1</sub>,&nbsp;...,&nbsp;''Y''<sub>''m''</sub>)  of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of ''X'' and ''Y'' which have maximum correlation with each other.
* [https://stats.idre.ucla.edu/r/dae/canonical-correlation-analysis/ R data analysis examples]
* [https://online.stat.psu.edu/stat505/book/export/html/682 Canonical Correlation Analysis] from psu.edu
* see the [https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/cancor cancor] function in base R; canocor in the [https://cran.r-project.org/web/packages/calibrate/ calibrate] package; and the [https://cran.r-project.org/web/packages/CCA/index.html CCA] package.
* [https://cmdlinetips.com/2020/12/canonical-correlation-analysis-in-r/ Introduction to Canonical Correlation Analysis (CCA) in R]
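
A minimal sketch with base R's cancor() (the example from its help page):
<syntaxhighlight lang='rsplus'>
pop <- LifeCycleSavings[, 2:3]     # population variables
oec <- LifeCycleSavings[, -(2:3)]  # economic variables
cc  <- cancor(pop, oec)
cc$cor  # canonical correlations between the two sets of variables
</syntaxhighlight>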
== Non-negative CCA ==
* https://cran.r-project.org/web/packages/nscancor/
* [https://www.mdpi.com/2076-3417/12/13/6596/html Pan-Cancer Analysis for Immune Cell Infiltration and Mutational Signatures Using Non-Negative Canonical Correlation Analysis] 2022. Non-negative constraints that force all input elements and coefficients to be zero or positive values.


= [https://en.wikipedia.org/wiki/Correspondence_analysis Correspondence analysis] =
* [https://en.wikipedia.org/wiki/Principal_component_analysis#Correspondence_analysis Relationship of PCA and Correspondence analysis]
* [http://www.sthda.com/english/articles/31-principal-component-methods-in-r-practical-guide/113-ca-correspondence-analysis-in-r-essentials/ CA - Correspondence Analysis in R: Essentials]
* [https://www.displayr.com/math-correspondence-analysis/ Understanding the Math of Correspondence Analysis], [https://www.displayr.com/interpret-correspondence-analysis-plots-probably-isnt-way-think/ How to Interpret Correspondence Analysis Plots]
* https://francoishusson.wordpress.com/2017/07/18/multiple-correspondence-analysis-with-factominer/ and the book [https://www.crcpress.com/Exploratory-Multivariate-Analysis-by-Example-Using-R-Second-Edition/Husson-Le-Pages/p/book/9781138196346?tab=rev Exploratory Multivariate Analysis by Example Using R]


= Visualize the random effects =
http://www.quantumforest.com/2012/11/more-sense-of-random-effects/

= [https://en.wikipedia.org/wiki/Calibration_(statistics) Calibration] =
* [https://stats.stackexchange.com/questions/43053/how-to-determine-calibration-accuracy-uncertainty-of-a-linear-regression How to determine calibration accuracy/uncertainty of a linear regression?]
* [https://chem.libretexts.org/Textbook_Maps/Analytical_Chemistry/Book%3A_Analytical_Chemistry_2.0_(Harvey)/05_Standardizing_Analytical_Methods/5.4%3A_Linear_Regression_and_Calibration_Curves Linear Regression and Calibration Curves]
* [https://www.webdepot.umontreal.ca/Usagers/sauves/MonDepotPublic/CHM%203103/LCGC%20Eur%20Burke%202001%20-%202%20de%204.pdf Regression and calibration] Shaun Burke

= Non-negative matrix factorization =
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3312-5 Optimization and expansion of non-negative matrix factorization]
= Nonlinear dimension reduction =
[https://www.biorxiv.org/content/10.1101/2021.08.25.457696v1 The Specious Art of Single-Cell Genomics] by Chari 2021
 
== t-SNE ==
'''t-Distributed Stochastic Neighbor Embedding''' (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.
 
* [https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#t-distributed_stochastic_neighbor_embedding Wikipedia]
* [https://youtu.be/NEaUSP4YerM StatQuest: t-SNE, Clearly Explained]
* https://lvdmaaten.github.io/tsne/
* https://lvdmaaten.github.io/tsne/
* [https://rpubs.com/Saskia/520216 Workshop: Dimension reduction with R] Saskia Freytag
* Application to [http://amp.pharm.mssm.edu/archs4/data.html ARCHS4]
* Application to [http://amp.pharm.mssm.edu/archs4/data.html ARCHS4]
* [https://www.codeproject.com/tips/788739/visualization-of-high-dimensional-data-using-t-sne Visualization of High Dimensional Data using t-SNE with R]
* http://blog.thegrandlocus.com/2018/08/a-tutorial-on-t-sne-1
* [https://intobioinformatics.wordpress.com/2019/05/30/quick-and-easy-t-sne-analysis-in-r/ Quick and easy t-SNE analysis in R]. [https://bioconductor.org/packages/devel/bioc/html/M3C.html M3C] package was used.
* [https://link.springer.com/protocol/10.1007%2F978-1-0716-0301-7_8 Visualization of Single Cell RNA-Seq Data Using t-SNE in R]. [https://cran.r-project.org/web/packages/Seurat/index.html Seurat] (both Seurat and M3C call [https://cran.r-project.org/web/packages/Rtsne/index.html Rtsne]) package was used.
* [https://github.com/berenslab/rna-seq-tsne The art of using t-SNE for single-cell transcriptomics]
* [https://www.frontiersin.org/articles/10.3389/fgene.2020.00041/full Normalization Methods on Single-Cell RNA-seq Data: An Empirical Survey]
* [https://github.com/jdonaldson/rtsne An R package for t-SNE (pure R implementation)]
* [https://pair-code.github.io/understanding-umap/ Understanding UMAP] by Andy Coenen, Adam Pearce. Note that the Fashion MNIST data was used to explain what a global structure means (it means similar categories (such as sandal, sneaker, and ankle boot)).
*#  Hyperparameters really matter
*# Cluster sizes in a UMAP plot mean nothing
*# Distances between clusters might not mean anything
*# Random noise doesn’t always look random.
*# You may need more than one plot
 
=== Perplexity parameter ===
* Balance attention between local and global aspects of the dataset
* A guess about the number of close neighbors
* In a real setting it is important to try different values
* Must be lower than the number of input records
* [https://jef.works/tsne-online/ Interactive t-SNE online]. We see that in addition to '''perplexity''' there are '''learning rate''' and '''max iterations''' parameters.
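A minimal sketch of comparing several perplexity values on the same data (assuming the '''Rtsne''' package and the built-in iris data; these choices are illustrative, not from the tools above). Note that Rtsne requires the perplexity to be smaller than about (number of rows - 1)/3:
<syntaxhighlight lang='rsplus'>
library(Rtsne)

X <- unique(iris[, 1:4])              # Rtsne does not allow duplicate rows
lab <- iris$Species[!duplicated(iris[, 1:4])]
set.seed(1)
op <- par(mfrow = c(1, 3))
for (p in c(2, 30, 45)) {             # small, moderate, large perplexity
  fit <- Rtsne(as.matrix(X), perplexity = p)
  plot(fit$Y, col = lab, main = paste("perplexity =", p), xlab = "", ylab = "")
}
par(op)
</syntaxhighlight>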
 
=== Classifying digits with t-SNE: MNIST data ===


Below is an example from datacamp [https://learn.datacamp.com/courses/advanced-dimensionality-reduction-in-r Advanced Dimensionality Reduction in R].

The mnist_sample data is very small (200x785). Here ([http://varianceexplained.org/r/digit-eda/ Exploring handwritten digit classification: a tidy analysis of the MNIST dataset]) is a large data with 60k records (60000 x 785).

<ol>
<li>Generating t-SNE features
<pre>
library(readr)
library(dplyr)
library(ggplot2)
library(Rtsne)

# 104MB
mnist_raw <- read_csv("https://pjreddie.com/media/files/mnist_train.csv", col_names = FALSE)
mnist_10k <- mnist_raw[1:10000, ]
colnames(mnist_10k) <- c("label", paste0("pixel", 0:783))

tsne <- Rtsne(mnist_10k[, -1], perplexity = 5)
tsne_plot <- data.frame(tsne_x = tsne$Y[1:5000, 1],
                        tsne_y = tsne$Y[1:5000, 2],
                        digit = as.factor(mnist_10k[1:5000, ]$label))
# visualize obtained embedding
ggplot(tsne_plot, aes(x = tsne_x, y = tsne_y, color = digit)) +
  ggtitle("MNIST embedding of the first 5K digits") +
  geom_text(aes(label = digit)) + theme(legend.position = "none")
</pre></li>
<li>Computing centroids
<pre>
library(data.table)
# Get t-SNE coordinates
centroids <- as.data.table(tsne$Y[1:5000, ])
setnames(centroids, c("X", "Y"))
centroids[, label := as.factor(mnist_10k[1:5000, ]$label)]
# Compute per-digit centroids
centroids[, mean_X := mean(X), by = label]
centroids[, mean_Y := mean(Y), by = label]
centroids <- unique(centroids, by = "label")
# visualize centroids
ggplot(centroids, aes(x = mean_X, y = mean_Y, color = label)) +
  ggtitle("Centroids coordinates") + geom_text(aes(label = label)) +
  theme(legend.position = "none")
</pre></li>
<li>Classifying new digits
<pre>
# Get new examples of digits 4 and 9
distances <- as.data.table(tsne$Y[5001:10000, ])
setnames(distances, c("X", "Y"))
distances[, label := mnist_10k[5001:10000, ]$label]
distances <- distances[label == 4 | label == 9]
# Euclidean distance to the centroid of digit 4
# (the original script summed the X and Y differences before squaring, a bug)
distances[, dist_4 := sqrt((X - centroids[label == 4, ]$mean_X)^2 +
                           (Y - centroids[label == 4, ]$mean_Y)^2)]
dim(distances)
# [1] 928  4

# Plot distance to each centroid
ggplot(distances, aes(x = dist_4, fill = as.factor(label))) +
  geom_histogram(binwidth = 5, alpha = .5, position = "identity", show.legend = FALSE)
</pre></li>
</ol>


=== Fashion MNIST data ===
* fashion_mnist is only 500x785
* [https://tensorflow.rstudio.com/reference/keras/dataset_fashion_mnist/ keras] has 60k x 785. Miniconda is required when we want to use the package.

=== tSNE vs PCA ===
* [https://medium.com/analytics-vidhya/pca-vs-t-sne-17bcd882bf3d PCA vs t-SNE: which one should you use for visualization]. This uses the MNIST dataset for a comparison.
* [https://www.subioplatform.com/info_casestudy/338/why-pca-on-bulk-rna-seq-and-t-sne-on-scrna-seq Why PCA on bulk RNA-Seq and t-SNE on scRNA-Seq?]
* [https://support.bioconductor.org/p/97594/ What to use: PCA or tSNE dimension reduction in DESeq2 analysis?] (with discussion)
* [https://stats.stackexchange.com/a/249520 Are there cases where PCA is more suitable than t-SNE?]
* [https://stats.stackexchange.com/a/502392 How to interpret data not separated by PCA but by T-sne/UMAP]
* [https://towardsdatascience.com/dimensionality-reduction-for-data-visualization-pca-vs-tsne-vs-umap-be4aa7b1cb29 Dimensionality Reduction for Data Visualization: PCA vs TSNE vs UMAP vs LDA]

=== Two groups example ===
* [http://www.bioconductor.org/packages/release/bioc/vignettes/splatter/inst/doc/splatter.html#61_Simulating_groups Simulating groups]
<pre>
suppressPackageStartupMessages({
  library(splatter)
  library(scater)
})

sim.groups <- splatSimulate(group.prob = c(0.5, 0.5), method = "groups",
                            verbose = FALSE)
sim.groups <- logNormCounts(sim.groups)
sim.groups <- runPCA(sim.groups)
plotPCA(sim.groups, colour_by = "Group") # 2 groups separated in PC1

sim.groups <- runTSNE(sim.groups)
plotTSNE(sim.groups, colour_by = "Group") # 2 groups separated in TSNE2
</pre>

== UMAP ==
* [https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Uniform_manifold_approximation_and_projection Uniform manifold approximation and projection]
* https://cran.r-project.org/web/packages/umap/index.html
* [https://intobioinformatics.wordpress.com/2019/06/08/running-umap-for-data-visualisation-in-r/ Running UMAP for data visualisation in R]
* [https://juliasilge.com/blog/cocktail-recipes-umap/ PCA and UMAP with tidymodels]
* https://arxiv.org/abs/1802.03426
* https://www.biorxiv.org/content/early/2018/04/10/298430
* [https://poissonisfish.com/2020/11/14/umap-clustering-in-python/ UMAP clustering in Python]
* [https://juliasilge.com/blog/un-voting/ Dimensionality reduction of #TidyTuesday United Nations voting patterns], [https://juliasilge.com/blog/billboard-100/ Dimensionality reduction for #TidyTuesday Billboard Top 100 songs]. The [https://cran.r-project.org/web/packages/embed/index.html embed] package was used.
* [https://tonyelhabr.rbind.io/post/dimensionality-reduction-and-clustering/ Tired: PCA + kmeans, Wired: UMAP + GMM]
* [https://www.nature.com/articles/s41596-020-00409-w Tutorial: guidelines for the computational analysis of single-cell RNA sequencing data] Andrews 2020.
** One shortcoming of both t-SNE and UMAP is that they both require a user-defined hyperparameter, and the result can be sensitive to the value chosen. Moreover, the methods are stochastic, and providing a good initialization can significantly improve the results of both algorithms.
** '''Neither visualization algorithm preserves cell-cell distances, so the resulting embedding should not be used directly by downstream analysis methods such as clustering or pseudotime inference'''.
* [https://youtu.be/eN0wFzBA4Sc?t=53 UMAP Dimension Reduction, Main Ideas!!!], [https://youtu.be/jth4kEvJ3P8 UMAP: Mathematical Details (clearly explained!!!)]
* [https://towardsdatascience.com/how-exactly-umap-works-13e3040e1668 How Exactly UMAP Works] (open it in an incognito window)
* [https://statquest.gumroad.com/l/nixkdy t-SNE and UMAP Study Guide]
* [https://twitter.com/lpachter/status/1440696798218100753 UMAP monkey]

== GECO ==
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03951-2 GECO: gene expression clustering optimization app for non-linear data visualization of patterns]

= Visualize the random effects =
http://www.quantumforest.com/2012/11/more-sense-of-random-effects/
= ROC curve and Brier score =
See also [[ROC|ROC]].
* Binary case:
** Y = true '''positive''' rate = sensitivity,
** X = false '''positive''' rate = 1-specificity
* Area under the curve AUC from the [https://en.wikipedia.org/wiki/Receiver_operating_characteristic wikipedia]: the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
:<math> A = \int_{\infty}^{-\infty} \mbox{TPR}(T) \mbox{FPR}'(T) \, dT = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(T'>T)f_1(T') f_0(T) \, dT' \, dT = P(X_1 > X_0) </math>
where <math> X_1 </math> is the score for a positive instance and <math> X_0 </math> is the score for a negative instance, and <math>f_0</math> and <math>f_1</math> are probability densities as defined in previous section.
* [https://datascienceplus.com/interpretation-of-the-auc/ Interpretation of the AUC]. A small toy example (n=12=4+8) was used to calculate the exact probability <math>P(X_1 > X_0) </math> (4*8=32 all combinations).
** It is a discrimination measure which tells us how well we can classify patients in two groups: those with and those without the outcome of interest.
** Since the measure is based on ranks, it is not sensitive to systematic errors in the calibration of the quantitative tests.
** The AUC can be defined as '''The probability that a randomly selected case will have a higher test result than a randomly selected control'''.
** Plot of sensitivity/specificity (y-axis) vs cutoff points of the biomarker
** The Mann-Whitney U test statistic (or Wilcoxon or Kruskall-Wallis test statistic) is equivalent to the AUC (Mason, 2002)
** The p-value of the Mann-Whitney U test can thus safely be used to test whether the AUC differs significantly from 0.5 (AUC of an uninformative test).
* [https://stackoverflow.com/questions/4903092/calculate-auc-in-r Calculate AUC by hand]. AUC is equal to the '''probability that a true positive is scored greater than a true negative.'''
* [https://stats.stackexchange.com/questions/145566/how-to-calculate-area-under-the-curve-auc-or-the-c-statistic-by-hand How to calculate Area Under the Curve (AUC), or the c-statistic, by hand or by R]
* Introduction to the [https://hopstat.wordpress.com/2014/12/19/a-small-introduction-to-the-rocr-package/ ROCR] package. [https://datascienceplus.com/machine-learning-logistic-regression-for-credit-modelling-in-r/ Add threshold labels]
* http://freakonometrics.hypotheses.org/9066, http://freakonometrics.hypotheses.org/20002
* [http://www.joyofdata.de/blog/illustrated-guide-to-roc-and-auc/ Illustrated Guide to ROC and AUC]
* [http://blog.revolutionanalytics.com/2016/08/roc-curves-in-two-lines-of-code.html ROC Curves in Two Lines of R Code]
* [https://staesthetic.wordpress.com/2014/04/14/gini-roc-auc-and-accuracy/ Gini and AUC]. Gini = 2*AUC-1.
* Generally, an AUC value over 0.7 is indicative of a model that can distinguish between the two outcomes well. An AUC of 0.5 tells us that the model is a random classifier, and it cannot distinguish between the two outcomes.

== Survival data ==
'Survival Model Predictive Accuracy and ROC Curves' by Heagerty & Zheng 2005
* Recall '''Sensitivity=''' <math>P(\hat{p}_i > c | Y_i=1)</math>, '''Specificity=''' <math>P(\hat{p}_i \le c | Y_i=0)</math>, where <math>Y_i</math> is the binary outcome, <math>\hat{p}_i</math> is a prediction, and <math>c</math> is a criterion for classifying the prediction as positive (<math>\hat{p}_i > c</math>) or negative (<math>\hat{p}_i \le c </math>).
* For survival data, we need to use a fixed time/horizon (''t'') to classify the data as either a case or a control. Following Heagerty and Zheng's definition (incident/dynamic), '''Sensitivity(c, t)=''' <math>P(M_i > c | T_i = t)</math>, '''Specificity(c, t)=''' <math>P(M_i \le c | T_i > t)</math> where ''' ''M'' ''' is a marker value or <math>Z^T \beta</math>. Here sensitivity measures the expected fraction of subjects with a marker greater than ''c'' among the subpopulation of individuals who die at time ''t'', while specificity measures the fraction of subjects with a marker less than or equal to ''c'' among those who survive beyond time ''t''.
* The AUC measures the '''probability that the marker value for a randomly selected case exceeds the marker value for a randomly selected control'''.
* ROC curves are useful for comparing the discriminatory capacity of different potential biomarkers.


== Confusion matrix, Sensitivity/Specificity/Accuracy ==
{| border="1" style="border-collapse:collapse; text-align:center;"
|-
|                    ||  || colspan="2" | Predict  ||
|-
|                    ||  ||  1      ||    0      ||
|-
| rowspan="2" | True || 1 ||  TP    ||    FN    || Sens=TP/(TP+FN)=Recall <br/> FNR=FN/(TP+FN)
|-
|    0              ||  FP    ||    TN    || Spec=TN/(FP+TN)
|-
|                    ||    ||  PPV=TP/(TP+FP) <br/> FDR=FP/(TP+FP)||  NPV=TN/(FN+TN) ||  N = TP + FP + FN + TN
|}

* Sensitivity = TP / (TP + FN) = Recall
* Specificity = TN / (TN + FP)
* Accuracy = (TP + TN) / N
* False discovery rate FDR = FP / (TP + FP)
* False negative rate FNR = FN / (TP + FN)
* [https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values Positive predictive value (PPV)] = TP / # positive calls = TP / (TP + FP) = 1 - FDR
* Negative predictive value (NPV) = TN / # negative calls = TN / (FN + TN)
* Prevalence = (TP + FN) / N.
* Note that PPV & NPV can also be computed from sensitivity, specificity, and prevalence:
** [https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values#cite_note-AltmanBland1994-2 PPV is directly proportional to the prevalence of the disease or condition.]
** For example, in the extreme case if the prevalence = 1, then PPV is always 1.
::<math> \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence}+(1-\text{specificity}) \times (1-\text{prevalence})} </math>
::<math> \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{(1-\text{sensitivity}) \times \text{prevalence}+\text{specificity} \times (1-\text{prevalence})} </math>
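A small numeric check of the two identities above (the sensitivity, specificity, and prevalence values are made-up illustrations):
<syntaxhighlight lang='rsplus'>
sens <- 0.9; spec <- 0.8; prev <- 0.1
ppv <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv <- spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
c(PPV = ppv, NPV = npv)
#       PPV       NPV
# 0.3333333 0.9863014
</syntaxhighlight>
Note how low the PPV is despite good sensitivity and specificity, because the prevalence is only 10%.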


== Precision recall curve ==
* [https://en.wikipedia.org/wiki/Precision_and_recall Precision and recall]
** Y-axis: Precision = tp/(tp + fp) = PPV, large is better
** X-axis: Recall = tp/(tp + fn) = Sensitivity, large is better
* [http://pages.cs.wisc.edu/~jdavis/davisgoadrichcamera2.pdf The Relationship Between Precision-Recall and ROC Curves]. Remember ROC is defined as
** Y-axis: Sensitivity = tp/(tp + fn) = Recall
** X-axis: 1-Specificity = fp/(fp + tn)

== Incidence, Prevalence ==
https://www.health.ny.gov/diseases/chronic/basicstat.htm

== Calculate area under curve by hand (using trapezoid), relation to concordance measure and the Wilcoxon–Mann–Whitney test ==
* https://stats.stackexchange.com/a/146174
* [https://pubs.rsna.org/doi/pdf/10.1148/radiology.143.1.7063747 The meaning and use of the area under a receiver operating characteristic (ROC) curve] J A Hanley, B J McNeil 1982

== genefilter package and rowpAUCs function ==
* [https://books.google.com/books?id=F3tAehmRHSwC&pg=PA99&lpg=PA99&dq=%22rowpAUCs%22+genefilter&source=bl&ots=QYRYDc45Dp&sig=6b29AsNivFPdyvcU1z3Okn121OU&hl=en&sa=X&ei=mFvCVN35NdaSsQSUqIKYCg&ved=0CE8Q6AEwCTgK#v=onepage&q=%22rowpAUCs%22%20genefilter&f=false rowpAUCs] function in genefilter package. The aim is to find potential biomarkers whose expression level is able to distinguish between two groups.
<pre>
# source("http://www.bioconductor.org/biocLite.R")
# biocLite("genefilter")
library(Biobase) # sample.ExpressionSet data
data(sample.ExpressionSet)

library(genefilter)
r2 = rowpAUCs(sample.ExpressionSet, "sex", p=0.1)
plot(r2[1]) # first gene, asking specificity = .9

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=1.0)
plot(r2[1]) # it won't show pAUC

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=.999)
plot(r2[1]) # pAUC is very close to AUC now
</pre>

== Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction ==
http://circ.ahajournals.org/content/115/7/928

== Performance evaluation ==
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.5727 Testing for improvement in prediction model performance] by Pepe et al 2013.

== Some R packages ==
* [https://rviews.rstudio.com/2019/03/01/some-r-packages-for-roc-curves/ Some R Packages for ROC Curves]
* [https://github.com/dariyasydykova/open_projects/tree/master/ROC_animation ROC animation]

== Comparison of two AUCs ==
* [https://statcompute.wordpress.com/2018/12/25/statistical-assessments-of-auc/ Statistical Assessments of AUC]. This is using the '''pROC::roc.test''' function.
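A minimal sketch of comparing two correlated AUCs with '''pROC::roc.test''' (DeLong's test by default for paired curves); the labels and markers below are simulated for illustration only:
<syntaxhighlight lang='rsplus'>
library(pROC)

set.seed(1)
y  <- rbinom(200, 1, .5)      # true class labels
m1 <- y + rnorm(200)          # informative marker
m2 <- y + rnorm(200, sd = 2)  # noisier marker
roc1 <- roc(y, m1, quiet = TRUE)
roc2 <- roc(y, m2, quiet = TRUE)
roc.test(roc1, roc2)          # test whether the two correlated AUCs differ
</syntaxhighlight>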
= [https://en.wikipedia.org/wiki/Calibration_(statistics) Calibration] =
* Search by image: graphical explanation of calibration problem
* Does calibrating classification models improve prediction?
** Calibrating a classification model can improve the reliability and accuracy of the '''predicted probabilities''', but it may not necessarily improve the '''overall prediction performance of the model''' in terms of metrics such as accuracy, precision, or recall.
** Calibration is about ensuring that the predicted probabilities from a model match the observed proportions of outcomes in the data. This can be important when the predicted probabilities are used to make decisions or when they are presented to users as a measure of confidence or uncertainty.
** However, calibrating a model does not change its ability to discriminate between positive and negative outcomes. In other words, calibration does not affect how well the model separates the classes, but rather how accurately it estimates the probabilities of class membership.
** In some cases, calibrating a model may improve its overall prediction performance by making the predicted probabilities more accurate. However, this is not always the case, and the impact of calibration on prediction performance may vary depending on the specific needs and goals of the analysis.
* A real-world example of calibration in machine learning is in the field of fraud detection. In this case, it might be desirable to have the model '''predict probabilities''' of data belonging to each possible '''class''' instead of crude class labels. Gaining access to '''probabilities''' is useful for a richer interpretation of the responses, analyzing the model shortcomings, or presenting the uncertainty to the end-users. [https://wttech.blog/blog/2021/a-guide-to-model-calibration/ A guide to model calibration | Wunderman Thompson Technology].
* Another example where calibration is more important than prediction on new samples is in the field of medical diagnosis. In this case, it is important to have well-calibrated probabilities for the presence of a disease, so that doctors can make informed decisions about treatment. For example, if a diagnostic test predicts an 80% chance that a patient has a certain disease, doctors would expect that 80% of the time when such a prediction is made, the patient actually has the disease. This example does not mean that prediction on new samples is not feasible or not a concern, but rather that having well-calibrated probabilities is crucial for making accurate predictions and informed decisions.
* [https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-019-1466-7 Calibration: the Achilles heel of predictive analytics] Calster 2019
* https://www.itl.nist.gov/div898/handbook/pmd/section1/pmd133.htm Calibration and '''calibration curve'''.
** Y=voltage (''observed''), X=temperature (''true/ideal''). The calibration curve for a thermocouple is often constructed by comparing thermocouple ''(observed) output'' to relatively ''(true) precise'' thermometer data.
** When a new temperature is measured with the thermocouple, the voltage is converted to temperature terms by plugging the observed voltage into the regression equation and solving for temperature.
** It is important to note that the thermocouple measurements, made on the ''secondary measurement scale'', are treated as the response variable and the more precise thermometer results, on the ''primary scale'', are treated as the predictor variable because this best satisfies the '''underlying assumptions''' (Y=observed, X=true) of the analysis.
** '''Calibration interval''': in almost all calibration applications the ultimate quantity of interest is the true value of the primary-scale measurement method associated with a measurement made on the secondary scale.
** It seems the x-axis and y-axis have similar ranges in many applications.
* An Exercise in the Real World of Design and Analysis, Denby, Landwehr, and Mallows 2001. Inverse regression
* [https://stats.stackexchange.com/questions/43053/how-to-determine-calibration-accuracy-uncertainty-of-a-linear-regression How to determine calibration accuracy/uncertainty of a linear regression?]
* [https://chem.libretexts.org/Textbook_Maps/Analytical_Chemistry/Book%3A_Analytical_Chemistry_2.0_(Harvey)/05_Standardizing_Analytical_Methods/5.4%3A_Linear_Regression_and_Calibration_Curves Linear Regression and Calibration Curves]
* [https://www.webdepot.umontreal.ca/Usagers/sauves/MonDepotPublic/CHM%203103/LCGC%20Eur%20Burke%202001%20-%202%20de%204.pdf Regression and calibration] Shaun Burke
* [https://cran.r-project.org/web/packages/calibrate calibrate] package
* [https://cran.r-project.org/web/packages/investr/index.html investr]: An R Package for Inverse Estimation. [https://journal.r-project.org/archive/2014-1/greenwell-kabban.pdf Paper]
* [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-018-0029-2 The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models] by Kattan and Gerds 2018. The following code demonstrates Figure 2. <syntaxhighlight lang='rsplus'>
# Odds ratio = 1 and calibrated model
set.seed(666)
x = rnorm(1000)
z1 = 1 + 0*x
pr1 = 1/(1+exp(-z1))
y1 = rbinom(1000, 1, pr1)
mean(y1) # .724, marginal prevalence of the outcome
dat1 <- data.frame(x=x, y=y1)
newdat1 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr1))

# Odds ratio = 1 and severely miscalibrated model
set.seed(666)
x = rnorm(1000)
z2 = -2 + 0*x
pr2 = 1/(1+exp(-z2))
y2 = rbinom(1000, 1, pr2)
mean(y2) # .12
dat2 <- data.frame(x=x, y=y2)
newdat2 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr2))

library(riskRegression)
lrfit1 <- glm(y ~ x, data = dat1, family = 'binomial')
IPA(lrfit1, newdata = newdat1)
#     Variable     Brier           IPA     IPA.gain
# 1 Null model 0.1984710  0.000000e+00 -0.003160010
# 2 Full model 0.1990982 -3.160010e-03  0.000000000
# 3          x 0.1984800 -4.534668e-05 -0.003114664
1 - 0.1990982/0.1984710
# [1] -0.003160159

lrfit2 <- glm(y ~ x, data = dat2, family = 'binomial') # fit on the miscalibrated data
IPA(lrfit2, newdata = newdat1)
#     Variable     Brier       IPA     IPA.gain
# 1 Null model 0.1984710  0.000000 -1.859333763
# 2 Full model 0.5674948 -1.859334  0.000000000
# 3          x 0.5669200 -1.856437 -0.002896299
1 - 0.5674948/0.1984710
# [1] -1.859334
</syntaxhighlight> From the simulated data, we see IPA = -3.16e-3 for a calibrated model and IPA = -1.86 for a severely miscalibrated model.


= [https://en.wikipedia.org/wiki/Net_reclassification_improvement NRI] (Net reclassification improvement) =
= Maximum likelihood =
[http://stats.stackexchange.com/questions/622/what-is-the-difference-between-a-partial-likelihood-profile-likelihood-and-marg Difference of partial likelihood, profile likelihood and marginal likelihood]
== EM Algorithm ==
* https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
* [https://stephens999.github.io/fiveMinuteStats/intro_to_em.html Introduction to EM: Gaussian Mixture Models]
== Mixture model ==
[https://cran.r-project.org/web/packages/mixComp/ mixComp]: Estimation of the Order of Mixture Distributions
== MLE ==
[https://cimentadaj.github.io/blog/2020-11-26-maximum-likelihood-distilled/maximum-likelihood-distilled/ Maximum Likelihood Distilled]
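A minimal numeric-MLE sketch to accompany the link above (normal model, simulated data; the log-scale parameterization of the standard deviation is just a convenience to keep the optimizer unconstrained):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- rnorm(100, mean = 2, sd = 3)
negloglik <- function(par) -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))
fit <- optim(c(0, 0), negloglik)
c(mu = fit$par[1], sigma = exp(fit$par[2]))
# close to mean(x) and the (biased, divide-by-n) sample SD, the analytic MLEs
</syntaxhighlight>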
== Efficiency of an estimator ==
[https://stats.stackexchange.com/a/350362 What does it mean by more “efficient” estimator]
== Inference ==
[https://www.tidyverse.org/blog/2021/08/infer-1-0-0/ infer] package


= Generalized Linear Model =
* Lectures from a course in [http://people.stat.sfu.ca/~raltman/stat851.html Simon Fraser University Statistics].
 
* [https://myweb.uiowa.edu/pbreheny/uk/teaching/760-s13/index.html Advanced Regression] from Patrick Breheny.
* [https://petolau.github.io/Analyzing-double-seasonal-time-series-with-GAM-in-R/ Doing magic and analyzing seasonal time series with GAM (Generalized Additive Model) in R]


== Link function ==
[http://www.win-vector.com/blog/2019/07/link-functions-versus-data-transforms/ Link Functions versus Data Transforms]
== Extract coefficients, z, p-values ==
Use '''coef(summary(glmObject))'''
<pre>
> coef(summary(glm.D93))
                Estimate Std. Error      z value    Pr(>|z|)
(Intercept)  3.044522e+00  0.1708987  1.781478e+01 5.426767e-71
outcome2    -4.542553e-01  0.2021708 -2.246889e+00 2.464711e-02
outcome3    -2.929871e-01  0.1927423 -1.520097e+00 1.284865e-01
treatment2  1.337909e-15  0.2000000  6.689547e-15 1.000000e+00
treatment3  1.421085e-15  0.2000000  7.105427e-15 1.000000e+00
</pre>


== Quasi Likelihood ==
* [http://courses.washington.edu/b571/lectures/notes131-181.pdf U. Washington] and  [http://faculty.washington.edu/heagerty/Courses/b571/handouts/OverdispQL.pdf another lecture] focuses on overdispersion.
* [http://www.maths.usyd.edu.au/u/jchan/GLM/QuasiLikelihood.pdf This lecture] contains a table of quasi likelihood from common distributions.
== IRLS ==
* [https://statisticaloddsandends.wordpress.com/2020/05/14/glmnet-v4-0-generalizing-the-family-parameter/ glmnet v4.0: generalizing the family parameter]
* [https://bwlewis.github.io/GLM/ Generalized linear models, abridged] (include algorithm and code)
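A bare-bones IRLS sketch for logistic regression, to make the algorithm in the links above concrete (educational only; '''glm()''' is the production tool):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n <- 500
X <- cbind(1, rnorm(n))                    # design matrix with intercept
y <- rbinom(n, 1, plogis(X %*% c(-1, 2)))
beta <- rep(0, ncol(X))
for (i in 1:25) {                          # iteratively reweighted least squares
  eta <- drop(X %*% beta)
  mu  <- plogis(eta)
  w   <- mu * (1 - mu)                     # working weights
  z   <- eta + (y - mu) / w                # working response
  beta <- solve(crossprod(X, w * X), crossprod(X, w * z))
}
cbind(IRLS = drop(beta), glm = coef(glm(y ~ X[, 2], family = binomial)))
# the two columns agree
</syntaxhighlight>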


== Plot ==
== [https://en.wikipedia.org/wiki/Deviance_(statistics) Deviance], stats::deviance() and glmnet::deviance.glmnet() from R ==
* '''It is a generalization of the idea of using the sum of squares of residuals (RSS) in ordinary least squares''' to cases where model-fitting is achieved by maximum likelihood. See [https://stats.stackexchange.com/questions/6581/what-is-deviance-specifically-in-cart-rpart What is Deviance? (specifically in CART/rpart)] to manually compute deviance and compare it with the returned value of the '''deviance()''' function from a linear regression. Summary: deviance() = RSS in linear models.
* [https://www.datascienceblog.net/post/machine-learning/interpreting_generalized_linear_models/ Interpreting Generalized Linear Models]
* [https://statisticaloddsandends.wordpress.com/2019/03/27/what-is-deviance/ What is deviance?] You can think of the deviance of a model as twice the negative log likelihood plus a constant.
* https://www.rdocumentation.org/packages/stats/versions/3.4.3/topics/deviance
* Likelihood ratio tests and the deviance http://data.princeton.edu/wws509/notes/a2.pdf#page=6  
* Deviance(y,muhat) = 2*(loglik_saturated - loglik_proposed)
* [http://r.qcbs.ca/workshop06/book-en/binomial-glm.html Binomial GLM] and the [https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/ls objects()] function that seems to be the same as str(, max=1).
* [https://stats.stackexchange.com/questions/108995/interpreting-residual-and-null-deviance-in-glm-r Interpreting Residual and Null Deviance in GLM R]
** Null Deviance = 2(LL(Saturated Model) - LL(Null Model)) on df = df_Sat - df_Null. The '''null deviance''' shows how well the response variable is predicted by a model that includes only the intercept (grand mean).  
* The saturated model always has n parameters where n is the sample size.
* [https://stats.stackexchange.com/questions/114073/logistic-regression-how-to-obtain-a-saturated-model Logistic Regression : How to obtain a saturated model]
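A one-liner check of the "deviance() = RSS in linear models" point above, using a built-in data set:
<syntaxhighlight lang='rsplus'>
fit <- lm(mpg ~ wt, data = mtcars)
c(deviance = deviance(fit), RSS = sum(residuals(fit)^2))
# the two numbers agree
</syntaxhighlight>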
== Testing ==
* [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12369?campaign=wolearlyview Robust testing in generalized linear models by sign flipping score contributions]
* [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12371?campaign=wolearlyview Goodness‐of‐fit testing in high dimensional generalized linear models]
== Generalized Additive Models ==
* [https://www.seascapemodels.org/rstats/2021/03/27/common-GAM-problems.html How to solve common problems with GAMs]
* [https://www.mzes.uni-mannheim.de/socialsciencedatalab/article/gam/ Generalized Additive Models: Allowing for some wiggle room in your models]
* [https://www.rdatagen.net/post/2022-08-09-simulating-data-from-a-non-linear-function-by-specifying-some-points-on-the-curve/ Simulating data from a non-linear function by specifying a handful of points]
* [https://www.rdatagen.net/post/2022-11-01-modeling-secular-trend-in-crt-using-gam/ Modeling the secular trend in a cluster randomized trial using very flexible models]
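A minimal sketch in the spirit of the posts above, assuming the '''mgcv''' package and a simulated nonlinear signal:
<syntaxhighlight lang='rsplus'>
library(mgcv)

set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = .3)
fit <- gam(y ~ s(x))         # smooth term; basis dimension chosen automatically
summary(fit)
plot(fit, residuals = TRUE)  # estimated smooth with partial residuals
</syntaxhighlight>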


= Simulate data =
* [https://rviews.rstudio.com/2020/09/09/fake-data-with-r/ Fake Data with R]
* Understanding statistics through programming: [https://twitter.com/domliebl/status/1469347307267182601?s=20 You don’t really understand a stochastic process until you know how to simulate it] - D.G. Kendall.
== Density plot ==
== Density plot ==
{{Pre}}
# plot a Weibull distribution with shape and scale
func <- function(x) dweibull(x, shape = 1, scale = 3.38)
curve(func, .1, 10)

func <- function(x) dweibull(x, shape = 1.1, scale = 3.38)
curve(func, .1, 10)
</pre>


The shape parameter plays a role on the shape of the density function and the failure rate.
* http://stackoverflow.com/questions/16134786/simulate-data-from-non-standard-density-function


=== Permuted block randomization ===
[https://www.rdatagen.net/post/permuted-block-randomization-using-simstudy/ Permuted block randomization using simstudy]
 
== Correlated data ==
<ul>
<li> [https://predictivehacks.com/how-to-generate-correlated-data-in-r/ How To Generate Correlated Data In R]
<li> [https://www.r-bloggers.com/2023/02/flexible-correlation-generation-an-update-to-gencormat-in-simstudy/ Flexible correlation generation: an update to genCorMat in simstudy]
<li> [https://en.wikipedia.org/wiki/Cholesky_decomposition#Monte_Carlo_simulation Cholesky decomposition]
<pre>
set.seed(1)
n <- 1000
R <- matrix(c(1, 0.75, 0.75, 1), nrow=2)
M <- matrix(rnorm(2 * n), ncol=2)
M <- M %*% chol(R) # chol(R) is an upper triangular matrix
x <- M[, 1]  # First correlated vector
y <- M[, 2]
cor(x, y)
# 0.7502607
</pre>
</ul>
 
== Clustered data with marginal correlations ==
[https://www.rdatagen.net/post/2022-11-22-generating-cluster-data-with-marginal-correlations/ Generating clustered data with marginal correlations]
 
== Signal to noise ratio/SNR ==
* https://en.wikipedia.org/wiki/Signal-to-noise_ratio
* https://stats.stackexchange.com/questions/31158/how-to-simulate-signal-noise-ratio
: <math>SNR = \frac{\sigma^2_{signal}}{\sigma^2_{noise}} = \frac{Var(f(X))}{Var(e)} </math> if Y = f(X) + e
* The SNR is related to the correlation of Y and f(X). Assume X and e are independent (<math>X \perp e </math>):
: <math>
\begin{align}
Cor(Y, f(X)) &= Cor(f(X)+e, f(X)) \\
          &= \frac{Cov(f(X)+e, f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\
          &= \frac{Var(f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\
          &= \frac{\sqrt{Var(f(X))}}{\sqrt{Var(f(X)) + Var(e))}} = \frac{\sqrt{SNR}}{\sqrt{SNR + 1}} \\
          &= \frac{1}{\sqrt{1 + Var(e)/Var(f(X))}} = \frac{1}{\sqrt{1 + SNR^{-1}}}
\end{align}
</math> [[File:SnrVScor.png|200px]]
: Or <math>SNR = \frac{Cor^2}{1-Cor^2} </math>
* Page 401 of ESLII (https://web.stanford.edu/~hastie/ElemStatLearn//) 12th print.


* Yuan and Lin 2006: 1.8, 3
* [https://academic.oup.com/biostatistics/article/19/3/263/4093306#123138354 A framework for estimating and testing qualitative interactions with applications to predictive biomarkers] Roth, Biostatistics, 2018
* [https://stackoverflow.com/a/47232502 Matlab: computing signal to noise ratio (SNR) of two highly correlated time domain signals]
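A simulation sketch checking the SNR/correlation identity derived above (linear signal, normal noise; the variances are chosen so that SNR = 4):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n  <- 1e5
fx <- rnorm(n, sd = 2)   # Var(f(X)) = 4
e  <- rnorm(n, sd = 1)   # Var(e)    = 1, so SNR = 4
y  <- fx + e
c(empirical = cor(y, fx), theory = sqrt(4 / (4 + 1)))
# both are about 0.894
</syntaxhighlight>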


== Effect size, Cohen's d and volcano plot ==
: <math>\theta = \frac{\mu_1 - \mu_2} \sigma,</math>


* [https://learningstatisticswithr.com/book/hypothesistesting.html#effectsize Effect size, sample size and power] from ebook '''[https://learningstatisticswithr.com/book/ Learning statistics with R]''': A tutorial for psychology students and other beginners.
* [https://en.wikipedia.org/wiki/Effect_size#t-test_for_mean_difference_between_two_independent_groups t-statistic and Cohen's d] for the case of mean difference between two independent groups
* [http://www.win-vector.com/blog/2019/06/cohens-d-for-experimental-planning/ Cohen’s D for Experimental Planning]
** Y-axis: -log(p)
** X-axis: log2 fold change OR effect size (Cohen's D). [https://twitter.com/biobenkj/status/1072141825568329728 An example] from RNA-Seq data.
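A small sketch relating Cohen's d to the two-sample t-statistic (equal group sizes, simulated data; for equal n the identity is d = t * sqrt(2/n)):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n  <- 50
x1 <- rnorm(n, 1); x2 <- rnorm(n, 0)
sp <- sqrt(((n - 1) * var(x1) + (n - 1) * var(x2)) / (2 * n - 2))  # pooled SD
d  <- (mean(x1) - mean(x2)) / sp
tt <- unname(t.test(x1, x2, var.equal = TRUE)$statistic)
c(cohens_d = d, from_t = tt * sqrt(2 / n))   # the two agree
</syntaxhighlight>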
== Treatment/control ==
* [https://github.com/cran/biospear/blob/master/R/simdata.R simdata()] from [https://cran.r-project.org/web/packages/biospear/index.html biospear] package
* [https://github.com/cran/ROCSI/blob/master/R/ROCSI.R#L598 data.gen()] from [https://cran.r-project.org/web//packages/ROCSI/index.html ROCSI] package. The response contains continuous, binary and survival outcomes. The inputs include the prevalence of predictive biomarkers, the effect size (beta) for prognostic biomarkers, etc.
== Cauchy distribution has no expectation ==
https://en.wikipedia.org/wiki/Cauchy_distribution
<pre>
replicate(10, mean(rcauchy(10000)))
</pre>
== Dirichlet distribution ==
* [https://en.wikipedia.org/wiki/Dirichlet_distribution Dirichlet distribution]
** It is a multivariate generalization of the '''beta''' distribution
** The Dirichlet distribution is the conjugate prior of the categorical distribution and '''multinomial distribution'''.
* [https://cran.r-project.org/web/packages/dirmult/ dirmult]::rdirichlet()
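A two-line illustration (assuming the '''dirmult''' package) that each Dirichlet draw is a probability vector summing to 1:
<syntaxhighlight lang='rsplus'>
library(dirmult)
x <- rdirichlet(n = 5, alpha = c(1, 2, 3))  # 5 draws from Dirichlet(1, 2, 3)
rowSums(x)                                  # every draw sums to 1
</syntaxhighlight>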
== Relationships among probability distributions ==
https://en.wikipedia.org/wiki/Relationships_among_probability_distributions
== What is the probability that two persons have the same initials ==
[https://www.r-bloggers.com/2023/12/what-is-the-probability-that-two-persons-have-the-same-initials/ The post]. The probability that at least two persons have the same initials depends on the size of the group. For a team of 8 people, simulations suggest that the probability is close to 4.1%. This probability increases with the size of the group. If there are 1000 people in the room, [https://www.numerade.com/ask/question/whats-the-probability-that-someone-else-in-a-room-full-of-people-has-the-exact-same-3-initials-in-their-name-thats-in-another-persons-name-a-038-b-333-c-0057-d-0064/ the probability is almost 100%]. [https://math.stackexchange.com/a/606272 How many people do you need to guarantee that two of them have the same initials?]


= Multiple comparisons =
* [http://varianceexplained.org/statistics/interpreting-pvalue-histogram/ Plot a histogram of p-values], a post from varianceexplained.org. The anti-conservative histogram (tail on the RHS) is what we have typically seen in e.g. microarray gene expression data.
* [http://statistic-on-air.blogspot.com/2015/01/adjustment-for-multiple-comparison.html Comparison of different ways of multiple-comparison] in R.
* [https://peerj.com/articles/10387/ Comparing multiple comparisons: practical guidance for choosing the best multiple comparisons test] Midway 2020


Take an example, Suppose 550 out of 10,000 genes are significant at .05 level
According to [https://www.cancer.org/cancer/cancer-basics/lifetime-probability-of-developing-or-dying-from-cancer.html Lifetime Risk of Developing or Dying From Cancer], there is a 39.7% risk of developing a cancer for male during his lifetime (in other words, 1 out of every 2.52 men in US will develop some kind of cancer during his lifetime) and 37.6% for female. So the probability of getting at least one cancer patient in a 3-generation family is 1-.6**3 - .63**3 = 0.95.


== Flexible method ==
[https://rdrr.io/bioc/GSEABenchmarkeR/man/runDE.html ?GSEABenchmarkeR::runDE]. Unadjusted (too few DE genes), FDR, and Bonferroni (too many DE genes) are applied depending on the proportion of DE genes.
 
== Family-Wise Error Rate (FWER) ==
* https://en.wikipedia.org/wiki/Family-wise_error_rate
* [https://www.statology.org/family-wise-error-rate/ How to Estimate the Family-wise Error Rate]
* [https://rviews.rstudio.com/2019/10/02/multiple-hypothesis-testing/ Multiple Hypothesis Testing in R]
 
== Bonferroni ==
* https://en.wikipedia.org/wiki/Bonferroni_correction
* This correction method is the most conservative of all; due to its strict filtering it potentially increases the false negative rate, i.e., true positives may be missed.
 
== False Discovery Rate/FDR ==
* https://en.wikipedia.org/wiki/False_discovery_rate
* Paper [http://www.stat.purdue.edu/~doerge/BIOINFORM.D/FALL06/Benjamini%20and%20Y%20FDR.pdf Definition] by Benjamini and Hochberg in JRSS B 1995.
* [https://youtu.be/K8LQSvtjcEo False Discovery Rates, FDR, clearly explained] by StatQuest
* A [http://xkcd.com/882/ comic]
* [http://www.nonlinear.com/support/progenesis/comet/faq/v2.0/pq-values.aspx A p-value of 0.05 implies that 5% of all tests will result in false positives. An FDR adjusted p-value (or q-value) of 0.05 implies that 5% of significant tests will result in false positives. The latter will result in fewer false positives].
* [https://stats.stackexchange.com/a/456087 How to interpret False Discovery Rate?]
* P-value vs false discovery rate vs family wise error rate. See [http://jtleek.com/talks 10 statistics tip] or [http://www.biostat.jhsph.edu/~jleek/teaching/2011/genomics/mt140688.pdf#page=14 Statistics for Genomics (140.688)] from Jeff Leek. Suppose 550 out of 10,000 genes are significant at .05 level
** P-value < .05 implies expecting .05*10000 = 500 false positives (if we consider 50 hallmark genesets, 50*.05=2.5)
** False discovery rate < .05 implies expecting .05*550 = 27.5 false positives
** Family wise error rate (P (# of false positives ≥ 1)) < .05. See [https://riffyn.com/riffyn-blog/2017/10/29/family-wise-error-rate Understanding Family-Wise Error Rate]
* [https://www.biorxiv.org/content/early/2018/10/31/458786 A practical guide to methods controlling false discoveries in computational biology] by Korthauer, et al 2018, [https://rdcu.be/bFEt2 BMC Genome Biology] 2019
* [https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btz191/5380770 onlineFDR]: an R package to control the false discovery rate for growing data repositories
* [https://academic.oup.com/biostatistics/article/15/1/1/244509#2869827 An estimate of the science-wise false discovery rate and application to the top medical literature] Jager & Leek 2021
* The adjusted p-value (also known as the False Discovery Rate or FDR) and the raw p-value can be close under certain conditions. [https://stats.stackexchange.com/a/51159 study on multiple outcomes- do I adjust or not adjust p-values?]
** '''The number of tests is small''': When performing multiple hypothesis tests, the adjustment for multiple comparisons (like Bonferroni or Benjamini-Hochberg procedures) can have a smaller impact if the number of tests is small. This is because these adjustments are less stringent when fewer tests are conducted.
** '''The p-values are very small''': If the raw p-values are very small to begin with, then even after adjustment, they may still remain small.  This is especially true for methods that control the FDR, like the Benjamini-Hochberg procedure, which tend to be less conservative than methods controlling the Family-Wise Error Rate (FWER), like the Bonferroni correction.
** '''The tests are not independent''': Some p-value adjustment methods assume that the tests are independent. If this assumption is violated, the adjusted p-values may not be accurate.
* [https://predictivehacks.com/the-benjamini-hochberg-procedure-fdr-and-p-value-adjusted-explained/ The Benjamini-Hochberg Procedure (FDR) And P-Value Adjusted Explained]


Suppose <math>p_1 \leq p_2 \leq ... \leq p_n</math>. Then the Benjamini-Hochberg adjusted p-values are <math>\tilde{p}_{(i)} = \min_{j \geq i} \left( \min(n \, p_{(j)}/j,\, 1) \right)</math>; this is what R's '''p.adjust(p, method="BH")''' computes.
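A quick check of this formula against R's built-in p.adjust() on toy p-values:
<syntaxhighlight lang='rsplus'>
set.seed(1)
p <- sort(runif(10)^2)                                # toy p-values, ascending
n <- length(p)
manual <- rev(cummin(rev(pmin(n / seq_len(n) * p, 1))))  # min over j >= i
all.equal(manual, p.adjust(p, method = "BH"))
# [1] TRUE
</syntaxhighlight>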
Below is the histograms of p-values and FDR (BH adjusted) from a real data (Pomeroy in BRB-ArrayTools).


[[:File:Hist bh.svg]]


And the next is a scatterplot w/ histograms on the margins from a null data. The curve looks like f(x)=log(x).


[[:File:Scatterhist.svg]]


== q-value ==
== q-value ==
* https://en.wikipedia.org/wiki/Q-value_(statistics)
* [https://divingintogeneticsandgenomics.rbind.io/post/understanding-p-value-multiple-comparisons-fdr-and-q-value/ Understanding p value, multiple comparisons, FDR and q value]
q-value is defined as the minimum FDR that can be attained when calling that '''feature''' significant (i.e., expected proportion of false positives incurred when calling that feature significant).


If gene X has a q-value of 0.013 it means that 1.3% of genes that show p-values at least as small as gene X are false positives.
Another view: q-value = FDR adjusted p-value. A p-value of 5% means that 5% of all tests will result in false positives. A q-value of 5% means that 5% of significant results will result in false positives. [https://www.statisticshowto.datasciencecentral.com/q-value/ here].
== Double dipping ==
[[Heatmap#Double_dipping|Double dipping]]


== SAM/Significance Analysis of Microarrays ==
== SAM/Significance Analysis of Microarrays ==
Line 818: Line 1,046:


In BRCA dataset, using the 90-th percentile will get 29 genes vs 183 genes if we use median.
In BRCA dataset, using the 90-th percentile will get 29 genes vs 183 genes if we use median.
== Required number of permutations for a permutation-based p-value ==
* [https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests Permutation tests]
* https://stats.stackexchange.com/a/80879
* Multinomial coefficient. [https://www.rdocumentation.org/packages/iterpc/versions/0.4.2/topics/multichoose multichoose()]
<syntaxhighlight lang='r'>
library("iterpc")
multichoose(c(3,1,1)) # [1] 20
multichoose(c(10,10)) |> log10()  # [1] 5.266599
multichoose(c(100,100), bigz = T) |> log10() # [1] 58.95688
multichoose(c(100,100,100), bigz = T) |> log10() # [1] 140.5758
</syntaxhighlight>


== Multivariate permutation test ==
In BRCA dataset, using 80% confidence gives 116 genes vs 237 genes if we use 50% confidence (assuming maximum proportion of false discoveries is 10%). The method is published on [http://www.sciencedirect.com/science/article/pii/S0378375803002118 EL Korn, JF Troendle, LM McShane and R Simon, ''Controlling the number of false discoveries: Application to high dimensional genomic data'', Journal of Statistical Planning and Inference, vol 124, 379-398 (2004)].
== The role of the p-value in the multitesting problem ==
https://www.tandfonline.com/doi/full/10.1080/02664763.2019.1682128


== String Permutations Algorithm ==
https://youtu.be/nYFd7VHKyWQ
== combinat package ==
[https://predictivehacks.com/permutations-in-r/ Find all Permutations]
== [https://cran.r-project.org/web/packages/coin/index.html coin] package: Resampling ==
[https://www.statmethods.net/stats/resampling.html Resampling Statistics]


== Empirical Bayes Normal Means Problem with Correlated Noise ==
== Empirical Bayes Normal Means Problem with Correlated Noise ==


== Offset in Poisson regression ==
An example from [http://rfunction.com/archives/223 here]
{{Pre}}
Y  <- c(15,  7, 36,  4, 16, 12, 41, 15)
N  <- c(4949, 3534, 12210, 344, 6178, 4883, 11256, 7125)
# Null Deviance:     10.56
# Residual Deviance: 8.001 AIC: 48.13
</pre>


== Offset in Cox regression ==
An example from [https://github.com/cran/biospear/blob/master/R/PCAlasso.R biospear::PCAlasso()]
{{Pre}}
coxph(Surv(time, status) ~ offset(off.All), data = data)
# Call:  coxph(formula = Surv(time, status) ~ offset(off.All), data = data)
coxph(Surv(time, status) ~ off.All, data = data)$loglik
# [1] -2391.702 -2391.430    # initial coef estimate, final coef
</pre>


== Offset in linear regression ==
== Test of overdispersion or underdispersion in Poisson models ==
https://stats.stackexchange.com/questions/66586/is-there-a-test-to-determine-whether-glm-overdispersion-is-significant
== Poisson ==
* https://en.wikipedia.org/wiki/Poisson_distribution
* [https://www.tandfonline.com/doi/abs/10.1080/00031305.2022.2046159 The “Poisson” Distribution: History, Reenactments, Adaptations]
* [https://www.zeileis.org/news/poisson/ The Poisson distribution: From basic probability theory to regression models]
* [https://www.dataquest.io/blog/tutorial-poisson-regression-in-r/ Tutorial:  Poisson Regression in R]
* We can use a '''quasipoisson''' model, which allows the variance to be proportional rather than equal to the mean: glm(..., family = "quasipoisson"); see the sketch after this list.
** [https://sscc.wisc.edu/sscc/pubs/glm-r/ Generalized Linear Models in R] from sscc.wisc.
** See the R code in the supplement of the paper [https://academic.oup.com/ije/article/46/1/348/2622842 Interrupted time series regression for the evaluation of public health interventions: a tutorial] 2016
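A minimal sketch (mine, simulated data): the quasipoisson fit keeps the Poisson coefficients but inflates the standard errors by the estimated dispersion.
{{Pre}}
set.seed(1)
x    <- rnorm(200)
y    <- rnbinom(200, mu = exp(1 + 0.5 * x), size = 1)  # overdispersed counts
fit  <- glm(y ~ x, family = poisson)
qfit <- glm(y ~ x, family = quasipoisson)
cbind(poisson      = coef(summary(fit))[, 2],
      quasipoisson = coef(summary(qfit))[, 2])          # same betas, larger SEs
summary(qfit)$dispersion                                # estimated phi
</pre>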


== Negative Binomial ==
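A minimal sketch (mine, simulated data): negative binomial regression via MASS::glm.nb(), which also estimates the dispersion parameter theta.
{{Pre}}
library(MASS)
set.seed(1)
x   <- rnorm(200)
y   <- rnbinom(200, mu = exp(1 + 0.5 * x), size = 2)
fit <- glm.nb(y ~ x)
coef(fit)
fit$theta     # estimated dispersion (true value 2)
</pre>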
== Binomial ==
* [https://www.rdatagen.net/post/overdispersed-binomial-data/ Generating and modeling over-dispersed binomial data]
* [https://aosmith.rbind.io/2020/08/20/simulate-binomial-glmm/ Simulate! Simulate! - Part 4: A binomial generalized linear mixed model]
* [https://cran.r-project.org/web/packages/simstudy/index.html simstudy] package. The final data sets can represent data from '''randomized control trials''', '''repeated measure (longitudinal) designs''', and cluster randomized trials. Missingness can be generated using various mechanisms (MCAR, MAR, NMAR). [https://www.rdatagen.net/post/analyzing-a-binary-outcome-in-a-study-with-within-cluster-pair-matched-randomization/ Analyzing a binary outcome arising out of within-cluster, pair-matched randomization]. [https://www.rdatagen.net/post/generating-probabilities-for-ordinal-categorical-data/ Generating probabilities for ordinal categorical data].
** [https://www.rdatagen.net/post/2020-12-22-constrained-randomization-to-evaulate-the-vaccine-rollout-in-nursing-homes/ Constrained randomization to evaulate the vaccine rollout in nursing homes]
** [https://www.rdatagen.net/post/2021-01-05-coming-soon-new-feature-to-easily-generate-cumulative-odds-without-proportionality-assumption/ Coming soon: effortlessly generate ordinal data without assuming proportional odds]
** [https://www.rdatagen.net/post/2021-03-02-randomization-tests/ Randomization tests]
* [https://www.tandfonline.com/doi/full/10.1080/00031305.2024.2350445 Binomial Confidence Intervals for Rare Events: Importance of Defining Margin of Error Relative to Magnitude of Proportion]. Wald, Clopper-Pearson (exact), Wilson and Agresti-Coull.


= Count data =
[https://amstat.tandfonline.com/doi/full/10.1080/00031305.2018.1564699 Bias in Small-Sample Inference With Count-Data Models] Blackburn 2019


= Survival data analysis =
See also the dedicated page: [[Survival_data|Survival data analysis]].
* [https://www.mayo.edu/research/documents/tr53pdf/DOC-10027379 A Package for Survival Analysis in S] by Terry M. Therneau, 1999
* https://web.stanford.edu/~lutian/coursepdf/stat331.HTML and https://web.stanford.edu/~lutian/coursepdf/ ([https://web.stanford.edu/~lutian/coursepdf/survweek5.pdf#page=7 3 types of tests]).
* http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf.
** How to manually compute the KM curve and by R
** Estimation of parametric survival function from joint likelihood in theory and R.
* http://data.princeton.edu/wws509/notes/c7s1.html
* http://data.princeton.edu/pop509/ParametricSurvival.pdf Parametric survival models with covariates (logT = alpha + sigma W) p8
** Weibull p2 where T ~ Weibull and W ~ Extreme value.
** Gamma p3 where T ~ Gamma and W ~ Generalized extreme value
** Generalized gamma p4,
** log normal p4 where T ~ lognormal and W ~ N(0,1)
** log logistic p4 where T ~ log logistic and W ~ standard logistic distribution.
* http://www.math.ucsd.edu/~rxu/math284/ (good cover) [http://www.math.ucsd.edu/~rxu/math284/review_lik.pdf#page=8 Wald test]
* http://www.stats.ox.ac.uk/~mlunn/
* https://www.openintro.org/download.php?file=survival_analysis_in_R&referrer=/stat/surv.php
* https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1065034/
* [https://rviews.rstudio.com/2017/09/25/survival-analysis-with-r/ Survival Analysis with R] from rviews.rstudio.com
* [http://bioconnector.org/workshops/r-survival.html#survival_analysis_in_r Survival Analysis with R] from bioconnector.org.


== [https://en.wikipedia.org/wiki/Censoring_(statistics) Censoring] ==
[http://stat.wvu.edu/~rmnatsak/Note3_547.pdf Sample schemes of incomplete data]
* Type I censoring: the censoring time is fixed
* Type II censoring
* Random censoring
** Right censoring
** Left censoring
* Interval censoring
* Truncation


The most common is called '''right censoring''' and occurs when a participant does not have the event of interest during the study and thus their last observed follow-up time is less than their time to event. This can occur when a participant drops out before the study ends or when a participant is '''event free''' at the end of the observation period.


[https://en.wikipedia.org/wiki/Survival_analysis#Definitions_of_common_terms_in_survival_analysis Definitions of common terms in survival analysis]


* '''Event''': Death, disease occurrence, disease recurrence, recovery, or other experience of interest
* '''Time''': The time from the beginning of an observation period (such as surgery or beginning treatment) to (i) an event, (ii) the end of the study, or (iii) loss of contact or withdrawal from the study.
* '''Censoring / Censored observation''': If a subject does not have an event during the observation time, they are described as censored. The subject is censored in the sense that nothing is observed or known about that subject after the time of censoring. A censored subject may or may not have an event after the end of observation time.

In R, "status" should be called '''event status''': status = 1 means the event occurred; status = 0 means no event (censored). Sometimes the status variable has more than 2 states, in which case we can use "status != 0" in place of "status" in the Surv() function.
* status=0/1/2 for censored, transplant and dead in survival::pbc data.
* status=0/1/2 for censored, relapse and dead in randomForestSRC::follic data.


== How to explore survival data ==
https://en.wikipedia.org/wiki/Survival_analysis#Survival_analysis_in_R

* Create a graph of the length of time that each subject was in the study
<syntaxhighlight lang='rsplus'>
library(survival)
# sort the aml data by time
aml <- aml[order(aml$time),]
with(aml, plot(time, type="h"))
</syntaxhighlight>
[[File:Aml time.svg|px=100]]
* Create the life table survival object
<syntaxhighlight lang='rsplus'>
aml.survfit <- survfit(Surv(time, status == 1) ~ 1, data = aml)
summary(aml.survfit)
Call: survfit(formula = Surv(time, status == 1) ~ 1, data = aml)

 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5    23      2  0.9130  0.0588      0.8049        1.000
    8    21      2  0.8261  0.0790      0.6848        0.996
    9    19      1  0.7826  0.0860      0.6310        0.971
  12    18      1  0.7391  0.0916      0.5798        0.942
  13    17      1  0.6957  0.0959      0.5309        0.912
  18    14      1  0.6460  0.1011      0.4753        0.878
  23    13      2  0.5466  0.1073      0.3721        0.803
  27    11      1  0.4969  0.1084      0.3240        0.762
  30      9      1  0.4417  0.1095      0.2717        0.718
  31      8      1  0.3865  0.1089      0.2225        0.671
  33      7      1  0.3313  0.1064      0.1765        0.622
  34      6      1  0.2761  0.1020      0.1338        0.569
  43      5      1  0.2208  0.0954      0.0947        0.515
  45      4      1  0.1656  0.0860      0.0598        0.458
  48      2      1  0.0828  0.0727      0.0148        0.462
</syntaxhighlight>
* Kaplan-Meier curve for aml with the confidence bounds.
<syntaxhighlight lang='rsplus'>
plot(aml.survfit, xlab = "Time", ylab="Proportion surviving")
</syntaxhighlight>
* Create aml life tables broken out by treatment (x,  "Maintained" vs. "Not maintained")
<syntaxhighlight lang='rsplus'>
surv.by.aml.rx <- survfit(Surv(time, status == 1) ~ x, data = aml)


summary(surv.by.aml.rx)
Call: survfit(formula = Surv(time, status == 1) ~ x, data = aml)

                x=Maintained
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    9    11      1    0.909  0.0867      0.7541        1.000
  13    10      1    0.818  0.1163      0.6192        1.000
  18      8      1    0.716  0.1397      0.4884        1.000
  23      7      1    0.614  0.1526      0.3769        0.999
  31      5      1    0.491  0.1642      0.2549        0.946
  34      4      1    0.368  0.1627      0.1549        0.875
  48      2      1    0.184  0.1535      0.0359        0.944


                x=Nonmaintained
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5    12      2  0.8333  0.1076      0.6470        1.000
    8    10      2  0.6667  0.1361      0.4468        0.995
  12      8      1  0.5833  0.1423      0.3616        0.941
  23      6      1  0.4861  0.1481      0.2675        0.883
  27      5      1  0.3889  0.1470      0.1854        0.816
  30      4      1  0.2917  0.1387      0.1148        0.741
  33      3      1  0.1944  0.1219      0.0569        0.664
  43      2      1  0.0972  0.0919      0.0153        0.620
  45      1      1  0.0000    NaN          NA          NA
</syntaxhighlight>
* Plot KM plot broken out by treatment
<syntaxhighlight lang='rsplus'>
plot(surv.by.aml.rx, xlab = "Time", ylab="Survival",
    col=c("black", "red"), lty = 1:2,
    main="Kaplan-Meier Survival vs. Maintenance in AML")
legend(100, .6, c("Maintained", "Not maintained"),
    lty = 1:2, col=c("black", "red"))
</syntaxhighlight>
* Perform the log rank test using the R function survdiff().
<syntaxhighlight lang='rsplus'>
surv.diff.aml <- survdiff(Surv(time, status == 1) ~ x, data=aml)
surv.diff.aml


Call:
survdiff(formula = Surv(time, status == 1) ~ x, data = aml)

                N Observed Expected (O-E)^2/E (O-E)^2/V
x=Maintained    11        7    10.69      1.27      3.4
x=Nonmaintained 12       11     7.31      1.86      3.4

 Chisq= 3.4  on 1 degrees of freedom, p= 0.07
</syntaxhighlight>


=== Some public data ===
{| class="wikitable"
! package
! data (sample size)
|-
| [https://www.rdocumentation.org/packages/survival/versions/2.43-1 survival]
| pbc (418), ovarian (26), aml/leukemia (23), colon (1858), lung (228), veteran (137)
|-
| [https://www.rdocumentation.org/packages/pec/versions/2018.07.26 pec]
| GBSG2 (686), cost (518)
|-
| [https://www.rdocumentation.org/packages/randomForestSRC/versions/2.7.0 randomForestSRC]
| follic (541)
|-
| [https://www.rdocumentation.org/packages/KMsurv/versions/0.1-5 KMsurv]
| A LOT. tongue (80)
|-
| [https://rdrr.io/cran/survivalROC/man/ survivalROC]
| mayo (312)
|-
| [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5 survAUC]
| NA
|}
 
== Kaplan & Meier and Nelson-Aalen: survfit.formula(), Surv() ==
* Landmarks
** Kaplan-Meier: 1958
** Nelson: 1969
** Cox and Breslow: 1972 S(t) = exp(-Lambda(t))
** Aalen: 1978 Lambda(t)
* D distinct times <math>t_1 < t_2 < \cdots < t_D</math>. At time <math>t_i</math> there are <math>d_i</math> events. Let <math>Y_i</math> be the number of individuals who are at risk at time <math>t_i</math>. The quantity <math>d_i/Y_i</math> provides an estimate of the conditional probability that an individual who survives to just prior to time <math>t_i</math> experiences the event at time <math>t_i</math>. The '''KM estimator of the survival function''' and the '''Nelson-Aalen estimator of the cumulative hazard''' (their relationship is given below) are defined as follows (<math>t_1 \le t</math>):
: <math>
\begin{align}
\hat{S}(t) &= \prod_{t_i \le t} [1 - d_i/Y_i] \\
\hat{H}(t) &= \sum_{t_i \le t} d_i/Y_i
\end{align}
</math>
<syntaxhighlight lang='rsplus'>
library(survival)   # the kidney data ships with the survival package
str(kidney)
'data.frame': 76 obs. of  7 variables:
$ id    : num  1 1 2 2 3 3 4 4 5 5 ...
$ time  : num  8 16 23 13 22 28 447 318 30 12 ...
$ status : num  1 1 1 0 1 1 1 1 1 1 ...
$ age    : num  28 28 48 48 32 32 31 32 10 10 ...
$ sex    : num  1 1 2 2 1 1 2 2 1 1 ...
$ disease: Factor w/ 4 levels "Other","GN","AN",..: 1 1 2 2 1 1 1 1 1 1 ...
$ frail  : num  2.3 2.3 1.9 1.9 1.2 1.2 0.5 0.5 1.5 1.5 ...
kidney[order(kidney$time), c("time", "status")]
kidney[kidney$time == 13, ] # one is dead and the other is alive
length(unique(kidney$time)) # 60
 
sfit <- survfit(Surv(time, status) ~ 1, data = kidney)
 
sfit
Call: survfit(formula = Surv(time, status) ~ 1, data = kidney)
 
      n  events  median 0.95LCL 0.95UCL
    76      58      78      39    152
 
str(sfit)
List of 13
$ n        : int 76
$ time    : num [1:60] 2 4 5 6 7 8 9 12 13 15 ...
$ n.risk  : num [1:60] 76 75 74 72 71 69 65 64 62 60 ...
$ n.event  : num [1:60] 1 0 0 0 2 2 1 2 1 2 ...
$ n.censor : num [1:60] 0 1 2 1 0 2 0 0 1 0 ...
$ surv    : num [1:60] 0.987 0.987 0.987 0.987 0.959 ...
$ type    : chr "right"
length(unique(kidney$time))  # [1] 60
all(sapply(sfit$time, function(tt) sum(kidney$time >= tt)) == sfit$n.risk) # TRUE
all(sapply(sfit$time, function(tt) sum(kidney$status[kidney$time == tt])) == sfit$n.event) # TRUE
all(sapply(sfit$time, function(tt) sum(1-kidney$status[kidney$time == tt])) == sfit$n.censor) #  TRUE
all(cumprod(1 - sfit$n.event/sfit$n.risk) == sfit$surv) #  FALSE
range(abs(cumprod(1 - sfit$n.event/sfit$n.risk) - sfit$surv))
# [1] 0.000000e+00 1.387779e-17
 
summary(sfit)
time n.risk n.event survival std.err lower 95% CI upper 95% CI
    2    76      1    0.987  0.0131      0.96155        1.000
    7    71      2    0.959  0.0232      0.91469        1.000
    8    69      2    0.931  0.0297      0.87484        0.991
...
  511      3      1    0.042  0.0288      0.01095        0.161
  536      2      1    0.021  0.0207      0.00305        0.145
  562      1      1    0.000    NaN          NA          NA
</syntaxhighlight>
* Note that the KM estimate is a '''right-continuous''' step function, with the intervals closed at left and open at right. For <math>t \in [t_j, t_{j+1})</math> for a certain ''j'', we have <math>\hat{S}(t) = \prod_{i=1}^j (1-d_i/n_i)</math> where <math>d_i</math> is the number of people who have an event during the interval <math>[t_i, t_{i+1})</math> and <math>n_i</math> is the number of people at risk just before the beginning of the interval <math>[t_i, t_{i+1})</math>.
* The product-limit estimator can be constructed by using a ''reduced-sample'' approach. We can estimate the <math>P(T > t_i | T \ge t_i) = \frac{Y_i - d_i}{Y_i}</math> for <math>i=1,2,\cdots,D</math>. <math>
S(t_i) = \frac{S(t_i)}{S(t_{i-1})} \frac{S(t_{i-1})}{S(t_{i-2})} \cdots \frac{S(t_2)}{S(t_1)} \frac{S(t_1)}{S(0)} S(0) = P(T > t_i | T \ge t_i) P(T >t_{i-1} | T \ge t_{i-1}) \cdots P(T>t_2|T \ge t_2) P(T>t_1 | T \ge t_1)</math> because S(0)=1 and, for a discrete distribution, <math>S(t_{i-1}) = P(T > t_{i-1}) = P(T \ge t_i)</math>.
* '''Self consistency'''. If we had no censored observations, the estimator of the survival function at a time ''t'' is the proportion of observations which are larger than ''t'', that is, <math>\hat{S}(t) = \frac{1}{n}\sum I(X_i > t)</math>.
* Curves are plotted in the same order as they are listed by print (which gives a 1 line summary of each). For example, -1 < 1 and 'Maintenance' < 'Nonmaintained'. That means, the labels list in the legend() command should have the same order as the curves.
* Kaplan and Meier is used to give an estimator of the survival function S(t)
* Nelson-Aalen estimator is for the cumulative hazard H(t). Note that <math>0 \le H(t) < \infty</math> and <math>H(t) \rightarrow \infty</math> as t goes to infinity. So there is a constraint on the hazard function, see [https://en.wikipedia.org/wiki/Survival_analysis Wikipedia].


Note that S(t) is related to H(t) by <math>H(t) = -ln[S(t)]</math> or <math>S(t) = exp[-H(t)] </math>.
The two estimators are similar (see example 4.1A and 4.1B from Klein and Moeschberger).

The Nelson-Aalen estimator has two primary uses in analyzing data
# Selecting between parametric models for the time to event
# Crude estimates of the hazard rate h(t). This is related to the estimation of the survival function in the Cox model. See 8.6 of Klein and Moeschberger.

The Kaplan–Meier estimator (the product limit estimator) is an estimator for estimating the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment.

Note that
* '''The "+" sign in the KM curves means censored observations (this convention matches the output of the Surv() function) and a long vertical line (not '+') means there is a dead observation at that time.'''
: <syntaxhighlight lang='rsplus'>
> aml[1:5,]
  time status          x
1    9      1 Maintained
2   13      1 Maintained
3   13      0 Maintained
4  18      1 Maintained
5   23      1 Maintained
> Surv(aml$time, aml$status)[1:5,]
[1] 9  13  13+ 18  23
</syntaxhighlight>
* '''If the last observation (longest survival time) is dead, the survival curve will go down to zero. Otherwise, the survival curve will remain flat from the last event time.'''


Usually the KM curve of treatment group is higher than that of the control group.


The Y-axis (the probability that a member from a given population will have a lifetime exceeding time) is often called
* Cumulative probability
* Cumulative survival
* Percent survival
* Probability without event
* Proportion alive/surviving
* Survival
* Survival probability


[[File:KMcurve.png|400px]]  
[[File:KMcurve cumhaz.png|400px]]


<syntaxhighlight lang='rsplus'>
> library(survival)
> str(aml$x)
 Factor w/ 2 levels "Maintained","Nonmaintained": 1 1 1 1 1 1 1 1 1 1 ...
> plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) ,
      lty=2:3, mark.time = TRUE) # a (small) subset, mark.time is used to show censored obs
> aml[7:17,]
   time status             x
7    31      1    Maintained
8    34      1    Maintained
9    45      0    Maintained
10   48      1    Maintained
11  161      0    Maintained
12    5      1 Nonmaintained
13    5      1 Nonmaintained
14    8      1 Nonmaintained
15    8      1 Nonmaintained
16  12      1 Nonmaintained
17  16      0 Nonmaintained
> legend(100, .9, c("Maintenance", "No Maintenance"), lty = 2:3) # lty: 2=dashed, 3=dotted
> title("Kaplan-Meier Curves\nfor AML Maintenance Study")


# Cumulative hazard plot
# Lambda(t) = -log(S(t));
# see https://en.wikipedia.org/wiki/Survival_analysis
# http://statweb.stanford.edu/~olshen/hrp262spring01/spring01Handouts/Phil_doc.pdf
plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) ,
      lty=2:3, mark.time = T, fun="cumhaz", ylab="Cumulative Hazard")
</syntaxhighlight>


* Kaplan-Meier estimator from the [http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator wikipedia].
* Two papers, [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3059453/ this] and [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932959/ this], describe the steps to calculate the KM estimate.
* [https://stats.stackexchange.com/questions/26247/estimating-a-survival-probability-in-r Estimating a survival probability in R]
<syntaxhighlight lang='rsplus'>
# https://www.lexjansen.com/pharmasug/2011/CC/PharmaSUG-2011-CC16.pdf
mydata <- data.frame(time=c(3,6,8,12,12,21),status=c(1,1,0,1,1,1))
km <- survfit(Surv(time, status)~1, data=mydata)
plot(km, mark.time = T)
survest <- stepfun(km$time, c(1, km$surv))
plot(survest)
> str(km)
List of 13
$ n        : int 6
$ time    : num [1:5] 3 6 8 12 21
$ n.risk  : num [1:5] 6 5 4 3 1
$ n.event  : num [1:5] 1 1 0 2 1
$ n.censor : num [1:5] 0 0 1 0 0
$ surv    : num [1:5] 0.833 0.667 0.667 0.222 0
$ type    : chr "right"
$ std.err  : num [1:5] 0.183 0.289 0.289 0.866 Inf
$ upper    : num [1:5] 1 1 1 1 NA
$ lower    : num [1:5] 0.5827 0.3786 0.3786 0.0407 NA
$ conf.type: chr "log"
$ conf.int : num 0.95
> class(survest)
[1] "stepfun"  "function"
> survest
Step function
Call: stepfun(km$time, c(1, km$surv))
x[1:5] =     3,      6,      8,    12,    21
6 plateau levels =      1, 0.83333, 0.66667,  ..., 0.22222,      0
> str(survest)
function (v) 
- attr(*, "class")= chr [1:2] "stepfun" "function"
- attr(*, "call")= language stepfun(km$time, c(1, km$surv))
</syntaxhighlight>


[[File:Kmcurve_toy.svg|600px]]


=== Multiple curves ===
Curves/groups are ordered. The first color in the palette is used to color the first level of the factor variable. This is the same idea as [https://www.rdocumentation.org/packages/survminer/versions/0.4.2/topics/ggsurvplot ggsurvplot] in the survminer package. This affects parameters like '''col''' and '''lty''' in the plot() function. For example,
* 1<2
* 'c' < 't'
* 'control' < 'treatment'
* 'Control' < 'Treatment'
* 'female' < 'male'.


For '''legend()''', the first category in legend argument will appear at the top of the legend box.


=== Inverse Probability of Censoring Weighted (IPCW) ===
* [https://en.wikipedia.org/wiki/Inverse_probability_weighting Inverse probability weighting] from Wikipedia
* [https://onlinelibrary.wiley.com/doi/pdf/10.1002/bimj.200610301 Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times] Gerds et al 2006.
* https://www.bmj.com/content/352/bmj.i189.full.print Four examples are considered.
* [https://onlinelibrary.wiley.com/doi/full/10.1111/j.0006-341X.2000.00779.x Correcting for Noncompliance and Dependent Censoring in an AIDS Clinical Trial with Inverse Probability of Censoring Weighted (IPCW) Log‐Rank Tests] by James M. Robins, Biometrics 2000.
* [https://amstat.tandfonline.com/doi/abs/10.1198/000313001317098185#.WtO9eOjwb94 The Kaplan–Meier Estimator as an Inverse-Probability-of-Censoring Weighted Average] by Satten 2001. IPCW.


The plots below show that, by flipping the status variable, we can accurately ''recover'' the survival function of the censoring variable. See [[R#Superimpose_a_density_plot_or_any_curves|the R code here]] for superimposing the true exponential distribution on the KM plot of the censoring variable.
<syntaxhighlight lang='rsplus'>
require(survival)
n = 10000
beta1 = 2; beta2 = -1
lambdaT = 1 # baseline hazard
lambdaC = 2  # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))


# method 1: exponential censoring variable
C <- rweibull(n, shape=1, scale=lambdaC) 
time = pmin(T,C) 
status <- 1*(T <= C)
mean(status)
summary(T)
summary(C)
par(mfrow=c(2,1), mar = c(3,4,2,2)+.1)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1),
    ylab="Survival probability",
    main = 'Exponential censoring time')


# method 2: uniform censoring variable
C <- runif(n, 0, 21)
time = pmin(T,C)
status <- 1*(T <= C)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1),  
    ylab="Survival probability",
    main = "Uniform censoring time")
</syntaxhighlight>


[[File:Ipcw.svg|250px]]


=== stepfun() and plot.stepfun() ===
* [https://www.r-bloggers.com/veterinary-epidemiologic-research-modelling-survival-data-non-parametric-analyses/ Draw cumulative hazards using stepfun()]
* For KM curve case, see an example [[#Kaplan_.26_Meier_and_Nelson-Aalen:_survfit.formula.28.29|above]].


=== Survival curves with number at risk at bottom: survminer package ===
R function survminer::ggsurvplot()
* http://www.sthda.com/english/articles/24-ggpubr-publication-ready-plots/81-ggplot2-easy-way-to-mix-multiple-graphs-on-the-same-page/#mix-table-text-and-ggplot
* http://r-addict.com/2016/05/23/Informative-Survival-Plots.html


Paper examples
* [https://www.nature.com/articles/nm.4466/figures/6 High-dimensional single-cell analysis predicts response to anti-PD-1 immunotherapy]


=== Life table ===
* https://www.r-bloggers.com/veterinary-epidemiologic-research-modelling-survival-data-non-parametric-analyses/
* [https://www.rdocumentation.org/packages/KMsurv/versions/0.1-5/topics/lifetab lifetab()]
 
== Alternatives to survival function plot ==
https://www.rdocumentation.org/packages/survival/versions/2.43-1/topics/plot.survfit and [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/plot.survfit.html plot.survfit()]
The '''fun''' argument, a transformation of the survival curve
* fun = "event" or "F": f(y) = 1-y; it calculates P(T < t). This is like a t-year risk (Blanche 2018).
* fun = "cumhaz": cumulative hazard function (f(y) = -log(y)); it calculates H(t). See [https://stats.stackexchange.com/a/60250 Intuition for cumulative hazard function].
 
== Breslow estimate ==
* http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/viewer.htm#statug_lifetest_details03.htm
* Breslow estimate is the exponentiation of the negative Nelson-Aalen estimate of the cumulative hazard function
 
== Logrank test ==
* [https://en.wikipedia.org/wiki/Logrank_test Logrank test] is a hypothesis test to compare the survival distributions of two samples. The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time.
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13102 On null hypotheses in survival analysis] Stensrud 2019
* [https://onlinelibrary.wiley.com/doi/full/10.1111/biom.12770 Efficiency of two sample tests via the restricted mean survival time for analyzing event time observations] Tian 2017
 
== Survival curve with confidence interval ==
http://www.sthda.com/english/wiki/survminer-r-package-survival-data-analysis-and-visualization
 
== Parametric models and survival function for censored data ==
Assume the CDF of survival time ''T'' is <math>F(\cdot)</math> and the CDF of the censoring time ''C'' is <math>G(\cdot)</math>,
: <math>
\begin{align}
P(T>t, \delta=1) &= \int_t^\infty (1-G(s))dF(s), \\
P(T>t, \delta=0) &= \int_t^\infty (1-F(s))dG(s)
\end{align}
</math>


* http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=23
* http://www.ms.uky.edu/~mai/sta635/LikelihoodCensor635.pdf#page=2 survival function of <math>f(T, \delta)</math>
* https://web.stanford.edu/~lutian/coursepdf/unit2.pdf#page=3 joint density of <math>f(T, \delta)</math>
* http://data.princeton.edu/wws509/notes/c7.pdf#page=6
* Special case: ''T'' follows [https://en.wikipedia.org/wiki/Log-normal_distribution Log normal distribution] and ''C'' follows <math>U(0, \xi)</math>.
 
=== R ===
* [https://cran.r-project.org/web/packages/flexsurv/index.html flexsurv] package.
* [https://devinincerti.com/2019/06/18/parametric_survival.html Parametric survival modeling] which uses the '''flexsurv''' package.
* Used in [https://cran.rstudio.com/web/packages/simsurv/vignettes/simsurv_usage.html simsurv] package
 
== Parametric models and likelihood function for uncensored data ==
 
* Exponential. <math> T \sim Exp(\lambda) </math>. <math>H(t) = \lambda t.</math> and <math>ln(S(t)) = -H(t) = -\lambda t.</math>
* Weibull. <math> T \sim W(\lambda,p).</math> <math>H(t) = \lambda^p t^p.</math> and <math>ln(-ln(S(t))) = ln(\lambda^p t^p)=const + p ln(t) </math>.
 
http://www.math.ucsd.edu/~rxu/math284/slect4.pdf
 
See also [http://data.princeton.edu/wws509/notes/c7.pdf#page=9 accelerated life models] where a set of covariates were used to model survival time.
 
== Survival modeling ==
=== Accelerated life models - a direct extension of the classical linear model ===
http://data.princeton.edu/wws509/notes/c7.pdf and also Kalbfleish and Prentice (1980).
 
<math>
log T_i = x_i' \beta + \epsilon_i
</math>
Therefore
* <math>T_i = exp(x_i' \beta) T_{0i} </math>. So if there are two groups (x=1 and x=0), and <math>exp(\beta) = 2</math>, it means one group live twice as long as people in another group.
* <math>S_1(t) = S_0(t/ exp(x' \beta))</math>. This explains the meaning of '''accelerated failure-time'''. '''Depending on the sign of <math>\beta' x</math>, the time is either accelerated by a constant factor or degraded by a constant factor'''. If <math>exp(\beta)=2</math>, the probability that a member in group one (eg treatment) will be alive at age t is exactly the same as the probability that a member in group zero (eg control group) will be alive at age t/2.
* The hazard function <math>\lambda_1(t) = \lambda_0(t/exp(x'\beta))/ exp(x'\beta) </math>. So if <math>exp(\beta)=2</math>, at any given age people in group one would be exposed to half the risk of people in group zero half their age.
 
In applications,
* If the errors are normally distributed, then we obtain a log-normal model for the T. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.
* If the errors have an extreme value distribution, then T has an exponential distribution. The hazard <math>\lambda</math> satisfies the log linear model <math>\log \lambda_i = x_i' \beta</math>.
 
=== Proportional hazard models ===
Note PH models are a type of multiplicative hazard rate models <math>h(x|Z) = h_0(x)c(\beta' Z)</math> where <math>c(\beta' Z) = \exp(\beta ' Z)</math>.
 
Assumption: Survival curves for two strata (determined by the particular choices of values for covariates) must have '''hazard functions that are proportional over time''' (i.e. '''constant relative hazard over time'''). [https://stats.stackexchange.com/questions/24552/proportional-hazards-assumption-meaning Proportional hazards assumption meaning]. The ratio of the hazard rates from two individuals with covariate value <math>Z</math> and <math>Z^*</math> is a constant function time.
: <math>
\begin{align}
\frac{h(t|Z)}{h(t|Z^*)} = \frac{h_0(t)\exp(\beta 'Z)}{h_0(t)\exp(\beta ' Z^*)} = \exp(\beta' (Z-Z^*)) \mbox{    independent of time}
\end{align}
</math>


Test the assumption
* [https://rstudio-pubs-static.s3.amazonaws.com/300535_2a8382af47714d0aaa3f4cce9a7645a3.html Survival Analysis Tutorial] by Jacob Lindell and Joe Berry.
* [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/cox.zph.html cox.zph()] can be used to test the proportional hazards assumption for a Cox regression model fit; see the sketch below.
* [https://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_4.pdf Log-log Kaplan-Meier curves] and other methods.
* https://stats.idre.ucla.edu/other/examples/asa2/testing-the-proportional-hazard-assumption-in-cox-models/. If the predictor satisfies the proportional hazards assumption, then the graph of the survival function versus survival time should produce parallel curves; similarly, the graph of log(-log(survival)) versus log survival time should produce parallel lines. This method does not work well for continuous predictors or categorical predictors with many levels because the graph becomes too “cluttered”.
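A minimal sketch (veteran data from the survival package; mine, not from the links above):
{{Pre}}
library(survival)
fit <- coxph(Surv(time, status) ~ trt + age, data = veteran)
cox.zph(fit)        # small p-values indicate non-proportional hazards
plot(cox.zph(fit))  # scaled Schoenfeld residuals vs. time
</pre>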


[[#Cox_Regression|Cox Regression]]


== Weibull and Exponential model to Cox model ==
* https://socserv.socsci.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Cox-Regression.pdf. It also includes model diagnostics, and everything is illustrated in R.
* http://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_9.pdf


In summary:
* Weibull distribution (Klein) <math>h(t) = p \lambda t^{p-1}</math> and <math>S(t) = exp(-\lambda t^p)</math>. If p > 1, then the risk increases over time. If p < 1, then the risk decreases over time.
** Note that the Weibull distribution has a different parametrization. See http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=2: <math>h(t) = \lambda^p p t^{p-1}</math> and <math>S(t) = exp(-(\lambda t)^p)</math>. [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Weibull.html R] and [https://en.wikipedia.org/wiki/Weibull_distribution wikipedia] also follow this parametrization, except that <math>h(t) = p t^{p-1}/\lambda^p</math> and <math>S(t) = exp(-(t/\lambda)^p)</math>.
* Exponential distribution: <math>h(t)</math> = constant (independent of t). This is a special case of the Weibull distribution (p=1).
* The Weibull (and also exponential) <strike>distribution</strike> regression model is the only case which belongs to both the proportional hazards and the accelerated life families.
: <math>
\begin{align}
\frac{h(x|Z_1)}{h(x|Z_2)} = \frac{h_0(x\exp(-\gamma' Z_1)) \exp(-\gamma ' Z_1)}{h_0(x\exp(-\gamma' Z_2)) \exp(-\gamma ' Z_2)} = \frac{(a/b)\left(\frac{x \exp(-\gamma ' Z_1)}{b}\right)^{a-1}\exp(-\gamma ' Z_1)}{(a/b)\left(\frac{x \exp(-\gamma ' Z_2)}{b}\right)^{a-1}\exp(-\gamma ' Z_2)}  \quad \mbox{which is independent of time x}
\end{align}
</math>
* [https://en.wikipedia.org/wiki/Proportional_hazards_model#Specifying_the_baseline_hazard_function Using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards, and accelerated failure time models]
* If X is an exponential distribution with mean <math>b</math>, then X^(1/a) follows Weibull(a, b). See [https://en.wikipedia.org/wiki/Exponential_distribution Exponential distribution] and [https://en.wikipedia.org/wiki/Weibull_distribution Weibull distribution].
* [http://krex.k-state.edu/dspace/bitstream/handle/2097/8787/AngelaCrumer2011.pdf?sequence=3 Derivation] of the mean and variance of the Weibull distribution.


{| class="wikitable"
=== R packages ===
|-
* [http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf rpart]
! !! f(t)=h(t)*S(t) !! h(t) !! S(t) !! Mean
* http://exploringdatablog.blogspot.com/2013/04/classification-tree-models.html
|-
| Exponential (Klein p37) || <math>\lambda \exp(-\lambda t)</math> || <math>\lambda</math> || <math>\exp(-\lambda t)</math> || <math>1/\lambda</math>
|-
| Weibull (Klein, Bender, [https://en.wikipedia.org/wiki/Weibull_distribution#Alternative_parameterizations wikipedia]) || <math>p\lambda t^{p-1}\exp(-\lambda t^p)</math> || <math>p\lambda t^{p-1}</math> || <math>exp(-\lambda t^p)</math> || <math>\frac{\Gamma(1+1/p)}{\lambda^{1/p}}</math>
|-
| Exponential ([https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Exponential.html R]) || <math>\lambda \exp(-\lambda t)</math>, <math>\lambda</math> is rate || <math>\lambda</math> || <math>\exp(-\lambda t)</math> || <math>1/\lambda</math>
|-
| Weibull ([https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Weibull.html R], [https://en.wikipedia.org/wiki/Weibull_distribution wikipedia]) || <math>\frac{a}{b}\left(\frac{t}{b}\right)^{a-1} \exp(-(\frac{t}{b})^a)</math>,<br/><math>a</math> is shape, and <math>b</math> is scale || <math>\frac{a}{b}\left(\frac{t}{b}\right)^{a-1}</math> || <math>\exp(-(\frac{t}{b})^a)</math> || <math>b\Gamma(1+1/a)</math>
|}
* Accelerated failure-time model. Let <math>Y=\log(T)=\mu + \gamma'Z + \sigma W</math>. Then the survival function of <math>T</math> at the covariate Z,
: <math>
\begin{align}
S_T(t|Z) &= P(T > t |Z) \\
        &= P(Y > \ln t|Z) \\
        &= P(\mu + \sigma W > \ln t-\gamma' Z | Z) \\
        &= P(e^{\mu + \sigma W} > t\exp(-\gamma'Z) | Z) \\
        &= S_0(t \exp(-\gamma'Z)).
\end{align}
</math>
where <math>S_0(t)</math> denote the survival function T when Z=0. Since <math>h(t) = -\partial \ln (S(t))</math>, the hazard function of T with a covariate value Z is related to a baseline hazard rate <math>h_0</math> by (p56 Klein)
: <math>
\begin{align}
h(t|Z) = h_0(t\exp(-\gamma' Z)) \exp(-\gamma ' Z)
\end{align}
</math>
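A minimal '''rpart''' sketch of the quantities above (the built-in iris data is used purely for illustration); rpart's default split criterion for classification is the Gini index, and <code>split = "information"</code> switches to the entropy impurity:
<syntaxhighlight lang='rsplus'>
library(rpart)
# classification tree; each split maximizes the decrease in (Gini) impurity
fit <- rpart(Species ~ ., data = iris, method = "class",
             parms = list(split = "gini"))   # split = "information" uses entropy instead
fit             # splits with the node class proportions p(j|t)
printcp(fit)    # complexity table used for pruning
predict(fit, head(iris), type = "class")
</syntaxhighlight>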


== Partially additive (generalized) linear model trees ==
* https://eeecon.uibk.ac.at/~zeileis/news/palmtree/
* https://cran.r-project.org/web/packages/palmtree/index.html

== Supervised Classification, Logistic and Multinomial ==
* http://freakonometrics.hypotheses.org/19230

== Variable selection ==
=== Review ===
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.

=== Variable selection and variable importance plot ===
* http://freakonometrics.hypotheses.org/19835

=== Variable selection and cross-validation ===
* http://freakonometrics.hypotheses.org/19925
* http://ellisp.github.io/blog/2016/06/05/bootstrap-cv-strategies/

=== Mallow ''C<sub>p</sub>'' ===
Mallows's ''C<sub>p</sub>'' addresses the issue of overfitting. The ''C<sub>p</sub>'' statistic calculated on a sample of data estimates the '''mean squared prediction error (MSPE)''',
:<math>
E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2.
</math>
The ''C<sub>p</sub>'' statistic is defined as
:<math> C_p={SSE_p \over S^2} - N + 2P, </math>
where ''SSE<sub>p</sub>'' is the error sum of squares of the candidate model with ''P'' parameters, ''S''<sup>2</sup> is the residual mean square from the full model, and ''N'' is the sample size.
* https://en.wikipedia.org/wiki/Mallows%27s_Cp
* [https://www.jobnmadu.com/r-blog/2023-02-04-r-rmarkdown/mallows/ Better and enhanced method of estimating Mallow's Cp]
* Used in Yuan & Lin (2006) group lasso. The degrees of freedom is estimated by the bootstrap or perturbation methods. Their paper mentioned the performance is comparable with that of 5-fold CV but is computationally much faster.
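A minimal hand computation of ''C<sub>p</sub>'' (mtcars is used purely for illustration, and as usual the full model's residual mean square serves as the estimate of <math>\sigma^2</math>):
<syntaxhighlight lang='rsplus'>
full <- lm(mpg ~ ., data = mtcars)        # full model provides S^2
S2   <- summary(full)$sigma^2
sub  <- lm(mpg ~ wt + hp, data = mtcars)  # a candidate submodel
SSEp <- sum(residuals(sub)^2)
N    <- nrow(mtcars)
P    <- length(coef(sub))                 # parameters in the submodel (incl. intercept)
SSEp / S2 - N + 2 * P                     # Cp; a good submodel has Cp close to P
</syntaxhighlight>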
=== Variable selection for mode regression ===
http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Chen & Zhou, Journal of Applied Statistics, June 2017

=== lmSubsets ===
[https://eeecon.uibk.ac.at/~zeileis/news/lmsubsets/ lmSubsets]: Exact variable-subset selection in linear regression. 2020

=== Permutation method ===
[https://medium.com/responsibleml/basic-xai-with-dalex-part-2-permutation-based-variable-importance-1516c2924a14 BASIC XAI with DALEX — Part 2: Permutation-based variable importance]

== Neural network ==
* [http://junma5.weebly.com/data-blog/build-your-own-neural-network-classifier-in-r Build your own neural network in R]
* Building A Neural Net from Scratch Using R - [https://rviews.rstudio.com/2020/07/20/shallow-neural-net-from-scratch-using-r-part-1/ Part 1]
* (Video) [https://youtu.be/ntKn5TPHHAk 10.2: Neural Networks: Perceptron Part 1 - The Nature of Code] from the Coding Train. The book [http://natureofcode.com/book/chapter-10-neural-networks/ THE NATURE OF CODE] by DANIEL SHIFFMAN
* [https://freakonometrics.hypotheses.org/52774 CLASSIFICATION FROM SCRATCH, NEURAL NETS]. The ROCR package was used to produce the ROC curve.
* [http://www.erikdrysdale.com/neuralnetsR/ Building a survival-neuralnet from scratch in base R]

== Support vector machine (SVM) ==
* [https://statcompute.wordpress.com/2016/03/19/improve-svm-tuning-through-parallelism/ Improve SVM tuning through parallelism] by using the '''foreach''' and '''doParallel''' packages.
* [https://www.spsanderson.com/steveondata/posts/2023-09-11/index.html Plotting SVM Decision Boundaries with e1071 in R]

== Quadratic Discriminant Analysis (qda), KNN ==
[https://datarvalue.blogspot.com/2017/05/machine-learning-stock-market-data-part_16.html Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN]

== KNN ==
[https://finnstats.com/index.php/2021/04/30/knn-algorithm-machine-learning/ KNN Algorithm Machine Learning]
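A minimal sketch with the '''class''' package (iris chosen only for illustration):
<syntaxhighlight lang='rsplus'>
library(class)
set.seed(1)
idx  <- sample(nrow(iris), 100)                    # training rows
pred <- knn(train = iris[idx, 1:4], test = iris[-idx, 1:4],
            cl = iris$Species[idx], k = 5)
mean(pred == iris$Species[-idx])                   # test-set accuracy
</syntaxhighlight>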
== [https://en.wikipedia.org/wiki/Regularization_(mathematics) Regularization] ==
Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.

[https://www.datacamp.com/community/tutorials/tutorial-ridge-lasso-elastic-net Regularization: Ridge, Lasso and Elastic Net] from datacamp.com. The bias and variance trade-off in parameter estimates was used to lead into the discussion.

=== Regularized least squares ===
https://en.wikipedia.org/wiki/Regularized_least_squares. Lasso/ridge/elastic net regressions are special cases.

=== Ridge regression ===
* [https://stats.stackexchange.com/questions/52653/what-is-ridge-regression What is ridge regression?]
* [https://stats.stackexchange.com/questions/118712/why-does-ridge-estimate-become-better-than-ols-by-adding-a-constant-to-the-diago Why does ridge estimate become better than OLS by adding a constant to the diagonal?] The estimates become more stable if the covariates are highly correlated.
* (In ridge regression) the matrix we need to invert no longer has determinant near zero, so the solution does not lead to uncomfortably large variance in the estimated parameters. And that’s a good thing. See [https://tamino.wordpress.com/2011/02/12/ridge-regression/ this post].
* [https://www.tandfonline.com/doi/abs/10.1080/02664763.2018.1526891?journalCode=cjas20 Multicolinearity and ridge regression: results on type I errors, power and heteroscedasticity]

Since the L2 norm is used in the regularization, ridge regression is also called L2 regularization.

[https://drsimonj.svbtle.com/ridge-regression-with-glmnet ridge regression with glmnet]

Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to a constraint <math>\sum|\beta_j|^2 \le t</math>. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator <math>\hat{\beta} = (X^TX + \lambda I)^{-1} X^T y </math> where <math>\lambda=\lambda(t)</math>, a function of ''t'', the variance is smaller than that of the OLS estimator. The solution exists if <math>\lambda >0</math> even if <math>n < p </math>.

Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, the Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zero. This can be seen by comparing the coefficient path plots from the two methods.

Geometrically (contour plot of the cost function), the L1 penalty (the sum of absolute values of coefficients) gives a positive probability that some coefficients are exactly zero (i.e. some coefficient hitting the corner of a diamond shape in the 2D case). For example, in the 2D case (X-axis=<math>\beta_0</math>, Y-axis=<math>\beta_1</math>), the shape of the L1 penalty <math>|\beta_0| + |\beta_1|</math> is a diamond whereas the shape of the L2 penalty (<math>\beta_0^2 + \beta_1^2</math>) is a circle.
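The closed form above is easy to verify directly; a small sketch with simulated data and an arbitrarily chosen <math>\lambda</math>:
<syntaxhighlight lang='rsplus'>
set.seed(1)
n <- 50; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(2, -1, 0.5, 0, 0) + rnorm(n))
lambda <- 2
b_ols   <- solve(crossprod(X), crossprod(X, y))
b_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
cbind(b_ols, b_ridge)   # ridge estimates are shrunk toward 0
# crossprod(X) + lambda * diag(p) is invertible for lambda > 0 even when p > n
</syntaxhighlight>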
=== Lasso/glmnet, adaptive lasso and FAQs ===
[[glmnet|glmnet]]

=== Lasso logistic regression ===
https://freakonometrics.hypotheses.org/52894

=== Lagrange Multipliers ===
[https://medium.com/@andrew.chamberlain/a-simple-explanation-of-why-lagrange-multipliers-works-253e2cdcbf74 A Simple Explanation of Why Lagrange Multipliers Works]

=== How to solve lasso/convex optimization ===
* [https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf Convex Optimization] by Boyd S, Vandenberghe L, Cambridge 2004. It is cited by Zhang & Lu (2007). The '''interior point algorithm''' can be used to solve the optimization problem in adaptive lasso.
* Review of '''gradient descent''':
** Finding a maximum: <math>w^{(t+1)} = w^{(t)} + \eta \frac{dg(w)}{dw}</math>, where <math>\eta</math> is the step size.
** Finding a minimum: <math>w^{(t+1)} = w^{(t)} - \eta \frac{dg(w)}{dw}</math>.
** [https://stackoverflow.com/questions/12066761/what-is-the-difference-between-gradient-descent-and-newtons-gradient-descent What is the difference between Gradient Descent and Newton's Gradient Descent?] Newton's method requires <math>g''(w)</math>, i.e. more smoothness of g(.).
** Finding a minimum for multiple variables ('''gradient descent'''): <math>w^{(t+1)} = w^{(t)} - \eta \Delta g(w^{(t)})</math>. For the least squares problem, <math>g(w) = RSS(w)</math>.
** Finding a minimum for multiple variables in the least squares problem (minimize <math>RSS(w)</math>): <math>\text{partial}(j) = -2\sum h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = w_j^{(t)} - \eta \, \text{partial}(j)</math>.
** Finding a minimum for multiple variables in the ridge regression problem (minimize <math>RSS(w)+\lambda \|w\|_2^2=(y-Hw)'(y-Hw)+\lambda w'w</math>): <math>\text{partial}(j) = -2\sum h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = (1-2\eta \lambda) w_j^{(t)} - \eta \, \text{partial}(j)</math>. Compared to the closed form approach: <math>\hat{w} = (H'H + \lambda I)^{-1}H'y</math> where 1. the inverse exists even when N<D as long as <math>\lambda > 0</math> and 2. the complexity of the inverse is <math>O(D^3)</math>, where D is the dimension of the covariates.
* '''Cyclical coordinate descent''' was used ([https://cran.r-project.org/web/packages/glmnet/vignettes/glmnet_beta.pdf#page=1 vignette]) in the glmnet package. See also '''[https://en.wikipedia.org/wiki/Coordinate_descent coordinate descent]'''. The reason we call it 'descent' is because we want to 'minimize' an objective function.
** <math>\hat{w}_j = \arg\min_w g(\hat{w}_1, \cdots, \hat{w}_{j-1},w, \hat{w}_{j+1}, \cdots, \hat{w}_D)</math>
** See the [https://www.jstatsoft.org/article/view/v033i01 paper] on JSS 2010. The Cox PHM case also uses the cyclical coordinate descent method; see the [https://www.jstatsoft.org/article/view/v039i05 paper] on JSS 2011.
** Coursera's [https://www.coursera.org/learn/ml-regression/lecture/rb179/feature-selection-lasso-and-nearest-neighbor-regression Machine learning course 2: Regression] at 1:42. [http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=12 Soft-thresholding] the coefficients is the key for the L1 penalty. The range for the thresholding is controlled by <math>\lambda</math>. Note that to view the videos and all materials in coursera we can enroll to audit the course without starting a trial.
** [http://www.adeveloperdiary.com/data-science/machine-learning/introduction-to-coordinate-descent-using-least-squares-regression/ Introduction to Coordinate Descent using Least Squares Regression]. It also covers '''Cyclic Coordinate Descent''' and '''Coordinate Descent vs Gradient Descent'''. A python code is provided.
** No step size is required, unlike gradient descent.
** [https://sandipanweb.wordpress.com/2017/05/04/implementing-lasso-regression-with-coordinate-descent-and-the-sub-gradient-of-the-l1-penalty-with-soft-thresholding/ Implementing LASSO Regression with Coordinate Descent, Sub-Gradient of the L1 Penalty and Soft Thresholding in Python]
** Coordinate descent in the least squares problem: <math>\frac{\partial}{\partial w_j} RSS(w)= -2 \rho_j + 2 w_j</math>; i.e. <math>\hat{w}_j = \rho_j</math>.
** Coordinate descent in the Lasso problem (for normalized features): <math>
\hat{w}_j =
\begin{cases}
\rho_j + \lambda/2, & \text{if }\rho_j < -\lambda/2 \\
0, & \text{if } -\lambda/2 \le \rho_j \le \lambda/2\\
\rho_j- \lambda/2, & \text{if }\rho_j > \lambda/2
\end{cases}
</math>
** Choosing <math>\lambda</math> via cross validation tends to favor less sparse solutions and thus a smaller <math>\lambda</math> than the optimal choice for feature selection. See "Machine learning: a probabilistic perspective", Murphy 2012.
** [http://support.sas.com/resources/papers/proceedings15/3297-2015.pdf Lasso Regularization for Generalized Linear Models in Base SAS® Using Cyclical Coordinate Descent]
* Classical: Least angle regression (LARS) Efron et al 2004.
* [https://www.mathworks.com/help/stats/lasso.html?s_tid=gn_loc_drop Alternating Direction Method of Multipliers (ADMM)]. Boyd, 2011. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends in Machine Learning. Vol. 3, No. 1, 2010, pp. 1–122.
** https://stanford.edu/~boyd/papers/pdf/admm_slides.pdf
** [https://cran.r-project.org/web/packages/ADMM/ ADMM] package
** [https://www.quora.com/Convex-Optimization-Whats-the-advantage-of-alternating-direction-method-of-multipliers-ADMM-and-whats-the-use-case-for-this-type-of-method-compared-against-classic-gradient-descent-or-conjugate-gradient-descent-method What's the advantage of alternating direction method of multipliers (ADMM), and what's the use case for this type of method compared against classic gradient descent or conjugate gradient descent method?]
* [https://math.stackexchange.com/questions/771585/convexity-of-lasso If some variables in design matrix are correlated, then LASSO is convex or not?]
* Tibshirani. [http://www.jstor.org/stable/2346178 Regression shrinkage and selection via the lasso] (free). JRSS B 1996.
* [http://www.econ.uiuc.edu/~roger/research/conopt/coptr.pdf Convex Optimization in R] by Koenker & Mizera 2014.
* [https://web.stanford.edu/~hastie/Papers/pathwise.pdf Pathwise coordinate optimization] by Friedman et al 2007.
* [http://web.stanford.edu/~hastie/StatLearnSparsity/ Statistical learning with sparsity: the Lasso and generalizations] T. Hastie, R. Tibshirani, and M. Wainwright, 2015 (book)
* Elements of Statistical Learning (book)
* https://youtu.be/A5I1G1MfUmA StatsLearning Lect8h 110913
* Fu's (1998) shooting algorithm for Lasso ([http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=11 mentioned] in the history of coordinate descent) and Zhang & Lu's (2007) modified shooting algorithm for adaptive Lasso.
* [https://www.cs.ubc.ca/~murphyk/MLbook/ Machine Learning: a Probabilistic Perspective] Choosing <math>\lambda</math> via cross validation tends to favor less sparse solutions and thus smaller <math>\lambda</math> than optimal choice for feature selection.
* [https://github.com/OHDSI/Cyclops Cyclops] package - Cyclic Coordinate Descent for Logistic, Poisson and Survival Analysis. [https://cran.r-project.org/web/packages/Cyclops/index.html CRAN]. It imports the '''Rcpp''' package. It also provides a Dockerfile.
* [http://www.optimization-online.org/DB_FILE/2014/12/4679.pdf Coordinate Descent Algorithms] by Stephen J. Wright
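The soft-thresholding update can be implemented in a few lines. A bare-bones sketch of cyclic coordinate descent for <math>RSS(w) + \lambda \|w\|_1</math> (the <math>\lambda/2</math> threshold matches the convention above; glmnet scales its <math>\lambda</math> differently, so the values are not directly comparable):
<syntaxhighlight lang='rsplus'>
soft <- function(rho, thr) sign(rho) * max(abs(rho) - thr, 0)

# cyclic coordinate descent for min_w RSS(w) + lambda * sum(abs(w))
lasso_cd <- function(X, y, lambda, n_iter = 200) {
  p <- ncol(X)
  w <- numeric(p)
  for (it in seq_len(n_iter)) {
    for (j in seq_len(p)) {
      r_j  <- y - X[, -j, drop = FALSE] %*% w[-j]   # partial residual
      rho  <- sum(X[, j] * r_j)
      w[j] <- soft(rho, lambda / 2) / sum(X[, j]^2)
    }
  }
  w
}

set.seed(1)
X <- scale(matrix(rnorm(100 * 5), 100, 5))
y <- drop(X %*% c(3, -2, 0, 0, 0) + rnorm(100))
lasso_cd(X, y - mean(y), lambda = 50)   # small coefficients are thresholded to exactly 0
</syntaxhighlight>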


=== Quadratic programming ===
* https://en.wikipedia.org/wiki/Quadratic_programming
* https://en.wikipedia.org/wiki/Lasso_(statistics)
* [https://cran.r-project.org/web/views/Optimization.html CRAN Task View: Optimization and Mathematical Programming]
* [https://cran.r-project.org/web/packages/quadprog/ quadprog] package and [https://www.rdocumentation.org/packages/quadprog/versions/1.5-5/topics/solve.QP solve.QP()] function
* [https://rwalk.xyz/solving-quadratic-progams-with-rs-quadprog-package/ Solving Quadratic Progams with R’s quadprog package]
* [https://rwalk.xyz/more-on-quadratic-programming-in-r/ More on Quadratic Programming in R]
* https://optimization.mccormick.northwestern.edu/index.php/Quadratic_programming
* [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12273 Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects] where the algorithm from [https://ieeexplore.ieee.org/abstract/document/7448814/ Lee] 2016 was used.
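As a small worked instance, non-negative least squares fits the <math>\tfrac{1}{2} b' D b - d' b</math> template of quadprog's solve.QP() directly (simulated data, purely for illustration):
<syntaxhighlight lang='rsplus'>
library(quadprog)
# min_b 1/2 b' D b - d' b  subject to  A' b >= b0
# With D = X'X, d = X'y and A = I, b0 = 0 this is least squares with b >= 0
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(1, 2, 0, 0.5, 0) + rnorm(n))
fit <- solve.QP(Dmat = crossprod(X), dvec = drop(crossprod(X, y)),
                Amat = diag(p), bvec = rep(0, p))
round(fit$solution, 3)   # estimated coefficients, all constrained to be >= 0
</syntaxhighlight>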


=== Constrained optimization ===
[https://cran.r-project.org/web/packages/Jaya/vignettes/A_guide_to_JA.html Jaya Package]. The Jaya Algorithm is a gradient-free optimization algorithm. It can be used for maximization or minimization of a function, for solving both constrained and unconstrained optimization problems. It does not contain any hyperparameters.

=== Highly correlated covariates ===
'''1. Elastic net'''

'''2. Group lasso'''
* [http://pages.stat.wisc.edu/~myuan/papers/glasso.final.pdf Yuan and Lin 2006] JRSSB
* https://cran.r-project.org/web/packages/gglasso/, http://royr2.github.io/2014/04/15/GroupLasso.html
* https://cran.r-project.org/web/packages/grpreg/
* https://cran.r-project.org/web/packages/grplasso/ by Lukas Meier ([http://people.ee.duke.edu/~lcarin/lukas-sara-peter.pdf paper]), used in the '''biospear''' package for survival data
* https://cran.r-project.org/web/packages/SGL/index.html, http://royr2.github.io/2014/05/20/SparseGroupLasso.html, http://web.stanford.edu/~hastie/Papers/SGLpaper.pdf

=== Grouped data ===
* [https://www.tandfonline.com/doi/abs/10.1080/02664763.2020.1822304?journalCode=cjas20 Regularized robust estimation in binary regression models]

=== Other Lasso ===
* [https://statisticaloddsandends.wordpress.com/2019/01/14/pclasso-a-new-method-for-sparse-regression/ pcLasso]
* [https://www.biorxiv.org/content/10.1101/630079v1 A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems] Qian et al 2019 and the [https://github.com/junyangq/snpnet snpnet] package
* [https://doi.org/10.1093/biostatistics/kxz034 Adaptive penalization in high-dimensional regression and classification with external covariates using variational Bayes] by Velten & Huber 2019 and the bioconductor package [http://www.bioconductor.org/packages/release/bioc/html/graper.html graper]. Differentially penalizes '''feature groups''' defined by the covariates and adapts the relative strength of penalization to the information content of each group. Incorporating side-information on the assay type and spatial or functional annotations could help to improve prediction performance. Furthermore, it could help prioritizing feature groups, such as different assays or gene sets.

== Comparison by plotting ==
If we are running a simulation, we can use the [https://github.com/pbiecek/DALEX DALEX] package to visualize the fitting results from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.

== Prediction ==
[https://amstat.tandfonline.com/doi/full/10.1080/01621459.2020.1762613 Prediction, Estimation, and Attribution] Efron 2020

== Postprediction inference/Inference based on predicted outcomes ==
[https://www.pnas.org/content/117/48/30266 Methods for correcting inference based on outcomes predicted by machine learning] Wang 2020. [https://github.com/leekgroup/postpi postpi] package.

== SHAP/SHapley Additive exPlanation: feature importance for each class ==
<ul>
<li>https://en.wikipedia.org/wiki/Shapley_value
<li>Python https://shap.readthedocs.io/en/latest/index.html
<li>[https://towardsdatascience.com/introduction-to-shap-with-python-d27edc23c454 Introduction to SHAP with Python]. For a given prediction, SHAP values can tell us how much each factor in a model has contributed to the prediction.
<li>[https://towardsdatascience.com/a-novel-approach-to-feature-importance-shapley-additive-explanations-d18af30fc21b A Novel Approach to Feature Importance — Shapley Additive Explanations]
<li>[https://towardsdatascience.com/shap-shapley-additive-explanations-5a2a271ed9c3 SHAP: Shapley Additive Explanations]
<li>R package [https://cran.r-project.org/web/packages/shapr/ shapr]: Prediction Explanation with Dependence-Aware Shapley Values
* The output of the Shapley values produced by explain() is an n_test x (1+p_test) matrix, where "n" is the number of observations and "p" is the dimension of the predictor.
* The Shapley values can be plotted using a barplot for each test sample.
* The '''approach''' parameter can be empirical/gaussian/copula/ctree. See the [https://rdrr.io/cran/shapr/man/ doc]
* Note the package only supports a few prediction models to be used in the '''shapr''' function.
<pre>
$ debug(shapr:::get_supported_models)
$ shapr:::get_supported_models()
Browse[2]> print(DT)
   model_class get_model_specs predict_model
1:     default           FALSE          TRUE
2:         gam            TRUE          TRUE
3:         glm            TRUE          TRUE
4:          lm            TRUE          TRUE
5:      ranger            TRUE          TRUE
6: xgb.Booster            TRUE          TRUE
</pre>
</li>
<li>[https://blog.datascienceheroes.com/how-to-interpret-shap-values-in-r/ A gentle introduction to SHAP values in R] '''xgboost''' package
<li>[https://stackoverflow.com/a/71886457 Create SHAP plots for tidymodels objects]
<li>[https://cran.r-project.org/web/packages/shapper/index.html shapper]: Wrapper of Python Library 'shap'
<li>[https://lorentzen.ch/index.php/2022/12/21/interpret-complex-linear-models-with-shap-within-seconds/ Interpret Complex Linear Models with SHAP within Seconds]
<li>[https://www.r-bloggers.com/2024/06/shap-values-of-additive-models/ SHAP Values of Additive Models]
</ul>


== Simulate survival data ==
Note that status = 1 means the event (e.g. death) happened; Ti <= Ci. That is, the status variable used in R/Splus is the death indicator.

* http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4
: <syntaxhighlight lang='rsplus'>
y <- rexp(10)
cen <- runif(10)
status <- ifelse(cen < .7, 1, 0)
</syntaxhighlight>
* [http://www.ms.uky.edu/~mai/Rsurv.pdf#page=10 How much power/accuracy is lost by using the Cox model instead of Weibull model when both models are correct?] <math>h(t|x)=\lambda=e^{3x+1} = h_0(t)e^{\beta x}</math> where <math>h_0(t)=e^1, \beta=3</math>.
: '''Note that''' for the '''exponential''' distribution, a larger rate/<math>\lambda</math> corresponds to a smaller mean. This relation matches the Cox regression, where a larger covariate corresponds to a smaller survival time. So the coefficient 3 in myrates in the example below has the same sign as the coefficient (2.457466 for censored data) in the output of the Cox model fitting.
: <syntaxhighlight lang='rsplus'>
n <- 30
x <- scale(1:n, TRUE, TRUE) # create covariates (standardized)
                            # the original example does not work on large 'n'
myrates <- exp(3*x+1)
set.seed(1234)
y <- rexp(n, rate = myrates) # generates the r.v.
cen <- rexp(n, rate = 0.5 )  #  E(cen)=1/rate
ycen <- pmin(y, cen)
di <- as.numeric(y <= cen)
survreg(Surv(ycen, di)~x, dist="weibull")$coef[2]  # -3.080125
coxph(Surv(ycen, di)~x)$coef  # 2.457466

# no censor
survreg(Surv(y,rep(1, n))~x,dist="weibull")$coef[2]  # -3.137603
survreg(Surv(y,rep(1, n))~x,dist="exponential")$coef[2]  # -3.143095
coxph(Surv(y,rep(1, n))~x)$coef  # 2.717794

# See the pdf note for the rest of code
</syntaxhighlight>
* Intercept in survreg for the exponential distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=25.
: <math>
\begin{align}
\lambda = \exp(-intercept)
\end{align}
</math>
: <syntaxhighlight lang='rsplus'>
> futime <- rexp(1000, 5)
> survreg(Surv(futime,rep(1,1000))~1,dist="exponential")$coef
(Intercept)
  -1.618263
> exp(1.618263)
[1] 5.044321
</syntaxhighlight>
* Intercept and scale in survreg for a Weibull distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=28.
: <math>
\begin{align}
\gamma &= 1/scale \\
\alpha &= \exp(-(Intercept)*\gamma)
\end{align}
</math>
: <syntaxhighlight lang='rsplus'>
> survreg(Surv(futime,rep(1,1000))~1,dist="weibull")
Call:
survreg(formula = Surv(futime, rep(1, 1000)) ~ 1, dist = "weibull")

Coefficients:
(Intercept)
  -1.639469

Scale= 1.048049

Loglik(model)= 620.1   Loglik(intercept only)= 620.1
n= 1000
</syntaxhighlight>
* rsurv() function from the [https://cran.r-project.org/web/packages/ipred/index.html ipred] package
* [http://people.stat.sfu.ca/~raltman/stat402/402L32.pdf#page=4 Use Weibull distribution to model survival data]. We assume the shape is constant across subjects. We then allow the scale to vary across subjects. For subject <math>i</math> with covariate <math>X_i</math>, <math>\log(scale_i)</math> = <math>\beta ' X_i</math>. Note that if we want the sign of <math>\beta</math> to be consistent with the Cox model, we want to use <math>\log(scale_i)</math> = <math>-\beta ' X_i</math> instead.
* http://sas-and-r.blogspot.com/2010/03/example-730-simulate-censored-survival.html. Assuming shape=1 in the Weibull distribution, the [[#Weibull_and_Exponential_model_to_Cox_model|hazard function]] can be expressed as a proportional hazards model
: <math>
h(t|x) = 1/scale = \frac{1}{\lambda/e^{\beta 'x}} = \frac{e^{\beta ' x}}{\lambda} = h_0(t) \exp(\beta' x)
</math>
: <syntaxhighlight lang='rsplus'>
n = 10000
beta1 = 2; beta2 = -1
lambdaT = .002 # baseline hazard
lambdaC = .004 # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))
# No censoring
event2 <- rep(1, length(T))
coxph(Surv(T, event2)~ x1 + x2)
#       coef exp(coef) se(coef)     z      p
# x1  1.9982    7.3761   0.0188 106.1 <2e-16
# x2 -1.0020    0.3671   0.0127 -79.1 <2e-16
#
# Likelihood ratio test=15556  on 2 df, p=0
# n= 10000, number of events= 10000

# Censoring
C = rweibull(n, shape=1, scale=lambdaC)  #censoring time
time = pmin(T,C)  #observed time is min of censored and true
event = time==T   # set to 1 if event is observed
coxph(Surv(time, event)~ x1 + x2)
#       coef exp(coef) se(coef)     z      p
# x1  2.0104    7.4662   0.0225  89.3 <2e-16
# x2 -0.9921    0.3708   0.0155 -63.9 <2e-16
#
# Likelihood ratio test=11321  on 2 df, p=0
# n= 10000, number of events= 6002
</syntaxhighlight>
* https://stats.stackexchange.com/a/135129 (Bender's inverse probability method). Let <math>h_0(t)=\lambda \rho t^{\rho - 1} </math> where shape <math>\rho>0</math> and scale <math>\lambda>0</math>. Following the inverse probability method, a realisation of <math>T \sim S(\cdot|x)</math> is obtained by computing <math> t = \left( - \frac{\log(v)}{\lambda \exp(x' \beta)} \right) ^ {1/\rho} </math> with <math>v</math> a uniform variate on (0,1). Using results on transformations of random variables, one may notice that <math>T</math> has a conditional Weibull distribution (given <math>x</math>) with shape <math>\rho</math> and scale <math>\lambda\exp(x'\beta)</math>.
: <syntaxhighlight lang='rsplus'>
# N = sample size
# lambda = scale parameter in h0()
# rho = shape parameter in h0()
# beta = fixed effect parameter
# rateC = rate parameter of the exponential distribution of censoring variable C

simulWeib <- function(N, lambda, rho, beta, rateC)
{
  # covariate --> N Bernoulli trials
  x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))

  # Weibull latent event times
  v <- runif(n=N)
  Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)

  # censoring times
  C <- rexp(n=N, rate=rateC)

  # follow-up times and event indicators
  time <- pmin(Tlat, C)
  status <- as.numeric(Tlat <= C)

  # data set
  data.frame(id=1:N,
             time=time,
             status=status,
             x=x)
}
# Test
set.seed(1234)
betaHat <- rate <- rep(NA, 1e3)
for(k in 1:1e3)
{
  dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
  fit <- coxph(Surv(time, status) ~ x, data=dat)
  rate[k] <- mean(dat$status == 0)
  betaHat[k] <- fit$coef
}
mean(rate)
# [1] 0.12287
mean(betaHat)
# [1] -0.6085473
</syntaxhighlight>
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.2059 Generating survival times to simulate Cox proportional hazards models] Bender et al 2005
** [https://cran.r-project.org/web/packages/survsim/index.html survsim] package and the [https://www.jstatsoft.org/article/view/v059i02 paper] on JSS. See [http://justanotherdatablog.blogspot.com/2015/08/survival-analysis-1.html this post].
** [https://cran.rstudio.com/web/packages/simsurv/index.html simsurv] package (new, 2 vignettes).
** [https://stats.stackexchange.com/questions/65005/get-a-desired-percentage-of-censored-observations-in-a-simulation-of-cox-ph-mode Get a desired percentage of censored observations in a simulation of Cox PH Model]. The answer is based on Bender et al 2005. [http://onlinelibrary.wiley.com/doi/10.1002/sim.2059/epdf Generating survival times to simulate Cox proportional hazards models]. Statistics in Medicine 24: 1713–1723. The censoring time is fixed and the distribution of the censoring indicator follows the binomial. In fact, when we simulate survival data with a predefined censoring rate, we can pretend the survival time is already censored and only care about the censoring/status variable to make sure the censoring rate is controlled.
** (Search github) [https://github.com/faithghlee/SurvivalDataSimulation Using inverse CDF] <math> \lambda = \exp(\beta' x), \; S(t)= \exp(-\lambda t) = \exp(-t e^{\beta' x}) \sim Unif(0,1) </math>
** [https://arxiv.org/pdf/1611.03063.pdf#page=17 Prediction Accuracy Measures for a Nonlinear Model and for Right-Censored Time-to-Event Data] Li and Wang
* [https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=8 Regularization paths for Cox's proportional hazards model via coordinate descent. J Stat Software] Simon et al 2011. [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2656-1#Sec8 Gsslasso Cox]: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information by Tang 2019.

== Predefined censoring rates ==
[http://onlinelibrary.wiley.com/doi/10.1002/sim.7178/full Simulating survival data with predefined censoring rates for proportional hazards models]

== Cross validation ==
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.4780122407/epdf Cross validation in survival analysis] by Verweij & van Houwelingen, Stat in Medicine 1993.
* Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data. Simon et al, Brief Bioinform. 2011

== Competing risk ==
* https://www.mailman.columbia.edu/research/population-health-methods/competing-risk-analysis
* Page 61 of Klein and Moeschberger "Survival Analysis"

== [https://en.wikipedia.org/wiki/Survival_rate Survival rate] terminology ==
* [https://www.cancer.gov/publications/dictionaries/cancer-terms?cdrid=44023 Disease-free survival (DFS)]: the period after curative treatment ['''disease eliminated'''] when no disease can be detected
* [https://en.wikipedia.org/wiki/Progression-free_survival Progression-free survival (PFS), overall survival (OS)]. PFS is the length of time during and after the treatment of a disease, such as cancer, that a patient lives with the '''disease but it does not get worse'''. See a use at the [https://www.cancer.gov/about-cancer/treatment/clinical-trials/nci-supported/nci-match NCI-MATCH] trial.
* Time to progression: The length of time from the date of diagnosis or the start of treatment for a disease until the disease starts to get worse or spread to other parts of the body. In a clinical trial, measuring the time to progression is one way to see how well a new treatment works. Also called TTP.
* Metastasis-free survival (MFS) time: the period until metastasis is detected
* [http://www.cancer.net/navigating-cancer-care/cancer-basics/understanding-statistics-used-guide-prognosis-and-evaluate-treatment Understanding Statistics Used to Guide Prognosis and Evaluate Treatment] (DFS & PFS rate)

== Books ==
* [http://www.springer.com/us/book/9781441966452 Survival Analysis, A Self-Learning Text] by Kleinbaum, David G., Klein, Mitchel
* [http://www.springer.com/us/book/9783319312439 Applied Survival Analysis Using R] by Moore, Dirk F.
* [http://www.springer.com/us/book/9783319194240 Regression Modeling Strategies] by Harrell, Frank
* [http://www.springer.com/us/book/9781461413523 Regression Methods in Biostatistics] by Vittinghoff, E., Glidden, D.V., Shiboski, S.C., McCulloch, C.E.
* https://tbrieder.org/epidata/course_reading/e_tableman.pdf
* [https://www.wiley.com/en-us/Survival+Analysis%3A+Models+and+Applications-p-9780470977156 Survival Analysis: Models and Applications] by Xian Liu

== HER2-positive breast cancer ==
* https://www.mayoclinic.org/breast-cancer/expert-answers/FAQ-20058066
* https://en.wikipedia.org/wiki/Trastuzumab (antibody, injection into a vein or under the skin)

= [https://en.wikipedia.org/wiki/Proportional_hazards_model Cox proportional hazards model] and the partial log-likelihood function =

Let ''Y''<sub>''i''</sub> denote the observed time (either censoring time or event time) for subject ''i'', and let ''C''<sub>''i''</sub> be the indicator that the time corresponds to an event (i.e. if ''C''<sub>''i''</sub>&nbsp;=&nbsp;1 the event occurred and if ''C''<sub>''i''</sub>&nbsp;=&nbsp;0 the time is a censoring time). The hazard function for the Cox proportional hazards model has the form

<math>
\lambda(t|X) = \lambda_0(t)\exp(\beta_1X_1 + \cdots + \beta_pX_p) = \lambda_0(t)\exp(X \beta^\prime).
</math>

This expression gives the hazard at time ''t'' for an individual with covariate vector (explanatory variables) ''X''. Based on this hazard function, a '''partial likelihood''' (defined on the hazard function) can be constructed from the dataset as

<math>
L(\beta) = \prod\limits_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j},
</math>

where ''θ''<sub>''j''</sub>&nbsp;=&nbsp;exp(''X''<sub>''j'' </sub>''β''<sup>''′''</sup>) and ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> are the covariate vectors for the ''n'' independently sampled individuals in the dataset (treated here as column vectors). [http://psfaculty.ucdavis.edu/bsjjones/coxslides.pdf This pdf] or [http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=12 this note] give a toy example.

The corresponding log partial likelihood is

<math>
\ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right).
</math>

This function can be maximized over ''β'' to produce maximum partial likelihood estimates of the model parameters.

The partial [[Score (statistics)|score function]] is
<math>
\ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right),
</math>

and the [[Hessian matrix]] of the partial log likelihood is

<math>
\ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right).
</math>

Using this score function and Hessian matrix, the partial likelihood can be maximized using the [[Newton's method|Newton-Raphson]] algorithm. The inverse of the Hessian matrix, evaluated at the estimate of ''β'', can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate [[standard error]]s for the regression coefficients.

If X is age, then the coefficient is likely >0. If X is some treatment, then the coefficient is likely <0.

== Compare the partial likelihood to the full likelihood ==
http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=10
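A quick numerical check of the partial likelihood: at <math>\beta = 0</math> every <math>\theta_j = 1</math>, so each event contributes <math>-\log(\text{risk set size})</math>, which is exactly what coxph() reports as its null log-likelihood (toy data chosen for illustration):
<syntaxhighlight lang='rsplus'>
library(survival)
time   <- c(1, 2, 3)        # three events, no censoring, no ties
status <- c(1, 1, 1)
x      <- c(0, 1, 2)
fit <- coxph(Surv(time, status) ~ x)
fit$loglik[1]                 # log partial likelihood at beta = 0
-(log(3) + log(2) + log(1))   # risk sets have sizes 3, 2, 1: identical value
</syntaxhighlight>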
== z-column (Wald statistic) from R's coxph() ==
* https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Cox-Regression.pdf#page=6 The ratio of each regression coefficient to its standard error, a Wald statistic which is asymptotically standard normal under the hypothesis that the corresponding β is 0.
* http://dni-institute.in/blogs/cox-regression-interpret-result-and-predict/

== How exactly can the Cox model ignore exact times? ==
[https://stats.stackexchange.com/q/94025 The Cox model does not depend on the times itself, instead it only needs an ordering of the events].

<syntaxhighlight lang='rsplus'>
library(survival)
survfit(Surv(time, status) ~ x, data = aml)
fit <- coxph(Surv(time, status) ~ x, data = aml)
coef(fit) # 0.9155326
min(diff(sort(unique(aml$time)))) # 1

# Shift the survival times of some obs but keep the same order
# make sure we choose obs (n=20 does not work but n=21 works) with twins
rbind(order(aml$time), sort(aml$time), aml$time[order(aml$time)])
#      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# [1,]   12   13   14   15    1   16    2    3   17     4     5    18    19     6    20     7
# [2,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
# [3,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
#      [,17] [,18] [,19] [,20] [,21] [,22] [,23]
# [1,]    21     8    22     9    23    10    11
# [2,]    33    34    43    45    45    48   161
# [3,]    33    34    43    45    45    48   161

aml$time2 <- aml$time
aml$time2[order(aml$time)[1:21]] <- aml$time[order(aml$time)[1:21]] - .9
fit2 <- coxph(Surv(time2, status) ~ x, data = aml); fit2
coef(fit2) # 0.9155326
coef(fit) == coef(fit2) # TRUE

aml$time3 <- aml$time
aml$time3[order(aml$time)[1:20]] <- aml$time[order(aml$time)[1:20]] - .9
fit3 <- coxph(Surv(time3, status) ~ x, data = aml); fit3
coef(fit3) # 0.8891567
coef(fit) == coef(fit3) # FALSE
</syntaxhighlight>

== Partial likelihood when there are ties; hypothesis testing: Likelihood Ratio Test, Wald Test & Score Test ==
http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=29

In R's coxph(): Nearly all Cox regression programs use the ''Breslow'' method by default, but not this one. The '' '''Efron approximation''' '' is used as the default here; it is more accurate when dealing with tied death times, and is as efficient computationally.

http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xaghtmlnode28.html (includes the case when there is a partition of the parameters). The formulas for the 3 tests are also available in Appendix B of the Klein book.

The following code does not compare nested models. But since there is only one coefficient, the results are the same. If there were more than one variable, we could use anova(model1, model2) to run the LRT.
<syntaxhighlight lang='rsplus'>
library(KMsurv)
# No ties. Section 8.2
data(btrial)
str(btrial)
# 'data.frame': 45 obs. of  3 variables:
# $ time : int  19 25 30 34 37 46 47 51 56 57 ...
# $ death: int  1 1 1 1 1 1 1 1 1 1 ...
# $ im   : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(btrial, death == 1)$time)
# death time is unique
coxph(Surv(time, death) ~ im, data = btrial)
#     coef exp(coef) se(coef)    z     p
# im 0.980     2.665    0.435 2.25 0.024
# Likelihood ratio test=4.45  on 1 df, p=0.03
# n= 45, number of events= 24

# Ties, Section 8.3
data(kidney)
str(kidney)
# 'data.frame': 119 obs. of 3 variables:
# $ time : num  1.5 3.5 4.5 4.5 5.5 8.5 8.5 9.5 10.5 11.5 ...
# $ delta: int  1 1 1 1 1 1 1 1 1 1 ...
# $ type : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(kidney, delta == 1)$time)
# 0.5  1.5  2.5  3.5  4.5  5.5  6.5  8.5  9.5 10.5 11.5 15.5 16.5 18.5 23.5 26.5
#   6    1    2    2    2    1    1    2    1    1    1    2    1    1    1    1

# Default: Efron method
coxph(Surv(time, delta) ~ type, data = kidney)
#        coef exp(coef) se(coef)     z    p
# type -0.613     0.542    0.398 -1.54 0.12
# Likelihood ratio test=2.41  on 1 df, p=0.1
# n= 119, number of events= 26
summary(coxph(Surv(time, delta) ~ type, data = kidney))
# n= 119, number of events= 26
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6126    0.5420   0.3979 -1.539    0.124
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type     0.542      1.845    0.2485     1.182
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.41  on 1 df,   p=0.1
# Wald test            = 2.37  on 1 df,   p=0.1
# Score (logrank) test = 2.44  on 1 df,   p=0.1

# Breslow method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "breslow"))
# n= 119, number of events= 26
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6182    0.5389   0.3981 -1.553     0.12
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5389      1.856     0.247     1.176
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.45  on 1 df,   p=0.1
# Wald test            = 2.41  on 1 df,   p=0.1
# Score (logrank) test = 2.49  on 1 df,   p=0.1

# Discrete/exact method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "exact"))
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6294    0.5329   0.4019 -1.566    0.117
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5329      1.877    0.2424     1.171
#
# Rsquare= 0.021   (max possible= 0.795 )
# Likelihood ratio test= 2.49  on 1 df,   p=0.1
# Wald test            = 2.45  on 1 df,   p=0.1
# Score (logrank) test = 2.53  on 1 df,   p=0.1
</syntaxhighlight>


= Imbalanced/unbalanced Classification =
See [[ROC#Unbalanced_classes|ROC]].

= Deep Learning =
* [https://bcourses.berkeley.edu/courses/1453965/wiki CS294-129 Designing, Visualizing and Understanding Deep Neural Networks] from berkeley.
* https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm
* [https://www.r-bloggers.com/deep-learning-from-first-principles-in-python-r-and-octave-part-5/ Deep Learning from first principles in Python, R and Octave – Part 5]

== Tensor Flow (tensorflow package) ==
* https://tensorflow.rstudio.com/
* [https://youtu.be/atiYXm7JZv0 Machine Learning with R and TensorFlow] (Video)
* [https://developers.google.com/machine-learning/crash-course/ Machine Learning Crash Course] with TensorFlow APIs
* [http://www.pnas.org/content/early/2018/03/09/1717139115 Predicting cancer outcomes from histology and genomics using convolutional networks] Pooya Mobadersany et al, PNAS 2018

== Biological applications ==
* [https://academic.oup.com/bioinformatics/article-abstract/33/22/3685/4092933 An introduction to deep learning on biological sequence data: examples and solutions]

== Machine learning resources ==
* [https://www.makeuseof.com/tag/machine-learning-courses/ These Machine Learning Courses Will Prepare a Career Path for You]
* [https://blog.datasciencedojo.com/machine-learning-algorithms/ 101 Machine Learning Algorithms for Data Science with Cheat Sheets]
* [https://supervised-ml-course.netlify.com/ Supervised machine learning case studies in R] - A Free, Interactive Course Using Tidy Tools.

== The Bias-Variance Trade-Off & "DOUBLE DESCENT" in the test error ==
https://twitter.com/daniela_witten/status/1292293102103748609 and an easy-to-read [https://threadreaderapp.com/thread/1292293102103748609.html Thread Reader].
* (Thread #17) The key point is with 20 DF, n=p, and there's exactly ONE least squares fit that has zero training error. And that fit happens to have oodles of wiggles.....
* (Thread #18) but as we increase the DF so that p>n, there are TONS of '''interpolating''' least squares fits. The MINIMUM NORM least squares fit is the "least wiggly" of those zillions of fits. And the "least wiggly" among them is even less wiggly than the fit when p=n !!!
* (Thread #19) "double descent" is happening b/c DF isn't really the right quantity for the x-axis: like, the fact that we are choosing the minimum norm least squares fit actually means that the spline with 36 DF is **less** flexible than the spline with 20 DF.
* (Thread #20) if we had used a ridge penalty when fitting the spline (instead of least squares), then we wouldn't have interpolated the training set, we wouldn't have seen double descent, AND we would have gotten better test error (for the right value of the tuning parameter!)
* (Thread #21) When we use (stochastic) gradient descent to fit a neural net, we are actually picking out the minimum norm solution!! So the spline example is a pretty good analogy for what is happening when we see double descent for neural nets.

== Survival data ==
[https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.8542?campaign=woletoc Deep learning for survival outcomes] Steingrimsson 2020

= Randomization inference =
* Google: randomization inference in r
* [http://www.personal.psu.edu/ljk20/zeros.pdf Randomization Inference for Outcomes with Clumping at Zero], [https://amstat.tandfonline.com/doi/full/10.1080/00031305.2017.1385535#.W09zpdhKg3E The American Statistician] 2018
* [https://jasonkerwin.com/nonparibus/2017/09/25/randomization-inference-vs-bootstrapping-p-values/ Randomization inference vs. bootstrapping for p-values]

== Randomization test ==
[https://www.tandfonline.com/doi/full/10.1080/01621459.2023.2199814 What is a Randomization Test?]

== Myths of randomisation ==
[https://www.growkudos.com/publications/10.1002%25252Fsim.5713/reader Myths of randomisation]

== Unequal probabilities ==
[https://www.r-bloggers.com/2024/08/sampling-without-replacement-with-unequal-probabilities-by-ellis2013nz/ Sampling without replacement with unequal probabilities]

= Model selection criteria =
* [http://r-video-tutorial.blogspot.com/2017/07/assessing-accuracy-of-our-models-r.html Assessing the Accuracy of our models (R Squared, Adjusted R Squared, RMSE, MAE, AIC)]
* [https://forecasting.svetunkov.ru/en/2018/03/22/comparing-additive-and-multiplicative-regressions-using-aic-in-r/ Comparing additive and multiplicative regressions using AIC in R]
* [https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1459316?src=recsys Model Selection and Regression t-Statistics] Derryberry 2019
* Mean Absolute Deviance: a measure of the average absolute difference between the predicted values and the actual values.
* Cf: [https://en.wikipedia.org/wiki/Average_absolute_deviation Mean absolute deviation], [https://en.wikipedia.org/wiki/Median_absolute_deviation Median absolute deviation]. Measures of variability.

== All models are wrong ==
[https://en.wikipedia.org/wiki/All_models_are_wrong All models are wrong], from George Box.

== MSE ==
* [https://stats.stackexchange.com/a/306337 Is MSE decreasing with increasing number of explanatory variables?] Yes.

== Akaike information criterion/AIC ==
* https://en.wikipedia.org/wiki/Akaike_information_criterion
:<math>\mathrm{AIC} \, = \, 2k - 2\ln(\hat L)</math>, where k is the number of estimated parameters in the model.
* Smaller is better (an error criterion)
* Akaike proposed to approximate the expectation of the cross-validated log likelihood <math>E_{test}E_{train} [\log L(x_{test}| \hat{\beta}_{train})]</math> by <math>\log L(x_{train} | \hat{\beta}_{train})-k </math>.
* Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
* AIC can be used to compare two models even if they are not hierarchically nested.
* [https://www.rdocumentation.org/packages/stats/versions/3.6.0/topics/AIC AIC()] from the stats package.
* [https://broom.tidymodels.org/reference/glance.lm.html broom::glance()] also reports it.
* Generally, resampling based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion. [http://scott.fortmann-roe.com/docs/BiasVariance.html Understanding the Bias-Variance Tradeoff] & [http://scott.fortmann-roe.com/docs/MeasuringError.html Accurately Measuring Model Prediction Error].
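The definition is easy to verify against stats::AIC() (mtcars used purely for illustration):
<syntaxhighlight lang='rsplus'>
fit <- lm(mpg ~ wt + hp, data = mtcars)
k <- attr(logLik(fit), "df")            # estimated parameters: 2 slopes + intercept + sigma = 4
-2 * as.numeric(logLik(fit)) + 2 * k    # 2k - 2 ln(Lhat)
AIC(fit)                                # the same value
</syntaxhighlight>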


== BIC ==
:<math>\mathrm{BIC} \, = \, \ln(n) \cdot k - 2\ln(\hat L)</math>, where k is the number of estimated parameters in the model.

== Overfitting ==
* [https://stats.stackexchange.com/questions/81576/how-to-judge-if-a-supervised-machine-learning-model-is-overfitting-or-not How to judge if a supervised machine learning model is overfitting or not?]
* [https://win-vector.com/2021/01/04/the-nature-of-overfitting/ The Nature of Overfitting], [https://win-vector.com/2021/01/07/smoothing-isnt-always-safe/ Smoothing isn’t Always Safe]

== AIC vs AUC ==
[https://stats.stackexchange.com/a/51278 What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?]

Roughly speaking:
* AIC is telling you how good your model fits for a specific mis-classification cost.
* AUC is telling you how good your model would work, on average, across all mis-classification costs.

'''Frank Harrell''': AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized R<sup>2</sup> and AIC.

== Variable selection and model estimation ==
[https://stats.stackexchange.com/a/138475 Proper variable selection: Use only training data or full data?]
* Use only the training observations to perform all aspects of model-fitting, including variable selection.
* Make use of the full data set in order to obtain more accurate coefficient estimates (this statement is arguable).
== Hazard (function) and survival function ==
A hazard is the rate at which events happen, so that the probability of an event happening in a short time interval is the length of the interval multiplied by the hazard:

<math>
h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T < t+\Delta t|T \geq t)}{\Delta t} = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln[S(t)].
</math>

Therefore

<math>
H(x) = \int_0^x h(u) d(u) = -\ln[S(x)],
</math>

or

<math>
S(x) = e^{-H(x)}.
</math>

Hazards may vary with time, while the assumption in proportional hazards models for survival is that the ratio of the hazards for two individuals is a constant proportion over time.

Examples:
* If h(t)=c, S(t) is exponential: f(t) = c exp(-ct), and the mean is 1/c.
* If <math>\log h(t) = c + \rho t</math>, S(t) is the Gompertz distribution.
* If <math>\log h(t)=c + \rho \log (t)</math>, S(t) is the Weibull distribution.
* For Cox regression, the [http://www.math.ucsd.edu/~rxu/math284/slect6.pdf survival function can be shown] to be <math>S(t|X) = S_0(t) ^ {\exp(X\beta)}</math>:
: <math>
\begin{align}
S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) \exp(X\beta) du} \\
  &= e^{-\int_0^t h_0(u) du \cdot \exp(X \beta)} \\
  &= S_0(t)^{\exp(X \beta)}.
\end{align}
</math>
Alternatively,
: <math>
\begin{align}
S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) \exp(X\beta) du} \\
  &= e^{-H_0(t) \cdot \exp(X \beta)},
\end{align}
</math>
where the cumulative baseline hazard at time t, <math>H_0(t)</math>, is commonly estimated through the non-parametric Breslow estimator.
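A quick numerical check of <math>S(t) = e^{-H(t)}</math> using the Weibull distribution (shape and scale chosen arbitrarily):
<syntaxhighlight lang='rsplus'>
t <- seq(0.5, 3, by = 0.5)
# Weibull with shape = 2, scale = 1: S(t) = exp(-t^2), so H(t) = t^2
H <- -pweibull(t, shape = 2, scale = 1, lower.tail = FALSE, log.p = TRUE)  # -log S(t)
all.equal(H, t^2)                                           # TRUE
all.equal(pweibull(t, 2, 1, lower.tail = FALSE), exp(-H))   # S(t) = exp(-H(t))
</syntaxhighlight>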


=== How many events are required to fit the Cox regression reliably? ===
R packages:
If we have only 1 covariate and the covariate is continuous, we need at least 2 events (one for the baseline hazard and one for beta).  
* [https://cran.r-project.org/web/packages/rsample/index.html rsample] (released July 2017). An [https://leekgroup.github.io/postpi/doc/vignettes.html example] from the postpi package.
* [https://cran.r-project.org/web/packages/CrossValidate/index.html CrossValidate] (released July 2017)
* [https://github.com/thierrymoudiki/crossval crossval] (github, new home at https://techtonique.r-universe.dev/),
** [https://thierrymoudiki.github.io/blog/2020/05/08/r/misc/crossval-custom-errors Custom errors for cross-validation using crossval::crossval_ml]
** [https://thierrymoudiki.github.io/blog/2021/07/23/r/crossvalidation-r-universe crossvalidation on R-universe, plus a classification example]


If the covariate is discrete, we need at least one event from (each of) two groups in order to fit the Cox regression reliably. For example, if status=(0,0,0,1,0,1) and x=(0,0,1,1,2,2) works fine.  
== Bias–variance tradeoff ==
<syntaxhighlight lang='rsplus'>
library(survival)
head(ovarian)
#   futime fustat     age resid.ds rx ecog.ps
# 1     59      1 72.3315        2  1       1
# 2    115      1 74.4932        2  1       1
# 3    156      1 66.4658        2  1       2
# 4    421      0 53.3644        2  2       1
# 5    431      1 50.3397        2  1       1
# 6    448      0 56.4301        1  1       2

ova <- ovarian # n=26
ova$time <- ova$futime
ova$status <- 0
ova$status[1:4] <- 1
coxph(Surv(time, status) ~ rx, data = ova) # OK
summary(survfit(Surv(time, status) ~ rx, data = ova))
#                 rx=1
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2
#     time   n.risk  n.event survival  std.err lower 95% CI upper 95% CI
# 421.0000  10.0000   1.0000   0.9000   0.0949       0.7320       1.0000

# Suspicious Cox regression result due to 0 events in one group
ova$status <- 0
ova$status[1:3] <- 1
coxph(Surv(time, status) ~ rx, data = ova)
#         coef exp(coef)  se(coef) z p
# rx -2.13e+01  5.67e-10  2.32e+04 0 1
#
# Likelihood ratio test=4.41  on 1 df, p=0.04
# n= 26, number of events= 3
# Warning message:
# In fitter(X, Y, strats, offset, init, control, weights = weights,  :
#   Loglik converged before variable  1 ; beta may be infinite.

summary(survfit(Surv(time, status) ~ rx, data = ova))
#                 rx=1
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
</syntaxhighlight>


== Extract p-values ==
<syntaxhighlight lang='rsplus'>
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# method 1: Wald test computed from the coefficient and its standard error
beta <- coef(fit)
se <- sqrt(diag(vcov(fit)))
1 - pchisq((beta/se)^2, 1)

# method 2: https://www.biostars.org/p/65315/
coef(summary(fit))[, "Pr(>|z|)"]
</syntaxhighlight>


== Expectation of life & expected future lifetime ==
* The average lifetime is the same as the area under the survival curve:
: <math>
\begin{align}
\mu &= \int_0^\infty t f(t) dt \\
  &= \int_0^\infty S(t) dt
\end{align}
</math>
: by integrating by parts, making use of the fact that -f(t) is the derivative of S(t), which has limits S(0)=1 and <math>S(\infty)=0</math>. [https://stats.stackexchange.com/questions/186497/calculating-life-time-expectancy The average lifetime may not be bounded] with censored data, when censored observations last beyond the last recorded death.
* The [https://en.wikipedia.org/wiki/Survival_analysis#Quantities_derived_from_the_survival_distribution expected future lifetime at a given time <math>t_0</math>] is
:<math>\frac{1}{S(t_0)} \int_0^{\infty} t\,f(t_0+t)\,dt = \frac{1}{S(t_0)} \int_{t_0}^{\infty} S(t)\,dt.</math>
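In R, the area under the Kaplan-Meier curve up to the largest observed time (a restricted mean) can be requested when printing a survfit object; a small sketch with the aml data from the survival package:
<syntaxhighlight lang='rsplus'>
library(survival)

km <- survfit(Surv(time, status) ~ 1, data = aml)
print(km, print.rmean = TRUE)  # restricted mean = area under the KM curve
</syntaxhighlight>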


== Hazard Ratio vs Relative Risk ==
# https://en.wikipedia.org/wiki/Hazard_ratio
# '''Hazard''' represents the '''instantaneous event rate''', which means the probability that an individual would experience an event (e.g. death/relapse) at a particular given point in time after the intervention, assuming that this individual has survived to that particular point of time without experiencing any event.
# '''Hazard ratio''' is a measure of '''an effect''' of '''an intervention''' on '''an outcome''' of interest over time.
# Hazard ratio = hazard in the intervention group / hazard in the control group.
# A hazard ratio is often reported as a "reduction in risk of death or progression" – this '''risk reduction''' is calculated as '''1 minus the hazard ratio (exp(beta))''', e.g., an HR of 0.84 is equal to a 16% reduction in risk. See [http://www.time4epi.com/docs/default-source/default-document-library/insight07_understandinghazardratios.pdf?sfvrsn=2 www.time4epi.com] and [http://stats.stackexchange.com/questions/70741/how-to-interpret-a-hazard-ratio-from-a-continuous-variable-unit-of-difference stackexchange.com].
# Hazard ratio and its confidence interval can be obtained in R by using the '''summary()''' method; e.g. '''fit <- coxph(Surv(time, status) ~ x); summary(fit)$conf.int; confint(fit)'''
# The coefficient beta represents the expected change in '''log hazard''' if X changes by one unit and all other variables are held constant in Cox models. See [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.

Another [https://socialsciences.mcmaster.ca/jfox/Books/Companion-1E/appendix-cox-regression.pdf example] (John Fox, Cox Proportional-Hazards Regression for Survival Data) assumes Y ~ age + prio + others.
* If exp(beta_age) = 0.944, an additional year of age '''reduces the hazard by a factor''' of .944 on average, or (1-.944)*100 = 5.6 '''percent'''.
* If exp(beta_prio) = 1.096, each prior conviction '''increases the hazard by a factor''' of 1.096, or 9.6 '''percent'''.

[https://www.quora.com/How-do-you-explain-the-difference-between-hazard-ratio-and-relative-risk-to-a-layman How do you explain the difference between hazard ratio and '''relative risk''' to a layman?] from Quora.

[https://www.stat-d.si/mz/mz13.1/p4.pdf Odds Ratio, Hazard Ratio and Relative Risk] by Janez Stare

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by <math>e^\beta</math>, where <math>\beta</math> is the estimate of treatment effect derived from the regression model. See [https://en.wikipedia.org/wiki/Hazard_ratio#Definition_and_derivation here].

[http://stats.stackexchange.com/questions/26408/what-is-the-difference-between-a-hazard-ratio-and-the-ecoef-of-a-cox-equation?rq=1 Compute hazard ratios from coxph()] in R (hint: exp(beta)).


'''Prognostic index''' is defined on http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=2.

[http://www.sthda.com/english/wiki/cox-proportional-hazards-model#basics-of-the-cox-proportional-hazards-model Basics of the Cox proportional hazards model]. Good prognostic factor (b<0 or HR<1) and bad prognostic factor (b>0 or HR>1).

Variable selection: variables were retained in the prediction models if they had a hazard ratio of <0.85 or >1.15 (for binary variables) and were statistically significant at the 0.01 level. See [http://www.bmj.com/content/357/bmj.j2497 Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study].


=== Hazard Ratio and death probability ===
https://en.wikipedia.org/wiki/Hazard_ratio#The_hazard_ratio_and_survival

Suppose ''S''<sub>0</sub>(t)=.2 (20% survived at time t) and the hazard ratio (hr) is 2 (one group has twice the chance of dying as a comparison group); then (a Cox model is assumed)
# ''S''<sub>1</sub>(t) = ''S''<sub>0</sub>(t)<sup>hr</sup> = .2<sup>2</sup> = .04 (4% survived at t)
# The corresponding death probabilities are 0.8 and 0.96.
# If a subject is exposed to twice the risk of a reference subject at every age, then the probability that the subject will be alive at any given age is the square of the probability that the reference subject (covariates = 0) would be alive at the same age. See [http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=10 p10 of this lecture notes].
# exp(x*beta) is the relative risk associated with covariate value x.

=== Hazard Ratio Forest Plot ===
See a diagram at https://i.stack.imgur.com/vh1sZ.png

The forest plot quickly summarizes the hazard ratio data across multiple variables – if the line crosses the 1.0 value, the hazard ratio is not significant and there is no clear advantage for either arm.

[https://www.datacamp.com/community/tutorials/survival-analysis-R#fifth Hazard ratio forest plot: ggforest() from survminer]
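A minimal sketch (the lung data and the chosen covariates are arbitrary):
<syntaxhighlight lang='rsplus'>
library(survival)
library(survminer)

fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)
ggforest(fit, data = lung)  # forest plot of hazard ratios with 95% CIs
</syntaxhighlight>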


=== Restricted mean survival time ===
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-13-152 Restricted mean survival time: an alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome] Royston 2013
* [https://onbiostatistics.blogspot.com/2019/04/the-use-of-restricted-mean-survival.html The Use of Restricted Mean Survival Time (RMST) Method When Proportional Hazards Assumption is in Doubt]
** To estimate a treatment effect for time-to-event data, the hazard ratio (HR) is commonly used.
** The HR is often assumed to be constant over time (i.e., the proportional hazards assumption).
** Recently, there has been some doubt about this assumption.
** If the PH assumption does not hold, the interpretation of the HR can be difficult.
* RMST is defined as the area under the survival curve up to t*, which should be pre-specified for a randomized trial. Uno 2014
* [https://cran.r-project.org/web/packages/survRM2/vignettes/survRM2-vignette3-2.html survRM2] package
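A sketch of a two-arm RMST comparison with survRM2; the truncation time tau = 10 and the packaged example data follow the package vignette:
<syntaxhighlight lang='rsplus'>
library(survRM2)

D <- rmst2.sample.data()  # example data shipped with the package
obj <- rmst2(time = D$time, status = D$status, arm = D$arm, tau = 10)
obj        # RMST per arm plus difference/ratio with confidence intervals
plot(obj)  # survival curves with the restricted areas shaded
</syntaxhighlight>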


== Piece-wise constant baseline hazard model, Poisson model and Breslow estimate ==
* https://en.wikipedia.org/wiki/Proportional_hazards_model#Relationship_to_Poisson_models
* http://data.princeton.edu/wws509/notes/c7s4.html
* It has been implemented in the biospear package ([https://github.com/cran/biospear/blob/master/R/poissonize.R poissonize.R]) with the 'grplasso' package for the group-lasso method. ''We implemented a Poisson model over two-month intervals, corresponding to a piecewise constant hazard model which approximates rather well the Breslow estimator in the Cox model''.
* http://r.789695.n4.nabble.com/exponential-proportional-hazard-model-td805536.html
* https://www.demogr.mpg.de/papers/technicalreports/tr-2010-003.pdf
* [https://stats.stackexchange.com/q/8117 Does Cox Regression have an underlying Poisson distribution?]
** [https://stats.stackexchange.com/questions/115479/calculate-incidence-rates-using-poisson-model-relation-to-hazard-ratio-from-cox/116083#116083 Calculate incidence rates using poisson model: relation to hazard ratio from Cox PH model] R code verification is included.
* https://rdrr.io/cran/JM/man/piecewiseExp.ph.html
* https://rdrr.io/cran/pch/man/pchreg.html
* [https://statmd.wordpress.com/2012/10/05/survival-analysis-via-hazard-based-modeling-and-generalized-linear-models/ Survival Analysis via Hazard Based Modeling and Generalized Linear Models]
* https://www.rdocumentation.org/packages/mgcv/versions/1.8-23/topics/cox.pht
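The idea can be sketched with survival::survSplit(): cut each subject's follow-up into intervals and fit a Poisson GLM with a log time-at-risk offset, so that the interval indicators play the role of the piecewise-constant baseline hazard (the cut points 90 and 180 are arbitrary):
<syntaxhighlight lang='rsplus'>
library(survival)

# split each subject's follow-up at 90 and 180 days
vet2 <- survSplit(Surv(time, status) ~ ., data = veteran,
                  cut = c(90, 180), episode = "tgroup", id = "id")

# Poisson working model: constant hazard within each tgroup interval
fit <- glm(status ~ factor(tgroup) + karno + offset(log(time - tstart)),
           family = poisson, data = vet2)
summary(fit)  # exp(coef) for karno approximates the Cox hazard ratio
</syntaxhighlight>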


== Estimate baseline hazard <math>h_0(t)</math>, Breslow cumulative baseline hazard <math>H_0(t)</math>, baseline survival function <math>S_0(t)</math> and the survival function <math>S(t)</math> ==
Google: how to estimate baseline hazard rate
* survfit.object has print(), plot(), lines(), and points() methods. It returns a list with components
** n
** time
** n.risk
** n.event
** n.censor
** surv [S_0(t)]
** cumhaz [same as -log(surv)]
** upper
** lower
** n.all
* Terry Therneau: [http://r.789695.n4.nabble.com/Is-the-output-of-survfit-coxph-survival-or-baseline-survival-td3861919.html The ''baseline survival'', which is the survival for a hypothetical subject with all covariates=0, may be useful mathematical shorthand when writing a book but I cannot think of a single case where the resulting curve would be of any practical interest in medical data.]
* http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=4 '''Breslow''' Estimator for the '''cumulative''' baseline hazard at a time t and the '''Kalbfleisch/Prentice''' Estimator
* When there are no covariates, the Breslow estimate reduces to the Fleming-Harrington (Nelson-Aalen) estimate, and K/P reduces to KM.
* [http://stats.stackexchange.com/questions/68737/how-to-estimate-baseline-hazard-function-in-cox-model-with-r stackexchange.com] and [https://stats.stackexchange.com/questions/36015/prediction-in-cox-regression/36077#36077 '''cumulative''' and non-cumulative baseline hazard]
* [http://grokbase.com/t/r/r-help/012p93znnh/r-newbie-cox-baseline-hazard (newbie) Cox Baseline Hazard] ''There are two methods of calculating the baseline survival; the default one gives the baseline hazard estimator you want. It is attributed to Aalen, Breslow, or Peto (see the next item).'' An example: https://stats.idre.ucla.edu/r/examples/asa/r-applied-survival-analysis-ch-2/.
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/survfit.coxph survfit.coxph](formula, newdata, type, ...)
** newdata: '''Default is the mean of the covariates used in the coxph fit'''
** type = "aalen", "efron", or "kalbfleisch-prentice". The default is to match the computation used in the Cox model: the Nelson-Aalen-Breslow estimate for ties='breslow', the Efron estimate for ties='efron' and the Kalbfleisch-Prentice estimate for a discrete time model ties='exact'. Variance estimates are the Aalen-Link-Tsiatis, Efron, and Greenwood. The default will be the Efron estimate for ties='efron' and the '''Aalen estimate''' otherwise.
* [http://grokbase.com/t/r/r-help/04a5ydyst0/r-nelson-aalen-estimator-in-r Nelson-Aalen estimator in R]. The easiest way to get the Nelson-Aalen estimator is
<syntaxhighlight lang='rsplus'>
basehaz(coxph(Surv(time,status)~1,data=aml))
</syntaxhighlight>
because the (Breslow) hazard estimator for a Cox model reduces to the Nelson-Aalen estimator when there are no covariates. You can also compute it from information returned by survfit().
<syntaxhighlight lang='rsplus'>
fit <- survfit(Surv(time, status) ~ 1, data = aml)
cumsum(fit$n.event/fit$n.risk) # the Nelson-Aalen estimator for the times given by fit$times
-log(fit$surv)  # cumulative hazard
</syntaxhighlight>


=== Manually compute ===
The '''Breslow estimator of the baseline cumulative hazard rate''' reduces to the '''Nelson-Aalen''' estimator <math>\sum_{t_i \le t} \frac{d_i}{Y_i}</math> (<math>Y_i</math> is the number at risk at time <math>t_i</math>) when there are no covariates present; see p283 of Klein 2003.
: <math>
\begin{align}
\hat{H}_0(t) &= \sum_{t_i \le t} \frac{d_i}{W(t_i;b)}, \\
W(t_i;b) &= \sum_{j \in R(t_i)} \exp(b' z_j)
\end{align}
</math>
where <math> t_1 < t_2 < \cdots < t_D</math> denotes the distinct death times and <math>d_i</math> is the number of deaths at time <math>t_i</math>. The estimator of the baseline survival function <math>S_0(t) = \exp [-H_0(t)]</math> is given by <math>\hat{S}_0(t) = \exp [-\hat{H}_0(t)]</math>. Below we use the formula to compute the cumulative hazard (and survival function) and compare them with the result obtained using R's built-in functions. The following code is a modification of the snippet from the post [https://stats.stackexchange.com/questions/46532/cox-baseline-hazard Breslow cumulative hazard and basehaz()].
<syntaxhighlight lang='rsplus'>
bhaz <- function(beta, time, status, x) {
  # time can be duplicated
  # x (covariate) must be continuous
  data <- data.frame(time, status, x)
  data <- data[order(data$time), ]
  dt   <- unique(data$time)
  k    <- length(dt)
  risk <- exp(data.matrix(data[, -c(1:2)]) %*% beta)
  h    <- rep(0, k)

  for(i in 1:k) {
    h[i] <- sum(data$status[data$time == dt[i]]) / sum(risk[data$time >= dt[i]])
  }

  return(data.frame(h, dt))
}

# Example 1 'ovarian' which has unique survival times
all(table(ovarian$futime) == 1) # TRUE

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
# 1. compute the cumulative baseline hazard
# 1.1 manually using the Breslow estimator at the observed times
h0 <- bhaz(fit$coef, ovarian$futime, ovarian$fustat, ovarian$age)
H0 <- cumsum(h0$h)
# 1.2 use basehaz (always computed at the observed times)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
cbind(H0, h0$dt, H0.bh)
range(abs(H0 - H0.bh$hazard)) # [1] 6.352747e-22 5.421011e-20


# 2. compute the estimation of the survival function
# 2.1 manually using the Breslow estimator at t = observed times (one dim, sorted)
#     and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0), exp(coef(fit) * ovarian$age), "^")
dim(S1) # row = times, col = age
# 2.2 How about considering times not at the observed ones (e.g. h0$dt - 10)?
# Let's look at the hazard rate
newtime <- h0$dt - 10
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0), exp(coef(fit) * ovarian$age), "^")
dim(S2) # row = newtime, col = age

# 2.3 use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 2.117244e-17 6.544321e-12
# 2.4 How about considering times not at the observed ones (e.g. h0$dt - 10)?
# Note that we cannot put times larger than the observed ones
S4 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000e+00 6.544321e-12
</syntaxhighlight>

<syntaxhighlight lang='rsplus'>
# Example 2 'kidney' which has duplicated times
fit <- coxph(Surv(time, status) ~ age, data = kidney)
# manually compute the Breslow cumulative baseline hazard
#   at the observed times
h0 <- with(kidney, bhaz(fit$coef, time, status, age))
H0 <- cumsum(h0$h)
# use basehaz (always computed at the observed times)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
head(cbind(H0, h0$dt, H0.bh))
range(abs(H0 - H0.bh$hazard)) # [1] 0.000000000 0.005775414

# manually compute the estimation of the survival function
# at t = observed times (one dim, sorted) and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0), exp(coef(fit) * kidney$age), "^")
dim(S1) # row = times, col = age
# How about considering times not at the observed ones (h0$dt - 1)?
# Let's look at the hazard rate
newtime <- h0$dt - 1
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0), exp(coef(fit) * kidney$age), "^")
dim(S2) # row = newtime, col = age

# use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = kidney$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 0.000000000 0.002783715
# How about considering times not at the observed ones (h0$dt - 1)?
# We cannot put times larger than the observed ones
S4 <- summary(survfit(fit, data.frame(age = kidney$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000000 0.002783715
</syntaxhighlight>
* [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/basehaz.html basehaz()] (an alias for survfit) from [http://stats.stackexchange.com/questions/25317/how-to-calculate-predicted-hazard-rates-from-a-cox-ph-model stackexchange.com] and [http://r.789695.n4.nabble.com/breslow-estimator-for-cumulative-hazard-function-td795277.html here]. basehaz() has a parameter ''centered'' which by default is TRUE. Actually basehaz() gives the '''cumulative hazard H(t)'''. See [http://r.789695.n4.nabble.com/Baseline-survival-estimate-td965389.html here]. That is, exp(-basehaz(fit)$hazard) is the same as summary(survfit(fit))$surv. The basehaz() function is not very useful.
<syntaxhighlight lang='rsplus'>
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
> fit
Call:
coxph(formula = Surv(futime, fustat) ~ age, data = ovarian)

      coef exp(coef) se(coef)    z      p
age 0.1616    1.1754   0.0497 3.25 0.0012

Likelihood ratio test=14.3  on 1 df, p=0.000156
n= 26, number of events= 12

# Note the default 'centered = TRUE' for basehaz()
> exp(-basehaz(fit)$hazard) # exp(-cumulative hazard)
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
> dim(ovarian)
[1] 26  6
> exp(-basehaz(fit)$hazard)[ovarian$fustat == 1]
 [1] 0.9880206 0.9738738 0.9545899 0.8973620 0.8243117 0.8243117 0.7750981
 [8] 0.7750981 0.5204807 0.5204807 0.5204807 0.5204807
> summary(survfit(fit))$surv
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.7750981 0.7244924 0.6734146 0.5962187 0.5204807
> summary(survfit(fit, data.frame(age=mean(ovarian$age))),
          time=ovarian$futime[ovarian$fustat == 1])$surv
# Same result as above
> summary(survfit(fit, data.frame(age=mean(ovarian$age))),
                    time=ovarian$futime)$surv
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
</syntaxhighlight>


== Predicted survival probability in Cox model: survfit.coxph(), plot.survfit() & summary.survfit( , times) ==
For theory, see section 8.6 Estimation of the survival function in Klein & Moeschberger.

For R, see [https://stackoverflow.com/questions/26641178/extract-survival-probabilities-in-survfit-by-groups Extract survival probabilities in Survfit by groups].

[https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/plot.survfit plot.survfit()]: fun="log" plots the log survival curve, fun="event" plots cumulative events, and fun="cumhaz" plots the cumulative hazard (f(y) = -log(y)).

The plot function below will draw 4 curves: <math>S_0(t)^{\exp(\hat{\beta}_{age}*60)}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageII})}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIII})}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIV})}</math>.
<syntaxhighlight lang='rsplus'>
library(KMsurv) # Data package for Klein & Moeschberger
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)

# Figure 8.3 from Section 8.6
plot(survfit(coxobj, newdata = data.frame(age=rep(60, 4), stage=factor(1:4))), lty = 1:4)

# Estimated probability for a 60-year old for different stage patients
out <- summary(survfit(coxobj, data.frame(age = rep(60, 4), stage=factor(1:4))), times = 5)
out$surv
#  time n.risk n.event survival1 survival2 survival3 survival4
#     5     34      40     0.702     0.665      0.51     0.142
sum(larynx$time >=5) # n.risk
# [1] 34
sum(larynx$delta[larynx$time <=5]) # n.event
# [1] 40
sum(larynx$time >5) # Wrong
# [1] 31
sum(larynx$delta[larynx$time <5]) # Wrong
# [1] 39

# 95% confidence interval
out$lower
# 0.5707952 0.4864903 0.3539527 0.03691768
out$upper
# 0.8629482 0.9102532 0.7352413 0.548579
</syntaxhighlight>

We need to pay attention when the number of covariates is large (and we don't want to specify each covariate in the formula). The key is to create a data frame and use dot (.) in the formula. This fixes the warning message '' 'newdata' had XXX rows but variables found have YYY rows'' from running '''survfit(, newdata)'''.

Another way is to use [https://stackoverflow.com/questions/25313897/r-survival-analysis-coxph-call-multiple-column as.formula()] if we don't want to create a new object.
<syntaxhighlight lang='rsplus'>
xsub <- data.frame(xtrain)
colnames(xsub) <- paste0("x", 1:ncol(xsub))

coxobj <- coxph(Surv(ytrain[, "time"], ytrain[, "status"]) ~ ., data = xsub)

newdata <- data.frame(xtest)
colnames(newdata) <- paste0("x", 1:ncol(newdata))

survprob <- summary(survfit(coxobj, newdata=newdata),
                    times = 5)$surv[1, ]
# since there is only 1 time point, we select the first row in surv (surv is a matrix with one row)
</syntaxhighlight>

The [https://www.rdocumentation.org/packages/pec/versions/2018.07.26/topics/predictSurvProb predictSurvProb()] function from the [https://www.rdocumentation.org/packages/pec/versions/2018.07.26 pec] package can also be used to extract survival probability predictions from various modeling approaches.
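A sketch of predictSurvProb() for a Cox model (the lung data and the times vector are arbitrary; pec needs the design matrix stored via x=TRUE, y=TRUE):
<syntaxhighlight lang='rsplus'>
library(survival)
library(pec)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung,
             x = TRUE, y = TRUE)  # keep the model matrix for pec
predictSurvProb(fit, newdata = lung[1:5, ], times = c(100, 365))
# one row per newdata subject, one column per requested time
</syntaxhighlight>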
=== Predicted survival probabilities from glmnet: c060/peperr, biospear packages and manual computation ===
* Terry Therneau: [http://r.789695.n4.nabble.com/Predict-in-glmnet-for-cox-family-td4706070.html The answer is that you cannot predict survival time, in general]
* https://rdrr.io/cran/c060/man/predictProb.glmnet.html
<syntaxhighlight lang='rsplus'>
## S3 method for class 'glmnet'
predictProb(object, response, x, times, complexity, ...)

set.seed(1234)
junk <- biospear::simdata(n=500, p=500, q.main = 10, q.inter = 0,
                  prob.tt = .5, m0=1, alpha.tt=0,
                  beta.main= -.5, b.corr = .7, b.corr.by=25,
                  wei.shape = 1, recr=3, fu=2, timefactor=1)
summary(junk$time)

library(glmnet)
library(c060) # Error: object 'predictProb' not found
library(peperr)

y <- cbind(time=junk$time, status=junk$status)
x <- cbind(1, junk[, "treat", drop = FALSE])
names(x) <- c("inter", "treat")
x <- as.matrix(x)
cvfit <- cv.glmnet(x, y, family = "cox")
obj <- glmnet(x, y, family = "cox")
xnew <- matrix(c(0,0), nr=1)
predictProb(obj, y, xnew, times=1, complexity = cvfit$lambda.min)
# Error in exp(lp[response[, 1] >= t.unique[i]]) :
#   non-numeric argument to mathematical function
# In addition: Warning message:
# In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
</syntaxhighlight>
* https://www.rdocumentation.org/packages/biospear/versions/1.0.1/topics/expSurv and manual computation (search bhaz)
<pre>
expSurv(res, traindata, method, ci.level = .95, boot = FALSE, nboot, smooth = TRUE,
  pct.group = 4, time, trace = TRUE, ncores = 1)
# S3 method for resexpSurv
predict(object, newdata, ...)
</pre>
<syntaxhighlight lang='rsplus'>
# continue the example
# BMsel() takes a little while
resBM <- biospear::BMsel(
    data = junk,
    method = "lasso",
    inter = FALSE,
    folds = 5)

# Note: if we specify time = 5 in expSurv(), we will get an error message
# 'time' is out of the range of the observed survival time.
# Note: if we try to specify more than 1 time point, we will get the following msg
# 'time' must be an unique value; no two values are allowed.
esurv <- biospear::expSurv(
    res = resBM,
    traindata = junk,
    boot = FALSE,
    time = median(junk$time),
    trace = TRUE)
# debug(biospear:::plot.resexpSurv)
plot(esurv, method = "lasso")
# This is equivalent to doing the following
xx <- attributes(esurv)$m.score[, "lasso"]
wc <- order(xx); wgr <- 1:nrow(esurv$surv)
p1 <- plot(x = xx[wc], y = esurv$surv[wgr, "lasso"][wc],
           xlab='prognostic score', ylab='survival prob')
# prognostic score is beta*x in this case;
# ignore treatment effect and interactions
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# See page 4 of the paper. Scaled scores were used in the plot
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) - quantile(xxmy, .025))))
# [1] 1.500431e-09 1.465241e-06

h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
newtime <- median(junk$time)
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
# newx <- junk[ , names(resBM$lasso)]
# Compute the estimate of the survival probability at the training x
#   and time = median(junk$time) using the Breslow method
S2 <- outer(exp(-H0), exp(xxmy), "^") # row = newtime, col = newx
range(abs(esurv$surv[wgr, "lasso"] - S2))
# [1] 6.455479e-18 2.459136e-06
# My implementation of the prognostic plot
#   Note that the x-axis on the plot is based on prognostic scores beta*x,
#   not on treatment modifying scores gamma*x as described in the paper.
#   Maybe it is because inter = FALSE was used in BMsel().
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob')  # cf p1

> names(esurv)
[1] "surv" "lower" "upper"
> str(esurv$surv)
 num [1:500, 1:2] 0.772 0.886 0.961 0.731 0.749 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "lasso" "oracle"

esurv2 <- predict(esurv, newdata = junk)
esurv2$surv      # All zeros?
</syntaxhighlight>
A bug from the sample data (interaction was considered here; inter = TRUE)?
<syntaxhighlight lang='rsplus'>
set.seed(123456)
resBM <- BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))

esurv <- expSurv(
  res = resBM,
  traindata = Breast,
  boot = FALSE,
</syntaxhighlight>
resBM <- BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))


esurv <- expSurv(
* Question: do p-values show the relative importance of different predictors?
  res = resBM,
** P-values can indicate the statistical significance of a predictor in a model, but they do not directly measure the relative importance of different predictors.
  traindata = Breast,
** A p-value is a measure of the probability that the observed relationship between a predictor and the response variable occurred by chance under the null hypothesis. A smaller p-value suggests that it is less likely that the observed relationship occurred by chance, which often leads to the conclusion that the predictor is statistically significant.
  boot = FALSE,
** However, p-values do not tell us about the size or magnitude of an effect, nor do they directly compare the effects of different predictors. ''Two predictors might both be statistically significant, but one might have a much larger '''effect''' on the response variable than the other'' (There are several statistical measures that can be used to assess the relative importance of predictors in a model: Standardized Coefficients, Partial Correlation Coefficients, Variable Importance in Projection (VIP), Variable Importance Measures in Tree-Based Models, LASSO (Least Absolute Shrinkage and Selection Operator) and Relative Weights Analysis).
  smooth = TRUE,
** Moreover, p-values are sensitive to sample size. With a large enough sample size, even tiny, unimportant differences can become statistically significant.
  time = 4,
** Therefore, while p-values are a useful tool in model analysis, they should not be used alone to determine the relative importance of predictors. Other statistical measures and domain knowledge should also be considered.
  trace = TRUE
)
Computation of the expected survival
Computation of analytical confidence intervals
Computation of smoothed B-splines
Error in cobs(x = x, y = y, print.mesg = F, print.warn = F, method = "uniform", :
  There is at least one pair of adjacent knots that contains no observation.
</syntaxhighlight>


== Loglikelihood ==
== Distribution of p values in medical abstracts ==
* fit$loglik is a vector of length 2 (Null model, fitted model)
* http://www.ncbi.nlm.nih.gov/pubmed/26608725
* Use '''survival::anova()''' command to do a likelihood ratio test. Note this function does not work on ''glmnet'' object.
* [https://github.com/jtleek/tidypvals An R package with several million published p-values in tidy data sets] by Jeff Leek.
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/residuals.coxph residuals.coxph] Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
* No deviance() on coxph object!
* [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/logLik.coxph.html logLik()] function will return fit$loglik[2]
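A minimal sketch of the items above (the model and data choices here are ours, using the lung data shipped with survival):
<syntaxhighlight lang='rsplus'>
library(survival)
fit  <- coxph(Surv(time, status) ~ age + sex, data = lung)
fit$loglik      # c(loglik of the null model, loglik of the fitted model)
logLik(fit)     # same value as fit$loglik[2]

# Likelihood ratio test of two nested Cox models via anova()
fit0 <- coxph(Surv(time, status) ~ age, data = lung)
anova(fit0, fit)  # LRT with 1 df for the sex term
</syntaxhighlight>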

=== Get the partial likelihood of a Cox PH Model with new data ===
An offset was used. See https://stackoverflow.com/questions/26721551/is-there-a-way-to-get-the-partial-likelihood-of-a-cox-ph-model-with-new-data-and
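The trick in the stackoverflow post can be sketched as follows: compute the linear predictor on the new data, then hand it back to coxph() through an offset so the partial likelihood is evaluated rather than maximized. The data split below is our own illustration, not code from the post:
<syntaxhighlight lang='rsplus'>
library(survival)
train <- lung[1:150, ]
test  <- lung[151:228, ]
fit <- coxph(Surv(time, status) ~ age + sex, data = train)

lp <- predict(fit, newdata = test, type = "lp")
# With no free coefficients, $loglik holds the log partial likelihood
# of the offset (i.e. the trained predictor) evaluated on the test data.
fit.test <- coxph(Surv(time, status) ~ offset(lp), data = test)
fit.test$loglik
</syntaxhighlight>
A Cox partial likelihood is invariant to an additive constant in the linear predictor, so the mean-centering done by predict.coxph() is harmless here.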

=== glmnet ===
* It seems AIC does not require the assumption of nested models.
* https://en.wikipedia.org/wiki/Akaike_information_criterion ([https://forvo.com/word/akaike/ akaike pronunciation in Japanese])
:<math>
\begin{align}
\mathrm{AIC} &= 2k - 2\ln(\hat L) \\
\mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1}
\end{align}
</math>
* [https://stats.stackexchange.com/questions/25817/is-it-possible-to-calculate-aic-and-bic-for-lasso-regression-models Is it possible to calculate AIC and BIC for lasso regression models?]. See the references about the degrees of freedom in Lasso regressions.
<syntaxhighlight lang='rsplus'>
fit <- glmnet(x, y, family = "multinomial")

tLL <- fit$nulldev - deviance(fit) # 2 * (loglik of fitted model - loglik of null model)
k <- fit$df
n <- fit$nobs
AICc <- -tLL + 2*k + 2*k*(k+1)/(n-k-1)
AICc
</syntaxhighlight>
* For a ''glmnet'' object, see [https://rdrr.io/cran/glmnet/man/deviance.glmnet.html ?deviance.glmnet] and [https://stackoverflow.com/questions/40920051/r-getting-aic-bic-likelihood-from-glmnet R: Getting AIC/BIC/Likelihood from GLMNet]. An example with all continuous variables:
<syntaxhighlight lang='rsplus'>
set.seed(10101)
N=1000; p=6
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1) # censoring indicator
y=cbind(time=ty,status=1-tcens)  # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~ 1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
#  Cox model: response is  Surv(ty, 1 - tcens)
#  Model 1: ~ x
#  Model 2: ~ 1
#    loglik  Chisq Df P(>|Chi|)
# 1 -4095.2
# 2 -4102.7 15.151  6   0.01911 *

fit2 <- glmnet(x, y, family="cox", lambda=0) # no penalty (unpenalized Cox)
deviance(fit2)
# [1] 8190.313
fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # log-likelihood ratio statistic
# [1] 15.15097
1 - pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446
</syntaxhighlight>
Here is another example including a factor covariate:
<syntaxhighlight lang='rsplus'>
library(KMsurv) # data package for Klein & Moeschberger
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
coef(coxobj)
#       age    stage2    stage3    stage4
# 0.0190311 0.1400402 0.6423817 1.7059796
coxobj_small <- coxph(Surv(time, delta) ~ age, data = larynx)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
#  Cox model: response is  Surv(time, delta)
#  Model 1: ~ age + stage
#  Model 2: ~ age
#    loglik  Chisq Df P(>|Chi|)
# 1 -187.71
# 2 -195.55 15.681  3  0.001318 **
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Now let's look at the glmnet() function.
# It seems glmnet does not recognize factor covariates;
# cbind() silently coerces 'stage' to numeric here.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
coxobj2$nulldev - deviance(coxobj2)  # log-likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2)
# [1] 13.08457
1 - pchisq(deviance(coxobj1) - deviance(coxobj2), coxobj2$df - coxobj1$df)
# [1] 0.0002977376
</syntaxhighlight>

== High dimensional data ==
https://cran.r-project.org/web/views/Survival.html

== glmnet + Cox models ==
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-017-0354-0 Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials] by Nils Ternès, Federico Rotolo and Stefan Michiels, BMC Medical Research Methodology, 2017 (open review available). The corresponding software '''biospear''' is on [https://cran.microsoft.com/web/packages/biospear/index.html cran] and [https://www.rdocumentation.org/packages/biospear/versions/1.0.1 rdocumentation.org].
* [http://r.789695.n4.nabble.com/Predict-in-glmnet-for-cox-family-td4706070.html Expected time of survival in glmnet for cox family]

=== Error in glmnet: x should be a matrix with 2 or more columns ===
https://stackoverflow.com/questions/29231123/why-cant-pass-only-1-coulmn-to-glmnet-when-it-is-possible-in-glm-function-in-r

=== Error in coxnet: (list) object cannot be coerced to type 'double' ===
Fix: do not use data.frame in X. Use cbind() instead.
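A minimal toy reproduction of the error and the fix (our own illustration; the variable names are arbitrary):
<syntaxhighlight lang='rsplus'>
library(glmnet)
set.seed(1)
df <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
y  <- cbind(time = rexp(50), status = rbinom(50, 1, 0.7))

# glmnet(df, y, family = "cox")   # fails: a data.frame (a list) is not accepted
fit <- glmnet(cbind(df$x1, df$x2), y, family = "cox")  # or as.matrix(df)
</syntaxhighlight>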

== Prognostic index/risk scores ==
* [https://en.wikipedia.org/wiki/International_Prognostic_Index International Prognostic Index]
* Low scores correspond to the lowest predicted risk and high scores correspond to the greatest predicted risk.
* The test data were first segregated into high-risk and low-risk groups by the median of the training risk scores. [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis]
* In the paper "The C-index is not proper for the evaluation of t-year predicted risk", [https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy006/4864363 Blanche et al 2018] defined the true '''t-year predicted risk''' by <math>P(T \le t | Z) = 1 - Survival</math>

=== linear.predictors component in coxph object ===
The $linear.predictors component is not <math>\beta' x</math>. It is <math>\beta' (x-\mu_x)</math>. See [http://r.789695.n4.nabble.com/coxph-linear-predictors-td3015784.html this post].

=== predict.coxph, prognostic index & risk score ===
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/predict.coxph predict.coxph()] Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. '''Returned value''': a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE. <syntaxhighlight lang='rsplus'>
predict(object, newdata,
    type=c("lp", "risk", "expected", "terms", "survival"),
    se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse,
    reference=c("strata", "sample"),  ...)
</syntaxhighlight> type:
** "lp": linear predictor
** "risk": risk score exp(lp)
** "expected": the expected number of events given the covariates and follow-up time. The survival probability for a subject is equal to exp(-expected).
** "terms": the terms of the linear predictor.
* http://stats.stackexchange.com/questions/44896/how-to-interpret-the-output-of-predict-coxph. The '''$linear.predictors''' component represents <math>\beta (x - \bar{x})</math>. The risk score (type='risk') corresponds to <math>\exp(\beta (x-\bar{x}))</math>. '''Factors are converted to dummy predictors as usual'''; see [https://stackoverflow.com/questions/14921805/convert-a-factor-to-indicator-variables model.matrix].
* http://www.togaware.com/datamining/survivor/Lung1.html <syntaxhighlight lang='rsplus'>
library(survival)
fit <- coxph(Surv(time, status) ~ age, lung)
fit
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#       coef exp(coef) se(coef)    z     p
#  age 0.0187      1.02   0.0092 2.03 0.042
#
#  Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165
fit$means
#      age
# 62.44737

# type = "lp" (linear predictor)
as.numeric(predict(fit, type="lp"))[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
# [7]  0.10394626  0.16010680 -0.17685643 -0.02709500
0.0187 * (lung$age[1:10] - fit$means)
# [1]  0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579  0.21603421
# [7]  0.10383421  0.15993421 -0.17666579 -0.02706579
fit$linear.predictors[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
# [6]  0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500

# type = "risk" (risk score exp(lp))
as.numeric(predict(fit, type="risk"))[1:10]
# [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
# [8] 1.1736362 0.8379001 0.9732688
exp((lung$age - mean(lung$age)) * 0.0187)[1:10]
# [1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
# [7] 1.1094165 1.1734337 0.8380598 0.9732972
exp(fit$linear.predictors)[1:10]
# [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
# [7] 1.1095408 1.1736362 0.8379001 0.9732688
</syntaxhighlight>

== Survival risk prediction ==
* [https://brb.nci.nih.gov/techreport/Briefings.pdf Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data] Simon 2011. The authors note that CV has been used for the optimization of tuning parameters, but the data available are too limited for an effective split into training & test sets.
** The CV Kaplan-Meier curves are essentially unbiased and the separation between the curves gives a fair representation of the value of the expression profiles for predicting survival risk.
** The log-rank statistic does not have the usual chi-squared distribution under the null hypothesis. This is because the data were used to create the risk groups.
** The survival ROC curve can be used as a measure of predictive accuracy for the survival risk group model at a certain landmark time.
** The ROC curve can be misleading. For example, if re-substitution is used, the AUC can be very large.
** The p-value for the significance of the test that AUC=.5 for the cross-validated survival ROC curve can be computed by permutations.
* Measure of assessment for prognostic prediction
:{| class="wikitable"
!
! 0/1
! Survival
|-
| Sensitivity
| <math>P(Pred=1|True=1)</math>
| <math>P(\beta' X > c | T < t)</math>
|-
| Specificity
| <math>P(Pred=0|True=0)</math>
| <math>P(\beta' X \le c | T \ge t)</math>
|}
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.4106/full An evaluation of resampling methods for assessment of survival risk prediction in high-dimensional settings] Subramanian, et al 2010.
** The conditional probabilities can be estimated by Heagerty et al 2000 (R package [https://cran.r-project.org/web/packages/survivalROC/index.html survivalROC]). '''The AUC(t) can be used for comparing and assessing prognostic models (a measure of accuracy) for future samples.''' In the absence of an independent large dataset, an estimate for AUC(t) is obtained through resampling from the original sample <math>S_n</math>.
** The resubstitution estimate of AUC(t) (i.e. all observations were used for feature selection, model building, as well as the estimation of accuracy) is too optimistic. So the k-fold CV method is considered.
** There are two ways to compute the k-fold CV estimate of AUC(t): the pooling strategy (used in the paper) and the averaging strategy (AUC(t)s are first computed for each test set and are then averaged). In the pooling strategy, all the test set risk-score predictions are first collected and AUC(t) is calculated on this combined set.
** Conclusions: sample splitting and LOOCV have a higher mean square error than other methods. 5-fold or 10-fold CV provides a good balance between bias and variability for a wide range of data settings.
* [https://brb.nci.nih.gov/techreport/JNCI-NSLC-Signatures.pdf Gene Expression–Based Prognostic Signatures in Lung Cancer: Ready for Clinical Use?] Subramanian, et al 2010.
* [https://academic.oup.com/bioinformatics/article/23/14/1768/188061/Assessment-of-survival-prediction-models-based-on Assessment of survival prediction models based on microarray data] Martin Schumacher, et al. 2007
* [http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0020108 Semi-Supervised Methods to Predict Patient Survival from Gene Expression Data] Eric Bair, Robert Tibshirani, 2004
* Time dependent ROC curves for censored survival data and a diagnostic marker. Heagerty et al, Biometrics 2000
** [http://faculty.washington.edu/heagerty/Software/SurvROC/SurvivalROC/survivalROCdiscuss.pdf An introduction to survivalROC] by Saha, Heagerty. If the AUCs are computed at several time points, we can plot the AUCs vs time for different models (e.g. different covariates) and compare them to see which model performs better.
** The '''survivalROC''' package does not draw an ROC curve. It outputs FP (x-axis) and TP (y-axis). We can use base R or ggplot to draw the curve; see the sketch after this list.
** [https://www.rdocumentation.org/packages/survivalROC/versions/1.0.1/topics/survivalROC survivalROC()] calculates the AUC at a specified time by using the NNE method (default). We can use the prognostic index as the marker when more than one marker is used. Note that [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/AUC.uno survAUC::AUC.uno()] uses Uno (2007) to calculate FP and TP.
** [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html Assessment of Discrimination in Survival Analysis (C-statistics, etc)]
** [http://sachsmc.github.io/plotROC/ plotROC] package by Sachs for showing ROC curves from multiple time points on the same plot.
** [https://datascienceplus.com/time-dependent-roc-for-survival-prediction-models-in-r/ Time-dependent ROC for Survival Prediction Models in R]
** How to find the optimal cut-off value with survivalROC?
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-10-413 Survival prediction from clinico-genomic models - a comparative study] Hege M Bøvelstad, 2009
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data]. E. Graf, C. Schmoor, W. Sauerbrei, et al. 1999
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(20000229)19:4%3C453::AID-SIM350%3E3.0.CO;2-5/full What do we mean by validating a prognostic model?] Douglas G. Altman, Patrick Royston, 2000
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.3768/full On the prognostic value of survival models with application to gene expression signatures] T. Hielscher, M. Zucknick, W. Werft, A. Benner, 2000
* Accuracy of point predictions in survival analysis, Henderson et al, Statist Med, 2001.
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Hung-Chia Chen et al 2012
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.7342/abstract Accuracy of predictive ability measures for survival models] Flandre, Detsch and O'Quigley, 2017.
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006026 Association between expression of random gene sets and survival is evident in multiple cancer types and may be explained by sub-classification] Yishai Shimoni, PLOS 2018
* [http://www.bmj.com/content/bmj/357/bmj.j2497 Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study]
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006076 Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data] Ching et al 2018.
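As a concrete companion to the survivalROC bullets above, a hedged sketch with the mayo data shipped with the package (the 5-year landmark and the span value follow the package's help page example; other choices are ours):
<syntaxhighlight lang='rsplus'>
library(survivalROC)
data(mayo)
roc <- survivalROC(Stime = mayo$time, status = mayo$censor,
                   marker = mayo$mayoscore5, predict.time = 365.25 * 5,
                   method = "NNE", span = 0.25 * nrow(mayo)^(-0.20))
roc$AUC   # time-dependent AUC at t = 5 years

# survivalROC() only returns FP and TP; draw the ROC curve ourselves
plot(roc$FP, roc$TP, type = "l", xlim = c(0, 1), ylim = c(0, 1),
     xlab = "FP", ylab = "TP")
abline(0, 1, lty = 2)
</syntaxhighlight>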

== Assessing the performance of prediction models ==
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6246 Investigating the prediction ability of survival models based on both clinical and omics data: two case studies] by Riccardo De Bin, Statistics in Medicine 2014. (not useful)
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Chen et al, BMC Medical Research Methodology 2012
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.4242 A simulation study of predictive ability measures in a survival model I: Explained variation measures] Choodari‐Oskooei et al, Stat in Medicine 2011
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575184/ Assessing the performance of prediction models: a framework for some traditional and novel measures] by Ewout W. Steyerberg, Andrew J. Vickers, [...], and Michael W. Kattan, 2010.
* [https://academic.oup.com/bioinformatics/article/27/22/3206/194302 survcomp: an R/Bioconductor package for performance assessment and comparison of survival models] paper in 2011 and [http://bcb.dfci.harvard.edu/~aedin/courses/Bioconductor/survival.pdf Introduction to R and Bioconductor Survival analysis] where the survcomp package can be used. The summary here is based on this paper.
* [https://stats.stackexchange.com/questions/181634/how-to-compare-predictive-power-of-survival-models How to compare predictive power of survival models?]
* [https://stats.stackexchange.com/questions/17604/how-to-compare-harrell-c-index-from-different-models-in-survival-analysis How to compare Harrell C-index from different models in survival analysis?] and [https://stats.stackexchange.com/q/17648 Frank Harrell's comment]: Doing model comparison with LR statistics is more powerful than using methods that depend on an asymptotic distribution of the C-index.

=== Hazard ratio ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/hazard.ratio hazard.ratio()]
<syntaxhighlight lang='rsplus'>
hazard.ratio(x, surv.time, surv.event, weights, strat, alpha = 0.05,
             method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)
</syntaxhighlight>
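A hedged usage sketch of the signature above (the ovarian data and the choice of x are ours; the returned component names follow the survcomp documentation):
<syntaxhighlight lang='rsplus'>
library(survcomp)
library(survival)
data(ovarian)
hr <- hazard.ratio(x = ovarian$age, surv.time = ovarian$futime,
                   surv.event = ovarian$fustat)
hr$hazard.ratio   # estimated hazard ratio per unit increase of x
hr$p.value
</syntaxhighlight>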

=== D index ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/D.index D.index()]
<syntaxhighlight lang='rsplus'>
D.index(x, surv.time, surv.event, weights, strat, alpha = 0.05,
        method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)
</syntaxhighlight>

=== AUC ===
See [[#ROC_curve_and_Brier_score|ROC curve]].

Comparison:
* AUC <math> P(Z_1 > Z_0) </math>: the probability that a randomly selected '''case''' will have a higher test result (marker value) than a randomly selected '''control'''. It represents a measure of concordance between the marker and the disease status. ROC curves are particularly useful for comparing the discriminatory capacity of different potential biomarkers. (Heagerty & Zheng 2005)
* C-statistic <math> P(\beta' Z_1 > \beta' Z_2|T_1 < T_2) </math>: the probability of concordance between predicted and observed responses. The probability that the predictions for a random pair of subjects are concordant with their outcomes. (Heagerty & Zheng 2005)

p95 of Heagerty and Zheng (2005) gives a relationship between the C-statistic and the time-dependent AUC:
:<math>
C = P(M_j > M_k | T_j < T_k) = \int_t \mathrm{AUC}(t) \, w(t) \, dt
</math>
where ''M'' is the marker value and <math>w(t) = 2 \cdot f(t) \cdot S(t) </math>. When the interest is in the accuracy of a regression model we will use <math>M_i = Z_i^T \beta</math>.

The time-dependent AUC is also related to the time-dependent C-index: <math> C_\tau = P(M_j > M_k | T_j < T_k, T_j < \tau) = \int_t \mathrm{AUC}(t) \, w_{\tau}(t) \, dt </math>, where <math> w_\tau(t) = 2 \cdot f(t) \cdot S(t)/(1-S^2(\tau))</math>.

=== Concordance index/C-index/C-statistic interpretation and R packages ===
* The area under the ROC curve (plot of sensitivity vs 1-specificity) is also called the C-statistic. It is a measure of discrimination generalized for survival data (Harrell 1982 & 2001). The ROC curves are functions of the sensitivity and specificity for each value of the measure of the model. (Nancy Cook, 2007)
** The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
** The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
** Perfect discrimination corresponds to a c-statistic of 1 & is achieved if the scores for all the cases are higher than those for all the non-cases.
** The c-statistic is the '''probability that the measure or predicted risk/risk score is higher for a case than for a noncase'''.
** The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
** The C-statistic is a rank-based measure. It describes how well models can rank order cases and noncases, but it is not a function of the actual predicted probabilities.
* [https://stats.stackexchange.com/questions/29815/how-to-interpret-the-output-for-calculating-concordance-index-c-index?noredirect=1&lq=1 How to interpret the output for calculating concordance index (c-index)?] <math>P(\beta' Z_1 > \beta' Z_2|T_1 < T_2)</math> where ''T'' is the survival time and ''Z'' is the covariates.
** It is the '''fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model'''.
** High values mean that your model predicts higher probabilities of survival for higher observed survival times.
** The c index estimates the '''probability of concordance between predicted and observed responses'''. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
* Drawbacks of C-statistics:
** Even though rank indexes such as c are widely applicable and easily interpretable, '''they are not sensitive for detecting small differences in discrimination ability between two models.''' This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
** If the model is correct, the '''likelihood based measures may be more sensitive in detecting differences in prediction ability''', compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
* http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
* [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf Concordance] vignette from the survival package. It has a good summary of different ways (such as Kendall's tau and Somers' d) to calculate the '''concordance statistic'''. The ''concordance'' function in the survival package can be used with various types of models including logistic and linear regression.
* <span style="color: magenta"> Assessment of Discrimination in Survival Analysis (C-statistics, etc) </span> [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html webpage]
* [http://gaodoris.blogspot.com/2012/10/5-ways-to-estimate-concordance-index.html 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical?], [http://zeegroom.com/2015/10/10/cindex/ Five ways to compute the C-index/C-statistic and how they compare (in Chinese)]. The 5 functions are rcorrcens() from Hmisc, summary()$concordance from survival, survConcordance() from survival, concordance.index() from survcomp and cph() from rms.
* Summary of R packages to compute the C-statistic
: {| class="wikitable"
! Package
! Function
! New data?
|-
| survival
| summary(coxph(formula, data))$concordance["C"]
| no
|-
| survC1
| [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Est.Cval Est.Cval()]
| no
|-
| survAUC
| [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/UnoC UnoC()]
| yes
|-
| survcomp
| [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/concordance.index concordance.index()]
| ?
|-
| Hmisc
| [https://www.rdocumentation.org/packages/Hmisc/versions/4.2-0/topics/rcorr.cens rcorr.cens()]
| no
|-
| pec
| [https://www.rdocumentation.org/packages/pec/versions/2018.07.26/topics/cindex cindex()]
| yes
|}
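To make the table concrete, a small sketch computing Harrell's C with the survival package (the other packages follow the signatures linked above; this example is ours):
<syntaxhighlight lang='rsplus'>
library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
summary(fit)$concordance   # C and se(C)
concordance(fit)           # the same estimate via the concordance() function
</syntaxhighlight>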

=== Integrated Brier score (≈ "mean squared error" of prediction for survival data) ===
[http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data] Graf et al Stat. Med. 1999 2529-45, [https://onlinelibrary.wiley.com/doi/pdf/10.1002/bimj.200610301 Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times] Gerds et al 2006.

* Because point predictions of event-free times will almost inevitably give inaccurate and unsatisfactory results, the mean square error of prediction <math>\frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2</math> method will not be considered. See Parkes 1972 or [http://www.lcc.uma.es/~jja/recidiva/055.pdf Henderson] 2001.
* Another approach is to predict the survival or event status <math>Y=I(T > \tau)</math> at a fixed time point <math>\tau</math> for a patient with X=x. This leads to the expected Brier score <math>E[(Y - \hat{S}(\tau|X))^2]</math> where <math>\hat{S}(\tau|X)</math> is the estimated event-free probability (survival probability) at time <math>\tau</math> for a subject with predictor variable <math>X</math>.
* The time-dependent Brier score (without censoring)
: <math>
\begin{align}
  \mbox{Brier}(\tau) &= \frac{1}{n}\sum_1^n (I(T_i>\tau) - \hat{S}(\tau|X_i))^2
\end{align}
</math>
* The time-dependent Brier score (with censoring, C is the censoring variable)
: <math>
\begin{align}
  \mbox{Brier}(\tau) = \frac{1}{n}\sum_i^n\bigg[\frac{(\hat{S}(\tau|X_i))^2 I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}(\tau|X_i))^2 I(t_i > \tau)}{\hat{S}_C(\tau)}\bigg]
\end{align}
</math>
where <math>\hat{S}_C(t_i) = P(C > t_i)</math>, the Kaplan-Meier estimate of the censoring distribution with <math>t_i</math> the survival time of patient ''i''.
The integration of the Brier score can be done over time <math>t \in [0, \tau]</math> with respect to some weight function W(t), for which a natural choice is <math>(1 - \hat{S}(t))/(1-\hat{S}(\tau))</math>. The lower the integrated Brier score, the larger the prediction accuracy is.
* Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25%, which corresponds to predicting 50% risk for everyone. See [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194196/pdf/nihms-589222.pdf Evaluating Random Forests for Survival Analysis using Prediction Error Curves] by Mogensen et al J. Stat Software 2012 ([https://cran.r-project.org/web/packages/pec/index.html pec] package). The paper has a good summary of different R packages implementing Brier scores.

R functions
* [https://www.rdocumentation.org/packages/pec/versions/2.5.4 pec] by Thomas A. Gerds. The plot.pec() can plot '''prediction error curves''' (defined by the Brier score). See an example from [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841879/pdf/IJPH-45-239.pdf#page=5 this paper]. The .632+ bootstrap prediction error curves are from the paper [https://academic.oup.com/bioinformatics/article/25/7/890/211193#2275428 Boosting for high-dimensional time-to-event data with competing risks] 2009
* [https://www.rdocumentation.org/packages/peperr/versions/1.1-7 peperr] package. The package peperr is an early branch of pec.
* [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/sbrier.score2proba survcomp::sbrier.score2proba()].
* [https://www.rdocumentation.org/packages/ipred/versions/0.9-5/topics/sbrier ipred::sbrier()]
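A hedged sketch of prediction error curves with pec (the GBSG2 data ship with pec; the 5-year horizon is our choice, and crps() computes the integrated Brier score):
<syntaxhighlight lang='rsplus'>
library(pec)
library(survival)
data(GBSG2, package = "pec")
fit <- coxph(Surv(time, cens) ~ age + tsize + pnodes, data = GBSG2,
             x = TRUE, y = TRUE)
pe <- pec(object = list("Cox" = fit),
          formula = Surv(time, cens) ~ 1,  # Kaplan-Meier censoring model
          data = GBSG2)
crps(pe, times = 1825)  # integrated Brier score up to ~5 years
plot(pe)                # prediction error curves (Brier score vs time)
</syntaxhighlight>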

Papers on high dimensional covariates:
* Assessment of survival prediction models based on microarray data, Bioinformatics, 2007, vol. 23 (pg. 1768-74)
* Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics, 2008, vol. 9, pg. 14

=== Kendall's tau, Goodman-Kruskal's gamma, Somers' d ===
* https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient
* https://en.wikipedia.org/wiki/Goodman_and_Kruskal%27s_gamma
* https://en.wikipedia.org/wiki/Somers%27_D
* The [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf Survival package vignette] has a good summary. Especially '''concordance = (d+1)/2'''.

=== C-statistics ===
* C-statistics is the probability of concordance between predicted and observed survival.
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6370 Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach] Kang et al, Stat in Med, 2014. The [https://cran.r-project.org/web/packages/compareC/index.html compareC] package for comparing two correlated C-indices with right-censored outcomes. [https://support.sas.com/resources/papers/proceedings17/SAS0462-2017.pdf#page=13 Harrell’s Concordance]. The s.e. of Harrell's C-statistic can be estimated by the delta method.
:<math>
\begin{align}
C_H = \frac{\sum_{i,j}I(t_i < t_{j}) I(\hat{\beta}' Z_i > \hat{\beta}' Z_j) \delta_i}{\sum_{i,j} I(t_i < t_j) \delta_i}
\end{align}
</math> converges to a censoring-dependent quantity <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \text{min}(D_1,D_2)).</math> Here ''D'' is the censoring variable.
* [http://europepmc.org/articles/PMC3079915 On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data] by Uno et al 2011. Let <math>\tau</math> be a specified time point within the support of the censoring variable.
:<math>
\begin{align}
C(\tau) = \text{UnoC}(\hat{\pi}, \tau)
        = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
\end{align}
</math> is a measure of the concordance between <math>\hat{\beta}' Z_i</math> (the linear predictor) and the survival time. <math>\hat{S}_C(t)</math> is the Kaplan-Meier estimator for the '''censoring distribution/variable/time''' (cf the '''event time'''): flip the definition of <math>\delta_i</math> (consider failure events as "censored" observations and censored observations as "failures") and compute the KM as usual; see p207 of [https://amstat.tandfonline.com/doi/abs/10.1198/000313001317098185#.WtS-pNPwY3F Satten 2001] and the [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L282 source code of kmcens()] in survC1. Note that <math>C_\tau</math> converges to <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math>
** <span style="color: red">Uno's estimator does not require the fitted model to be correct</span>. See also table V in the simulation study where the true model is a log-normal regression.
** <span style="color: red">Uno's estimator is consistent for a population concordance measure that is free of censoring</span>. See the coverage results in tables IV and V from his simulation study. Other forms of the C-statistic estimate population parameters that may depend on the current study-specific censoring distribution.
** To accommodate discrete risk scores, survC1::Est.Cval() uses the formula
:<math>
\begin{align}
\frac{\sum_{i,i'}\big[ (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i +  0.5 \cdot (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i = \hat{\beta}'Z_{i'}) \delta_i \big]}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
\end{align}
</math> '''Note that pec::cindex() uses the same formula but survAUC::UnoC() does not.'''
** If the specified <math>\tau</math> (tau) is 'too' large, such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for <math>\hat{C}_\tau</math> can be quite large.
** Uno mentioned from (page 95 of) Heagerty and Zheng 2005 that when T is right-censored, one would typically consider <math>C_\tau</math> with a fixed, prespecified follow-up period <math>(0, \tau)</math>.
** Uno also mentioned that when the data are right-censored, the censoring variable ''D'' is usually shorter than the failure time ''T'', so the tail part of the estimated survival function of ''T'' is rather unstable. Thus we consider a truncated version of C.
** Heagerty and Zheng (2005) p95 said '''<math>C_\tau</math> is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in <math>(0, \tau)</math>'''.
** real data 1: fit a Cox model. Get the risk scores <math>\hat{\beta}'Z</math>. Compute the point and confidence interval estimates (M=500 independent random samples with the same sample size as the observed data) of <math>C_\tau</math> for different <math>\tau</math>. Compare them with the conventional C-index procedure (Korn).
** real data 1: compute <math>C_\tau</math> for a full model and a reduced model. Compute their difference (<math>C_\tau^{(A)} - C_\tau^{(B)} = .01</math>) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). '''Though HDL is quite significant (p=0) with respect to the risk of CV disease, its incremental value evaluated via C-statistics is quite modest.'''
** real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other with gene+age+ER. For each model we can calculate the point and interval estimates of <math>C_\tau</math> for different <math>\tau</math>s.
** simulation: T is from a Weibull regression for case 1 and a log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample sizes are 100, 150, 200 and 300. 1000 iterations. Compute the coverage probabilities and average lengths of the 95% confidence intervals, the bias and the root mean square error for <math>\tau</math> equal to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
* [https://academic.oup.com/ndt/article/25/5/1399/1843002 Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination] by Tripepi et al 2010
* '''Gonen and Heller''' 2005 concordance index for Cox models
** <math>P(T_2>T_1|g(Z_1)>g(Z_2))</math>. Gonen and Heller's c statistic, which is independent of censoring.
** [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/GHCI GHCI()] from the survAUC package. Strangely, only one parameter is needed. survAUC allows for testing data but the CPE package does not have an option for testing data. <syntaxhighlight lang='rsplus'>
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age,
                    x=TRUE, y=TRUE, method="breslow", data=TR)
lpnew <- predict(train.fit, newdata=TE)
survAUC::GHCI(lpnew) # .8515

lpnew2 <- predict(train.fit, newdata = TR)
</syntaxhighlight>

== Time series ==
* [https://petolau.github.io/Ensemble-of-trees-for-forecasting-time-series/ Ensemble learning for time series forecasting in R]
* [https://blog.bguarisma.com/time-series-forecasting-lab-part-5-ensembles Time Series Forecasting Lab (Part 5) - Ensembles], [https://blog.bguarisma.com/time-series-forecasting-lab-part-6-stacked-ensembles Time Series Forecasting Lab (Part 6) - Stacked Ensembles]

= p-values =
== p-values ==
* Prob(Data | H0)
* https://en.wikipedia.org/wiki/P-value
* [https://amstat.tandfonline.com/toc/utas20/73/sup1 Statistical Inference in the 21st Century: A World Beyond p < 0.05] The American Statistician, 2019
* [https://matloff.wordpress.com/2016/03/07/after-150-years-the-asa-says-no-to-p-values/ THE ASA SAYS NO TO P-VALUES] The problem is that with large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. We have the opposite problem with small samples: the power of the test is low, and we will announce that there is “no significant effect” when in fact we may have too little data to know whether the effect is important.
* [http://www.r-statistics.com/2016/03/its-not-the-p-values-fault-reflections-on-the-recent-asa-statement/ It’s not the p-values’ fault]
* [https://stablemarkets.wordpress.com/2016/05/21/exploring-p-values-with-simulations-in-r/ Exploring P-values with Simulations in R] from Stable Markets.
* p-value and [https://en.wikipedia.org/wiki/Effect_size effect size]. http://journals.sagepub.com/doi/full/10.1177/1745691614553988
* [https://datascienceplus.com/ditch-p-values-use-bootstrap-confidence-intervals-instead/ Ditch p-values. Use Bootstrap confidence intervals instead]
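In the spirit of the simulation links above, a quick check (our own sketch) that p-values are uniformly distributed when the null hypothesis is true:
<syntaxhighlight lang='rsplus'>
set.seed(1)
# 10,000 two-sample t-tests where H0 is true (both groups N(0,1))
pv <- replicate(1e4, t.test(rnorm(20), rnorm(20))$p.value)
hist(pv, breaks = 20)  # roughly flat: under H0, p-values are Uniform(0,1)
mean(pv < 0.05)        # rejection rate close to the nominal 0.05
</syntaxhighlight>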
== Misuse of p-values ==
* https://en.wikipedia.org/wiki/Misuse_of_p-values. The p-value does not indicate the size or importance of the observed effect.
* Question: if we are fitting a multivariate regression and variable 1 ends up with p-value .01 and variable 2 has p-value .001, how do we describe that variable 2 is more significant than variable 1?
** Answer: you can say that variable 2 has a smaller p-value than variable 1. A p-value is a measure of the strength of evidence '''against the null hypothesis'''. It is the probability of observing a test statistic as extreme or more extreme than the one calculated from your data, assuming the null hypothesis is true. The smaller the p-value, the stronger the evidence '''against the null hypothesis''' and in favor of the alternative hypothesis. In your example, variable 2 has a smaller p-value than variable 1, which means that there is stronger evidence '''against the null hypothesis''' for variable 2 than for variable 1. <u>However, it is important to note that a smaller p-value does not necessarily mean that one variable has a stronger effect or is more important than the other.</u> Instead of comparing p-values directly, it would be more appropriate to look at '''effect sizes''' and '''confidence intervals''' to determine the relative importance of each variable.
** '''Effect Size''': While a p-value tells you whether an effect exists, it does not convey the size of the effect. A p-value of 0.001 may be due to a larger effect size than one producing a p-value of 0.01, but ''this isn’t always the case''. '''Effect size measures (like Cohen’s d for two means, Pearson’s r for two continuous variables, or the Odds Ratio in logistic regression or contingency tables)''' are necessary to interpret the practical significance.
** '''Practical Significance''': Even if both p-values are statistically significant, the practical or clinical significance of the findings should be considered. A very small effect size, even with a p-value of 0.001, may not be practically important.
* Question: do p-values show the relative importance of different predictors?
** P-values can indicate the statistical significance of a predictor in a model, but they do not directly measure the relative importance of different predictors.
** A p-value is a measure of the probability that the observed relationship between a predictor and the response variable occurred by chance under the null hypothesis. A smaller p-value suggests that it is less likely that the observed relationship occurred by chance, which often leads to the conclusion that the predictor is statistically significant.
** However, p-values do not tell us about the size or magnitude of an effect, nor do they directly compare the effects of different predictors. ''Two predictors might both be statistically significant, but one might have a much larger '''effect''' on the response variable than the other'' (There are several statistical measures that can be used to assess the relative importance of predictors in a model: Standardized Coefficients, Partial Correlation Coefficients, Variable Importance in Projection (VIP), Variable Importance Measures in Tree-Based Models, LASSO (Least Absolute Shrinkage and Selection Operator) and Relative Weights Analysis).
** Moreover, p-values are sensitive to sample size. With a large enough sample size, even tiny, unimportant differences can become statistically significant.
** Therefore, while p-values are a useful tool in model analysis, they should not be used alone to determine the relative importance of predictors. Other statistical measures and domain knowledge should also be considered.

== Distribution of p values in medical abstracts ==
* http://www.ncbi.nlm.nih.gov/pubmed/26608725
* [https://github.com/jtleek/tidypvals An R package with several million published p-values in tidy data sets] by Jeff Leek.

== nominal p-value and Empirical p-values ==
* Nominal p-values are based on asymptotic null distributions
* Empirical p-values are computed from simulations/permutations
* [https://stats.stackexchange.com/questions/536116/what-is-the-concepts-of-nominal-and-actual-significance-level What are the concepts of nominal and '''actual''' significance levels?]
** The nominal significance level is the significance level a test is designed to achieve. This is very often 5% or 1%. Now in many situations the nominal significance level can't be achieved precisely. This can happen because the distribution is discrete and doesn't allow for a precise given rejection probability, and/or because the theory behind the test is asymptotic, i.e., the nominal level is only achieved for <math>n \to \infty</math>.

== (nominal) alpha level ==
Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.
* http://www.translationdirectory.com/glossaries/glossary033.htm
* http://courses.washington.edu/p209s07/lecturenotes/Week%205_Monday%20overheads.pdf

== Normality assumption ==
[https://www.biorxiv.org/content/early/2018/12/20/498931 Violating the normality assumption may be the lesser of two evils]

== Second-Generation p-Values ==
[https://amstat.tandfonline.com/doi/full/10.1080/00031305.2018.1537893 An Introduction to Second-Generation p-Values] Blume et al, 2020

== Small p-value due to very large sample size ==
* [https://stats.stackexchange.com/a/44466 How to correct for small p-value due to very large sample size]
* [https://www.galitshmueli.com/system/files/Print%20Version.pdf Too big to fail: large samples and the p-value problem], Lin 2013. Cited by the [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2263-6#Sec17 ComBat] paper.
* [https://math.stackexchange.com/a/2939553 Does 𝑝-value change with sample size?]
* [https://sebastiansauer.github.io/pvalue_sample_size/ The effect of sample size on p-values. A simulation]
* [https://data.library.virginia.edu/power-and-sample-size-analysis-using-simulation/ Power and Sample Size Analysis using Simulation]
* [https://stats.stackexchange.com/questions/73045/simulating-p-values-as-a-function-of-sample-size Simulating p-values as a function of sample size]
* [https://researchutopia.wordpress.com/2013/11/10/understanding-p-values-via-simulations/ Understanding p-values via simulations]
* [https://www.r-bloggers.com/2018/04/p-values-sample-size-and-data-mining/ P-Values, Sample Size and Data Mining]
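A quick sketch of the phenomenon (ours): with a tiny but nonzero true effect, the p-value marches toward 0 as n grows while the effect size stays trivial.
<syntaxhighlight lang='rsplus'>
set.seed(1)
sapply(c(1e2, 1e3, 1e4, 1e5, 1e6), function(n) {
  x <- rnorm(n, mean = 0.01)  # trivially small true effect
  t.test(x)$p.value           # eventually "significant" for large n
})
</syntaxhighlight>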
== Bayesian ==
* Bayesian believers, who adhere to Bayesian statistics, often have a different perspective on hypothesis testing compared to '''frequentist statisticians'''. '''In Bayesian statistics, the focus is on estimating the probability of a hypothesis being true given the data, rather than on the probability of the data given a specific hypothesis (as in p-values).'''
* Bayesian believers generally prefer using Bayesian methods, such as computing credible intervals or Bayes factors, which provide more directly interpretable results in terms of the probability of hypotheses. These methods can be seen as more informative than p-values, as they give a range of plausible values for the parameter of interest or directly compare the relative plausibility of different hypotheses.

= T-statistic =
See [[T-test#T-statistic|T-statistic]].

= ANOVA =
See [[T-test#ANOVA|ANOVA]].

= [https://en.wikipedia.org/wiki/Goodness_of_fit Goodness of fit] =
== [https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test Chi-square tests] ==
* [http://freakonometrics.hypotheses.org/20531 An application of chi-square tests]

== Fitting distribution ==
[https://magesblog.com/post/2011-12-01-fitting-distributions-with-r/ Fitting distributions with R]

== Normality distribution check ==
[https://finnstats.com/index.php/2021/11/09/anderson-darling-test-in-r/ Anderson-Darling Test in R (Quick Normality Check)]

== Kolmogorov-Smirnov test ==
* [https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test Kolmogorov-Smirnov test]
* [https://www.rdocumentation.org/packages/dgof/versions/1.2/topics/ks.test ks.test()] in R
* [https://www.statology.org/kolmogorov-smirnov-test-r/ Kolmogorov-Smirnov Test in R (With Examples)]
* [https://rpubs.com/mharris/KSplot kolmogorov-smirnov plot]
* [https://stackoverflow.com/a/27282758 Visualizing the Kolmogorov-Smirnov statistic in ggplot2]
* [https://www.tandfonline.com/doi/full/10.1080/00031305.2024.2356095 On Misuses of the Kolmogorov–Smirnov Test for One-Sample Goodness-of-Fit] 2024
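A minimal ks.test() illustration (ours):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- rnorm(100)
ks.test(x, "pnorm", mean = 0, sd = 1)  # one-sample, fully specified null
ks.test(x, rnorm(100, mean = 2))       # two-sample
# Note: plugging in parameters estimated from x itself invalidates the
# one-sample test (see the 2024 "Misuses" paper above).
</syntaxhighlight>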
= Contingency Tables =
[https://finnstats.com/index.php/2021/05/09/contingency-coefficient-association/ How to Measure Contingency-Coefficient (Association Strength)]. '''gplots::balloonplot()''' and '''corrplot::corrplot()'''.

== What statistical test should I do ==
[https://statsandr.com/blog/what-statistical-test-should-i-do/ What statistical test should I do?]

== Graphically show association ==
Several graphical methods can show the association between two categorical variables (a base-R sketch of the first two follows this list):
# '''Bar Graphs''': Bar graphs can be used to compare the frequency of different categories in two variables. Each bar represents a category, and the height of the bar represents its frequency. You can create side-by-side bar graphs or stacked bar graphs to compare frequencies across categories. See [https://statisticsbyjim.com/basics/contingency-table/ Contingency Table: Definition, Examples & Interpreting] (row totals) and [https://online.stat.psu.edu/stat100/lesson/6/6.1 Two Different Categorical Variables] (column totals).
# '''Mosaic Plots''': A mosaic plot gives a visual representation of the relationship between two categorical variables. It's a rectangular grid that represents the total population, and it's divided into smaller rectangles that represent the categories of each variable. The size of each rectangle is proportional to the frequency of each category. See [https://yardsale8.github.io/stat110_book/chp3/mosaic.html Visualizing Association With Mosaic Plots].
# '''Categorical Scatterplots''': In seaborn, a Python data visualization library, there are categorical scatterplots that adjust the positions of points on the categorical axis with a small amount of random "jitter" or using an algorithm that prevents them from overlapping. See [https://seaborn.pydata.org/tutorial/categorical.html Visualizing categorical data].
# '''Contingency Tables''': While not a graphical method, contingency tables are often used in conjunction with graphical methods. A contingency table displays how many individuals fall in each combination of categories for two variables.
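A base-R sketch of the first two items (ours), using the built-in HairEyeColor table:
<syntaxhighlight lang='rsplus'>
tab <- margin.table(HairEyeColor, c(1, 2))  # Hair x Eye counts
barplot(t(tab), beside = TRUE, legend.text = TRUE,
        xlab = "Hair", ylab = "Count")      # side-by-side bar graph
mosaicplot(tab, main = "Hair vs Eye")       # mosaic plot
</syntaxhighlight>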
Q: How can we guess whether two variables are associated by looking at the counts in a 2x2 contingency table?<br>
* '''Observe the distribution of counts''': If the counts are evenly distributed across the cells of the table, it suggests that there may not be a strong association between the two variables. However, if the counts are unevenly distributed, it suggests that there may be an association.
* '''Compare the diagonal cells''': If the counts in the diagonal cells (top left to bottom right or top right to bottom left) are high compared to the off-diagonal cells, it suggests a '''positive association''' between the two variables. Conversely, if the counts in the off-diagonal cells are high, it suggests a '''negative association'''. See [[Statistics#Odds_ratio_and_Risk_ratio |odds ratio]] >1 (pos association) or <1 (neg association).
* Calculate and compare the '''row and column totals''': If the row and column totals are similar, it suggests that there may not be a strong association between the two variables. However, if the row and column totals are very different, it suggests that there may be an association.

Q: When creating a barplot of percentages from a contingency table, should you calculate percentages by dividing counts by row totals or by column totals? A: It depends on the question you’re trying to answer. See [https://statisticsbyjim.com/basics/contingency-table/ Contingency Table: Definition, Examples & Interpreting].
* '''Row Totals''': If you’re interested in understanding the distribution of a '''variable''' within each '''row category''', you would calculate percentages by dividing counts by row totals. This is often used when the '''row variable''' is the '''independent variable''' and you want to see how the column variable ('''dependent variable''') is distributed within each level of the row variable.
* '''Column Totals''': If you’re interested in understanding the distribution of a variable within each column category, you would calculate percentages by dividing counts by column totals. This is often used when the column variable is the independent variable and you want to see how the row variable (dependent variable) is distributed within each level of the column variable.
** If the specified <math>\tau</math> (tau) is 'too' large such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for <math>\hat{C}_\tau</math> can be quite large.
** Uno mentioned from (page 95) Heagerty and Zheng 2005 that when T is right censoring, one would typically consider <math>C_\tau</math> with a fixed, prespecified follow-up period <math>(0, \tau)</math>.
** Uno also mentioned that when the data is right censored, the censoring variable ''D'' is usually shorter than that of the failure time ''T'', the tail part of the estimated survival function of T is rather unstable. Thus we consider a truncated version of C.
** Heagerty and Zheng (2005) p95 said '''<math>C_\tau</math> is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in <math>(0, \tau)</math>'''.
** real data 1: fit a Cox model. Get risk scores <math>\hat{\beta}'Z</math>. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of <math>C_\tau</math> for different <math>\tau</math>. Compare them with the conventional C-index procedure (Korn).
** real data 1: compute <math>C_\tau</math> for a full model and a reduce model. Compute the difference of them (<math>C_\tau^{(A)} - C_\tau^{(B)} = .01</math>) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). '''Though HDL is quite significant (p=0) with respect to the risk of CV disease but its incremental value evaluated via C-statistics is quite modest.'''
** real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other is gene+age+ER. For each model we can calculate the point and interval estimates of <math>C_\tau</math> for different <math>\tau</math>s.
** simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for <math>\tau</math> equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
* [https://academic.oup.com/ndt/article/25/5/1399/1843002 Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination] by Tripep et al 2010
* '''Gonen and Heller''' 2005 concordance index for Cox models
** <math>P(T_2>T_1|g(Z_1)>g(Z_2))</math>. Gonen and Heller's c statistic which is independent of censoring.
** [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/GHCI GHCI()] from survAUC package. Strangely only one parameter is needed. survAUC allows for testing data but CPE package does not have an option for testing data. <syntaxhighlight lang='rsplus'>
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age,
                    x=TRUE, y=TRUE, method="breslow", data=TR)
lpnew <- predict(train.fit, newdata=TE)     
survAUC::GHCI(lpnew) # .8515


lpnew2 <- predict(train.fit, newdata = TR)
        Fisher's Exact Test for Count Data
survAUC::GHCI(lpnew2) # 0.8079495


CPE::phcpe(train.fit, CPE.SE = TRUE)  
data: matrix(c(3, 40, 297, 29960), nr = 2)
# $CPE
p-value = 0.008853
# [1] 0.8079495
alternative hypothesis: true odds ratio is not equal to 1
# $CPE.SE
95 percent confidence interval:
# [1] 0.0670646
  1.488738 23.966741
sample estimates:
odds ratio
  7.564602


Hmisc::rcorr.cens(-TR$age, Surv(TR$futime, TR$fustat))["C Index"]
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")
# 0.7654321
Hmisc::rcorr.cens(TR$age, Surv(TR$futime, TR$fustat))["C Index"]
# 0.2345679
</syntaxhighlight>
** Used by [https://bioconductor.org/packages/release/bioc/vignettes/simulatorZ/inst/doc/simulatorZ-vignette.pdf#page=11 simulatorZ] package
* '''Uno's C-statistics (2011)''' and some examples using different packages
** C-statistic may or may not be a decreasing function of '''tau'''. However, AUC(t) may not be decreasing; see Fig 1 of Blanche et al 2018. <syntaxhighlight lang='rsplus'>
library(survAUC); library(pec)
set.seed(1234)
dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001) # simulWebib was defined above
#    coef exp(coef) se(coef)    z      p
# x -0.744    0.475    0.269 -2.76 0.0057
TR <- dat[1:80,]
TE <- dat[81:100,]
train.fit  <- coxph(Surv(time, status) ~ x, data=TR)
plot(survfit(Surv(time, status) ~ 1, data =TR))


lpnew <- predict(train.fit, newdata=TE)
         Fisher's Exact Test for Count Data
Surv.rsp <- Surv(TR$time, TR$status)
Surv.rsp.new <- Surv(TE$time, TE$status)             
sapply(c(.25, .5, .75),
      function(qtl) UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=quantile(TR$time, qtl)))
# [1] 0.2580193 0.2735142 0.2658271
sapply(c(.25, .5, .75),
      function(qtl) cindex( list(matrix( -lpnew, nrow = nrow(TE))),
         formula = Surv(time, status) ~ x,
        data = TE,
        eval.times = quantile(TR$time, qtl))$AppC$matrix)
# [1] 0.5041490 0.5186850 0.5106746
</syntaxhighlight>
** Four elements are needed for computing truncated C-statistic using survAUC::UnoC. But it seems pec::cindex does not need the training data.
*** training data including covariates,
*** testing data including covariates,
*** predictor from new data,
*** truncation time/evaluation time/prediction horizon.
** (From ?UnoC) Uno's estimator is based on '''inverse-probability-of-censoring weights''' and '''does not assume a specific working model for deriving the predictor lpnew'''. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
** [https://rdrr.io/cran/survAUC/man/UnoC.html survAUC::UnoC()]. The '''tau''' parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math> Con: no confidence interval estimate for <math>C_\tau</math> nor <math>C_\tau^{(A)} - C_\tau^{(B)}</math>
** [https://www.rdocumentation.org/packages/pec/versions/2.4.9/topics/cindex pec::cindex()]. At each timepoint of '''eval.times''' the c-index is computed using only those pairs where one of the event times is known to be earlier than this timepoint. If eval.times is missing or Inf then the '''largest uncensored''' event time is used. See a more general example from [https://github.com/tagteam/webappendix-cindex-not-proper/blob/bdc0a70778955f36aeb1d6566590a51d1913702f/R/cindex-t-year-risk-supplementary-material.R#L118 here]
** Est.Cval() from the [https://cran.r-project.org/web/packages/survC1/index.html survC1] package (the only package gives confidence intervals of C-statistic or deltaC, authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval Inf.Cval()] can compute the confidence interval (perturbation-resampling based) of <math>C_\tau</math> & [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval.Delta Inf.Cval.Delta()] for the difference <math>C_\tau^{(A)} - C_\tau^{(B)}</math>.  <syntaxhighlight lang='rsplus'>
library(survAUC)
# require training and predict sets
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age, data=TR)


lpnew <- predict(train.fit, newdata=TE)
data:  matrix(c(3, 40, 297, 29960), nr = 2)
Surv.rsp <- Surv(TR$futime, TR$fustat)
p-value = 0.008853
Surv.rsp.new <- Surv(TE$futime, TE$fustat)             
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
1.973  Inf
sample estimates:
odds ratio
  7.564602


UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1)
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")
# [1] 0.9761905
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4)  
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5)  
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
# [1] 0.7308979
# So the function UnoC() can obtain the exact result as Est.Cval().
# Now try on a new data set. Question: why do we need Surv.rsp?
UnoC(Surv.rsp, Surv.rsp.new, lpnew)
# [1] 0.7333333
UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
# [1] 0.7333333


library(pec)
         Fisher's Exact Test for Count Data
cindex( list(matrix( -lpnew, nrow = nrow(TE))),
         formula = Surv(futime, fustat) ~ age,
        data = TE, eval.times = 365.25*2)$AppC
# $matrix
# [1] 0.7333333


library(survC1)
data:  matrix(c(3, 40, 297, 29960), nr = 2)
Est.Cval(cbind(TE, lpnew), tau = 365.25*2, nofit = TRUE)$Dhat
p-value = 0.9991
# [1] 0.7333333
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio
  7.564602
</pre>
[https://www.statsandr.com/blog/fisher-s-exact-test-in-r-independence-test-for-a-small-sample/ Fisher's exact test in R: independence test for a small sample]


# tau is mandatory (>0), no need to have training and predict sets
From the documentation of [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/fisher.test.html fisher.test]
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
<pre>
# [1] 0.9761905
Usage:
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
    fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
# [1] 0.7308979
                control = list(), or = 1, alternative = "two.sided",
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
                conf.int = TRUE, conf.level = 0.95,
# [1] 0.7308979
                simulate.p.value = FALSE, B = 2000)
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
</pre>
# [1] 0.7308979
* For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
* For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
# [1] 0.7308979
* The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
 
* Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.
svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
par(mfrow=c(1,2))
plot(TR$futime, train.fit$linear.predictors, main="training data",  
    xlab="time", ylab="predictor")
mtext("C=.731 at t=2", 3)
plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
mtext("C=.733 at t=2", 3)
dev.off()
</syntaxhighlight> [[File:C stat scatter.svg|600px]]
* Assessing the prediction accuracy of a cure model for censored survival data with long-term survivors: Application to breast cancer data
* The use of ROC for defining the validity of the prognostic index in censored data
* [http://circ.ahajournals.org/content/115/7/928 Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction] Cook 2007
* '''Evaluating Discrimination of Risk Prediction Models: The C Statistic''' by Pencina et al, JAMA 2015
* '''Blanche et al(2018)''' [https://academic.oup.com/biostatistics/advance-article-abstract/doi/10.1093/biostatistics/kxy006/4864363?redirectedFrom=fulltext The c-index is not proper for the evaluation of t-year predicted risks]
** There is a bug on script [https://github.com/tagteam/webappendix-cindex-not-proper/blob/master/R/cindex-t-year-risk-supplementary-material.R#L154 line 154].
** With a fixed prediction horizon, '''the concordance index can be higher for a misspecified model than for a correctly specified model'''. The time-dependent AUC does not have this problem.
** (page 8) ''We now show that when a misspecified prediction model satisfies the ranking condition but the true distribution does not, then it is possible that the misspecified model achieves a misleadingly high c-index.''
** The traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of t-year survival. In contrast, measures of predicted error do not suffer from these limitations. See this paper [https://onlinelibrary.wiley.com/doi/full/10.1111/ajt.15132 The relationship between the C‐statistic and the accuracy of program‐specific evaluations] by Wey et al 2018
** Unfortunately, a drawback of Harrell’s c-index for the time to event and competing risk settings is that the measure does not provide a value specific to the time horizon of prediction (e.g., a 3-year risk). See this paper [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-018-0029-2 The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models] by Kattan and Gerds 2018.
** In Fig 1 Y-axis is concordance (AUC/C) and X-axis is time, the caption said '''The ability of (some variable) to discriminate patients who will either die or be transplanted within the next t-years from those who will be event-free at time t'''.
** The <math>\tau</math> considered here is the maximal end of follow-up time
** AUC (riskRegression::Score()), Uno-C (pec::cindex()), Harrell's C (Hmisc::rcorr.cens() for censored and summary(fit)$concordance for uncensored) are considered.
** The C_IPCW(t) or C_Harrell(t) is obtained by artificially censoring the outcome at time t. So C_IPCW(t) is different from Uno's version.
 
=== C-statistic limitations ===
See the discussion section of [https://onlinelibrary.wiley.com/doi/full/10.1111/ajt.15132 The relationship between the C‐statistic and the accuracy of program‐specific evaluations] by Wey 2018
* '''Correctly specified models''' can have low or high C‐statistics. Thus, the C‐statistic cannot identify a correctly specified model.
* the traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of, for example, 1‐year survival
 
Importantly, there exists no measure of risk discrimination or predicted error that can identify a correctly specified model, because they all depend on unknown characteristics of the data. For example, the C‐statistic depends on the variability in recipient‐level risk, while measures of squared error such as the Brier Score depend on residual variability.


[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3157029/ Analysis of Biomarker Data: logs, odds ratios and ROC curves]. This paper does not consider the survival time data. It has some summary about C-statistic (interpretation, warnings).
== Boschloo's test ==
* The C-statistic is relatively '''insensitive''' to the added contribution of a new marker when the two models, with and without biomarker, estimate risk on a continuous scale. In fact, many new biomarkers provide only minimal increase in the C-statistic when added to the Framingham model for CHD risk.
https://en.wikipedia.org/wiki/Boschloo%27s_test
* The classical C-statistic assumes that high sensitivity and high specificity are equally desirable. This is not always the case – for example, when screening the general population for a low-prevalence outcome requiring invasive follow-up, high specificity is important, while cancer screening in a high-risk group would emphasize high sensitivity.
* To achieve a noticeable increase in the C-statistic, a biomarker must have a very strong independent association with the event risk (say ORs of 10 or higher per 1 SD increase).
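For a quick look at the concordance probability of a fitted Cox model, the base survival package already reports Harrell's C; a minimal sketch using the ovarian data as elsewhere on this page:
<syntaxhighlight lang='rsplus'>
library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
summary(fit)$concordance  # Harrell's C and its standard error
concordance(fit)          # the same statistic via survival::concordance()
</syntaxhighlight>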


=== C-statistic applications ===
* [https://www.tandfonline.com/doi/pdf/10.1080/01621459.2018.1482756 Semiparametric Regression Analysis of Multiple Right- and Interval-Censored Events] by Gao et al, JASA 2018
* A C statistic of 0.7–0.8 is considered good, while >0.8 is considered excellent. See [https://www.sciencedirect.com/science/article/pii/S0168827817322481#bb0090 this paper], 2018.
* The C statistic, also termed concordance statistic or c-index, is analogous to the area under the curve and is a global measure of model discrimination. Discrimination refers to the ability of a risk prediction model to separate patients who develop a health outcome from patients who do not. Effectively, the C statistic is the probability that a model will assign a higher risk score to a patient who develops the outcome of interest than to a patient who does not. See [https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2703140 the paper], JAMA 2018.

== IID assumption ==
[https://www.r-bloggers.com/2024/06/ignoring-the-iid-assumption-isnt-a-great-idea/ Ignoring the IID assumption isn’t a great idea]
=== C-statistic vs LRT comparing nested models ===
1. Binary data
<syntaxhighlight lang='rsplus'>
# https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
set.seed(666)
x1 = rnorm(1000)           # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
df = data.frame(y=y,x1=x1,x2=x2)
fit <- glm( y~x1+x2,data=df,family="binomial")
summary(fit)
#              Estimate Std. Error z value Pr(>|z|)
# (Intercept)    0.9915     0.1185   8.367   <2e-16 ***
# x1             2.2731     0.1789  12.709   <2e-16 ***
# x2             3.1853     0.2157  14.768   <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1,data=df,family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
#
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2]))
# [1] 0

# Method 1: use ROC package to compute AUC
library(ROC)
set.seed(123)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca( truth=y, data=markers, rule=dxrule.sca )
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca( truth=y, data=markers2, rule=dxrule.sca )
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987

# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
# https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964
</syntaxhighlight>

2. Survival data
<syntaxhighlight lang='rsplus'>
library(survival)
data(ovarian)
head(ovarian)
range(ovarian$futime) # [1]   59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))

coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)     z      p
# rx  -0.8040    0.4475   0.6320 -1.27 0.2034
# age  0.1473    1.1587   0.0461  3.19 0.0014
#
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12

require(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau=365.25*1
Delta=Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119   0.611   1.077
# Model0 0.659 0.148   0.369   0.949
# Delta  0.185 0.197  -0.201   0.572
</syntaxhighlight>

* [http://r.789695.n4.nabble.com/Comparing-differences-in-AUC-from-2-different-models-td858746.html Comparing differences in AUC from 2 different models]

=== Time dependent ROC curves ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/tdrocc tdrocc()]

== Prognostic markers vs predictive markers (and other biomarkers) ==
* '''[https://en.wikipedia.org/wiki/Prognosis_marker Prognostic markers]''' (risk factors for a disease) are biomarkers used to measure the progress of a disease in the patient sample. Prognostic markers are useful to stratify patients into groups, guiding precision-medicine discovery. They inform about the likely disease outcome independent of the treatment received. See [http://europepmc.org/articles/PMC3888208 Statistical and practical considerations for clinical evaluation of predictive biomarkers] by Mei-Yin Polley et al 2013.
* '''Predictive markers/treatment selection markers''' provide information about likely outcomes with the application of specific interventions. See [http://annals.org/aim/fullarticle/746812/measuring-performance-markers-guiding-treatment-decisions Measuring the performance of markers for guiding treatment decisions] by Janes, et al 2011.
* [https://academic.oup.com/annonc/article/27/12/2160/2736334 Statistical controversies in clinical research: prognostic gene signatures are not (yet) useful in clinical practice] by Michiels 2016.
* Diagnostic biomarkers, prognostic biomarkers and predictive biomarkers. Disease-related biomarkers and drug-related biomarkers. https://en.wikipedia.org/wiki/Biomarker_(medicine)
* Diagnostic biomarkers, prognostic biomarkers and predictive biomarkers. https://en.wikipedia.org/wiki/Cancer_biomarker
* '''Diagnostic''' (confirming the presence of a particular disease): diagnose conditions, as in the case of identifying early stage cancers.
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.8091 Statistical methods for building better biomarkers of chronic kidney disease] by Pencina et al 2019.

== Chi-square independence test ==
* https://en.wikipedia.org/wiki/Chi-squared_test
** Chi-Square = Σ[(O - E)^2 / E]
** The expected counts are expected_{ij} = n_{i.} * n_{.j} / n_{..}
** The Chi-Square test statistic follows a Chi-Square distribution with degrees of freedom equal to (r-1) x (c-1)
** The Chi-Square test is generally a '''two-sided''' test, meaning that it tests for a significant difference between the observed and expected frequencies in both directions (i.e., either a greater than or less than difference).
* [https://statsandr.com/blog/chi-square-test-of-independence-by-hand/ Chi-square test of independence by hand]
<pre>
> chisq.test(matrix(c(14,0,4,10), nr=2), correct=FALSE)

        Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 10), nr = 2)
X-squared = 15.556, df = 1, p-value = 8.012e-05

# How about the case if expected=0 for some elements?
> chisq.test(matrix(c(14,0,4,0), nr=2), correct=FALSE)

        Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 0), nr = 2)
X-squared = NaN, df = 1, p-value = NA

Warning message:
In chisq.test(matrix(c(14, 0, 4, 0), nr = 2), correct = FALSE) :
  Chi-squared approximation may be incorrect
</pre>
[https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence-part-2/ Exploring the underlying theory of the chi-square test through simulation - part 2]

The results of Fisher's exact test and the chi-square test can be quite different.
<pre>
# https://myweb.uiowa.edu/pbreheny/7210/f15/notes/9-24.pdf#page=4
R> Job <- matrix(c(16,48,67,21,0,19,53,88), nr=2, byrow=T)
R> dimnames(Job) <- list(A=letters[1:2],B=letters[1:4])
R> fisher.test(Job)

        Fisher's Exact Test for Count Data

data:  Job
p-value < 2.2e-16
alternative hypothesis: two.sided

R> chisq.test(c(16,48,67,21), c(0,19,53,88))

        Pearson's Chi-squared test

data:  c(16, 48, 67, 21) and c(0, 19, 53, 88)
X-squared = 12, df = 9, p-value = 0.2133

Warning message:
In chisq.test(c(16, 48, 67, 21), c(0, 19, 53, 88)) :
  Chi-squared approximation may be incorrect
</pre>
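The Σ[(O - E)^2 / E] formula above can be verified by hand for the first table; a minimal sketch:
<syntaxhighlight lang='rsplus'>
O <- matrix(c(14, 0, 4, 10), nr = 2)
E <- outer(rowSums(O), colSums(O)) / sum(O)  # expected = n_i. * n_.j / n..
sum((O - E)^2 / E)                           # 15.556, matches chisq.test(O, correct=FALSE)
pchisq(sum((O - E)^2 / E), df = 1, lower.tail = FALSE)  # 8.012e-05
</syntaxhighlight>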
== Computation for gene expression (microarray) data ==
* [https://github.com/cran/survival survival] package (basic package, not designed for gene expression)
* [https://github.com/cran/GSA/blob/master/R/GSA.morefuns.R gsa] package
* [https://github.com/cran/samr/blob/master/R/samr.morefuns.R samr] package
* [https://github.com/cran/pamr/blob/master/R/pamr.survfuns.R pamr] package
* [http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4 (Bioconductor) genefilter], [https://github.com/Bioconductor/genefilter/blob/master/R/all.R source]. genefilter() & coxfilter(). apply() was used.
* [https://github.com/cran/survcomp/blob/master/R/logpl.R logpl()] from the [http://www.bioconductor.org/packages/release/bioc/vignettes/survcomp/inst/doc/survcomp.pdf#page=24 survcomp] package

<syntaxhighlight lang='rsplus'>
n <- 500
g <- 10000
y <- rexp(n)
status <- ifelse(runif(n) < .7, 1, 0)
x <- matrix(rnorm(n*g), nr=g)
treat <- rbinom(n, 1, .5)
# Method 1
system.time(for(i in 1:g) coxph(Surv(y, status) ~ x[i, ] + treat + treat:x[i, ]))
# 28 seconds

# Method 2
system.time(apply(x, 1, function(z) coxph(Surv(y, status) ~ z + treat + treat:z)))
# 29 seconds

# Method 3 (Windows)
dyn.load("C:/Program Files (x86)/ArrayTools/Fortran/surv64.dll")
tme <- y
sorted <- order(tme)
stime <- as.double(tme[sorted])
sstat <- as.integer(status[sorted])
x1 <- x[,sorted]
imodel <- 1  # imodel=1, fit univariate gene expression. Return p-values vector.
nvar <- 1
system.time(outx1 <- .Fortran("coxfitc", as.integer(n), as.integer(g), as.integer(0),
                stime, sstat, t(x1), as.double(0), as.integer(imodel),
                double(2*n+2*nvar*nvar+3*nvar), logdiff = double(g)))
# 1.69 seconds on R i386
# 0.79 seconds on R x64

# method 4: GSA
genenames=paste("g", 1:g, sep="")
# create some random gene sets
genesets=vector("list", 50)
for(i in 1:50){
  genesets[[i]]=paste("g", sample(1:g,size=30), sep="")
}
geneset.names=paste("set",as.character(1:50),sep="")
debug(GSA.func)
GSA.obj<-GSA(x,y, genenames=genenames, genesets=genesets,
            censoring.status=status,
            resp.type="Survival", nperms=1)
Browse[3]> str(catalog.unique)
 int [1:1401] 7943 227 4069 3011 8402 1586 2443 2777 673 9021 ...
Browse[3]> system.time(cox.func(x[catalog.unique,], y, censoring.status, s0=0))
# 1.3 seconds
Browse[2]> system.time(cox.func(x, y, censoring.status, s0=0))
# 7.259 seconds
</syntaxhighlight>

== Single gene vs mult-gene survival models ==
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2430-9 A comparative study of survival models for breast cancer prognostication revisited: the benefits of multi-gene models] by Grzadkowski et al 2018. To assess concordance of biomarker performance, the authors use the '''Concordance Correlation Coefficient (CCC)''' as introduced by Lin (1989) and further amended in Lin (2000).

== Random papers using C-index, AUC or Brier scores ==
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841879/pdf/IJPH-45-239.pdf Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data] 2016. AUC, Brier scores and the C-index were used.

== Cochran-Armitage test for trend (2xk) ==
* [https://en.wikipedia.org/wiki/Cochran%E2%80%93Armitage_test_for_trend Cochran–Armitage test for trend]
* [https://search.r-project.org/CRAN/refmans/DescTools/html/CochranArmitageTest.html CochranArmitageTest()]. CochranArmitageTest(dose, alternative="one.sided") if dose is a 2xk or kx2 matrix.
* [https://rdocumentation.org/packages/stats/versions/3.6.2/topics/prop.trend.test ?prop.trend.test]. prop.trend.test(dose[2,] , colSums(dose))

== PAsso: Partial Association between ordinal variables after adjustment ==
https://github.com/XiaoruiZhu/PAsso

== Cochran-Mantel-Haenszel (CMH) & Association Tests for Ordinal Table ==
* [https://predictivehacks.com/contingency-tables-in-r/ Contingency Tables In R]
* [https://rcompanion.org/handbook/H_09.html Association Tests for Ordinal Table]
* [https://online.stat.psu.edu/stat504/lesson/5/5.3/5.3.5 5.3.5 - Cochran-Mantel-Haenszel Test] psu.edu
* https://en.wikipedia.org/wiki/Cochran%E2%80%93Mantel%E2%80%93Haenszel_statistics

== GSEA ==
See [[GSEA|GSEA]].

== McNemar’s test on paired nominal data ==
https://en.wikipedia.org/wiki/McNemar%27s_test
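A minimal sketch of McNemar's test on a 2x2 table of paired responses (the before/after counts below are made up for illustration):
<syntaxhighlight lang='rsplus'>
# paired yes/no answers from the same subjects at two time points
tab <- matrix(c(794, 86, 150, 570), nrow = 2,
              dimnames = list(before = c("yes", "no"),
                              after  = c("yes", "no")))
mcnemar.test(tab)
# only the discordant cells (150 and 86) enter the statistic:
# (|150 - 86| - 1)^2 / (150 + 86) ~ 16.8 with continuity correction, p ~ 4e-05
</syntaxhighlight>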


== More ==
* This pdf file from [http://data.princeton.edu/pop509/NonParametricSurvival.pdf data.princeton.edu] contains estimation, hypothesis testing, time-varying covariates and baseline survival estimation.
* [http://www.petrkeil.com/?p=2425 Survival analysis: basic terms, the exponential model, censoring, examples in R and JAGS]
* [https://stats.stackexchange.com/questions/36015/prediction-in-cox-regression Survival analysis is not commonly used to predict future times to an event]. A Cox model would require specification of the baseline hazard function.

== R ==
[https://predictivehacks.com/contingency-tables-in-r/ Contingency Tables In R]. Two-Way Tables, Mosaic plots, Proportions of the Contingency Tables, Rows and Columns Totals, Statistical Tests, Three-Way Tables, Cochran-Mantel-Haenszel (CMH) Methods.
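A small base-R sketch of the operations listed there, using made-up 2x3 counts:
<syntaxhighlight lang='rsplus'>
tab <- as.table(matrix(c(20, 15, 10, 5, 12, 18), nrow = 2,
                       dimnames = list(group = c("A", "B"),
                                       level = c("low", "mid", "high"))))
addmargins(tab)              # row and column totals
prop.table(tab, margin = 1)  # row proportions
chisq.test(tab)              # test of independence
mosaicplot(tab)              # mosaic plot
</syntaxhighlight>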


= Logistic regression =
== Simulate binary data from the logistic model ==
https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
<syntaxhighlight lang='rsplus'>
set.seed(666)
x1 = rnorm(1000)           # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
# now feed it to glm:
df = data.frame(y=y,x1=x1,x2=x2)
glm( y~x1+x2,data=df,family="binomial")
</syntaxhighlight>

== Building a Logistic Regression model from scratch ==
https://www.analyticsvidhya.com/blog/2015/10/basics-logistic-regression

== Odds ratio ==
Calculate the odds ratio from the coefficient estimates; see [https://stats.stackexchange.com/questions/8661/logistic-regression-in-r-odds-ratio this post].
<syntaxhighlight lang='rsplus'>
require(MASS)
N  <- 100               # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y  <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)

# dichotomize Y and do logistic regression
Yfac   <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))

exp(cbind(coef(glmFit), confint(glmFit)))
</syntaxhighlight>

= Case control study =
* See the '''odds ratio''' calculation example in https://en.wikipedia.org/wiki/Odds_ratio, which shows that the odds ratio can be calculated but the '''relative risk''' cannot in a '''case-control study''' (useful in the rare-disease case).
* https://www.statisticshowto.datasciencecentral.com/case-control-study/
* https://medical-dictionary.thefreedictionary.com/case-control+study
* https://en.wikipedia.org/wiki/Case%E2%80%93control_study Cf. '''randomized controlled trial''', '''cohort study'''
* https://www.students4bestevidence.net/blog/2017/12/06/case-control-and-cohort-studies-overview/
* https://quizlet.com/16214330/case-control-study-flash-cards/

= Confidence vs Credibility Intervals =
http://freakonometrics.hypotheses.org/18117

== T-distribution vs normal distribution ==
* [https://www.statology.org/normal-distribution-vs-t-distribution/ Normal Distribution vs. t-Distribution: What’s the Difference?]
* Test for a normal distribution:
<pre>
set.seed(1); shapiro.test(rnorm(5000) )
# Shapiro-Wilk normality test
# data:  rnorm(5000)
# W = 0.99957, p-value = 0.3352  --> accept H0

set.seed(1234567); shapiro.test(rnorm(5000) )
# Shapiro-Wilk normality test
# data:  rnorm(5000)
# W = 0.99934, p-value = 0.06508 --> accept H0, but close to .05
</pre>

= Power analysis/Sample Size determination =
See [[Power|Power]].

= Common covariance/correlation structures =
See [https://onlinecourses.science.psu.edu/stat502/node/228 psu.edu]. Assume covariance <math>\Sigma = (\sigma_{ij})_{p\times p} </math>
* Diagonal structure: <math>\sigma_{ij} = 0</math> if <math>i \neq j</math>.
* Compound symmetry: <math>\sigma_{ij} = \rho</math> if <math>i \neq j</math>.
* First-order autoregressive AR(1) structure: <math>\sigma_{ij} = \rho^{|i - j|}</math>. <syntaxhighlight lang='rsplus'>
rho <- .8
p <- 5
blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
</syntaxhighlight>
* Banded matrix: <math>\sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0</math> and <math>\sigma_{ij}=0</math> for <math>|i-j| \ge 3</math>.
* Spatial Power
* Unstructured Covariance
* [https://en.wikipedia.org/wiki/Toeplitz_matrix Toeplitz structure]
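A minimal sketch constructing the compound-symmetry and banded structures listed above (stats::toeplitz(); the numeric values are arbitrary):
<syntaxhighlight lang='rsplus'>
p <- 5; rho <- .8
csMat <- matrix(rho, p, p); diag(csMat) <- 1   # compound symmetry
bandMat <- toeplitz(c(1, .5, .25, 0, 0))       # banded: two nonzero off-diagonals
</syntaxhighlight>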


To create blocks of the correlation matrix, use the "%x%" operator. See [https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/kronecker kronecker()].
{{Pre}}
covMat <- diag(n.blocks) %x% blockMat
</pre>

= Medical applications =
== Subgroup analysis ==
Other related keywords: recursive partitioning, randomized clinical trials (RCT)
* [https://www.rdatagen.net/post/sub-group-analysis-in-rct/ Thinking about different ways to analyze sub-groups in an RCT]
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.7064/full Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials] I Lipkovich, A Dmitrienko - Statistics in medicine, 2017
* Personalized medicine: Four perspectives of tailored medicine SJ Ruberg, L Shen - Statistics in Biopharmaceutical Research, 2015
* Berger, J. O., Wang, X., and Shen, L. (2014), “A Bayesian Approach to Subgroup Identification,” Journal of Biopharmaceutical Statistics, 24, 110–129.
* [https://rpsychologist.com/treatment-response-subgroup Change over time is not "treatment response"]

== Interaction analysis ==
* Goal: '''assessing the predictiveness of biomarkers''' by testing their '''interaction (strength) with the treatment'''.
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.7608 Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials] Kang et al 2018
* http://www.stat.purdue.edu/~ghobbs/STAT_512/Lecture_Notes/ANOVA/Topic_27.pdf#page=15. For survival data, the y-axis is the survival time, B1=treatment, B2=control, and the x-axis is the treatment-effect modifying score. But as seen on [http://www.stat.purdue.edu/~ghobbs/STAT_512/Lecture_Notes/ANOVA/Topic_27.pdf#page=16 page 16], the effects may not be separated.
* [http://onlinelibrary.wiley.com/doi/10.1002/bimj.201500234/full Identification of biomarker-by-treatment interactions in randomized clinical trials with survival outcomes and high-dimensional spaces] N Ternès, F Rotolo, G Heinze, S Michiels - Biometrical Journal, 2017
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.6564 Designing a study to evaluate the benefit of a biomarker for selecting patient treatment] Janes 2015
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/pst.1728 A visualization method measuring the performance of biomarkers for guiding treatment decisions] Yang et al 2015. Predictiveness curves were used a lot.
* [https://onlinelibrary.wiley.com/doi/epdf/10.1111/biom.12191 Combining Biomarkers to Optimize Patient Treatment Recommendations] Kang et al 2014. Several simulations are conducted.
* [https://www.ncbi.nlm.nih.gov/pubmed/24695044 An approach to evaluating and comparing biomarkers for patient treatment selection] Janes et al 2014
* [http://journals.sagepub.com/doi/pdf/10.1177/0272989X13493147 A Framework for Evaluating Markers Used to Select Patient Treatment] Janes et al 2014
* Tian, L., Alizaden, A. A., Gentles, A. J., and Tibshirani, R. (2014) “A Simple Method for Detecting Interactions Between a Treatment and a Large Number of Covariates,” and the [https://books.google.com/books?hl=en&lr=&id=2gG3CgAAQBAJ&oi=fnd&pg=PA79&ots=y5LqF3vk-T&sig=r2oaOxf9gcjK-1bvFHVyfvwscP8#v=onepage&q&f=true book chapter].
* [https://biostats.bepress.com/cgi/viewcontent.cgi?article=1228&context=uwbiostat Statistical Methods for Evaluating and Comparing Biomarkers for Patient Treatment Selection] Janes et al 2013
* [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1541-0420.2011.01722.x Assessing Treatment-Selection Markers using a Potential Outcomes Framework] Huang et al 2012
* [https://biostats.bepress.com/cgi/viewcontent.cgi?article=1223&context=uwbiostat Methods for Evaluating Prediction Performance of Biomarkers and Tests] Pepe et al 2012
* Measuring the performance of markers for guiding treatment decisions by Janes, et al 2011. <syntaxhighlight lang='rsplus'>
cf <- c(2, 1, .5, 0)
f1 <- function(x) { z <- cf[1] + cf[3] + (cf[2]+cf[4])*x; 1/ (1 + exp(-z)) }
f0 <- function(x) { z <- cf[1] + cf[2]*x; 1/ (1 + exp(-z)) }
par(mfrow=c(1,3))
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker A/D Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .4, c("control", "treatment"),
       col = c("black", "red"), lwd = 2)

cf <- c(.1, 1, -.1, .5)
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker G Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .4, c("control", "treatment"),
       col = c("black", "red"), lwd = 2)
abline(v= - cf[3]/cf[4], lty = 2)

cf <- c(1, -1, 1, 2)
curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
      ylab = '5-year DFS Rate', xlab = 'Marker B Value',
      main = 'Predictiveness Curve', lwd = 2)
curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
      xlab = '', ylab = '', lwd = 2, add = TRUE)
legend(.5, .85, c("control", "treatment"),
       col = c("black", "red"), lwd = 2)
abline(v= - cf[3]/cf[4], lty = 2)
</syntaxhighlight> [[File:PredcurveLogit.svg|500px]]
* [https://www.degruyter.com/downloadpdf/j/ijb.2014.10.issue-1/ijb-2012-0052/ijb-2012-0052.pdf An Approach to Evaluating and Comparing Biomarkers for Patient Treatment Selection] The International Journal of Biostatistics by Janes, 2014. The y-axis is risk given marker, not P(T > t0|X). Good details.
* Gunter, L., Zhu, J., and Murphy, S. (2011), “Variable Selection for Qualitative Interactions in Personalized Medicine While Controlling the Family-Wise Error Rate,” Journal of Biopharmaceutical Statistics, 21, 1063–1078.

= Counter/Special Examples =
* [https://www.tandfonline.com/doi/full/10.1080/00031305.2021.2004922 Myths About Linear and Monotonic Associations: Pearson’s r, Spearman’s ρ, and Kendall’s τ] van den Heuvel 2022

== Math myths ==
* [https://twitter.com/mathladyhazel/status/1557225372890152960 How 1+2+3+4+5+6+7+..... equals a negative number!] S=-1/8
* [https://en.wikipedia.org/wiki/1_+_2_+_3_+_4_+_%E2%8B%AF 1 + 2 + 3 + 4 + ⋯ = -1/12]

== Uncorrelated does not imply independent ==
Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y. And if you know Y, you know the absolute value of X.

The covariance of X and Y is
<pre>
  Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3)
           = 0,
</pre>
because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.

This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.

== Significant p value but no correlation ==
[https://stats.stackexchange.com/a/333752 Post] where p-value = 1.18e-06 but cor=0.067. The p-value does not say anything about the size of r.

== Spearman vs Pearson correlation ==
Pearson benchmarks linear relationships, Spearman benchmarks monotonic relationships. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation

[https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Testing_using_Student's_t-distribution Testing using Student's t-distribution] cor.test() (t-distribution with n-2 d.f.). The normality assumption is used in the test. For estimation, it affects unbiasedness and efficiency. See [https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Sensitivity_to_the_data_distribution Sensitivity to the data distribution].
<pre>
x=(1:100)
y=exp(x)
cor(x,y, method='spearman') # 1
cor(x,y, method='pearson')  # .25
</pre>

[https://stats.stackexchange.com/a/344758 How to know whether Pearson's or Spearman's correlation is better to use?] &
[https://statisticsbyjim.com/basics/spearmans-correlation/ Spearman’s Correlation Explained]. Spearman's ρ is preferable to the Pearson correlation because
* it doesn't assume a linear relationship between the variables
* it is resistant to outliers
* it handles ordinal data that are not interval-scaled
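A small sketch of the outlier-resistance point (one discordant outlier wrecks Pearson's r but only dents Spearman's ρ; simulated data):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- 1:20
y <- x + rnorm(20, sd = .5)
c(pearson = cor(x, y), spearman = cor(x, y, method = "spearman"))  # both ~ .99
y[20] <- -50   # one gross outlier
c(pearson = cor(x, y), spearman = cor(x, y, method = "spearman"))
# Pearson collapses toward 0; Spearman stays around .7
</syntaxhighlight>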


== Spearman vs Wilcoxon ==
By [http://www.talkstats.com/threads/wilcoxon-signed-rank-test-or-spearmans-rho.42395/ this post]
* Wilcoxon is used to compare a categorical variable versus a non-normal continuous variable
* Spearman's rho is used to compare two continuous (including '''ordinal''') variables when one or both are not normally distributed

== Spearman vs [https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient Kendall correlation] ==
* Kendall's tau coefficient (after the Greek letter τ) is a statistic used to measure the '''ordinal''' association between two measured quantities.
* [https://statisticaloddsandends.wordpress.com/2019/07/08/spearmans-rho-and-kendalls-tau/ Spearman’s rho and Kendall’s tau] from Statistical Odds & Ends
* [https://stats.stackexchange.com/questions/3943/kendall-tau-or-spearmans-rho Kendall Tau or Spearman's rho?]
* [https://finnstats.com/index.php/2021/06/10/kendalls-rank-correlation-in-r-correlation-test/ Kendall’s Rank Correlation in R-Correlation Test]
* Kendall’s tau is '''more robust (less sensitive) to ties and outliers''' than Spearman’s rho. However, if the data are continuous or nearly so, Spearman’s rho may be more appropriate.
* Kendall’s tau is preferred when dealing with '''small samples'''. [https://datascience.stackexchange.com/questions/64260/pearson-vs-spearman-vs-kendall Pearson vs Spearman vs Kendall].
* '''Interpretation of concordant and discordant pairs''': Kendall’s tau quantifies the difference between the percentages of concordant and discordant pairs among all possible pairwise events, which can be a more direct interpretation in certain contexts.
* Although Kendall’s tau has a higher computational complexity (O(n^2)) than Spearman’s rho (O(n log n)), it can still be preferred in certain scenarios.

== Pearson/Spearman/Kendall correlations ==
* [https://www.r-bloggers.com/2023/09/pearson-spearman-and-kendall-correlation-coefficients-by-hand/ Calculate Pearson, Spearman and Kendall correlation coefficients by hand]
* [https://datascience.stackexchange.com/questions/64260/pearson-vs-spearman-vs-kendall Pearson vs Spearman vs Kendall]. Formulas on one page.
* [https://ademos.people.uic.edu/Chapter22.html Chapter 22: Correlation Types and When to Use Them] from uic.edu

== [http://en.wikipedia.org/wiki/Anscombe%27s_quartet Anscombe quartet] ==
Four datasets have almost the same properties: same mean in X, same mean in Y, same variance in X, (almost) same variance in Y, same correlation between X and Y, same linear regression line.

[[:File:Anscombe quartet 3.svg]]

== phi correlation for binary variables ==
https://en.wikipedia.org/wiki/Phi_coefficient. A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient.
<pre>
set.seed(1)
data <- data.frame(x=sample(c(0,1), 100, replace = T), y= sample(c(0,1), 100, replace = T))
cor(data$x, data$y)
# [1] -0.03887781

library(psych)
psych::phi(table(data$x, data$y))
# [1] -0.04
</pre>

== The real meaning of spurious correlations ==
https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/
{{Pre}}
library(ggplot2)
library(dplyr)   # for %>%
set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) +
theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
theme_bw() + geom_smooth(method = "lm") +
scale_color_gradientn(colours = c("red", "white", "blue")) +
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
theme_bw() + geom_smooth(method = "lm") +
scale_color_gradientn(colours = c("red", "white", "blue")) +
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")
</pre>

= Statistical Learning =
* [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Elements of Statistical Learning] Book homepage
* [http://heather.cs.ucdavis.edu/draftregclass.pdf From Linear Models to Machine Learning] by Norman Matloff
* [http://www.kdnuggets.com/2017/04/10-free-must-read-books-machine-learning-data-science.html 10 Free Must-Read Books for Machine Learning and Data Science]
* [https://towardsdatascience.com/the-10-statistical-techniques-data-scientists-need-to-master-1ef6dbd531f7 10 Statistical Techniques Data Scientists Need to Master]
*# Linear regression
*# Classification: Logistic Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis
*# Resampling methods: Bootstrapping and Cross-Validation
*# Subset selection: Best-Subset Selection, Forward Stepwise Selection, Backward Stepwise Selection, Hybrid Methods
*# Shrinkage/regularization: Ridge regression, Lasso
*# Dimension reduction: Principal Components Regression, Partial least squares
*# Nonlinear models: Piecewise function, Spline, generalized additive model
*# Tree-based methods: Bagging, Boosting, Random Forest
*# Support vector machine
*# Unsupervised learning: PCA, k-means, Hierarchical
* [https://www.listendata.com/2018/03/regression-analysis.html?m=1 15 Types of Regression you should know]

== LDA (Fisher's linear discriminant), QDA ==
* https://en.wikipedia.org/wiki/Linear_discriminant_analysis
* [https://datascienceplus.com/how-to-perform-logistic-regression-lda-qda-in-r/ How to perform Logistic Regression, LDA, & QDA in R]
* [http://r-posts.com/discriminant-analysis-statistics-all-the-way/ Discriminant Analysis: Statistics All The Way]
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13065 Multiclass linear discriminant analysis with ultrahigh‐dimensional features] Li 2019

== Bagging ==
Chapter 8 of the book.

* Bootstrap mean is approximately a posterior average.
* Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
:<math>\hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x).</math>

[https://statcompute.wordpress.com/2016/01/02/where-bagging-might-work-better-than-boosting/ Where Bagging Might Work Better Than Boosting]

[https://freakonometrics.hypotheses.org/52777 CLASSIFICATION FROM SCRATCH, BAGGING AND FORESTS 10/8]

== Boosting ==
* Ch8.2 Bagging, Random Forests and Boosting of [http://www-bcf.usc.edu/~gareth/ISL/ An Introduction to Statistical Learning] and the [http://www-bcf.usc.edu/~gareth/ISL/Chapter%208%20Lab.txt code].
* [http://freakonometrics.hypotheses.org/19874 An Attempt To Understand Boosting Algorithm]
* [http://cran.r-project.org/web/packages/gbm/index.html gbm] package. An implementation of extensions to Freund and Schapire's '''AdaBoost algorithm''' and Friedman's '''gradient boosting machine'''. Includes regression methods for least squares, absolute loss, t-distribution loss, [http://mathewanalytics.com/2015/11/13/applied-statistical-theory-quantile-regression/ quantile regression], logistic, multinomial logistic, Poisson, Cox proportional hazards partial likelihood, AdaBoost exponential loss, Huberized hinge loss, and Learning to Rank measures (LambdaMart).
* https://www.biostat.wisc.edu/~kendzior/STAT877/illustration.pdf
* http://www.is.uni-freiburg.de/ressourcen/business-analytics/10_ensemblelearning.pdf and [http://www.is.uni-freiburg.de/ressourcen/business-analytics/homework_ensemblelearning_questions.pdf exercise]
* [https://freakonometrics.hypotheses.org/52782 Classification from scratch]

=== AdaBoost ===
AdaBoost.M1 by Freund and Schapire (1997):

The error rate on the training sample is
<math>
\bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)).
</math>

Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers <math>G_m(x), m=1,2,\dots,M.</math>

The predictions from all of them are combined through a weighted majority vote to produce the final prediction:
<math>
G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)].
</math>
Here <math> \alpha_1,\alpha_2,\dots,\alpha_M</math> are computed by the boosting algorithm and weight the contribution of each respective <math>G_m(x)</math>. Their effect is to give higher influence to the more accurate classifiers in the sequence.
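A toy sketch of this reweighting-and-vote scheme, using rpart stumps as the weak learners (an illustration under simulated data, not the gbm implementation):
<syntaxhighlight lang='rsplus'>
library(rpart)
set.seed(1)
n  <- 200
x1 <- runif(n); x2 <- runif(n)
y  <- factor(ifelse(x1^2 + x2^2 > 0.8, 1, -1))   # nonlinear decision boundary
dat <- data.frame(x1, x2, y)

M <- 20
w <- rep(1/n, n)                    # observation weights, start uniform
alpha  <- numeric(M)
stumps <- vector("list", M)
for (m in 1:M) {
  fit  <- rpart(y ~ x1 + x2, data = dat, weights = w,
                control = rpart.control(maxdepth = 1, minsplit = 2, cp = -1, xval = 0))
  pred <- predict(fit, dat, type = "class")
  err  <- sum(w * (pred != y)) / sum(w)   # weighted training error of G_m
  alpha[m] <- log((1 - err) / err)        # classifier weight alpha_m
  w <- w * exp(alpha[m] * (pred != y))    # up-weight the misclassified cases
  w <- w / sum(w)
  stumps[[m]] <- fit
}
# weighted majority vote: G(x) = sign(sum_m alpha_m G_m(x))
scores <- rowSums(sapply(1:M, function(m)
  alpha[m] * ifelse(predict(stumps[[m]], dat, type = "class") == "1", 1, -1)))
mean(sign(scores) == ifelse(y == "1", 1, -1))   # training accuracy of the ensemble
</syntaxhighlight>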


=== Dropout regularization ===
[https://statcompute.wordpress.com/2017/08/20/dart-dropout-regularization-in-boosting-ensembles/ DART: Dropout Regularization in Boosting Ensembles]

=== Gradient boosting ===
* https://en.wikipedia.org/wiki/Gradient_boosting
* [https://shirinsplayground.netlify.com/2018/11/ml_basics_gbm/ Machine Learning Basics - Gradient Boosting & XGBoost]
* [http://www.sthda.com/english/articles/35-statistical-machine-learning-essentials/139-gradient-boosting-essentials-in-r-using-xgboost/ Gradient Boosting Essentials in R Using XGBOOST]

== A New Coefficient of Correlation ==
[https://towardsdatascience.com/a-new-coefficient-of-correlation-64ae4f260310 A New Coefficient of Correlation] Chatterjee, 2020 JASA

= Time series =
* Time Series in 5-Minutes
** [https://www.business-science.io/code-tools/2020/08/26/five-minute-time-series-seasonality.html Part 4: Seasonality]
* [http://ellisp.github.io/blog/2016/12/07/arima-prediction-intervals Why time series forecasts prediction intervals aren't as good as we'd hope]


== Structural change ==
[https://datascienceplus.com/structural-changes-in-global-warming/ Structural Changes in Global Warming]

== AR(1) processes and random walks ==
[https://fdabl.github.io/r/Spurious-Correlation.html Spurious correlations and random walks]

= Measurement Error model =
* [https://en.wikipedia.org/wiki/Errors-in-variables_models Errors-in-variables models or measurement error models]
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13112 Simulation‐‐Selection‐‐Extrapolation: Estimation in High‐‐Dimensional Errors‐‐in‐‐Variables Models] Nghiem 2019

= Polya Urn Model =
[https://blog.ephorie.de/the-polya-urn-model-a-simple-simulation-of-the-rich-get-richer The Pólya Urn Model: A simple Simulation of “The Rich get Richer”]

= Dictionary =
* '''Prognosis''' is the probability that an event or diagnosis will result in a particular outcome.
** For example, in the paper [http://clincancerres.aacrjournals.org/content/18/21/6065.figures-only Developing and Validating Continuous Genomic Signatures in Randomized Clinical Trials for Predictive Medicine] by Matsui 2012, a prognostic score of .1 (0.9) represents a '''good (poor)''' prognosis.
** Prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis. See [https://en.wikipedia.org/wiki/Survival_rate Survival rate] in Wikipedia.

= Statistical guidance =
* [https://osf.io/preprints/metaarxiv/q6ajt Statistical guidance to authors at top-ranked scientific journals: A cross-disciplinary assessment]
* [https://www.youtube.com/watch?v=iu4VsEv1WIo How to get your article rejected by the BMJ: 12 common statistical issues] Richard Riley

== Gradient descent ==
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function ([https://en.wikipedia.org/wiki/Gradient_descent Wikipedia]).

* [https://spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/ An Introduction to Gradient Descent and Linear Regression] Easy to understand, based on simple linear regression. Code is provided too.
* [http://gradientdescending.com/applying-gradient-descent-primer-refresher/ Applying gradient descent – primer / refresher]
* [http://sebastianruder.com/optimizing-gradient-descent/index.html An overview of Gradient descent optimization algorithms]
* [https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/ A Complete Tutorial on Ridge and Lasso Regression in Python]
* How to choose the learning rate?
** [http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex3/ex3.html Machine learning] from Andrew Ng
** http://scikit-learn.org/stable/modules/sgd.html
* R packages
** https://cran.r-project.org/web/packages/gradDescent/index.html, https://www.rdocumentation.org/packages/gradDescent/versions/2.0
** https://cran.r-project.org/web/packages/sgd/index.html

The error function for a simple linear regression is
: <math>
\begin{align}
Err(m,b) &= \frac{1}{N}\sum_{i=1}^N (y_i - (m x_i + b))^2.
\end{align}
</math>

We first compute the gradient with respect to each parameter:
: <math>
\begin{align}
\frac{\partial Err}{\partial m} &= \frac{2}{N} \sum_{i=1}^N -x_i(y_i - (m x_i + b)), \\
\frac{\partial Err}{\partial b} &= \frac{2}{N} \sum_{i=1}^N -(y_i - (m x_i + b)).
\end{align}
</math>

The gradient descent algorithm updates the estimates iteratively using a tuning parameter called the '''learning rate''':
<pre>
new_m = m_current - (learningRate * m_gradient)
new_b = b_current - (learningRate * b_gradient)
</pre>

After each iteration, the gradient moves closer to zero. [http://blog.hackerearth.com/gradient-descent-algorithm-linear-regression Coding in R] for the simple linear regression.
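A minimal R translation of the update rule above for simple linear regression (the learning rate and iteration count are arbitrary choices):
<syntaxhighlight lang='rsplus'>
set.seed(42)
x <- runif(100, 0, 10)
y <- 3 + 2 * x + rnorm(100)   # true b = 3, m = 2

m <- 0; b <- 0
learningRate <- 0.01
for (iter in 1:5000) {
  resid <- y - (m * x + b)
  m_gradient <- -2 * mean(x * resid)   # dErr/dm
  b_gradient <- -2 * mean(resid)       # dErr/db
  m <- m - learningRate * m_gradient
  b <- b - learningRate * b_gradient
}
c(m = m, b = b)   # close to the true (2, 3)
coef(lm(y ~ x))   # closed-form answer for comparison
</syntaxhighlight>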


=== Gradient descent vs Newton's method ===
* [https://stackoverflow.com/a/12066869 What is the difference between Gradient Descent and Newton's Gradient Descent?]
* [http://www.santanupattanayak.com/2017/12/19/newtons-method-vs-gradient-descent-method-in-tacking-saddle-points-in-non-convex-optimization/ Newton's Method vs Gradient Descent Method in tacking saddle points in Non-Convex Optimization]
* [https://dinh-hung-tu.github.io/gradient-descent-vs-newton-method/ Gradient Descent vs Newton Method]

= Books, learning material =
* [https://leanpub.com/biostatmethods Methods in Biostatistics with R] ($)
* [http://web.stanford.edu/class/bios221/book/ Modern Statistics for Modern Biology] (free)
* Principles of Applied Statistics, by David Cox & Christl Donnelly
* [https://www.amazon.com/Freedman-Robert-Pisani-Statistics-Hardcover/dp/B004QNRMDK/ Statistics] by David Freedman, Robert Pisani, Roger Purves
* [https://onlinelibrary.wiley.com/topic/browse/000113 Wiley Online Library -> Statistics] (Access via NIH Library)
* [https://web.stanford.edu/~hastie/CASI/ Computer Age Statistical Inference: Algorithms, Evidence and Data Science] by Efron and Hastie 2016
* [https://si.biostat.washington.edu/suminst/sisg2020/modules UW Biostatistics Summer Courses] (4 institutes)
* [https://www.springer.com/series/2848/books Statistics for Biology and Health] Springer
* [https://pyoflife.com/bayesian-essentials-with-r/ Bayesian Essentials with R]
* [https://www.maths.ed.ac.uk/~swood34/core-statistics.pdf Core Statistics] Simon Wood


== Classification and Regression Trees (CART) ==
=== Construction of the tree classifier ===
* Node proportion
:<math> p(1|t) + \dots + p(6|t) =1 </math> where <math>p(j|t)</math> denotes the node proportion (the proportion of class ''j'' cases in node ''t''). Here we assume there are 6 classes.
* Impurity of node t
:<math>i(t)</math> is a nonnegative function <math>\phi</math> of the <math>p(1|t), \dots, p(6|t)</math> such that <math> \phi(1/6,1/6,\dots,1/6)</math> = maximum, and <math>\phi(1,0,\dots,0)=0, \phi(0,1,0,\dots,0)=0, \dots, \phi(0,0,0,0,0,1)=0</math>. That is, the node impurity is largest when all classes are equally mixed together in a node, and smallest when the node contains only one class.
* Entropy impurity
:<math>i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t).</math> (The Gini index, <math>\sum_{j} p(j|t)(1-p(j|t))</math>, is another common choice of impurity function.)
* Goodness of the split s on node t
:<math>\Delta i(s, t) = i(t) -p_Li(t_L) - p_Ri(t_R), </math> where <math>p_L</math> is the proportion of the cases in ''t'' that go into the left node <math>t_L</math> and <math>p_R</math> is the proportion that go into the right node <math>t_R</math>.
A tree was grown in the following way: at the root node <math>t_1</math>, a search was made through all candidate splits to find the split <math>s^*</math> which gave the largest decrease in impurity:
:<math>\Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1).</math>
* The class character of a terminal node was determined by the plurality rule. Specifically, if <math>p(j_0|t)=\max_j p(j|t)</math>, then ''t'' was designated as a class <math>j_0</math> terminal node.


=== R packages ===
* [http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf rpart]
* http://exploringdatablog.blogspot.com/2013/04/classification-tree-models.html
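
A minimal '''rpart''' sketch of the ideas above, using the built-in iris data; setting split="information" requests the entropy impurity (the default for classification is the Gini index):
<syntaxhighlight lang='rsplus'>
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class",
             parms = list(split = "information"))  # entropy; default is "gini"
printcp(fit)                        # cross-validated error for each subtree size
plot(fit); text(fit, use.n = TRUE)  # draw the tree
table(iris$Species, predict(fit, type = "class"))
</syntaxhighlight>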

== Partially additive (generalized) linear model trees ==
* https://eeecon.uibk.ac.at/~zeileis/news/palmtree/
* https://cran.r-project.org/web/packages/palmtree/index.html
 
== Supervised Classification, Logistic and Multinomial ==
* http://freakonometrics.hypotheses.org/19230
 
== Variable selection ==
=== Review ===
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.
 
=== Variable selection and variable importance plot ===
* http://freakonometrics.hypotheses.org/19835
 
=== Variable selection and cross-validation ===
* http://freakonometrics.hypotheses.org/19925
* http://ellisp.github.io/blog/2016/06/05/bootstrap-cv-strategies/
 
=== Mallows's ''C<sub>p</sub>'' ===
Mallows's ''C<sub>p</sub>'' addresses the issue of overfitting. The ''C<sub>p</sub>'' statistic calculated on a sample of data estimates the '''mean squared prediction error (MSPE)'''
:<math>
E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2.
</math>
The ''C<sub>p</sub>'' statistic is defined as
:<math> C_p={SSE_p \over S^2} - N + 2P, </math>
where ''SSE<sub>p</sub>'' is the error sum of squares of the model with ''P'' regressors, ''S''<sup>2</sup> is an estimate of ''σ''<sup>2</sup> (typically from the full model), and ''N'' is the sample size.
 
* https://en.wikipedia.org/wiki/Mallows%27s_Cp
* Used in Yuan & Lin (2006) group lasso. The degrees of freedom is estimated by the bootstrap or perturbation methods. Their paper mentioned the performance is comparable with that of 5-fold CV but is computationally much faster.
 
=== Variable selection for mode regression ===
[http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Variable selection for mode regression] by Chen & Zhou, Journal of Applied Statistics, June 2017
 
== Neural network ==
* [http://junma5.weebly.com/data-blog/build-your-own-neural-network-classifier-in-r Build your own neural network in R]
* (Video) [https://youtu.be/ntKn5TPHHAk 10.2: Neural Networks: Perceptron Part 1 - The Nature of Code] from the Coding Train. The book [http://natureofcode.com/book/chapter-10-neural-networks/ THE NATURE OF CODE] by DANIEL SHIFFMAN
* [https://freakonometrics.hypotheses.org/52774 CLASSIFICATION FROM SCRATCH, NEURAL NETS]. The ROCR package was used to produce the ROC curve.
 
== Support vector machine (SVM) ==
* [https://statcompute.wordpress.com/2016/03/19/improve-svm-tuning-through-parallelism/ Improve SVM tuning through parallelism] by using the '''foreach''' and '''doParallel''' packages.
 
== Quadratic Discriminant Analysis (qda), KNN ==
[https://datarvalue.blogspot.com/2017/05/machine-learning-stock-market-data-part_16.html Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN]
 
== [https://en.wikipedia.org/wiki/Regularization_(mathematics) Regularization] ==
Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
 
=== Ridge regression ===
* [https://stats.stackexchange.com/questions/52653/what-is-ridge-regression What is ridge regression?]
* [https://stats.stackexchange.com/questions/118712/why-does-ridge-estimate-become-better-than-ols-by-adding-a-constant-to-the-diago Why does ridge estimate become better than OLS by adding a constant to the diagonal?] The estimates become more stable if the covariates are highly correlated.
* (In ridge regression) the matrix we need to invert no longer has determinant near zero, so the solution does not lead to uncomfortably large variance in the estimated parameters. And that’s a good thing. See [https://tamino.wordpress.com/2011/02/12/ridge-regression/ this post].
* [https://www.tandfonline.com/doi/abs/10.1080/02664763.2018.1526891?journalCode=cjas20 Multicolinearity and ridge regression: results on type I errors, power and heteroscedasticity]
 
Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.
 
[https://drsimonj.svbtle.com/ridge-regression-with-glmnet ridge regression with glmnet]
 
Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to the constraint <math>\sum|\beta_j|^2 \le t</math>. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator <math>\hat{\beta} = (X^TX + \lambda I)^{-1} X^T y </math> where <math>\lambda=\lambda(t)</math>, a function of ''t'', the variance is smaller than that of the OLS estimator.
 
The solution exists if <math>\lambda >0</math> even if <math>n < p </math>.
 
Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.
 
Geometrically (think of the contour plot of the cost function together with the constraint region), the L1 penalty (the sum of absolute values of the coefficients) gives a positive probability of some coefficients being exactly zero (a coefficient estimate can hit a corner of the diamond in the 2D case). For example, in the 2D case (X-axis=<math>\beta_0</math>, Y-axis=<math>\beta_1</math>), the constraint region of the L1 penalty <math>|\beta_0| + |\beta_1|</math> is a diamond whereas that of the L2 penalty (<math>\beta_0^2 + \beta_1^2</math>) is a circle.
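
A minimal sketch of the closed-form ridge estimator above on two nearly collinear covariates (intercept omitted and no standardization, for brevity; the value of lambda is illustrative):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n <- 50
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)           # nearly identical covariates
X <- cbind(x1, x2)
y <- x1 + x2 + rnorm(n)

solve(crossprod(X)) %*% crossprod(X, y)  # OLS: X'X nearly singular, unstable estimates
lambda <- 1
solve(crossprod(X) + lambda * diag(2)) %*% crossprod(X, y)  # ridge: stable estimates
</syntaxhighlight>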
 
=== Lasso/glmnet, adaptive lasso and FAQs ===
* https://en.wikipedia.org/wiki/Lasso_(statistics). It discusses the case when two covariates are highly correlated. For example, if gene <math>j</math> and gene <math>k</math> are identical, then the values of <math>\beta _{j}</math> and <math>\beta _{k}</math> that minimize the lasso objective function are not uniquely determined. Elastic Net has been designed to address this shortcoming.
** The tendency of strongly correlated covariates to have similar regression coefficients is referred to as the '''grouping''' effect. From the wikipedia page: ''"one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does. In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso)"''.
* [https://web.stanford.edu/~hastie/Papers/Glmnet_Vignette.pdf Glmnet Vignette]. It tries to minimize <math>RSS(\beta) + \lambda [(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1] </math>. The ''elastic-net'' penalty is controlled by <math>\alpha</math>, and bridges the gap between lasso (<math>\alpha = 1</math>) and ridge (<math>\alpha = 0</math>). Following is a CV curve (adaptive lasso) using the example from glmnet(). Two vertical lines are indicated: the left one is '''lambda.min''' (which gives the minimum mean cross-validated error) and the right one is '''lambda.1se''' (the most ''regularized'' model such that the error is within one standard error of the minimum). For the tuning parameter <math>\lambda</math>:
** The larger the <math>\lambda</math>, more coefficients are becoming zeros (think about '''coefficient path''' plots) and thus the simpler (more '''regularized''') the model.
** If <math>\lambda</math> becomes zero, it reduces to the regular regression and if <math>\lambda</math> becomes infinity, the coefficients become zeros.
** In terms of the bias-variance tradeoff, the larger the <math>\lambda</math>, the higher the bias and the lower the variance of the coefficient estimators.
 
[[File:Glmnetplot.svg|250px]]  [[File:Glmnet path.svg|280px]] [[File:Glmnet l1norm.svg|280px]]
: <syntaxhighlight lang='rsplus'>
library(glmnet)
library(doParallel)       # parallel=TRUE in cv.glmnet() needs a registered backend
registerDoParallel(cores = 2)

set.seed(1010)
n=1000;p=100
nzc=trunc(p/10)
x=matrix(rnorm(n*p),n,p)
beta=rnorm(nzc)
fx= x[,seq(nzc)] %*% beta
eps=rnorm(n)*5
y=drop(fx+eps)
px=exp(fx)
px=px/(1+px)
ly=rbinom(n=length(px),prob=px,size=1)
 
## Full lasso
set.seed(999)
cv.full <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE)
plot(cv.full)  # cross-validation curve and two lambda's
plot(glmnet(x, ly, family='binomial', alpha=1), xvar="lambda", label=TRUE) # coefficient path plot
plot(glmnet(x, ly, family='binomial', alpha=1))  # L1 norm plot
log(cv.full$lambda.min) # -4.546394
log(cv.full$lambda.1se) # -3.61605
sum(coef(cv.full, s=cv.full$lambda.min) != 0) # 44
 
## Ridge Regression to create the Adaptive Weights Vector
set.seed(999)
cv.ridge <- cv.glmnet(x, ly, family='binomial', alpha=0, parallel=TRUE)
wt <- 1/abs(matrix(coef(cv.ridge, s=cv.ridge$lambda.min)
                  [, 1][2:(ncol(x)+1)] ))^1 ## Using gamma = 1, exclude intercept
## Adaptive Lasso using the 'penalty.factor' argument
set.seed(999)
cv.lasso <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE, penalty.factor=wt)
# default type.measure="deviance" for logistic regression
plot(cv.lasso)
log(cv.lasso$lambda.min) # -2.995375
log(cv.lasso$lambda.1se) # -0.7625655
sum(coef(cv.lasso, s=cv.lasso$lambda.min) != 0) # 34
</syntaxhighlight>
* A list of potential lambdas: see [http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html#lin Linear Regression] case. The λ sequence is determined by '''lambda.max''' and '''lambda.min.ratio'''. The latter (default is ifelse(nobs<nvars,0.01,0.0001)) is the ratio of the smallest value of the generated λ sequence (say ''lambda.min'') to ''lambda.max''. The program then generates ''nlambda'' values linear on the log scale from ''lambda.max'' down to ''lambda.min''. ''lambda.max'' is not given, but easily computed from the input x and y; it is the smallest value of ''lambda'' such that all the coefficients are zero.
* [https://privefl.github.io/blog/choosing-hyper-parameters-in-penalized-regression/ Choosing hyper-parameters (α and λ) in penalized regression] by Florian Privé
* [https://stats.stackexchange.com/questions/70249/feature-selection-model-with-glmnet-on-methylation-data-pn lambda.min vs lambda.1se]
** The '''lambda.1se''' represents the value of λ in the search that was simpler than the best model ('''lambda.min'''), but which has error within 1 standard error of the best model. In other words, using the value of ''lambda.1se'' as the selected value for λ results in a model that is slightly simpler than the best model but which cannot be distinguished from the best model in terms of error given the uncertainty in the k-fold CV estimate of the error of the best model.
** The '''lambda.min''' option refers to value of λ at the lowest CV error. The error at this value of λ is the average of the errors over the k folds and hence this estimate of the error is uncertain.
* https://www.rdocumentation.org/packages/glmnet/versions/2.0-10/topics/glmnet
* [http://blog.revolutionanalytics.com/2016/11/glmnetutils.html glmnetUtils: quality of life enhancements for elastic net regression with glmnet]
* Mixing parameter: alpha=1 is the '''lasso penalty''', and alpha=0 the '''ridge penalty''' and anything between 0–1 is '''Elastic net'''.
** Ridge regression uses the Euclidean distance/L2-norm as the penalty. It won't remove any variables.
** Lasso uses L1-norm as the penalty. Some of the coefficients may be shrunk exactly to zero.
* [https://www.quora.com/In-ridge-regression-and-lasso-what-is-lambda In ridge regression and lasso, what is lambda?]
** Lambda is a penalty coefficient. Large lambda will shrink the coefficients.
** cv.glmnet()$lambda.1se gives the most regularized model such that the error is within one standard error of the minimum
* cv.glmnet() has a logical parameter '''parallel''' which is useful if a cluster or cores have been previously allocated
* [http://statweb.stanford.edu/~tibs/sta305files/Rudyregularization.pdf Ridge regression and the LASSO]
* Standard error/Confidence interval
** [https://www.reddit.com/r/statistics/comments/1vg8k0/standard_errors_in_glmnet/ Standard Errors in GLMNET] and [https://stackoverflow.com/questions/39750965/confidence-intervals-for-ridge-regression Confidence intervals for Ridge regression]
** '''[https://cran.r-project.org/web/packages/penalized/vignettes/penalized.pdf#page=18 Why SEs are not meaningful and are usually not provided in penalized regression?]'''
**# Hint:  standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods.
**# '''Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias.'''
**# The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part.
**# Any bootstrap-based calculations can only give an assessment of the variance of the estimates.
**# Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
** [https://stats.stackexchange.com/tags/glmnet/hot Hottest glmnet questions from stackexchange].
** [https://stats.stackexchange.com/questions/91462/standard-errors-for-lasso-prediction-using-r Standard errors for lasso prediction]. There might not be a consensus on a statistically valid method of calculating standard errors for the lasso predictions.
** [https://www4.stat.ncsu.edu/~lu/programcodes.html Code] for Adaptive-Lasso for Cox's proportional hazards model by Zhang & Lu (2007). This can compute the SE of estimates. The weights are originally based on the maximizers of the log partial likelihood. However, the beta may not be estimable in cases such as high-dimensional gene data, or the beta may be unstable if strong collinearity exists among covariates. In such cases, robust estimators such as ridge regression estimators would be used to determine the weights.
* LASSO vs Least angle regression
** https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf
** [http://web.stanford.edu/~hastie/TALKS/larstalk.pdf Least Angle Regression, Forward Stagewise and the Lasso]
** https://www.quora.com/What-is-Least-Angle-Regression-and-when-should-it-be-used
** [http://statweb.stanford.edu/~tibs/lasso/simple.html A simple explanation of the Lasso and Least Angle Regression]
** https://stats.stackexchange.com/questions/4663/least-angle-regression-vs-lasso
** https://cran.r-project.org/web/packages/lars/index.html
* '''Oracle property''' and '''adaptive lasso'''
** [http://www.stat.wisc.edu/~shao/stat992/fan-li2001.pdf Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties], Fan & Li (2001) JASA
** [http://ricardoscr.github.io/how-to-adaptive-lasso.html Adaptive Lasso: What it is and how to implement in R]. Adaptive lasso seeks to minimize <math> RSS(\beta) + \lambda \sum_1^p \hat{\omega}_j |\beta_j| </math> where <math>\lambda</math> is the tuning parameter, <math>\hat{\omega}_j = \frac{1}{(|\hat{\beta}_j^{ini}|)^\gamma}</math> is the adaptive weights vector and <math>\hat{\beta}_j^{ini}</math> is an initial estimate of the coefficients obtained through ridge regression. '''Adaptive Lasso ends up penalizing more those coefficients with lower initial estimates.''' <math>\gamma</math> is a positive constant for adjustment of the adaptive weight vector, and the authors suggest the possible values of 0.5, 1 and 2.
** When n goes to infinity, <math>\hat{\omega}_j |\beta_j|  </math> converges to <math>I(\beta_j \neq 0) </math>. So the adaptive Lasso procedure can be regarded as an automatic implementation of best-subset selection in some asymptotic sense.
** [https://stats.stackexchange.com/questions/229142/what-is-the-oracle-property-of-an-estimator What is the oracle property of an estimator?] An oracle estimator must be consistent in 1) '''variable selection''' and 2) '''consistent parameter estimation'''.
** (Linear regression) The adaptive lasso and its oracle properties Zou (2006, JASA)
** (Cox model) Adaptive-LASSO for Cox's proportional hazard model by Zhang and Lu (2007, Biometrika)
**[https://insightr.wordpress.com/2017/06/14/when-the-lasso-fails/ When the LASSO fails???]. Adaptive lasso is used to demonstrate its usefulness.
* [https://statisticaloddsandends.wordpress.com/2018/11/13/a-deep-dive-into-glmnet-penalty-factor/ A deep dive into glmnet: penalty.factor], [https://statisticaloddsandends.wordpress.com/2018/11/15/a-deep-dive-into-glmnet-standardize/ standardize], [https://statisticaloddsandends.wordpress.com/2019/01/09/a-deep-dive-into-glmnet-offset/ offset]
** Lambda sequence is not affected by the "penalty.factor"
** How "penalty.factor" used by the objective function may need to be corrected
* Some issues:
** With a group of highly correlated features, lasso tends to select one of them arbitrarily.
** Empirically, ridge often has better predictive performance than lasso, but lasso leads to a sparser solution.
** Elastic-net (Zou & Hastie '05) aims to address these issues: hybrid between Lasso and ridge regression, uses L1 and L2 penalties.
* [https://statcompute.wordpress.com/2019/02/23/gradient-free-optimization-for-glmnet-parameters/ Gradient-Free Optimization for GLMNET Parameters]
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2656-1 Gsslasso Cox]: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information, Tang et al BMC Bioinformatics 2019
 
=== Lasso logistic regression ===
https://freakonometrics.hypotheses.org/52894
 
=== Lagrange Multipliers ===
[https://medium.com/@andrew.chamberlain/a-simple-explanation-of-why-lagrange-multipliers-works-253e2cdcbf74 A Simple Explanation of Why Lagrange Multipliers Works]
 
=== How to solve lasso/convex optimization ===
* [https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf Convex Optimization] by Boyd S, Vandenberghe L, Cambridge 2004. It is cited by Zhang & Lu (2007). The '''interior point algorithm''' can be used to solve the optimization problem in adaptive lasso.
* Review of '''gradient descent''':
** Finding maximum: <math>w^{(t+1)} = w^{(t)} + \eta \frac{dg(w)}{dw}</math>, where <math>\eta</math> is stepsize.
** Finding minimum: <math>w^{(t+1)} = w^{(t)} - \eta \frac{dg(w)}{dw}</math>.
** [https://stackoverflow.com/questions/12066761/what-is-the-difference-between-gradient-descent-and-newtons-gradient-descent What is the difference between Gradient Descent and Newton's Gradient Descent?] Newton's method requires <math>g''(w)</math>, more smoothness of g(.).
** Finding minimum for multiple variables ('''gradient descent'''): <math>w^{(t+1)} = w^{(t)} - \eta \nabla g(w^{(t)})</math>. For the least squares problem, <math>g(w) = RSS(w)</math>.
** Finding minimum for multiple variables in the least squares problem (minimize <math>RSS(w)</math>): <math>\text{partial}(j) = -2\sum_i h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = w_j^{(t)} - \eta \; \text{partial}(j)</math>
** Finding minimum for multiple variables in the ridge regression problem (minimize <math>RSS(w)+\lambda ||w||_2^2=(y-Hw)'(y-Hw)+\lambda w'w</math>): <math>\text{partial}(j) = -2\sum_i h_j(x_i)(y_i - \hat{y}_i(w^{(t)})), \; w_j^{(t+1)} = (1-2\eta \lambda) w_j^{(t)} - \eta \; \text{partial}(j)</math>. Compared to the closed form approach <math>\hat{w} = (H'H + \lambda I)^{-1}H'y</math>, note that 1. the inverse exists even when N<D as long as <math>\lambda > 0</math> and 2. the complexity of the inverse is <math>O(D^3)</math>, where D is the dimension of the covariates.
* '''Cyclical coordinate descent''' was used ([https://cran.r-project.org/web/packages/glmnet/vignettes/glmnet_beta.pdf#page=1 vignette]) in the glmnet package. See also '''[https://en.wikipedia.org/wiki/Coordinate_descent coordinate descent]'''. The reason we call it 'descent' is because we want to 'minimize' an objective function.
** <math>\hat{w}_j = \min_w g(\hat{w}_1, \cdots, \hat{w}_{j-1},w, \hat{w}_{j+1}, \cdots, \hat{w}_D)</math>
** See [https://www.jstatsoft.org/article/view/v033i01 paper] on JSS 2010. The Cox PHM case also uses the cyclical coordinate descent method; see the [https://www.jstatsoft.org/article/view/v039i05 paper] on JSS 2011.
** Coursera's [https://www.coursera.org/learn/ml-regression/lecture/rb179/feature-selection-lasso-and-nearest-neighbor-regression Machine learning course 2: Regression] at 1:42. [http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=12 Soft-thresholding] the coefficients is the key for the L1 penalty. The range for the thresholding is controlled by <math>\lambda</math>. Note to view the videos and all materials in coursera we can enroll to audit the course without starting a trial.
** No step size is required as in gradient descent.
** [https://sandipanweb.wordpress.com/2017/05/04/implementing-lasso-regression-with-coordinate-descent-and-the-sub-gradient-of-the-l1-penalty-with-soft-thresholding/ Implementing LASSO Regression with Coordinate Descent, Sub-Gradient of the L1 Penalty and Soft Thresholding in Python]
** Coordinate descent in the least squares problem: <math>\frac{\partial}{\partial w_j} RSS(w)= -2 \rho_j + 2 w_j</math>; i.e. <math>\hat{w}_j = \rho_j</math>.
** Coordinate descent in the Lasso problem (for normalized features; a toy R implementation follows this list): <math>
\hat{w}_j =
\begin{cases}
\rho_j + \lambda/2, & \text{if }\rho_j < -\lambda/2 \\
0, & \text{if } -\lambda/2 \le \rho_j \le \lambda/2\\
\rho_j- \lambda/2, & \text{if }\rho_j > \lambda/2
\end{cases}
</math>
** Choosing <math>\lambda</math> via cross validation tends to favor less sparse solutions and thus a smaller <math>\lambda</math> than the optimal choice for feature selection. See "Machine learning: a probabilistic perspective", Murphy 2012.
* Classical: Least angle regression (LARS) Efron et al 2004.
* [https://www.mathworks.com/help/stats/lasso.html?s_tid=gn_loc_drop Alternating Direction Method of Multipliers (ADMM)]. Boyd, 2011. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends in Machine Learning. Vol. 3, No. 1, 2010, pp. 1–122.
** https://stanford.edu/~boyd/papers/pdf/admm_slides.pdf
** [https://cran.r-project.org/web/packages/ADMM/ ADMM] package
** [https://www.quora.com/Convex-Optimization-Whats-the-advantage-of-alternating-direction-method-of-multipliers-ADMM-and-whats-the-use-case-for-this-type-of-method-compared-against-classic-gradient-descent-or-conjugate-gradient-descent-method What's the advantage of alternating direction method of multipliers (ADMM), and what's the use case for this type of method compared against classic gradient descent or conjugate gradient descent method?]
* [https://math.stackexchange.com/questions/771585/convexity-of-lasso If some variables in design matrix are correlated, then LASSO is convex or not?]
* Tibshirani. [http://www.jstor.org/stable/2346178 Regression shrinkage and selection via the lasso] (free). JRSS B 1996.
* [http://www.econ.uiuc.edu/~roger/research/conopt/coptr.pdf Convex Optimization in R] by Koenker & Mizera 2014.
* [https://web.stanford.edu/~hastie/Papers/pathwise.pdf Pathwise coordinate optimization] by Friedman et al 2007.
* [http://web.stanford.edu/~hastie/StatLearnSparsity/ Statistical learning with sparsity: the Lasso and generalizations] T. Hastie, R. Tibshirani, and M. Wainwright, 2015 (book)
* Element of Statistical Learning (book)
* https://youtu.be/A5I1G1MfUmA StatsLearning Lect8h 110913
* Fu's (1998) shooting algorithm for Lasso ([http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=11 mentioned] in the history of coordinate descent) and Zhang & Lu's (2007) modified shooting algorithm for adaptive Lasso.
* [https://www.cs.ubc.ca/~murphyk/MLbook/ Machine Learning: a Probabilistic Perspective] by Murphy 2012 (the source of the note above on choosing <math>\lambda</math> via cross validation).
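
A toy implementation of the cyclical coordinate descent/soft-thresholding recipe above, with features normalized so that each column satisfies <math>\sum_i h_j(x_i)^2 = 1</math> (the simulated data and <math>\lambda</math> are illustrative; for real problems use glmnet, which is far faster):
<syntaxhighlight lang='rsplus'>
soft <- function(rho, thr) sign(rho) * pmax(abs(rho) - thr, 0)

lasso_cd <- function(X, y, lambda, iters = 100) {
  p <- ncol(X); w <- rep(0, p)
  for (it in 1:iters) {
    for (j in 1:p) {
      r <- y - X[, -j, drop = FALSE] %*% w[-j]  # partial residual without feature j
      rho <- sum(X[, j] * r)
      w[j] <- soft(rho, lambda / 2)             # soft-thresholding step
    }
  }
  w
}

set.seed(1)
n <- 100; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
X <- sweep(X, 2, sqrt(colSums(X^2)), "/")       # normalize: colSums(X^2) = 1
y <- drop(2 * X[, 1] - 3 * X[, 2] + rnorm(n)); y <- y - mean(y)
round(lasso_cd(X, y, lambda = 2), 3)            # sparse estimates
</syntaxhighlight>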
 
=== Quadratic programming ===
* https://en.wikipedia.org/wiki/Quadratic_programming
* https://en.wikipedia.org/wiki/Lasso_(statistics)
* [https://cran.r-project.org/web/views/Optimization.html CRAN Task View: Optimization and Mathematical Programming]
* [https://cran.r-project.org/web/packages/quadprog/ quadprog] package and [https://www.rdocumentation.org/packages/quadprog/versions/1.5-5/topics/solve.QP solve.QP()] function
* [https://rwalk.xyz/solving-quadratic-progams-with-rs-quadprog-package/ Solving Quadratic Progams with R’s quadprog package]
* [https://rwalk.xyz/more-on-quadratic-programming-in-r/ More on Quadratic Programming in R]
* https://optimization.mccormick.northwestern.edu/index.php/Quadratic_programming
* [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12273 Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects] where the algorithm from [https://ieeexplore.ieee.org/abstract/document/7448814/ Lee] 2016 was used.
 
=== Highly correlated covariates ===
'''1. Elastic net'''
 
'''2. Group lasso'''
* [http://pages.stat.wisc.edu/~myuan/papers/glasso.final.pdf Yuan and Lin 2006] JRSSB
* https://cran.r-project.org/web/packages/gglasso/, http://royr2.github.io/2014/04/15/GroupLasso.html
* https://cran.r-project.org/web/packages/grpreg/
* https://cran.r-project.org/web/packages/grplasso/ by Lukas Meier ([http://people.ee.duke.edu/~lcarin/lukas-sara-peter.pdf paper]), used in the '''biospear''' package for survival data
* https://cran.r-project.org/web/packages/SGL/index.html, http://royr2.github.io/2014/05/20/SparseGroupLasso.html, http://web.stanford.edu/~hastie/Papers/SGLpaper.pdf
 
=== Other Lasso ===
* [https://statisticaloddsandends.wordpress.com/2019/01/14/pclasso-a-new-method-for-sparse-regression/ pcLasso]
* [https://www.biorxiv.org/content/10.1101/630079v1 A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems] Qian et al 2019 and the [https://github.com/junyangq/snpnet snpnet] package
 
== Comparison by plotting ==
If we are running simulations, we can use the [https://github.com/pbiecek/DALEX DALEX] package to visualize the fitting results from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.
 
== UMAP ==
* https://arxiv.org/abs/1802.03426
* https://www.biorxiv.org/content/early/2018/04/10/298430
* https://cran.r-project.org/web/packages/umap/index.html

= Errors-in-variables models =
* [https://en.wikipedia.org/wiki/Errors-in-variables_models Errors-in-variables models (measurement error models)]
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13112 Simulation-Selection-Extrapolation: Estimation in High-Dimensional Errors-in-Variables Models] Nghiem 2019

= Polya Urn Model =
[https://blog.ephorie.de/the-polya-urn-model-a-simple-simulation-of-the-rich-get-richer The Pólya Urn Model: A simple Simulation of “The Rich get Richer”]

= Dictionary =
* '''Prognosis''' is the probability that an event or diagnosis will result in a particular outcome.
** For example, in the paper [http://clincancerres.aacrjournals.org/content/18/21/6065.figures-only Developing and Validating Continuous Genomic Signatures in Randomized Clinical Trials for Predictive Medicine] by Matsui 2012, a prognostic score of .1 (0.9) represents a '''good (poor)''' prognosis.
** Prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis. See [https://en.wikipedia.org/wiki/Survival_rate Survival rate] in Wikipedia.

= Statistical guidance =
* [https://osf.io/preprints/metaarxiv/q6ajt Statistical guidance to authors at top-ranked scientific journals: A cross-disciplinary assessment]
* [https://www.youtube.com/watch?v=iu4VsEv1WIo How to get your article rejected by the BMJ: 12 common statistical issues] Richard Riley

= Books, learning material =
* [https://leanpub.com/biostatmethods Methods in Biostatistics with R] ($)
* [http://web.stanford.edu/class/bios221/book/ Modern Statistics for Modern Biology] (free)
* Principles of Applied Statistics, by David Cox & Christl Donnelly
* [https://www.amazon.com/Freedman-Robert-Pisani-Statistics-Hardcover/dp/B004QNRMDK/ Statistics] by David Freedman, Robert Pisani, Roger Purves
* [https://onlinelibrary.wiley.com/topic/browse/000113 Wiley Online Library -> Statistics] (Access by NIH Library)
* [https://web.stanford.edu/~hastie/CASI/ Computer Age Statistical Inference: Algorithms, Evidence and Data Science] by Efron and Hastie 2016
* [https://si.biostat.washington.edu/suminst/sisg2020/modules UW Biostatistics Summer Courses] (4 institutes)
* [https://www.springer.com/series/2848/books Statistics for Biology and Health] Springer
* [https://pyoflife.com/bayesian-essentials-with-r/ Bayesian Essentials with R]
* [https://www.maths.ed.ac.uk/~swood34/core-statistics.pdf Core Statistics] Simon Wood

= Social =
== JSM ==
* 2019
** [https://minecr.shinyapps.io/jsm2019-schedule/ JSM 2019] and the [http://www.citizen-statistician.org/2019/07/shiny-for-jsm-2019/ post]
** [https://rviews.rstudio.com/2019/07/19/an-r-users-guide-to-jsm-2019/ An R Users Guide to JSM 2019]

== Following ==
* [http://jtleek.com/ Jeff Leek], https://twitter.com/jtleek
* Revolutions, http://blog.revolutionanalytics.com/
* RStudio Blog, https://blog.rstudio.com/
* Sean Davis, https://twitter.com/seandavis12, https://github.com/seandavi
* [http://stephenturner.us/post/ Stephen Turner], https://twitter.com/genetics_blog
 
= Imbalanced Classification =
* [https://www.analyticsvidhya.com/blog/2016/03/practical-guide-deal-imbalanced-classification-problems/ Practical Guide to deal with Imbalanced Classification Problems in R]
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets]
* [https://github.com/dariyasydykova/open_projects/tree/master/ROC_animation Roc animation]
 
= Deep Learning =
* [https://bcourses.berkeley.edu/courses/1453965/wiki CS294-129 Designing, Visualizing and Understanding Deep Neural Networks] from Berkeley.
* https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm
* [https://www.r-bloggers.com/deep-learning-from-first-principles-in-python-r-and-octave-part-5/ Deep Learning from first principles in Python, R and Octave – Part 5]
 
== TensorFlow (tensorflow package) ==
* https://tensorflow.rstudio.com/
* [https://youtu.be/atiYXm7JZv0 Machine Learning with R and TensorFlow] (Video)
* [https://developers.google.com/machine-learning/crash-course/ Machine Learning Crash Course] with TensorFlow APIs
* [http://www.pnas.org/content/early/2018/03/09/1717139115 Predicting cancer outcomes from histology and genomics using convolutional networks] Pooya Mobadersany et al, PNAS 2018
 
== Biological applications ==
* [https://academic.oup.com/bioinformatics/article-abstract/33/22/3685/4092933 An introduction to deep learning on biological sequence data: examples and solutions]
 
== Machine learning resources ==
* [https://www.makeuseof.com/tag/machine-learning-courses/ These Machine Learning Courses Will Prepare a Career Path for You]
* [https://blog.datasciencedojo.com/machine-learning-algorithms/ 101 Machine Learning Algorithms for Data Science with Cheat Sheets]
 
= Randomization inference =
* Google: randomization inference in r
* [http://www.personal.psu.edu/ljk20/zeros.pdf Randomization Inference for Outcomes with Clumping at Zero], [https://amstat.tandfonline.com/doi/full/10.1080/00031305.2017.1385535#.W09zpdhKg3E The American Statistician] 2018
* [https://jasonkerwin.com/nonparibus/2017/09/25/randomization-inference-vs-bootstrapping-p-values/ Randomization inference vs. bootstrapping for p-values]
 
= Bootstrap =
* [https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 Bootstrap] from Wikipedia.
** This contains an overview of different methods for computing bootstrap confidence intervals.
** [https://www.rdocumentation.org/packages/boot/versions/1.3-20/topics/boot.ci boot.ci()] from the 'boot' package provides a short explanation for different methods for computing bootstrap confidence intervals.
* [https://github.com/jtleek/slipper Bootstrapping made easy and tidy with slipper]
* [https://cran.r-project.org/web/packages/bootstrap/ bootstrap] package. "An Introduction to the Bootstrap" by B. Efron and R. Tibshirani, 1993
* [https://cran.r-project.org/web/packages/boot/ boot] package. Functions and datasets for bootstrapping from the book [https://books.google.com/books?id=_uKcAgAAQBAJ Bootstrap Methods and Their Application] by A. C. Davison and D. V. Hinkley (1997, CUP). A short course material can be found [https://www.researchgate.net/publication/37434447_Bootstrap_Methods_and_Their_Application here]. The main functions are '''boot()''' and '''boot.ci()'''.
** https://www.rdocumentation.org/packages/boot/versions/1.3-20
** [https://www.statmethods.net/advstats/bootstrapping.html R in Action] Nonparametric bootstrapping <syntaxhighlight lang='rsplus'>
# Compute the bootstrapped 95% confidence interval for R-squared in the linear regression
library(boot)
rsq <- function(data, indices, formula) {
  d <- data[indices,] # allows boot to select a bootstrap sample
  fit <- lm(formula, data=d)
  return(summary(fit)$r.square)
} # extra arguments such as 'formula' are passed through boot() to rsq()
 
# bootstrapping with 1000 replications
set.seed(1234)
bootobject <- boot(data=mtcars, statistic=rsq, R=1000,
                  formula=mpg~wt+disp)
plot(bootobject) # or plot(bootobject, index = 1) if we have multiple statistics
ci <- boot.ci(bootobject, conf = .95, type=c("perc", "bca") )
    # default type is "all" which contains c("norm","basic", "stud", "perc", "bca").
    # 'bca' (Bias Corrected and Accelerated) by Efron 1987 uses
    # percentiles but adjusted to account for bias and skewness.
# Level    Percentile            BCa         
# 95%  ( 0.6838,  0.8833 )  ( 0.6344,  0.8549 )
# Calculations and Intervals on Original Scale
# Some BCa intervals may be unstable
ci$bca[4:5] 
# [1] 0.6343589 0.8549305
# the interval midpoints differ (BCa shifts the percentile interval to correct bias/skewness)
mean(c(0.6838,  0.8833 ))
# [1] 0.78355
mean(c(0.6344,  0.8549 ))
# [1] 0.74465
summary(lm(mpg~wt+disp, data = mtcars))$r.square
# [1] 0.7809306
</syntaxhighlight>
** [https://www.r-project.org/doc/Rnews/Rnews_2002-3.pdf#page=2 Resampling Methods in R: The boot Package] by Canty
** [https://pdfs.semanticscholar.org/0203/d0902185dd819bf38c8dacd077df0122b89d.pdf An introduction to bootstrap with applications with R] by Davison and Kuonen.
** http://people.tamu.edu/~alawing/materials/ESSM689/Btutorial.pdf
** http://statweb.stanford.edu/~tibs/sta305files/FoxOnBootingRegInR.pdf
** http://www.stat.wisc.edu/~larget/stat302/chap3.pdf
** https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf. Variance, se, bias, confidence interval (basic, percentile), hypothesis testing, parametric & non-parametric bootstrap, bootstrapping regression models.
* http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/references/R-bootstrap.pdf  No package is used
* http://web.as.uky.edu/statistics/users/pbreheny/621/F10/notes/9-21.pdf Bootstrap confidence interval
* http://www-stat.wharton.upenn.edu/~stine/research/spida_2005.pdf
* Optimism corrected bootstrapping ([https://www4.stat.ncsu.edu/~lu/ST745/sim_modelchecking.pdf#page=12 Harrell et al 1996])
** [http://thestatsgeek.com/2014/10/04/adjusting-for-optimismoverfitting-in-measures-of-predictive-ability-using-bootstrapping/ Adjusting for optimism/overfitting in measures of predictive ability using bootstrapping]
** [https://intobioinformatics.wordpress.com/2018/12/25/optimism-corrected-bootstrapping-a-problematic-method/ Part 1]: Optimism corrected bootstrapping: a problematic method
** [https://intobioinformatics.wordpress.com/2018/12/26/part-2-optimism-corrected-bootstrapping-is-definitely-bias-further-evidence/ Part 2]: Optimism corrected bootstrapping is definitely bias, further evidence
** [https://intobioinformatics.wordpress.com/2018/12/27/part-3-two-more-implementations-of-optimism-corrected-bootstrapping-show-shocking-bias/ Part 3]: Two more implementations of optimism corrected bootstrapping show shocking bias
** [https://intobioinformatics.wordpress.com/2018/12/28/part-4-more-bias-and-why-does-bias-occur-in-optimism-corrected-bootstrapping/ Part 4]: Why does bias occur in optimism corrected bootstrapping?
** [https://intobioinformatics.wordpress.com/2018/12/29/part-5-corrections-to-optimism-corrected-bootstrapping-series-but-it-still-is-problematic/ Part 5]: Code corrections to optimism corrected bootstrapping series
 
== Nonparametric bootstrap ==
This is the most common bootstrap method.
 
[https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy054/5106666 The upstrap] Crainiceanu & Crainiceanu, Biostatistics 2018
 
== Parametric bootstrap ==
* Parametric bootstraps resample from a known distribution function whose parameters are estimated from your sample (a toy sketch follows this list)
* http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/notes/notebootstrap.pdf#page=3 No package is used
* [http://influentialpoints.com/Training/nonparametric-or-parametric_bootstrap.htm A parametric or non-parametric bootstrap?]
* https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf#page=11
* [https://bioconductor.org/packages/release/bioc/vignettes/simulatorZ/inst/doc/simulatorZ-vignette.pdf simulatorZ] Bioc package
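
A toy parametric bootstrap for the standard error of a sample mean, assuming a normal model (all numbers are illustrative):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- rnorm(30, mean = 5, sd = 2)     # observed sample
mu.hat <- mean(x); sd.hat <- sd(x)   # fit the assumed normal model
B <- 2000
boot.means <- replicate(B, mean(rnorm(length(x), mu.hat, sd.hat)))
sd(boot.means)                       # parametric bootstrap SE of the mean
sd.hat / sqrt(length(x))             # analytic SE, for comparison
</syntaxhighlight>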
 
= Cross Validation =
R packages:
* [https://cran.r-project.org/web/packages/rsample/index.html rsample] (released July 2017)
* [https://cran.r-project.org/web/packages/CrossValidate/index.html CrossValidate] (released July 2017)
 
== Difference between CV & bootstrapping ==
[https://stats.stackexchange.com/a/18355 Differences between cross validation and bootstrapping to estimate the prediction error]
* CV tends to be less biased but K-fold CV has fairly large variance.
* Bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic).
* The .632 and .632+ rules have been adapted to deal with the bootstrap bias
* Repeated CV does K-fold several times and averages the results similar to regular K-fold
 
== .632 and .632+ bootstrap ==
* 0.632 bootstrap: Efron's paper [https://www.jstor.org/stable/pdf/2288636.pdf  Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation] in 1983.
* 0.632+ bootstrap: The CV estimate of prediction error is nearly unbiased but can be highly variable. See [https://www.tandfonline.com/doi/pdf/10.1080/01621459.1997.10474007 Improvements on Cross-Validation: The .632+ Bootstrap Method] by Efron and Tibshirani, JASA 1997.
* Chap 17.7 from "An Introduction to the Bootstrap" by Efron and Tibshirani. Chapman & Hall.
* Chap 7.4 (resubstitution error <math>\overline{err} </math>) and chap 7.11 (<math>Err_{boot(1)}</math>, the leave-one-out bootstrap estimate of prediction error) from "The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman. Springer.
* [http://stats.stackexchange.com/questions/96739/what-is-the-632-rule-in-bootstrapping What is the .632 bootstrap]?
: <math>
Err_{.632} = 0.368 \overline{err} + 0.632 Err_{boot(1)}
</math>
* [https://link.springer.com/referenceworkentry/10.1007/978-1-4419-9863-7_1328 Bootstrap, 0.632 Bootstrap, 0.632+ Bootstrap] from Encyclopedia of Systems Biology by Springer.
* bootpred() from the '''bootstrap''' package.
* The .632 bootstrap estimate can be extended to statistics other than prediction error. See the paper [https://www.tandfonline.com/doi/full/10.1080/10543406.2016.1226329 Issues in developing multivariable molecular signatures for guiding clinical care decisions] by Sachs. [https://github.com/sachsmc/signature-tutorial Source code]. Let <math>\phi</math> be a performance metric, <math>S_b</math> a bootstrap sample of size ''n'' drawn with replacement from <math>S</math>, and <math>S_{-b}</math> the subset of <math>S</math> that is disjoint from <math>S_b</math> (the test set).
: <math>
\hat{E}^*[\phi_{\mathcal{F}}(S)] = .368 \hat{E}[\phi_{f}(S)] + 0.632 \hat{E}[\phi_{f_b}(S_{-b})]
</math>
: where <math>\hat{E}[\phi_{f}(S)]</math> is the naive estimate of <math>\phi_f</math> using the entire dataset.
* For survival data
** [https://cran.r-project.org/web/packages/ROC632/ ROC632] package, [https://repositorium.sdum.uminho.pt/bitstream/1822/52744/1/paper4_final_version_CatarinaSantos_ACB.pdf Overview], and the paper [https://www.degruyter.com/view/j/sagmb.2012.11.issue-6/1544-6115.1815/1544-6115.1815.xml?format=INT Time Dependent ROC Curves for the Estimation of True Prognostic Capacity of Microarray Data] by Foucher 2012.
** [https://onlinelibrary.wiley.com/doi/full/10.1111/j.1541-0420.2007.00832.x Efron-Type Measures of Prediction Error for Survival Analysis] Gerds 2007.
** [https://academic.oup.com/bioinformatics/article/23/14/1768/188061 Assessment of survival prediction models based on microarray data] Schumacher 2007. Brier score.
** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194196/ Evaluating Random Forests for Survival Analysis using Prediction Error Curves] Mogensen, 2012. [https://cran.r-project.org/web/packages/pec/ pec] package
** [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Chen 2012. Concordance, ROC... But bootstrap was not used.
** [https://www.sciencedirect.com/science/article/pii/S1672022916300390#b0150 Comparison of Cox Model Methods in A Low-dimensional Setting with Few Events] 2016. Concordance, calibration slopes and RMSE are considered.
 
== Create partitions ==
[http://r-exercises.com/2016/11/13/sampling-exercise-1/ set.seed(), sample.split(), createDataPartition(), and createFolds()] functions.
 
[https://drsimonj.svbtle.com/k-fold-cross-validation-with-modelr-and-broom k-fold cross validation with modelr and broom]
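
A minimal 5-fold CV loop using caret::createFolds() (the mtcars data and model formula are just illustrations):
<syntaxhighlight lang='rsplus'>
library(caret)
set.seed(1)
folds <- createFolds(mtcars$mpg, k = 5)   # list of held-out (test) index vectors
cv.mse <- sapply(folds, function(idx) {
  fit <- lm(mpg ~ wt + disp, data = mtcars[-idx, ])
  mean((mtcars$mpg[idx] - predict(fit, mtcars[idx, ]))^2)
})
mean(cv.mse)                              # CV estimate of test MSE
</syntaxhighlight>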
 
== Nested resampling ==
* [http://appliedpredictivemodeling.com/blog/2017/9/2/njdc83d01pzysvvlgik02t5qnaljnd Nested Resampling with rsample]
* https://stats.stackexchange.com/questions/292179/whats-the-meaning-of-nested-resampling
 
Nested resampling is needed when we want to '''tune a model''' using a grid search. The default settings of a model are likely not optimal for every data set. So an inner CV has to be performed with the aim of finding the best parameter set of a learner for each fold.
 
See a diagram at https://i.stack.imgur.com/vh1sZ.png
 
In BRB-ArrayTools -> class prediction with multiple methods, the ''alpha'' (significant level of threshold used for gene selection, 2nd option in individual genes) can be viewed as a tuning parameter for the development of a classifier.
 
== Pre-validation ==
* [https://www.degruyter.com/view/j/sagmb.2002.1.1/sagmb.2002.1.1.1000/sagmb.2002.1.1.1000.xml Pre-validation and inference in microarrays]  Tibshirani and Efron, Statistical Applications in Genetics and Molecular Biology, 2002.
* http://www.stat.columbia.edu/~tzheng/teaching/genetics/papers/tib_efron.pdf#page=5. In each CV, we compute the estimate of the response. This estimate of the response will serve as a new predictor ('''pre-validated predictor''') in the final fitting model.
* P1101 of Sachs 2016. With pre-validation, instead of computing the statistic <math>\phi</math> for each of the held-out subsets (<math>S_{-b}</math> for the bootstrap or <math>S_{k}</math> for cross-validation), the fitted signature <math>\hat{f}(X_i)</math> is estimated for <math>X_i \in S_{-b}</math> where <math>\hat{f}</math> is estimated using <math>S_{b}</math>. This process is repeated to obtain a set of '''pre-validated signature''' estimates <math>\hat{f}</math>. Then an association measure <math>\phi</math> can be calculated using the pre-validated signature estimates and the true outcomes <math>Y_i, i = 1, \ldots, n</math>.
* In CV, left-out samples = hold-out cases = test set
 
= Clustering =
See [[Heatmap#Clustering|Clustering]].
 
= Mixed Effect Model =
 
* Paper by [http://www.stat.cmu.edu/~brian/463/week07/laird-ware-biometrics-1982.pdf Laird and Ware 1982]
* [http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf John Fox's Linear Mixed Models] Appendix to An R and S-PLUS Companion to Applied Regression. Very clear. It provides 2 typical examples (hierarchical data and longitudinal data) of using the mixed effects model. It also uses Trellis plots to examine the data.
* Chapter 10 Random and Mixed Effects from Modern Applied Statistics with S by Venables and Ripley.
* (Book) lme4: Mixed-effects modeling with R by Douglas Bates.
* (Book) Mixed-effects modeling in S and S-Plus by José Pinheiro and Douglas Bates.
* [http://educate-r.org//2016/06/29/user2016.html Simulation and power analysis of generalized linear mixed models]
* [https://poissonisfish.wordpress.com/2017/12/11/linear-mixed-effect-models-in-r/ Linear mixed-effect models in R] by poissonisfish
* [https://www.statforbiology.com/2019/stat_general_correlationindependence2/ Dealing with correlation in designed field experiments]: part II
 
= Model selection criteria =
* [http://r-video-tutorial.blogspot.com/2017/07/assessing-accuracy-of-our-models-r.html Assessing the Accuracy of our models (R Squared, Adjusted R Squared, RMSE, MAE, AIC)]
* [https://forecasting.svetunkov.ru/en/2018/03/22/comparing-additive-and-multiplicative-regressions-using-aic-in-r/ Comparing additive and multiplicative regressions using AIC in R]
* [https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1459316?src=recsys Model Selection and Regression t-Statistics] Derryberry 2019
 
== Akaike information criterion/AIC ==
* https://en.wikipedia.org/wiki/Akaike_information_criterion.
:<math>\mathrm{AIC} \, = \, 2k - 2\ln(\hat L)</math>, where ''k'' is the number of estimated parameters in the model.
* Smaller is better
* Akaike proposed to approximate the expectation of the cross-validated log likelihood  <math>E_{test}E_{train} [log L(x_{test}| \hat{\beta}_{train})]</math> by <math>log L(x_{train} | \hat{\beta}_{train})-k </math>.
* Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
* AIC can be used to compare two models even if they are not hierarchically nested.
* [https://www.rdocumentation.org/packages/stats/versions/3.6.0/topics/AIC AIC()] from the stats package.
 
== BIC ==
:<math>\mathrm{BIC} \, = \, \ln(n) k - 2\ln(\hat L)</math>, where ''k'' is the number of estimated parameters in the model. Note the penalty is <math>\ln(n) k</math> rather than <math>2k</math>, so BIC penalizes complexity more heavily than AIC once <math>n > e^2 \approx 7.4</math>.
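
Both formulas can be checked against R's AIC() and BIC(); for a linear model, ''k'' counts the error variance as well as the coefficients:
<syntaxhighlight lang='rsplus'>
fit <- lm(mpg ~ wt + disp, data = mtcars)  # k = 3 coefficients + 1 variance = 4
AIC(fit)
2 * 4 - 2 * as.numeric(logLik(fit))                   # same value
BIC(fit)
log(nrow(mtcars)) * 4 - 2 * as.numeric(logLik(fit))   # same value
</syntaxhighlight>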
 
== Overfitting ==
[https://stats.stackexchange.com/questions/81576/how-to-judge-if-a-supervised-machine-learning-model-is-overfitting-or-not How to judge if a supervised machine learning model is overfitting or not?]
 
== AIC vs AUC ==
[https://stats.stackexchange.com/a/51278 What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?]
 
Roughly speaking:
* AIC is telling you how good your model fits for a specific mis-classification cost.
* AUC is telling you how good your model would work, on average, across all mis-classification costs.
 
'''Frank Harrell''': AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized 𝑅<sup>2</sup> and AIC.
 
= Entropy =
== Definition ==
The information content (surprisal) of an outcome with probability ''p'' is -log2(p); entropy is the expected information, <math>-\sum_i p_i \log_2(p_i)</math>, where the <math>p_i</math> are the outcome probabilities. '''Higher entropy means the outcome is less predictable.'''
 
Some examples:
* Fair 2-side die: Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
* Fair 6-side die: Entropy = -6*1/6*log2(1/6) = 2.58
* Weighted 6-side die: Consider pi=.1 for i=1,..,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (less unpredictable than a fair 6-side die).
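
These numbers are easy to verify:
<syntaxhighlight lang='rsplus'>
entropy <- function(p) -sum(p * log2(p))  # p = vector of outcome probabilities
entropy(c(.5, .5))          # 1
entropy(rep(1/6, 6))        # 2.584963
entropy(c(rep(.1, 5), .5))  # 2.160964
</syntaxhighlight>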
 
== Use ==
When entropy is applied to variable selection, we want to select the variable that gives the largest entropy difference between the entropy computed without any class variable (using the response only) and the entropy computed with that class variable (summing the entropy within each level of the variable), because that variable is the most discriminative and gives the largest '''information gain'''. For example,
* entropy (without any class)=.94,
* entropy(var 1) = .69,
* entropy(var 2)=.91,
* entropy(var 3)=.725.
We will choose variable 1 since it gives the largest gain (.94 - .69 = .25) compared to the other variables (.94 - .91 = .03, .94 - .725 = .215).
 
Why is picking the attribute with the largest information gain beneficial? Because it ''reduces'' entropy the most: a decrease in entropy (unpredictability) means an increase in predictability.
 
Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 creates an entropy .9 and another cut point c2 creates an entropy .1. We should choose c2.
 
== Related ==
In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See [http://en.wikipedia.org/wiki/Decision_tree_learning wikipedia page] about decision tree learning.
 
= Ensembles =
* Combining classifiers. Pro: better classification performance. Con: time consuming.
* Comic http://flowingdata.com/2017/09/05/xkcd-ensemble-model/
* [http://www.win-vector.com/blog/2019/07/common-ensemble-models-can-be-biased/ Common Ensemble Models can be Biased]
 
== Bagging ==
Draw N bootstrap samples and summarize the results (averaging for a regression problem, majority vote for a classification problem). Bagging decreases variance without changing bias, so it does not help much with underfit or high-bias models.
 
=== Random forest ===
'''Variable importance''': if you scramble the values of a variable and the accuracy of your tree does not change much, then the variable is not very important.

Why is it useful to compute variable importance? It makes the model's predictions easier to interpret (it does not improve the prediction performance).

Random forest has the advantages of being easy to run in parallel and being suitable for small n, large p problems.
 
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2264-5 Random forest versus logistic regression: a large-scale benchmark experiment] by Raphael Couronné, BMC Bioinformatics 2018
 
[https://github.com/suiji/arborist Arborist]: Parallelized, Extensible Random Forests
 
== Boosting ==
Instead of selecting data points randomly with the bootstrap, boosting favors the misclassified points.
 
Algorithm:
* Initialize the weights
* Repeat
** resample with respect to weights
** retrain the model
** recompute weights
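
A toy AdaBoost-style version of the loop above with decision stumps (the simulated data, number of rounds and choice of learner are all illustrative):
<syntaxhighlight lang='rsplus'>
library(rpart)
set.seed(1)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(n, sd = .3) > 0, 1, -1))

w <- rep(1/n, n)                               # initialize the weights
M <- 10; alpha <- numeric(M)
for (m in 1:M) {
  fit <- rpart(y ~ x1 + x2, data = d, weights = w,
               control = rpart.control(maxdepth = 1))   # retrain a stump
  miss <- predict(fit, d, type = "class") != d$y
  err <- sum(w * miss) / sum(w)                # weighted error rate
  alpha[m] <- 0.5 * log((1 - err) / err)       # this round's vote
  w <- w * exp(alpha[m] * ifelse(miss, 1, -1)) # upweight the misclassified points
  w <- w / sum(w)                              # renormalize
}
round(alpha, 2)
</syntaxhighlight>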
 
Since boosting is computed iteratively while bagging can be run in parallel, bagging has an advantage over boosting when the data set is very large.
 
== Time series ==
[https://petolau.github.io/Ensemble-of-trees-for-forecasting-time-series/ Ensemble learning for time series forecasting in R]
 
= p-values =
==  p-values ==
* A p-value is Prob(data at least as extreme as observed | H0), not Prob(H0 | data)
* https://en.wikipedia.org/wiki/P-value
* [https://amstat.tandfonline.com/toc/utas20/73/sup1 Statistical Inference in the 21st Century: A World Beyond p < 0.05] The American Statistician, 2019
* [https://matloff.wordpress.com/2016/03/07/after-150-years-the-asa-says-no-to-p-values/ THE ASA SAYS NO TO P-VALUES] The problem is that with large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. We have the opposite problem with small samples: The power of the test is low, and we will announce that there is “no significant effect” when in fact we may have too little data to know whether the effect is important.
* [http://www.r-statistics.com/2016/03/its-not-the-p-values-fault-reflections-on-the-recent-asa-statement/ It’s not the p-values’ fault]
* [https://stablemarkets.wordpress.com/2016/05/21/exploring-p-values-with-simulations-in-r/ Exploring P-values with Simulations in R] from Stable Markets.
* p-value and [https://en.wikipedia.org/wiki/Effect_size effect size]. http://journals.sagepub.com/doi/full/10.1177/1745691614553988
 
== Distribution of p values in medical abstracts ==
* http://www.ncbi.nlm.nih.gov/pubmed/26608725
* [https://github.com/jtleek/tidypvals An R package with several million published p-values in tidy data sets] by Jeff Leek.
 
== nominal p-value and Empirical p-values ==
* Nominal p-values are based on asymptotic null distributions
* Empirical p-values are computed from simulations/permutations
 
== (nominal) alpha level ==
Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.
* http://www.translationdirectory.com/glossaries/glossary033.htm
* http://courses.washington.edu/p209s07/lecturenotes/Week%205_Monday%20overheads.pdf
 
== Normality assumption ==
[https://www.biorxiv.org/content/early/2018/12/20/498931 Violating the normality assumption may be the lesser of two evils]
 
= T-statistic =
Let <math style="vertical-align:-.3em">\scriptstyle\hat\beta</math> be an [[estimator]] of parameter ''β'' in some [[statistical model]]. Then a '''''t''-statistic''' for this parameter is any quantity of the form
: <math>
    t_{\hat{\beta}} = \frac{\hat\beta - \beta_0}{\mathrm{s.e.}(\hat\beta)},
  </math>
where ''β''<sub>0</sub> is a non-random, known constant, and <math style="vertical-align:-.3em">\scriptstyle s.e.(\hat\beta)</math> is the [[standard error (statistics)|standard error]] of the estimator <math style="vertical-align:-.3em">\scriptstyle\hat\beta</math>.
 
== Two sample test assuming equal variance ==
* [http://en.wikipedia.org/wiki/Pooled_variance Pooled variance]
* [http://en.wikipedia.org/wiki/Student%27s_t-test Student's t-test]
 
The ''t'' statistic (df = <math> n_1 + n_2 - 2</math>) to test whether the means are different can be calculated as follows:
:<math>t = \frac{\bar {X}_1 - \bar{X}_2}{s_{X_1X_2} \cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}</math>
 
where
 
:<math> s_{X_1X_2} = \sqrt{\frac{(n_1-1)s_{X_1}^2+(n_2-1)s_{X_2}^2}{n_1+n_2-2}}.</math>
 
<math>s_{X_1X_2}</math> is an estimator of the common/pooled [[standard deviation]] of the two samples. The square-root of a pooled variance estimator is known as a pooled standard deviation.
 
* The pooled sample variance is an unbiased estimator of the common variance if X<sub>i</sub> and Y<sub>i</sub> have the same variance.
* (From [https://support.minitab.com/en-us/minitab/18/help-and-how-to/statistics/basic-statistics/supporting-topics/data-concepts/what-is-the-pooled-standard-deviation/ minitab]) The pooled standard deviation is the average spread of all data points about their group mean (''not the overall mean''). It is a weighted average of each group's standard deviation. The weighting gives larger groups a proportionally greater effect on the overall estimate.
* [https://heuristicandrew.blogspot.com/2018/01/type-i-error-rates-in-two-sample-t-test.html Type I error rates in two-sample t-test by simulation]
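
A quick numerical check of the pooled-variance formula against t.test(var.equal = TRUE) (the simulated samples are illustrative):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x1 <- rnorm(10); x2 <- rnorm(12, mean = 1)
sp <- sqrt(((10 - 1) * var(x1) + (12 - 1) * var(x2)) / (10 + 12 - 2))  # pooled SD
(mean(x1) - mean(x2)) / (sp * sqrt(1/10 + 1/12))   # t statistic by hand
t.test(x1, x2, var.equal = TRUE)$statistic         # identical
</syntaxhighlight>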
 
== Two sample test assuming unequal variance ==
The ''t'' statistic (Behrens-Welch test statistic) to test whether the population means are different is calculated as:
 
:<math>t = {\overline{X}_1 - \overline{X}_2 \over s_{\overline{X}_1 - \overline{X}_2}}</math>
 
where
 
:<math>s_{\overline{X}_1 - \overline{X}_2} = \sqrt{{s_1^2 \over n_1} + {s_2^2  \over n_2}}.
</math>
 
Here ''s<sub>i</sub>''<sup>2</sup> denotes the [[unbiased estimator]] of the [[variance]] of sample ''i''.
 
The degrees of freedom is evaluated using the [http://en.wikipedia.org/wiki/Welch%E2%80%93Satterthwaite_equation Satterthwaite's approximation]
 
:<math>df = { ({s_1^2 \over n_1} + {s_2^2 \over n_2})^2  \over {({s_1^2 \over n_1})^2 \over n_1-1} + {({s_2^2 \over n_2})^2 \over n_2-1} }. </math>
 
== Paired test ==
[https://www.rdatagen.net/post/thinking-about-the-run-of-the-mill-pre-post-analysis/ Have you ever asked yourself, "how should I approach the classic pre-post analysis?"]
 
== [http://en.wikipedia.org/wiki/Standard_score Z-value/Z-score] ==
If the population parameters are known, then rather than computing the t-statistic, one can compute the z-score.
 
== Nonparametric test: Wilcoxon rank sum test ==
Sensitive to differences in location
 
== Nonparametric test: Kolmogorov-Smirnov test ==
Sensitive to difference in shape and location of the distribution functions of two groups
 
== Limma: Empirical Bayes method ==
* Some Bioconductor packages: limma, RnBeads, IMA, minfi packages.
* The '''moderated T-statistics''' used in Limma is defined on Limma's [https://bioconductor.org/packages/release/bioc/vignettes/limma/inst/doc/usersguide.pdf#page=63 user guide].
* Diagram of usage [https://www.rdocumentation.org/packages/limma/versions/3.28.14/topics/makeContrasts ?makeContrasts], [https://www.rdocumentation.org/packages/limma/versions/3.28.14/topics/contrasts.fit ?contrasts.fit], [https://www.rdocumentation.org/packages/limma/versions/3.28.14/topics/ebayes ?eBayes] <syntaxhighlight>
          lmFit        contrasts.fit          eBayes      topTable
        x ------> fit ------------------> fit2  -----> fit2  --------->
                  ^                      ^
                  |                      |
    model.matrix  |    makeContrasts      |
class ---------> design ----------> contrasts
</syntaxhighlight>
* Examples of contrasts (search '''contrasts.fit''' and/or '''model.matrix''' from the user guide) <syntaxhighlight lang='rsplus'>
# Ex 1 (Single channel design):
design <- model.matrix(~ 0+factor(c(1,1,1,2,2,3,3,3))) # number of samples x number of groups
colnames(design) <- c("group1", "group2", "group3")
fit <- lmFit(eset, design)
contrast.matrix <- makeContrasts(group2-group1, group3-group2, group3-group1,
                                levels=design)        # number of groups x number of contrasts
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)
topTable(fit2, coef=1, adjust="BH")
topTable(fit2, coef=1, sort = "none", n = Inf, adjust="BH")$adj.P.Val
 
# Ex 2 (Common reference design):
targets <- readTargets("runxtargets.txt")
design <- modelMatrix(targets, ref="EGFP")
contrast.matrix <- makeContrasts(AML1,CBFb,AML1.CBFb,AML1.CBFb-AML1,AML1.CBFb-CBFb,
                                levels=design)
fit <- lmFit(MA, design)
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)
 
# Ex 3 (Direct two-color design):
design <- modelMatrix(targets, ref="CD4")
contrast.matrix <- cbind("CD8-CD4"=c(1,0),"DN-CD4"=c(0,1),"CD8-DN"=c(1,-1))
rownames(contrast.matrix) <- colnames(design)
fit <- lmFit(eset, design)
fit2 <- contrasts.fit(fit, contrast.matrix)
 
# Ex 4 (Single channel + Two groups):
fit <- lmFit(eset, design)
cont.matrix <- makeContrasts(MUvsWT=MU-WT, levels=design)
fit2 <- contrasts.fit(fit, cont.matrix)
fit2 <- eBayes(fit2)
 
# Ex 5 (Single channel + Several groups):
f <- factor(targets$Target, levels=c("RNA1","RNA2","RNA3"))
design <- model.matrix(~0+f)
colnames(design) <- c("RNA1","RNA2","RNA3")
fit <- lmFit(eset, design)
contrast.matrix <- makeContrasts(RNA2-RNA1, RNA3-RNA2, RNA3-RNA1,
                                levels=design)
fit2 <- contrasts.fit(fit, contrast.matrix)
fit2 <- eBayes(fit2)
 
# Ex 6 (Single channel + Interaction models 2x2 Factorial Designs) :
cont.matrix <- makeContrasts(
  SvsUinWT=WT.S-WT.U,
  SvsUinMu=Mu.S-Mu.U,
  Diff=(Mu.S-Mu.U)-(WT.S-WT.U),
  levels=design)
fit2 <- contrasts.fit(fit, cont.matrix)
fit2 <- eBayes(fit2)
</syntaxhighlight>
* Example from user guide 17.3 (Mammary progenitor cell populations) <syntaxhighlight lang='rsplus'>
setwd("~/Downloads/IlluminaCaseStudy")
url <- c("http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/probe%20profile.txt.gz",
  "http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/control%20probe%20profile.txt.gz",
  "http://bioinf.wehi.edu.au/marray/IlluminaCaseStudy/Targets.txt")
for(i in url)  system(paste("wget ", i))
system("gunzip probe%20profile.txt.gz")
system("gunzip control%20probe%20profile.txt.gz")
 
if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
BiocManager::install(c("limma", "statmod"))  # biocLite() is deprecated
library(limma)
targets <- readTargets()
targets
 
x <- read.ilmn(files="probe profile.txt",ctrlfiles="control probe profile.txt",
  other.columns="Detection")
options(digits=3)
head(x$E)
boxplot(log2(x$E),range=0,ylab="log2 intensity")
y <- neqc(x)
dim(y)
expressed <- rowSums(y$other$Detection < 0.05) >= 3
y <- y[expressed,]
dim(y) # 24691 12
plotMDS(y,labels=targets$CellType)
 
ct <- factor(targets$CellType)
design <- model.matrix(~0+ct)
colnames(design) <- levels(ct)
dupcor <- duplicateCorrelation(y,design,block=targets$Donor) # need statmod
dupcor$consensus.correlation
 
fit <- lmFit(y, design, block=targets$Donor, correlation=dupcor$consensus.correlation)
contrasts <- makeContrasts(ML-MS, LP-MS, ML-LP, levels=design)
fit2 <- contrasts.fit(fit, contrasts)
fit2 <- eBayes(fit2, trend=TRUE)
summary(decideTests(fit2, method="global"))
topTable(fit2, coef=1) # Top ten differentially expressed probes between ML and MS
#                SYMBOL TargetID logFC AveExpr    t  P.Value adj.P.Val    B
# ILMN_1766707    IL17B    <NA> -4.19    5.94 -29.0 2.51e-12  5.19e-08 18.1
# ILMN_1706051      PLD5    <NA> -4.00    5.67 -27.8 4.20e-12  5.19e-08 17.7
# ...
tT <- topTable(fit2, coef=1, number = Inf)
dim(tT)
# [1] 24691    8
</syntaxhighlight>
* Three groups comparison (What is the difference of A vs Other AND A vs (B+C)/2?). [https://grokbase.com/t/r/bioconductor/092bnp4147/bioc-limma-contrasts-comparing-one-factor-to-multiple-others Contrasts comparing one factor to multiple others] <syntaxhighlight lang='rsplus'>
library(limma)
set.seed(1234)
n <- 100
testexpr <- matrix(rnorm(n * 10, 5, 1), nc= 10)
testexpr[, 6:7] <- testexpr[, 6:7] + 7  # mean is 12
 
design1 <- model.matrix(~ 0 + as.factor(c(rep(1,5),2,2,3,3,3)))
design2 <- matrix(c(rep(1,5),rep(0,5),rep(0,5),rep(1,5)),ncol=2)
colnames(design1) <- LETTERS[1:3]
colnames(design2) <- c("A", "Other")
 
fit1 <- lmFit(testexpr,design1)
contrasts.matrix1 <- makeContrasts("AvsOther"=A-(B+C)/2, levels = design1)
fit1 <- eBayes(contrasts.fit(fit1,contrasts=contrasts.matrix1))
 
fit2 <- lmFit(testexpr,design2)
contrasts.matrix2 <- makeContrasts("AvsOther"=A-Other, levels = design2)
fit2 <- eBayes(contrasts.fit(fit2,contrasts=contrasts.matrix2))
 
t1 <- topTable(fit1,coef=1, number = Inf)
t2 <- topTable(fit2,coef=1, number = Inf)
 
rbind(head(t1, 3), tail(t1, 3))
#        logFC  AveExpr        t      P.Value    adj.P.Val        B
# 92 -5.293932 5.810926 -8.200138 1.147084e-15 1.147084e-13 26.335702
# 81 -5.045682 5.949507 -7.815607 2.009706e-14 1.004853e-12 23.334600
# 37 -4.720906 6.182821 -7.312539 7.186627e-13 2.395542e-11 19.625964
# 27 -2.127055 6.854324 -3.294744 1.034742e-03 1.055859e-03 -1.141991
# 86 -1.938148 7.153142 -3.002133 2.776390e-03 2.804434e-03 -2.039869
# 75 -1.876490 6.516004 -2.906626 3.768951e-03 3.768951e-03 -2.314869
rbind(head(t2, 3), tail(t2, 3))
#        logFC  AveExpr          t    P.Value adj.P.Val        B
# 92 -4.518551 5.810926 -2.5022436 0.01253944 0.2367295 -4.587080
# 81 -4.500503 5.949507 -2.4922492 0.01289503 0.2367295 -4.587156
# 37 -4.111158 6.182821 -2.2766414 0.02307100 0.2367295 -4.588728
# 27 -1.496546 6.854324 -0.8287440 0.40749644 0.4158127 -4.595601
# 86 -1.341607 7.153142 -0.7429435 0.45773401 0.4623576 -4.595807
# 75 -1.171366 6.516004 -0.6486690 0.51673851 0.5167385 -4.596008
 
var(as.numeric(testexpr[, 6:10]))
# [1] 12.38074
var(as.numeric(testexpr[, 6:7]))
# [1] 0.8501378
var(as.numeric(testexpr[, 8:10]))
# [1] 0.9640699
</syntaxhighlight> As we can see, the p-values returned from the first contrast are very small (a large mean difference with a small variance) while the p-values returned from the 2nd contrast are large (still a large mean difference, but a very large variance). The variance of the "Other" group follows from its mixture distribution: the pdf is .4 N(12, 1) + .6 N(5, 1), so E(Y^2) = .4 (VarX1 + (EX1)^2) + .6 (VarX2 + (EX2)^2) = 73.6 and EY = .4 * 12 + .6 * 5 = 7.8, giving VarY = E(Y^2) - (EY)^2 = 73.6 - 7.8^2 = 12.76.
* [https://support.bioconductor.org/p/67984/ Correct assumptions of using limma moderated t-test] and the paper [http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0012336 Should We Abandon the t-Test in the Analysis of Gene Expression Microarray Data: A Comparison of Variance Modeling Strategies].
** Evaluation: statistical power (figure 3, 4, 5), false-positive rate (table 2), execution time and ease of use (table 3)
** Limma presents several advantages
** RVM inflates the expected number of false positives when the sample size is small. On the other hand, RVM is very close to Limma, judging from either their formulas (p3 of the supporting info) or the hierarchical clustering (figure 2) of two examples.
** [https://www.slideshare.net/nahla0tammam/b4-jeanmougin Slides]
* [https://support.bioconductor.org/p/80398/ Use Limma to run ordinary T tests] <syntaxhighlight lang='rsplus'>
# where 'fit' is the output from lmFit() or contrasts.fit().
unmod.t <- fit$coefficients/fit$stdev.unscaled/fit$sigma
pval <- 2*pt(-abs(unmod.t), fit$df.residual)
 
# Following the above example
t.test(testexpr[1, 1:5], testexpr[1, 6:10], var.equal = T)
# Two Sample t-test
#
# data:  testexpr[1, 1:5] and testexpr[1, 6:10]
# t = -1.2404, df = 8, p-value = 0.25
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  -7.987791  2.400082
# sample estimates:
#  mean of x mean of y
# 4.577183  7.371037
fit2$coefficients[1] / (fit2$stdev.unscaled[1] * fit2$sigma[1]) # Ordinary t-statistic
# [1] -1.240416
fit2$coefficients[1] / (fit2$stdev.unscaled[1] * sqrt(fit2$s2.post[1])) # moderated t-statistic
# [1] -1.547156
topTable(fit2,coef=1, sort.by = "none")[1,]
#      logFC  AveExpr        t  P.Value adj.P.Val        B
# 1 -2.793855 5.974110 -1.547156 0.1222210 0.2367295 -4.592992
 
# Square root of the pooled variance
fit2$sigma[1]
# [1] 3.561284
sqrt(((5-1)*var(testexpr[1, 1:5]) + (5-1)*var(testexpr[1, 6:10]))/(5+5-2))
# [1] 3.561284   
</syntaxhighlight>
* Comparison of ordinary T-statistic, RVM T-statistic and Limma/eBayes moderated T-statistic.
{| class="wikitable"
|-
!  !! Test statistic for gene g !! Variance estimator
|-
| [https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes,_equal_variance Ordinary T-test] || <math> \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{Pooled}\sqrt{1/n_1 + 1/n_2}}</math> || <math>(S_g^{Pooled})^2 = \frac{(n_1-1)S_{g1}^2 + (n_2-1)S_{g2}^2}{n_1+n_2-2} </math>
|-
| [https://academic.oup.com/bioinformatics/article/19/18/2448/194552 RVM] || <math> \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{RVM}\sqrt{1/n_1 + 1/n_2}}</math> || <math>(S_g^{RVM})^2 = \frac{(n_1+n_2-2)S_{g}^2 + 2*a*(a*b)^{-1}}{n_1+n_2-2+2*a} </math>
|-
| Limma || <math> \frac{\overline{y}_{g1} - \overline{y}_{g2}}{S_g^{Limma}\sqrt{1/n_1 + 1/n_2}}</math> || <math>(S_g^{Limma})^2 = \frac{d_0 S_0^2 + d_g S_g^2}{d_0 + d_g} </math>
|}
* In Limma,
** <math>\sigma_g^2</math> assumes an inverse Chi-square distribution with mean <math>S_0^2</math> and <math>d_0</math> degrees of freedom
** <math>d_0</math> (fit$df.prior) and <math>d_g</math> are, respectively, prior and residual/empirical degrees of freedom.
** <math>S_0^2</math> (fit$s2.prior) is the prior estimate of the variance and <math>S_g^2</math> is the pooled sample variance.
** <math>(S_g^{Limma})^2</math> can be obtained from fit$s2.post.
* [https://arxiv.org/abs/1901.10679 Empirical Bayes estimation of normal means, accounting for uncertainty in estimated standard errors] Lu 2019
 
= ANOVA =
* [https://cloud.r-project.org/doc/contrib/Faraway-PRA.pdf Practical Regression and Anova using R] by Julian J. Faraway, 2002
* [http://wiekvoet.blogspot.com/2016/01/a-simple-anova.html A simple ANOVA]
* [http://r-exercises.com/2016/11/29/repeated-measures-anova-in-r-exercises/ Repeated measures ANOVA in R Exercises]
* [http://singmann.org/mixed-models-for-anova-designs-with-one-observation-per-unit-of-observation-and-cell-of-the-design/ Mixed models for ANOVA designs with one observation per unit of observation and cell of the design]
* [http://singmann.org/anova-in-r-afex-may-be-the-solution-you-are-looking-for/ afex] package, [http://singmann.org/afex_plot/ afex_plot(): Publication-Ready Plots for Factorial Designs]
* [http://r-video-tutorial.blogspot.com/2017/07/experiment-designs-for-agriculture.html Experiment designs for Agriculture]
 
== Common tests are linear models ==
https://lindeloev.github.io/tests-as-linear/
 
== Post-hoc test ==
Determine which levels have significantly different means.
 
* http://jamesmarquezportfolio.com/one_way_anova_with_post_hocs_in_r.html
* [https://stats.idre.ucla.edu/r/faq/how-can-i-do-post-hoc-pairwise-comparisons-in-r/ pairwise.t.test()] for one-way ANOVA
* [https://www.r-bloggers.com/post-hoc-pairwise-comparisons-of-two-way-anova/ Post-hoc Pairwise Comparisons of Two-way ANOVA] using TukeyHSD().
* post-hoc tests: pairwise.t.test versus TukeyHSD test
 
== TukeyHSD (Honestly Significant Difference), diagnostic checking ==
https://datascienceplus.com/one-way-anova-in-r/, [https://brownmath.com/stat/anova1.htm#HSD Tukey HSD for Post-Hoc Analysis] (detailed explanation including the type 1 error problem in multiple testings)
 
* TukeyHSD for the pairwise tests
** You can’t just perform a series of t tests, because that would greatly increase your likelihood of a Type I error.
** Compute something analogous to a ''t'' score for each pair of means, but compare it not to the Student's ''t'' distribution but to a new distribution called the '''[https://en.wikipedia.org/wiki/Studentized_range_distribution studentized range]''' (from Wikipedia) or '''q distribution'''.
** Suppose that we take a sample of size ''n'' from each of ''k'' populations with the same normal distribution ''N''(''μ'',&nbsp;''σ'') and suppose that <math>\bar{y}</math><sub>min</sub> is the smallest of these sample means and <math>\bar{y}</math><sub>max</sub> is the largest of these sample means, and suppose ''S''<sup>2</sup> is the pooled sample variance from these samples. Then the following random variable has a Studentized range distribution: <math>q = \frac{\overline{y}_{\max} - \overline{y}_{\min}}{S/\sqrt{n}}</math>
** [http://www.sthda.com/english/wiki/one-way-anova-test-in-r#tukey-multiple-pairwise-comparisons One-Way ANOVA Test in R] from sthda.com. <syntaxhighlight lang='rsplus'>
res.aov <- aov(weight ~ group, data = PlantGrowth)
summary(res.aov)
#              Df Sum Sq Mean Sq F value Pr(>F) 
#  group        2  3.766  1.8832  4.846 0.0159 *
#  Residuals  27 10.492  0.3886               
TukeyHSD(res.aov)
# Tukey multiple comparisons of means
# 95% family-wise confidence level
#
# Fit: aov(formula = weight ~ group, data = PlantGrowth)
#
# $group
#            diff        lwr      upr    p adj
# trt1-ctrl -0.371 -1.0622161 0.3202161 0.3908711
# trt2-ctrl  0.494 -0.1972161 1.1852161 0.1979960
# trt2-trt1  0.865  0.1737839 1.5562161 0.0120064
 
# Extra:
# Check your data
my_data <- PlantGrowth
levels(my_data$group)
set.seed(1234)
dplyr::sample_n(my_data, 10)
 
# compute the summary statistics by group
library(dplyr)
group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE)
  )
</syntaxhighlight>
** Or we can use Benjamini-Hochberg method for p-value adjustment in pairwise comparisons <syntaxhighlight lang='rsplus'>
pairwise.t.test(my_data$weight, my_data$group,
                p.adjust.method = "BH")   # pairwise.t.test() is in base 'stats'
#      ctrl  trt1
# trt1 0.194 -   
# trt2 0.132 0.013
#
# P value adjustment method: BH
</syntaxhighlight>
* Shapiro-Wilk test for normality <syntaxhighlight lang='rsplus'>
# Extract the residuals
aov_residuals <- residuals(object = res.aov )
# Run Shapiro-Wilk test
shapiro.test(x = aov_residuals )
</syntaxhighlight>
* Bartlett's test and Levene's test for the homogeneity of variances across the groups; see the sketch below.
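A minimal sketch using the PlantGrowth data from above (the '''car''' package is assumed to be installed for Levene's test):
<syntaxhighlight lang='rsplus'>
bartlett.test(weight ~ group, data = PlantGrowth)
car::leveneTest(weight ~ group, data = PlantGrowth)  # less sensitive to non-normality
</syntaxhighlight>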
 
== Repeated measure ==
* [https://neuropsychology.github.io/psycho.R//2018/05/01/repeated_measure_anovas.html How to do Repeated Measures ANOVAs in R]
* [https://onlinecourses.science.psu.edu/stat502/node/206 Cross-over Repeated Measure Designs]
* [https://www.rdatagen.net/post/when-the-research-question-doesn-t-fit-nicely-into-a-standard-study-design/ Cross-over study design with a major constraint]
 
== Combining insignificant factor levels ==
[https://freakonometrics.hypotheses.org/55451 COMBINING AUTOMATICALLY FACTOR LEVELS IN R]
 
== Omnibus tests ==
* https://en.wikipedia.org/wiki/Omnibus_test
* [https://stats.stackexchange.com/questions/59891/understanding-the-definition-of-omnibus-tests Understanding the definition of omnibus tests] Tests are referred to as omnibus if, after rejecting the null hypothesis, you do not know where the differences assessed by the test lie. An F test is omnibus when its numerator has more than one degree of freedom (3 or more groups).
 
= [https://en.wikipedia.org/wiki/Goodness_of_fit Goodness of fit] =
== [https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test Chi-square tests] ==
* [http://freakonometrics.hypotheses.org/20531 An application of chi-square tests]
 
== Fitting distribution ==
[https://magesblog.com/post/2011-12-01-fitting-distributions-with-r/ Fitting distributions with R]
 
= Contingency Tables =
== [https://en.wikipedia.org/wiki/Odds_ratio Odds ratio and Risk ratio] ==
The ratio of the odds of an event occurring in one group to the odds of it occurring in another group
<pre>
        drawn  | not drawn |
-------------------------------------
white |  A      |  B      | Wh
-------------------------------------
black |  C      |  D      | Bk
</pre>
* Odds Ratio = (A / C) / (B / D) = (AD) / (BC)
* Risk Ratio = (A / Wh) / (C / Bk)
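A quick computation with made-up counts, following the table layout above:
<syntaxhighlight lang='rsplus'>
A <- 10; B <- 90; C <- 5; D <- 95
(A * D) / (B * C)                # odds ratio
(A / (A + B)) / (C / (C + D))    # risk ratio; Wh = A + B, Bk = C + D
</syntaxhighlight>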
 
== Hypergeometric, One-tailed Fisher exact test ==
* https://www.bioconductor.org/help/course-materials/2009/SeattleApr09/gsea/ (Are interesting features over-represented? or are selected genes more often in the ''GO category'' than expected by chance?)
* https://en.wikipedia.org/wiki/Hypergeometric_distribution. '' In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing '''k''' or more successes from the population in '''n''' total draws. In a test for under-representation, the p-value is the probability of randomly drawing '''k''' or fewer successes.''
* http://stats.stackexchange.com/questions/62235/one-tailed-fishers-exact-test-and-the-hypergeometric-distribution
* Two sided hypergeometric test
** http://stats.stackexchange.com/questions/155189/how-to-make-a-two-tailed-hypergeometric-test
** http://stats.stackexchange.com/questions/140107/p-value-in-a-two-tail-test-with-asymmetric-null-distribution
** http://stats.stackexchange.com/questions/19195/explaining-two-tailed-tests
* https://www.biostars.org/p/90662/ When computing the p-value (tail probability), consider using 1 - Prob(observed - 1) instead of 1 - Prob(observed) for a discrete distribution.
* https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Hypergeometric.html p(x) = choose(m, x) choose(n, k-x) / choose(m+n, k).
<pre>
        drawn  | not drawn |
-------------------------------------
white |  x      |          | m
-------------------------------------
black |  k-x    |          | n
-------------------------------------
      |  k      |          | m+n
</pre>
 
For example, k=100, m=100, m+n=1000,
<syntaxhighlight lang='rsplus'>
> 1 - phyper(10, 100, 10^3-100, 100, log.p=F)
[1] 0.4160339
> a <- dhyper(0:10, 100, 10^3-100, 100)
> cumsum(rev(a))
  [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116
  [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101  9.027724e-99  4.175767e-96
[15]  1.644702e-93  5.572070e-91  1.638079e-88  4.210963e-86  9.530281e-84  1.910424e-81  3.410345e-79
[22]  5.447786e-77  7.821658e-75  1.013356e-72  1.189000e-70  1.267638e-68  1.231736e-66  1.093852e-64
[29]  8.900857e-63  6.652193e-61  4.576232e-59  2.903632e-57  1.702481e-55  9.240350e-54  4.650130e-52
[36]  2.173043e-50  9.442985e-49  3.820823e-47  1.441257e-45  5.074077e-44  1.669028e-42  5.134399e-41
[43]  1.478542e-39  3.989016e-38  1.009089e-36  2.395206e-35  5.338260e-34  1.117816e-32  2.200410e-31
[50]  4.074043e-30  7.098105e-29  1.164233e-27  1.798390e-26  2.617103e-25  3.589044e-24  4.639451e-23
[57]  5.654244e-22  6.497925e-21  7.042397e-20  7.198582e-19  6.940175e-18  6.310859e-17  5.412268e-16
[64]  4.377256e-15  3.338067e-14  2.399811e-13  1.626091e-12  1.038184e-11  6.243346e-11  3.535115e-10
[71]  1.883810e-09  9.442711e-09  4.449741e-08  1.970041e-07  8.188671e-07  3.193112e-06  1.167109e-05
[78]  3.994913e-05  1.279299e-04  3.828641e-04  1.069633e-03  2.786293e-03  6.759071e-03  1.525017e-02
[85]  3.196401e-02  6.216690e-02  1.120899e-01  1.872547e-01  2.898395e-01  4.160339e-01  5.550192e-01
[92]  6.909666e-01  8.079129e-01  8.953150e-01  9.511926e-01  9.811343e-01  9.942110e-01  9.986807e-01
[99]  9.998018e-01  9.999853e-01  1.000000e+00
 
# Density plot
plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')
</syntaxhighlight>
[[File:Dhyper.svg|200px]]
 
Moreover,
<pre>
  1 - phyper(q=10, m, n, k)
= 1 - sum_{x=0}^{10} dhyper(x, m, n, k)
= 1 - sum(a[1:11]) # R's index starts from 1.
</pre>
 
Another example is the data from [https://david.ncifcrf.gov/helps/functional_annotation.html#fisher the functional annotation tool] in DAVID.
<pre>
              | gene list | not gene list |
-------------------------------------------------------
pathway        |  3  (q)  |              | 40 (m)
-------------------------------------------------------
not in pathway |  297      |              | 29960 (n)
-------------------------------------------------------
              |  300 (k)  |              | 30000
</pre>
The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.
 
== [https://en.wikipedia.org/wiki/Fisher%27s_exact_test Fisher's exact test] ==
Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.
<syntaxhighlight lang='rsplus'>
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) #  alternative = "two.sided" by default
 
        Fisher's Exact Test for Count Data
 
data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.488738 23.966741
sample estimates:
odds ratio
  7.564602
 
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")
 
        Fisher's Exact Test for Count Data
 
data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
1.973  Inf
sample estimates:
odds ratio
  7.564602
 
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")
 
        Fisher's Exact Test for Count Data
 
data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.9991
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio
  7.564602
</syntaxhighlight>
 
From the documentation of [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/fisher.test.html fisher.test]
<pre>
Usage:
    fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
                control = list(), or = 1, alternative = "two.sided",
                conf.int = TRUE, conf.level = 0.95,
                simulate.p.value = FALSE, B = 2000)
</pre>
* For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
* For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
* The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
* Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.
 
== Chi-square independence test ==
[https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence-part-2/ Exploring the underlying theory of the chi-square test through simulation - part 2]
 
== [https://en.wikipedia.org/wiki/Gene_set_enrichment_analysis GSEA] ==
Determines whether an a priori defined set of genes shows statistically significant, concordant differences between two biological states
 
* https://www.bioconductor.org/help/course-materials/2015/SeattleApr2015/E_GeneSetEnrichment.html
* http://software.broadinstitute.org/gsea/index.jsp
* [http://www.biorxiv.org/content/biorxiv/early/2017/09/08/186288.full.pdf Statistical power of gene-set enrichment analysis is a function of gene set correlation structure] by SWANSON 2017
* [https://www.biorxiv.org/content/10.1101/674267v1 Towards a gold standard for benchmarking gene set enrichment analysis], [http://bioconductor.org/packages/release/bioc/html/GSEABenchmarkeR.html GSEABenchmarkeR] package
 
Two categories of GSEA procedures:
* Competitive:  compare genes in the test set relative to all other genes.
* Self-contained: whether the gene set is more DE than one would expect under the null of no association between the two phenotype conditions (without reference to other genes in the genome). For example the method by [http://home.cc.umanitoba.ca/~psgendb/birchhomedir/doc/MeV/manual/gsea.html Jiang & Gentleman Bioinformatics 2007]
 
= Confidence vs Credibility Intervals =
http://freakonometrics.hypotheses.org/18117
 
= Power analysis/Sample Size determination =
* [https://en.wikipedia.org/wiki/Sample_size_determination Sample size determination] from Wikipedia
* Power and Sample Size Determination http://www.stat.wisc.edu/~st571-1/10-power-2.pdf#page=12
* http://biostat.mc.vanderbilt.edu/wiki/pub/Main/AnesShortCourse/HypothesisTestingPart1.pdf#page=40
* [http://r-video-tutorial.blogspot.com/2017/07/power-analysis-and-sample-size.html Power analysis and sample size calculation for Agriculture] ('''pwr, lmSupport, simr''' packages are used)
* [http://daniellakens.blogspot.com/2016/11/why-within-subject-designs-require-less.html Why Within-Subject Designs Require Fewer Participants than Between-Subject Designs]
 
== Power analysis for default Bayesian t-tests ==
http://daniellakens.blogspot.com/2016/01/power-analysis-for-default-bayesian-t.html
 
== Using simulation for power analysis: an example based on a stepped wedge study design ==
https://www.rdatagen.net/post/using-simulation-for-power-analysis-an-example/
 
== Power analysis and sample size calculation for Agriculture ==
http://r-video-tutorial.blogspot.com/2017/07/power-analysis-and-sample-size.html
 
== Power calculation for proportions (shiny app) ==
https://juliasilge.shinyapps.io/power-app/
 
== Derive the formula/manual calculation ==
* [http://powerandsamplesize.com/Knowledge/derive-z-test-1-sample-1-sided One-sample 1-sided test], [http://www.cyclismo.org/tutorial/R/power.html#calculating-the-power-using-a-normal-distribution One sample 2-sided test]
* [http://gchang.people.ysu.edu/class/s5817/L/L5817_1_2_PowerSampleSize_n.pdf#page=6 Two-sample 2-sided T test] (<math>n</math> is the sample size in each group)
:<math> 
\begin{align}
Power & = P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} > Z_{\alpha /2}) +
    P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} < -Z_{\alpha /2}) \\
    &  \approx P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma^2/n + \sigma^2/n}} > Z_{\alpha /2}) \\
    & =  P_{\mu_1-\mu_2 = \Delta}(\frac{\bar{X}_1 - \bar{X}_2 - \Delta}{\sqrt{2 * \sigma^2/n}} > Z_{\alpha /2} - \frac{\Delta}{\sqrt{2 * \sigma^2/n}}) \\
    & = \Phi(-(Z_{\alpha /2} - \frac{\Delta}{\sqrt{2 * \sigma^2/n}})) \\
    & = 1 - \beta =\Phi(Z_\beta)
\end{align}
</math>
Therefore
:<math>
\begin{align}
Z_{\beta} &= - Z_{\alpha/2} + \frac{\Delta}{\sqrt{2 * \sigma^2/n}} \\
Z_{\beta} + Z_{\alpha/2} & =  \frac{\Delta}{\sqrt{2 * \sigma^2/n}}  \\
2 * (Z_{\beta} + Z_{\alpha/2})^2 * \sigma^2/\Delta^2 & =  n \\
n & = 2 * (Z_{\beta} + Z_{\alpha/2})^2 * \sigma^2/\Delta^2
\end{align}
</math>
<syntaxhighlight lang='rsplus'>
# alpha = .05, delta = 200, n = 79.5, sigma=450
1 - pnorm(1.96 - 200*sqrt(79.5)/(sqrt(2)*450)) + pnorm(-1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 0.8
pnorm(-1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 9.58e-07
1 - pnorm(1.96 - 200*sqrt(79.5)/(sqrt(2)*450))
# [1] 0.8
</syntaxhighlight>
 
== [http://geraldbelton.com/calculating-required-sample-size-in-r-and-sas/#sthash.jT6fZ29h.dpbs Calculating required sample size in R and SAS] ==
'''pwr''' package is used. For two-sided test, the formula for sample size is
 
:<math> n_{\mbox{each group}} = \frac{2 * (Z_{\alpha/2} + Z_\beta)^2 * \sigma^2}{\Delta^2} = \frac{2 * (Z_{\alpha/2} + Z_\beta)^2}{d^2} </math>
 
where <math>Z_\alpha</math> is the value of the Normal distribution which cuts off an upper tail probability of <math>\alpha</math>, <math>\Delta</math> is the difference sought, <math>\sigma</math> is the presumed standard deviation of the outcome, <math>\alpha</math> is the type I error, <math>\beta</math> is the type II error and (Cohen's) ''d'' is the '''effect size''' - the difference between the means divided by the pooled standard deviation.
 
<syntaxhighlight lang='rsplus'>
# An example from http://www.stat.columbia.edu/~gelman/stuff_for_blog/c13.pdf#page=3
# Method 1.
require(pwr)
pwr.t.test(d=200/450, power=.8, sig.level=.05,
          type="two.sample", alternative="two.sided")
#
#    Two-sample t test power calculation
#
#              n = 80.4
#              d = 0.444
#      sig.level = 0.05
#          power = 0.8
#    alternative = two.sided
#
# NOTE: n is number in *each* group
 
# Method 2.
2*(qnorm(.975) + qnorm(.8))^2*450^2/(200^2)
# [1] 79.5
2*(1.96 + .84)^2*450^2 / (200^2)
# [1] 79.4
</syntaxhighlight>
The same calculation can be done with the stats::power.t.test() function.
<syntaxhighlight lang='rsplus'>
power.t.test(n = 79.5, delta = 200, sd = 450, sig.level = .05,
            type ="two.sample", alternative = "two.sided")
#
#    Two-sample t test power calculation
#
#              n = 79.5
#          delta = 200
#            sd = 450
#      sig.level = 0.05
#          power = 0.795
#    alternative = two.sided
#
# NOTE: n is number in *each* group
</syntaxhighlight>
 
== R package related to power analysis ==
[https://cran.r-project.org/web/views/ExperimentalDesign.html CRAN Task View: Design of Experiments]
 
* [https://cran.r-project.org/web/packages/powerAnalysis/index.html powerAnalysis] w/o vignette
* [https://cran.r-project.org/web/packages/powerbydesign/index.html powerbydesign] w/o vignette
* [https://cran.r-project.org/web/packages/easypower/index.html easypower] w/ vignette
* [https://cran.r-project.org/web/packages/pwr/index.html pwr] w/ vignette, https://www.statmethods.net/stats/power.html. The reference is Cohen's book.
* [https://github.com/rpsychologist/powerlmm powerlmm] Power Analysis for Longitudinal Multilevel/Linear Mixed-Effects Models.
* [https://cran.r-project.org/web/packages/ssize.fdr/index.html ssize.fdr] w/o vignette
* [https://cran.r-project.org/web/packages/samplesize/index.html samplesize] w/o vignette
* [https://cran.r-project.org/web/packages/ssizeRNA/index.html ssizeRNA] w/ vignette
* power.t.test(), power.anova.test(), power.prop.test() from [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/00Index.html stats] package
 
== Russ Lenth Java applets ==
https://homepage.divms.uiowa.edu/~rlenth/Power/index.html
 
== Bootstrap method ==
[https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy054/5106666 The upstrap] Crainiceanu & Crainiceanu, Biostatistics 2018
 
== Multiple Testing Case ==
[https://www.tandfonline.com/doi/abs/10.1198/016214504000001646 Optimal Sample Size for Multiple Testing The Case of Gene Expression Microarrays]
 
= Common covariance/correlation structures =
See [https://onlinecourses.science.psu.edu/stat502/node/228 psu.edu]. Assume covariance <math>\Sigma = (\sigma_{ij})_{p\times p} </math>
 
* Diagonal structure: <math>\sigma_{ij} = 0</math> if <math>i \neq j</math>.
* Compound symmetry: <math>\sigma_{ij} = \rho</math> if <math>i \neq j</math>; see the sketch after this list.
* First-order autoregressive AR(1) structure: <math>\sigma_{ij} = \rho^{|i - j|}</math>. <syntaxhighlight lang='rsplus'>
rho <- .8
p <- 5
blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
</syntaxhighlight>
* Banded matrix: <math>\sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0</math> and <math>\sigma_{ij}=0</math> for <math>|i-j| \ge 3</math>.
* Spatial Power
* Unstructured Covariance
* [https://en.wikipedia.org/wiki/Toeplitz_matrix Toeplitz structure]
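A compound-symmetry matrix can be built in the same way as the AR(1) example above:
<syntaxhighlight lang='rsplus'>
rho <- .8
p <- 5
csMat <- matrix(rho, p, p)  # off-diagonal entries equal rho
diag(csMat) <- 1            # unit variances on the diagonal
</syntaxhighlight>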
 
To create blocks of correlation matrix, use the "%x%" operator. See [https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/kronecker kronecker()].
<syntaxhighlight lang='rsplus'>
n.blocks <- 3   # for example, 3 independent blocks
covMat <- diag(n.blocks) %x% blockMat
</syntaxhighlight>
 
= Counter/Special Examples =
== Uncorrelated does not imply independence ==
Suppose X is a normally-distributed random variable with zero mean.  Let Y = X^2.  Clearly X and Y are not independent: if you know X, you also know Y.  And if you know Y, you know the absolute value of X.
 
The covariance of X and Y is
<pre>
  Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3)
          = 0,
</pre>
because the distribution of X is symmetric around zero.  Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet
have (linear) correlation r(X,Y) = 0.
 
This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.
 
== Spearman vs Pearson correlation ==
Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation
 
<pre>
x=(1:100); 
y=exp(x);                       
cor(x,y, method='spearman') # 1
cor(x,y, method='pearson')  # .25
</pre>
 
== Spearman vs Wilcoxon ==
By [http://www.talkstats.com/threads/wilcoxon-signed-rank-test-or-spearmans-rho.42395/ this post]
* Wilcoxon used to compare categorical versus non-normal continuous variable
* Spearman's rho used to compare two continuous (including '''ordinal''') variables that one or both aren't normally distributed
 
== Spearman vs [https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient Kendall correlation] ==
* Kendall's tau coefficient (after the Greek letter τ), is a statistic used to measure the '''ordinal''' association between two measured quantities.
* [https://stats.stackexchange.com/questions/3943/kendall-tau-or-spearmans-rho Kendall Tau or Spearman's rho?]
 
== [http://en.wikipedia.org/wiki/Anscombe%27s_quartet Anscombe quartet] ==
 
Four datasets have almost the same properties: the same mean of X, the same mean of Y, the same variance of X, (almost) the same variance of Y, the same correlation between X and Y, and the same fitted linear regression.
 
[[File:Anscombe quartet 3.svg|150px]]
 
== The real meaning of spurious correlations ==
https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/
<syntaxhighlight lang='rsplus'>
library(ggplot2)
library(dplyr)   # for the %>% pipe
set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) +
theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")
 
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
theme_bw() + geom_smooth(method = "lm") +
scale_color_gradientn(colours = c("red", "white", "blue")) +
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")
 
spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
theme_bw() + geom_smooth(method = "lm") +
scale_color_gradientn(colours = c("red", "white", "blue")) +
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")
</syntaxhighlight>
 
= Time series =
* [http://ellisp.github.io/blog/2016/12/07/arima-prediction-intervals Why time series forecasts prediction intervals aren't as good as we'd hope]
 
== Structural change ==
[https://datascienceplus.com/structural-changes-in-global-warming/ Structural Changes in Global Warming]
 
== AR(1) processes and random walks ==
[https://fdabl.github.io/r/Spurious-Correlation.html Spurious correlations and random walks]
 
= Measurement Error model =
* [https://en.wikipedia.org/wiki/Errors-in-variables_models Errors-in-variables models/errors-in-variables models or measurement error models]
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13112 Simulation-Selection-Extrapolation: Estimation in High-Dimensional Errors-in-Variables Models] Nghiem 2019


= Dictionary =
== COPSS ==
[https://zh.wikipedia.org/wiki/考普斯会长奖 COPSS Presidents' Award (考普斯會長獎)]
== Prognosis ==
* '''Prognosis''' is the probability that an event or diagnosis will result in a particular outcome.
** For example, in the paper [http://clincancerres.aacrjournals.org/content/18/21/6065.figures-only Developing and Validating Continuous Genomic Signatures in Randomized Clinical Trials for Predictive Medicine] by Matsui 2012, a prognostic score of .1 (0.9) represents a '''good (poor)''' prognosis.
** Prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis. See [https://en.wikipedia.org/wiki/Survival_rate Survival rate] in Wikipedia.


= Data =
== United States National Academy of Sciences (NAS) ==
[https://zh.wikipedia.org/wiki/美国国家科学院 美國國家科學院] (United States National Academy of Sciences)
== Eleven quick tips for finding research data ==
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006038
 
= Books =
* [https://leanpub.com/biostatmethods Methods in Biostatistics with R] ($)
* [http://web.stanford.edu/class/bios221/book/ Modern Statistics for Modern Biology] (free)
* Principles of Applied Statistics, by David Cox & Christl Donnelly
* [https://www.amazon.com/Freedman-Robert-Pisani-Statistics-Hardcover/dp/B004QNRMDK/ Statistics] by David Freedman, Robert Pisani, Roger Purves
* [https://onlinelibrary.wiley.com/topic/browse/000113 Wiley Online Library -> Statistics] (Access by NIH Library)
* [https://web.stanford.edu/~hastie/CASI/ Computer Age Statistical Inference: Algorithms, Evidence and Data Science] by Efron and Hastie 2016
 
= Social =
== JSM ==
* 2019
** [https://minecr.shinyapps.io/jsm2019-schedule/ JSM 2019] and the [http://www.citizen-statistician.org/2019/07/shiny-for-jsm-2019/ post].
** [https://rviews.rstudio.com/2019/07/19/an-r-users-guide-to-jsm-2019/ An R Users Guide to JSM 2019]
 
== Following ==
* [http://jtleek.com/ Jeff Leek], https://twitter.com/jtleek
* Revolutions, http://blog.revolutionanalytics.com/
* RStudio Blog, https://blog.rstudio.com/
* Sean Davis, https://twitter.com/seandavis12, https://github.com/seandavi
* [http://stephenturner.us/post/ Stephen Turner], https://twitter.com/genetics_blog


Kurtosis

Kurtosis in R-What do you understand by Kurtosis?

Phi coefficient

  • Phi coefficient. Its value is in [-1, 1]. A value of zero means that the two binary variables are not positively or negatively associated.
  • Cramér’s V. Its value is in [0, 1]. A value of zero indicates that there is no association between the two variables, i.e. knowing the value of one variable does not help predict the value of the other.
    library(vcd)
    cramersV <- assocstats(table(x, y))$cramer
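
The phi coefficient of a 2x2 table can also be computed by hand (a minimal sketch; a, b, c, d are made-up cell counts):

    a <- 10; b <- 20; c <- 30; d <- 40
    phi <- (a*d - b*c) / sqrt((a+b) * (c+d) * (a+c) * (b+d))
    phi
    # about -0.089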
    

Coefficient of variation (CV)

Motivating the coefficient of variation (CV) for beginners:

  • Boss: Measure it 5 times.
  • You: 8, 8, 9, 6, and 8
  • B: SD=1. Make it three times more precise!
  • Y: 0.20 0.20 0.23 0.15 0.20 meters. SD=0.03!
  • B: All you did was change to meters! Report the CV instead!
  • Y: Damn it.
R> sd(c(8, 8, 9, 6, 8))
[1] 1.095445
R> sd(c(8, 8, 9, 6, 8)*2.54/100)
[1] 0.02782431
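
The CV (standard deviation divided by the mean) is unit-free, so it is the same in inches and in meters:

R> x <- c(8, 8, 9, 6, 8)
R> sd(x) / mean(x)
[1] 0.1404417
R> sd(x * 2.54/100) / mean(x * 2.54/100)
[1] 0.1404417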

Agreement

Pitfalls

Common pitfalls in statistical analysis: Measures of agreement 2017

Cohen's Kappa statistic (2-class)

Fleiss Kappa statistic (more than two raters)

  • https://en.wikipedia.org/wiki/Fleiss%27_kappa
  • Fleiss kappa (more than two raters) to test interrater reliability or to evaluate the repeatability and stability of models (robustness). This was used by Cancer prognosis prediction of Zheng 2020. "In our case, each trained model is designed to be a rater to assign the affiliation of each variable (gene or pathway). We conducted 20 replications of fivefold cross validation. As such, we had 100 trained models, or 100 raters in total, among which the agreement was measured by the Fleiss kappa..."
  • Fleiss’ Kappa in R: For Multiple Categorical Variables. irr::kappam.fleiss() was used.
  • Kappa statistic vs ICC
    • ICC and Kappa totally disagree
    • Measures of Interrater Agreement by Mandrekar 2011. "In certain clinical studies, agreement between the raters is assessed for a clinical outcome that is measured on a continuous scale. In such instances, intraclass correlation is calculated as a measure of agreement between the raters. Intraclass correlation is equivalent to weighted kappa under certain conditions, see the study by Fleiss and Cohen6, 7 for details."

ICC: intra-class correlation

See ICC

Compare two sets of p-values

https://stats.stackexchange.com/q/155407

Computing different kinds of correlations

correlation package

Partial correlation

Partial correlation

Association is not causation

  • Association is not causation
  • Correlation Does Not Imply Causation: 5 Real-World Examples
  • Reasons Why Correlation Does Not Imply Causation
    • Third-Variable Problem: There may be an unseen third variable that is influencing both correlated variables. For example, ice cream sales and drowning incidents might be correlated because both increase during the summer, but neither causes the other.
    • Reverse Causation: The direction of cause and effect might be opposite to what we assume. For example, one might assume that stress causes poor health (which it can), but it’s also possible that poor health increases stress.
    • Coincidence: Sometimes, correlations occur purely by chance, especially if the sample size is large or if many variables are tested.
    • Complex Interactions: The relationship between variables can be influenced by a complex interplay of multiple factors that correlation alone cannot unpack.
  • Examples
    • Example of Correlation without Causation: There is a correlation between the number of fire trucks at a fire scene and the amount of damage caused by the fire. However, this does not mean that the fire trucks cause the damage; rather, larger fires both require more fire trucks and cause more damage.
    • Example of Potential Misinterpretation: Studies might find a correlation between coffee consumption and heart disease. Without further investigation, one might mistakenly conclude that drinking coffee causes heart disease. However, it could be that people who drink a lot of coffee are more likely to smoke, and smoking is the actual cause of heart disease.

Predictive power score

Transform sample values to their percentiles

  • ecdf()
  • quantile()
    • An example from the TreatmentSelection package where "type = 1" was used.
    R> x <- c(1,2,3,4,4.5,6,7)
    R> Fn <- ecdf(x)
    R> Fn     # a *function*
    Empirical CDF 
    Call: ecdf(x)
     x[1:7] =      1,      2,      3,  ...,      6,      7
    R> Fn(x)  # returns the percentiles for x
    [1] 0.1428571 0.2857143 0.4285714 0.5714286 0.7142857 0.8571429 1.0000000
    R> diff(Fn(x))
    [1] 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571 0.1428571
    R> quantile(x, Fn(x))
    14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100% 
     1.857143  2.714286  3.571429  4.214286  4.928571  6.142857  7.000000 
    R> quantile(x, Fn(x), type = 1) 
    14.28571% 28.57143% 42.85714% 57.14286% 71.42857% 85.71429%      100% 
          1.0       2.0       3.0       4.0       4.5       6.0       7.0 
    
    R> x <- c(2, 6, 8, 10, 20)
    R> Fn <- ecdf(x)
    R> Fn(x)
    [1] 0.2 0.4 0.6 0.8 1.0
    
  • Definition of a Percentile in Statistics and How to Calculate It
  • https://en.wikipedia.org/wiki/Percentile
  • Percentile vs. Quartile vs. Quantile: What’s the Difference?
    • Percentiles: Range from 0 to 100.
    • Quartiles: Range from 0 to 4.
    • Quantiles: Range from any value to any other value.

Standardization

Feature standardization considered harmful


An archive of 1000+ datasets distributed with R

https://vincentarelbundock.github.io/Rdatasets/

Data and global

  • Age Structure from Our World in Data. Our World in Data is a non-profit organization that provides free and open access to data and insights on how the world is changing across 115 topics.

Box plot (box, whisker & outlier)

An example for a graphical explanation. File:Boxplot.svg, File:Geom boxplot.png

> x=c(0,4,15, 1, 6, 3, 20, 5, 8, 1, 3)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0       2       4       6       7      20 
> sort(x)
 [1]  0  1  1  3  3  4  5  6  8 15 20
> y <- boxplot(x, col = 'grey')
> t(y$stats)
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    2    4    7    8
# the extreme of the lower whisker, the lower hinge, the median, 
# the upper hinge and the extreme of the upper whisker

# https://en.wikipedia.org/wiki/Quartile#Example_1
> summary(c(6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   6.00   25.50   40.00   33.18   42.50   49.00
  • The lower and upper edges of the box (also called the lower/upper hinges) are determined by the first and third quartiles (2 and 7 in the above example).
    • 2 = median(c(0, 1, 1, 3, 3, 4)) = (1+3)/2
    • 7 = median(c(4, 5, 6, 8, 15, 20)) = (6+8)/2
    • IQR = 7 - 2 = 5
  • The thick dark horizon line is the median (4 in the example).
  • Outliers are defined by (the empty circles in the plot)
    • Observations larger than 3rd quartile + 1.5 * IQR (7+1.5*5=14.5) and
    • smaller than 1st quartile - 1.5 * IQR (2-1.5*5=-5.5).
    • Note that the cutoffs are not shown in the Box plot.
  • Whisker (defined using the cutoffs used to define outliers)
    • Upper whisker is defined by the largest "data" below 3rd quartile + 1.5 * IQR (8 in this example). Note Upper whisker is NOT defined as 3rd quartile + 1.5 * IQR.
    • Lower whisker is defined by the smallest "data" greater than 1st quartile - 1.5 * IQR (0 in this example). Note lower whisker is NOT defined as 1st quartile - 1.5 * IQR.
    • See another example below where we can see the whiskers fall on observations.

Note the wikipedia lists several possible definitions of a whisker. R uses the 2nd method (Tukey boxplot) to define whiskers.

Create boxplots from a list object

Normally we use a vector to create a single boxplot, or a formula on a data frame to create one boxplot per group.

But we can also use split() to create a list and then make boxplots, as in the sketch below.
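
A minimal sketch with a built-in dataset:

boxplot(split(mtcars$mpg, mtcars$cyl), xlab = "cyl", ylab = "mpg")
# equivalent to the formula interface: boxplot(mpg ~ cyl, data = mtcars)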

Dot-box plot

File:Boxdot.svg

geom_boxplot

Note that geom_boxplot() does not draw the whisker crossbars. See How to generate a boxplot graph with whisker by ggplot or this. A trick is to add the stat_boxplot() function, as in the sketch below.
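
A minimal sketch (using the same hypothetical dfbox data as the examples below):

ggplot(dfbox, aes(x = sample, y = expr)) +
  stat_boxplot(geom = "errorbar", width = 0.2) +  # draws the whisker crossbars
  geom_boxplot()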

Without jitter

ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot() +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8, 
                                 hjust=0.8, size=6),  
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "") 

With jitter

ggplot(dfbox, aes(x=sample, y=expr)) +
  geom_boxplot(outlier.shape=NA) + #avoid plotting outliers twice
  geom_jitter(position=position_jitter(width=.2, height=0)) +
  theme(axis.text.x=element_text(color = "black", angle=30, vjust=.8, 
                                 hjust=0.8, size=6),  
        plot.title = element_text(hjust = 0.5)) +
  labs(title="", y = "", x = "") 

Why does geom_boxplot identify more outliers than base boxplot?

What do hjust and vjust do when making a plot using ggplot? The values of hjust and vjust are only defined between 0 and 1: 0 means left-justified, 1 means right-justified.

Other boxplots

File:Lotsboxplot.png

Annotated boxplot

https://stackoverflow.com/a/38032281

stem and leaf plot

stem(). See R Tutorial.

Note that stem plot is useful when there are outliers.

> stem(x)

  The decimal point is 10 digit(s) to the right of the |

   0 | 00000000000000000000000000000000000000000000000000000000000000000000+419
   1 |
   2 |
   3 |
   4 |
   5 |
   6 |
   7 |
   8 |
   9 |
  10 |
  11 |
  12 | 9

> max(x)
[1] 129243100275
> max(x)/1e10
[1] 12.92431

> stem(y)

  The decimal point is at the |

  0 | 014478
  1 | 0
  2 | 1
  3 | 9
  4 | 8

> y
 [1] 3.8667356428 0.0001762708 0.7993462430 0.4181079732 0.9541728562
 [6] 4.7791262101 0.6899313108 2.1381289177 0.0541736818 0.3868776083

> set.seed(1234)
> z <- rnorm(10)*10
> z
 [1] -12.070657   2.774292  10.844412 -23.456977   4.291247   5.060559
 [7]  -5.747400  -5.466319  -5.644520  -8.900378
> stem(z)

  The decimal point is 1 digit(s) to the right of the |

  -2 | 3
  -1 | 2
  -0 | 9665
   0 | 345
   1 | 1

Box-Cox transformation
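
A minimal sketch with MASS::boxcox(), which profiles the log-likelihood of the Box-Cox power parameter lambda for a fitted linear model:

library(MASS)
fit <- lm(dist ~ speed, data = cars)
bc <- boxcox(fit)                    # plots the profile log-likelihood over lambda
(lambda <- bc$x[which.max(bc$y)])    # lambda with the highest likelihood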

CLT/Central limit theorem

Central limit theorem

Delta method

Delta

Sample median, x-percentiles

  • Central limit theorem for sample medians
  • For the q-th sample quantile in sufficiently large samples, the estimator is approximately normal with mean the <math>q</math>-th population quantile <math>x_q</math> and variance <math>q(1-q)/(n f_X(x_q)^2)</math>. Hence for the median (<math>q=1/2</math>), the variance in sufficiently large samples will be approximately <math>1/(4nf_X(m)^2)</math>.
  • For example, for an exponential distribution with rate parameter <math>\lambda > 0</math>, the pdf is <math>f(x)=\lambda \exp(-\lambda x)</math>. The population median <math>m</math> is the value such that <math>F(m)=.5</math>, so <math>m=\log(2)/\lambda</math>. For large n, the sample median <math>\tilde{X}</math> will be approximately normally distributed around the population median <math>m</math>, with asymptotic variance <math>Var(\tilde{X}) \approx \frac{1}{4nf(m)^2}</math>, where <math>f(m)</math> is the pdf evaluated at the median. Here <math>f(m) = \lambda e^{-\lambda m} = \lambda/2</math>, so <math>Var(\tilde{X}) \approx \frac{1}{n\lambda^2}</math>; see the simulation sketch after this list.
  • For a normal distribution with mean <math>\mu</math> and variance <math>\sigma^2</math>, the sample median has a limiting normal distribution with mean <math>\mu</math> and variance <math>\frac{1}{4nf(m)^2} = \frac{\pi \sigma^2}{2n}</math>.
  • Some references:
    • "Mathematical Statistics" by Jun Shao
    • "Probability and Statistics" by DeGroot and Schervish
    • "Order Statistics" by H.A. David and H.N. Nagaraja

the Holy Trinity (LRT, Wald, Score tests)

Don't invert that matrix

Different matrix decompositions/factorizations

set.seed(1234)
x <- matrix(rnorm(10*2), nr= 10)
cmat <- cov(x); cmat
# [,1]       [,2]
# [1,]  0.9915928 -0.1862983
# [2,] -0.1862983  1.1392095

# cholesky decom
d1 <- chol(cmat)
t(d1) %*% d1  # equal to cmat
d1  # upper triangle
# [,1]       [,2]
# [1,] 0.9957875 -0.1870864
# [2,] 0.0000000  1.0508131

# svd
d2 <- svd(cmat)
d2$u %*% diag(d2$d) %*% t(d2$v) # equal to cmat
d2$u %*% diag(sqrt(d2$d))
# [,1]      [,2]
# [1,] -0.6322816 0.7692937
# [2,]  0.9305953 0.5226872

Model Estimation with R

Model Estimation by Example Demonstrations with R. Michael Clark

Regression

Regression

Non- and semi-parametric regression

Mean squared error

Splines

k-Nearest neighbor regression

  • class::knn() is for classification; FNN::knn.reg() does k-NN regression.
  • k-NN regression in practice: boundary problem, discontinuities problem.
  • Weighted k-NN regression: want weight to be small when distance is large. Common choices - weight = kernel(xi, x)

Kernel regression

  • Instead of weighting only the nearest neighbors, weight ALL points. Nadaraya-Watson kernel weighted average:

<math>\hat{y}_q = \sum_i c_{qi} y_i/\sum_i c_{qi} = \frac{\sum_i \text{Kernel}_\lambda(\text{distance}(x_i, x_q)) \, y_i}{\sum_i \text{Kernel}_\lambda(\text{distance}(x_i, x_q))}</math>.

  • Choice of bandwidth <math>\lambda</math> is a bias-variance trade-off: a small <math>\lambda</math> over-fits, a large <math>\lambda</math> gives an over-smoothed fit; choose it by cross-validation.
  • Kernel regression leads to a locally constant fit.
  • Issues: high dimensions, data scarcity and computational complexity. See the ksmooth() sketch below.
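
In base R, stats::ksmooth() computes the Nadaraya-Watson estimate with a box or normal kernel (a minimal sketch with simulated data):

set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)
fit <- ksmooth(x, y, kernel = "normal", bandwidth = 1)  # bandwidth plays the role of lambda
plot(x, y); lines(fit, col = "red", lwd = 2)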

Principal component analysis

See PCA.

Partial Least Squares (PLS)

<math>X = T P^\mathrm{T} + E</math>
<math>Y = U Q^\mathrm{T} + F</math>
where X is an <math>n \times m</math> matrix of predictors, Y is an <math>n \times p</math> matrix of responses; T and U are <math>n \times l</math> matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); P and Q are, respectively, <math>m \times l</math> and <math>p \times l</math> orthogonal loading matrices; and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance between the projection matrices T and U.
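
A minimal sketch with the pls package (assumed to be installed), using the yarn example data shipped with it:

library(pls)
data(yarn)
fit <- plsr(density ~ NIR, ncomp = 3, data = yarn, validation = "CV")
summary(fit)   # cross-validated RMSEP and explained variance per component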

High dimension

dimRed package

dimRed package

Feature selection

Goodness-of-fit

Independent component analysis

ICA is another dimensionality reduction method.

ICA vs PCA

ICA vs FA

Robust independent component analysis

robustica: customizable robust independent component analysis 2022

Canonical correlation analysis

Non-negative CCA

Correspondence analysis

Non-negative matrix factorization

Optimization and expansion of non-negative matrix factorization

Nonlinear dimension reduction

The Specious Art of Single-Cell Genomics by Chari 2021

t-SNE

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.

Perplexity parameter

  • Balance attention between local and global aspects of the dataset
  • A guess about the number of close neighbors
  • In a real setting is important to try different values
  • Must be lower than the number of input records
  • Interactive t-SNE (online). In addition to perplexity, there are also learning-rate and max-iterations parameters.

Classifying digits with t-SNE: MNIST data

Below is an example from datacamp Advanced Dimensionality Reduction in R.

The mnist_sample data is very small (200 x 785). Here (Exploring handwritten digit classification: a tidy analysis of the MNIST dataset) is a larger dataset with 60k records (60000 x 785).

  1. Generating t-SNE features
    library(readr)
    library(dplyr)
    
    # 104MB
    mnist_raw <- read_csv("https://pjreddie.com/media/files/mnist_train.csv", col_names = FALSE)
    mnist_10k <- mnist_raw[1:10000, ]
    colnames(mnist_10k) <- c("label", paste0("pixel", 0:783))
    
    library(ggplot2)
    library(Rtsne)
    
    tsne <- Rtsne(mnist_10k[, -1], perplexity = 5)
    tsne_plot <- data.frame(tsne_x= tsne$Y[1:5000,1],
                            tsne_y = tsne$Y[1:5000,2],
                            digit = as.factor(mnist_10k[1:5000,]$label))
    # visualize obtained embedding
    ggplot(tsne_plot, aes(x= tsne_x, y = tsne_y, color = digit)) +
      ggtitle("MNIST embedding of the first 5K digits") +
      geom_text(aes(label = digit)) + theme(legend.position= "none")
    
  2. Computing centroids
    library(data.table)
    # Get t-SNE coordinates
    centroids <- as.data.table(tsne$Y[1:5000,])
    setnames(centroids, c("X", "Y"))
    centroids[, label := as.factor(mnist_10k[1:5000,]$label)]
    # Compute centroids
    centroids[, mean_X := mean(X), by = label]
    centroids[, mean_Y := mean(Y), by = label]
    centroids <- unique(centroids, by = "label")
    # visualize centroids
    ggplot(centroids, aes(x= mean_X, y = mean_Y, color = label)) +
      ggtitle("Centroids coordinates") + geom_text(aes(label = label)) +
      theme(legend.position = "none")
    
  3. Classifying new digits
    # Get new examples of digits 4 and 9
    distances <- as.data.table(tsne$Y[5001:10000,])
    setnames(distances, c("X" , "Y"))
    distances[, label := mnist_10k[5001:10000,]$label]
    distances <- distances[label == 4 | label == 9]
    # Compute the distance to the centroids
    distances[, dist_4 := sqrt((X - centroids[label==4,]$mean_X)^2 +
                               (Y - centroids[label==4,]$mean_Y)^2)]  # Euclidean distance
    dim(distances)
    # [1] 928   4
    
    # Plot distance to each centroid
    ggplot(distances, aes(x=dist_4, fill = as.factor(label))) + 
      geom_histogram(binwidth=5, alpha=.5, position="identity", show.legend = F)
    

Fashion MNIST data

  • fashion_mnist is only 500x785
  • keras has 60k x 785. Miniconda is required when we want to use the package.

tSNE vs PCA

Two groups example

suppressPackageStartupMessages({
  library(splatter)
  library(scater)
})

sim.groups <- splatSimulate(group.prob = c(0.5, 0.5), method = "groups",
                            verbose = FALSE)
sim.groups <- logNormCounts(sim.groups)
sim.groups <- runPCA(sim.groups)
plotPCA(sim.groups, colour_by = "Group") # 2 groups separated in PC1

sim.groups <- runTSNE(sim.groups)
plotTSNE(sim.groups, colour_by = "Group") # 2 groups separated in TSNE2

UMAP

GECO

GECO: gene expression clustering optimization app for non-linear data visualization of patterns

Visualize the random effects

http://www.quantumforest.com/2012/11/more-sense-of-random-effects/

Calibration

  • Search by image: graphical explanation of calibration problem
  • Does calibrating classification models improve prediction?
    • Calibrating a classification model can improve the reliability and accuracy of the predicted probabilities, but it may not necessarily improve the overall prediction performance of the model in terms of metrics such as accuracy, precision, or recall.
    • Calibration is about ensuring that the predicted probabilities from a model match the observed proportions of outcomes in the data. This can be important when the predicted probabilities are used to make decisions or when they are presented to users as a measure of confidence or uncertainty.
    • However, calibrating a model does not change its ability to discriminate between positive and negative outcomes. In other words, calibration does not affect how well the model separates the classes, but rather how accurately it estimates the probabilities of class membership.
    • In some cases, calibrating a model may improve its overall prediction performance by making the predicted probabilities more accurate. However, this is not always the case, and the impact of calibration on prediction performance may vary depending on the specific needs and goals of the analysis.
  • A real-world example of calibration in machine learning is in the field of fraud detection. In this case, it might be desirable to have the model predict probabilities of data belonging to each possible class instead of crude class labels. Gaining access to probabilities is useful for a richer interpretation of the responses, analyzing the model shortcomings, or presenting the uncertainty to the end-users. See A guide to model calibration | Wunderman Thompson Technology.
  • Another example where calibration is more important than prediction on new samples is in the field of medical diagnosis. In this case, it is important to have well-calibrated probabilities for the presence of a disease, so that doctors can make informed decisions about treatment. For example, if a diagnostic test predicts an 80% chance that a patient has a certain disease, doctors would expect that 80% of the time when such a prediction is made, the patient actually has the disease. This example does not mean that prediction on new samples is not feasible or not a concern, but rather that having well-calibrated probabilities is crucial for making accurate predictions and informed decisions.
  • Calibration: the Achilles heel of predictive analytics Calster 2019
  • https://www.itl.nist.gov/div898/handbook/pmd/section1/pmd133.htm Calibration and calibration curve.
    • Y = voltage (observed), X = temperature (true/ideal). The calibration curve for a thermocouple is often constructed by comparing thermocouple output (observed) to relatively precise thermometer data (true).
    • When a new temperature is measured with the thermocouple, the voltage is converted to temperature terms by plugging the observed voltage into the regression equation and solving for temperature.
    • It is important to note that the thermocouple measurements, made on the secondary measurement scale, are treated as the response variable, and the more precise thermometer results, on the primary scale, are treated as the predictor variable, because this best satisfies the underlying assumptions (Y = observed, X = true) of the analysis.
    • Calibration interval
    • In almost all calibration applications the ultimate quantity of interest is the true value of the primary-scale measurement method associated with a measurement made on the secondary scale.
    • It seems the x-axis and y-axis have similar ranges in many applications.
  • An Exercise in the Real World of Design and Analysis, Denby, Landwehr, and Mallows 2001. Inverse regression
  • How to determine calibration accuracy/uncertainty of a linear regression?
  • Linear Regression and Calibration Curves
  • Regression and calibration Shaun Burke
  • calibrate package
  • investr: An R Package for Inverse Estimation. Paper
  • The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models by Kattan and Gerds 2018. The following code demonstrates Figure 2.
    # Odds ratio =1 and calibrated model
    set.seed(666)
    x = rnorm(1000)           
    z1 = 1 + 0*x        
    pr1 = 1/(1+exp(-z1))
    y1 = rbinom(1000,1,pr1)  
    mean(y1) # .724, marginal prevalence of the outcome
    dat1 <- data.frame(x=x, y=y1)
    newdat1 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr1))
    
    # Odds ratio =1 and severely miscalibrated model
    set.seed(666)
    x = rnorm(1000)           
    z2 =  -2 + 0*x        
    pr2 = 1/(1+exp(-z2))  
    y2 = rbinom(1000,1,pr2)  
    mean(y2) # .12
    dat2 <- data.frame(x=x, y=y2)
    newdat2 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr2))
    
    library(riskRegression)
    lrfit1 <- glm(y ~ x, data = dat1, family = 'binomial')
    IPA(lrfit1, newdata = newdat1)
    #     Variable     Brier           IPA     IPA.gain
    # 1 Null model 0.1984710  0.000000e+00 -0.003160010
    # 2 Full model 0.1990982 -3.160010e-03  0.000000000
    # 3          x 0.1984800 -4.534668e-05 -0.003114664
    1 - 0.1990982/0.1984710
    # [1] -0.003160159
    
    lrfit2 <- glm(y ~ x, data = dat2, family = 'binomial')
    IPA(lrfit2, newdata = newdat1)
    #     Variable     Brier       IPA     IPA.gain
    # 1 Null model 0.1984710  0.000000 -1.859333763
    # 2 Full model 0.5674948 -1.859334  0.000000000
    # 3          x 0.5669200 -1.856437 -0.002896299
    1 - 0.5674948/0.1984710
    # [1] -1.859334
    From the simulated data, we see IPA = -3.16e-3 for a calibrated model and IPA = -1.86 for a severely miscalibrated model.

ROC curve

See ROC.

NRI (Net reclassification improvement)

Maximum likelihood

Difference of partial likelihood, profile likelihood and marginal likelihood

EM Algorithm

Mixture model

mixComp: Estimation of the Order of Mixture Distributions

MLE

Maximum Likelihood Distilled

Efficiency of an estimator

What does it mean by more “efficient” estimator

Inference

infer package

Generalized Linear Model

Link function

Link Functions versus Data Transforms

Extract coefficients, z, p-values

Use coef(summary(glmObject))
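
The glm.D93 object below is the Poisson regression example from ?glm (Dobson 1990); for completeness, it can be constructed as follows:

counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9)
treatment <- gl(3, 3)
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())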

> coef(summary(glm.D93))
                 Estimate Std. Error       z value     Pr(>|z|)
(Intercept)  3.044522e+00  0.1708987  1.781478e+01 5.426767e-71
outcome2    -4.542553e-01  0.2021708 -2.246889e+00 2.464711e-02
outcome3    -2.929871e-01  0.1927423 -1.520097e+00 1.284865e-01
treatment2   1.337909e-15  0.2000000  6.689547e-15 1.000000e+00
treatment3   1.421085e-15  0.2000000  7.105427e-15 1.000000e+00

Quasi Likelihood

Quasi-likelihood plays the role of the log-likelihood: only the mean-variance relationship, rather than a full distribution, needs to be specified. The quasi-score function (the first derivative of the quasi-likelihood) is the estimating equation.
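
A minimal sketch using the Dobson data from the glm.D93 example above: with family = quasipoisson(), the mean model is the same as the Poisson fit, but a dispersion parameter is estimated from the Pearson residuals instead of being fixed at 1.

counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)  # Dobson data again
outcome <- gl(3, 1, 9); treatment <- gl(3, 3)
fit.qp <- glm(counts ~ outcome + treatment, family = quasipoisson())
summary(fit.qp)$dispersion  # estimated phi; the Poisson family fixes it at 1
# coefficient estimates match the Poisson fit; standard errors are scaled by sqrt(phi)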

IRLS

Plot

https://strengejacke.wordpress.com/2015/02/05/sjplot-package-and-related-online-manuals-updated-rstats-ggplot/

Deviance, stats::deviance() and glmnet::deviance.glmnet() from R

## an example with offsets from Venables & Ripley (2002, p.189)
utils::data(anorexia, package = "MASS")

anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                family = gaussian, data = anorexia)
summary(anorex.1)

# Call:
#   glm(formula = Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, 
#       data = anorexia)
# 
# Deviance Residuals: 
#   Min        1Q    Median        3Q       Max  
# -14.1083   -4.2773   -0.5484    5.4838   15.2922  
# 
# Coefficients:
#   Estimate Std. Error t value Pr(>|t|)    
# (Intercept)  49.7711    13.3910   3.717 0.000410 ***
#   Prewt        -0.5655     0.1612  -3.509 0.000803 ***
#   TreatCont    -4.0971     1.8935  -2.164 0.033999 *  
#   TreatFT       4.5631     2.1333   2.139 0.036035 *  
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for gaussian family taken to be 48.69504)
# 
# Null deviance: 4525.4  on 71  degrees of freedom
# Residual deviance: 3311.3  on 68  degrees of freedom
# AIC: 489.97
# 
# Number of Fisher Scoring iterations: 2

deviance(anorex.1)
# [1] 3311.263
  • In glmnet package. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Null deviance is defined to be 2*(loglike_sat -loglike(Null)); The NULL model refers to the intercept model, except for the Cox, where it is the 0 model. Hence dev.ratio=1-deviance/nulldev, and this deviance method returns (1-dev.ratio)*nulldev.
x=matrix(rnorm(100*2),100,2)
y=rnorm(100)
fit1=glmnet(x,y) 
deviance(fit1)  # one for each lambda
#  [1] 98.83277 98.53893 98.29499 98.09246 97.92432 97.78472 97.66883
#  [8] 97.57261 97.49273 97.41327 97.29855 97.20332 97.12425 97.05861
# ...
# [57] 96.73772 96.73770
fit2 <- glmnet(x, y, lambda=.1) # fix lambda
deviance(fit2)
# [1] 98.10212
deviance(glm(y ~ x))
# [1] 96.73762
sum(residuals(glm(y ~ x))^2)
# [1] 96.73762

Saturated model

Testing

Generalized Additive Models

Simulate data

Density plot

# plot a Weibull distribution with shape and scale
func <- function(x) dweibull(x, shape = 1, scale = 3.38)
curve(func, .1, 10)

func <- function(x) dweibull(x, shape = 1.1, scale = 3.38)
curve(func, .1, 10)

The shape parameter governs the shape of the density function and the failure rate (a hazard sketch follows this list).

  • Shape <= 1: the density is monotonically decreasing (convex), not hat-shaped.
  • Shape = 1: the failure rate (hazard function) is constant; this is the exponential distribution.
  • Shape > 1: the failure rate increases with time.
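
A minimal sketch of how the shape parameter drives the Weibull hazard [math]\displaystyle{ h(t) = \frac{shape}{scale}\left(\frac{t}{scale}\right)^{shape-1} }[/math] (the scale 3.38 matches the density example above):

hweibull <- function(t, shape, scale) (shape / scale) * (t / scale)^(shape - 1)
t <- seq(0.1, 10, by = 0.1)
plot(t, hweibull(t, shape = 0.5, scale = 3.38), type = "l", ylab = "hazard")  # decreasing
lines(t, hweibull(t, shape = 1,   scale = 3.38), lty = 2)  # constant (exponential)
lines(t, hweibull(t, shape = 1.5, scale = 3.38), lty = 3)  # increasing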

Simulate data from a specified density

Permuted block randomization

Permuted block randomization using simstudy

Correlated data

Clustered data with marginal correlations

Generating clustered data with marginal correlations

Signal to noise ratio/SNR

[math]\displaystyle{ SNR = \frac{\sigma^2_{signal}}{\sigma^2_{noise}} = \frac{Var(f(X))}{Var(e)} }[/math] if Y = f(X) + e
  • The SNR is related to the correlation of Y and f(X). Assume X and e are independent ([math]\displaystyle{ X \perp e }[/math]):
[math]\displaystyle{ \begin{align} Cor(Y, f(X)) &= Cor(f(X)+e, f(X)) \\ &= \frac{Cov(f(X)+e, f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\ &= \frac{Var(f(X))}{\sqrt{Var(f(X)+e) Var(f(X))}} \\ &= \frac{\sqrt{Var(f(X))}}{\sqrt{Var(f(X)) + Var(e)}} = \frac{\sqrt{SNR}}{\sqrt{SNR + 1}} \\ &= \frac{1}{\sqrt{1 + Var(e)/Var(f(X))}} = \frac{1}{\sqrt{1 + SNR^{-1}}} \end{align} }[/math]

File:SnrVScor.png
Or [math]\displaystyle{ SNR = \frac{Cor^2}{1-Cor^2} }[/math]
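
A quick numeric check of the identity [math]\displaystyle{ Cor(Y, f(X)) = \sqrt{SNR/(SNR+1)} }[/math] (a sketch with simulated normal data):

set.seed(1)
n <- 1e5
fx <- rnorm(n, sd = 2)   # Var(f(X)) = 4
e  <- rnorm(n, sd = 1)   # Var(e) = 1, so SNR = 4
y  <- fx + e
cor(y, fx)               # empirical correlation
sqrt(4 / (4 + 1))        # theoretical value, about 0.894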

Some examples of signal to noise ratio

Effect size, Cohen's d and volcano plot

[math]\displaystyle{ \theta = \frac{\mu_1 - \mu_2}{\sigma}, }[/math]
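
A minimal sketch computing Cohen's d by hand with a pooled standard deviation (simulated groups; the effsize and effectsize packages offer ready-made functions):

set.seed(1)
g1 <- rnorm(30, mean = 1)
g2 <- rnorm(30, mean = 0)
sp <- sqrt(((length(g1) - 1) * var(g1) + (length(g2) - 1) * var(g2)) /
           (length(g1) + length(g2) - 2))  # pooled SD
(mean(g1) - mean(g2)) / sp                 # Cohen's d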

Treatment/control

  • simdata() from biospear package
  • data.gen() from ROCSI package. The response contains continuous, binary and survival outcomes. The inputs include the prevalence of predictive biomarkers, the effect size (beta) for the prognostic biomarker, etc.

Cauchy distribution has no expectation

https://en.wikipedia.org/wiki/Cauchy_distribution

replicate(10, mean(rcauchy(10000)))

Dirichlet distribution

  • Dirichlet distribution
    • It is a multivariate generalization of the beta distribution
    • The Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution.
  • dirmult::rdirichlet()
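
A quick sketch: each draw from a Dirichlet distribution is a probability vector that sums to 1.

library(dirmult)
x <- rdirichlet(n = 5, alpha = c(1, 2, 3))  # 5 draws from Dirichlet(1, 2, 3)
rowSums(x)                                  # all equal to 1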

Relationships among probability distributions

https://en.wikipedia.org/wiki/Relationships_among_probability_distributions

What is the probability that two persons have the same initials

The post. The probability that at least two persons share the same initials depends on the size of the group. For a team of 8 people, simulations suggest the probability is close to 4.1%. The probability increases with group size; with 1000 people in the room it is nearly 100%. How many people do you need to guarantee that two of them share the same initials?
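
A simulation sketch, assuming two-letter initials with all 26 x 26 = 676 combinations equally likely (real initials are not uniform, so this is only an approximation; by the pigeonhole principle, 677 people guarantee a match):

set.seed(1)
n <- 8
mean(replicate(1e4, any(duplicated(sample(676, n, replace = TRUE)))))
# about 0.041
1 - prod((676 - 0:(n - 1)) / 676)  # exact, birthday-problem style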

Multiple comparisons

Take an example: suppose 550 out of 10,000 genes are significant at the .05 level.

  1. P-value < .05 ==> Expect .05*10,000=500 false positives
  2. False discovery rate < .05 ==> Expect .05*550 =27.5 false positives
  3. Family-wise error rate < .05 ==> The probability of at least 1 false positive < .05

According to Lifetime Risk of Developing or Dying From Cancer, there is a 39.7% risk of developing a cancer for a male during his lifetime (in other words, 1 out of every 2.52 men in the US will develop some kind of cancer during his lifetime) and 37.6% for a female. So the probability of getting at least one cancer patient in a 3-generation family (one man and one woman per generation) is 1 - 0.603^3 * 0.624^3 = 0.95.

Flexible method

?GSEABenchmarkeR::runDE. Unadjusted p-values (used when too few DE genes are found), FDR, or Bonferroni (used when too many DE genes are found) are applied depending on the proportion of DE genes.

Family-Wise Error Rate (FWER)

Bonferroni

False Discovery Rate/FDR

Suppose [math]\displaystyle{ p_1 \leq p_2 \leq ... \leq p_n }[/math]. Then

[math]\displaystyle{ \text{FDR}_i = \min(1, n \cdot p_i / i) }[/math].

So if the number of tests ([math]\displaystyle{ n }[/math]) is large and/or the original p value ([math]\displaystyle{ p_i }[/math]) is large, then FDR can hit the value 1.

However, the simple formula above does not guarantee the monotonicity property from the FDR. So the calculation in R is more complicated. See How Does R Calculate the False Discovery Rate.
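
A small sketch contrasting the simple formula with p.adjust(); the monotonicity enforcement (a cumulative minimum taken from the largest p-value downward) mirrors R's own BH implementation:

p <- c(0.003, 0.012, 0.014, 0.04, 0.32)      # already sorted
n <- length(p)
pmin(1, n * p / seq_len(n))                  # simple formula; not monotone (2nd > 3rd)
o <- order(p, decreasing = TRUE)
pmin(1, cummin(n / (n:1) * p[o]))[order(o)]  # monotone version
p.adjust(p, method = "BH")                   # identical to the line above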

Below are the histograms of p-values and FDR (BH-adjusted) from a real data set (Pomeroy in BRB-ArrayTools).

File:Hist bh.svg

Next is a scatterplot with histograms on the margins from a null data set. The curve looks like f(x) = log(x).

File:Scatterhist.svg

q-value

q-value is defined as the minimum FDR that can be attained when calling that feature significant (i.e., expected proportion of false positives incurred when calling that feature significant).

If gene X has a q-value of 0.013 it means that 1.3% of genes that show p-values at least as small as gene X are false positives.

Another view: q-value = FDR adjusted p-value. A p-value of 5% means that 5% of all tests will result in false positives. A q-value of 5% means that 5% of significant results will result in false positives. here.

Double dipping

Double dipping

SAM/Significance Analysis of Microarrays

The percentile option is used to define the number of falsely called genes based on 'B' permutations. If we use the 90th percentile, the number of significant genes will be smaller than if we use the 50th percentile/median.

In the BRCA dataset, using the 90th percentile yields 29 significant genes vs 183 genes if we use the median.

Required number of permutations for a permutation-based p-value

library("iterpc")

multichoose(c(3,1,1)) # [1] 20
multichoose(c(10,10)) |> log10()  # [1] 5.266599
multichoose(c(100,100), bigz = T) |> log10() # [1] 58.95688
multichoose(c(100,100,100), bigz = T) |> log10() # [1] 140.5758

Multivariate permutation test

In the BRCA dataset, using 80% confidence gives 116 genes vs 237 genes if we use 50% confidence (assuming the maximum proportion of false discoveries is 10%). The method is published in EL Korn, JF Troendle, LM McShane and R Simon, Controlling the number of false discoveries: Application to high dimensional genomic data, Journal of Statistical Planning and Inference, vol 124, 379-398 (2004).

The role of the p-value in the multitesting problem

https://www.tandfonline.com/doi/full/10.1080/02664763.2019.1682128

String Permutations Algorithm

https://youtu.be/nYFd7VHKyWQ

combinat package

Find all Permutations

coin package: Resampling

Resampling Statistics

Empirical Bayes Normal Means Problem with Correlated Noise

Solving the Empirical Bayes Normal Means Problem with Correlated Noise Sun 2018

The package cashr and the source code of the paper

Bayes

Bayes factor

Empirical Bayes method

Naive Bayes classifier

Understanding Naïve Bayes Classifier Using R

MCMC

Speeding up Metropolis-Hastings with Rcpp

offset() function

Offset in Poisson regression

  1. We need to model rates instead of counts
  2. More generally, you use offsets because the units of observation are different in some dimension (different populations, different geographic sizes) and the outcome is proportional to that dimension.

An example from here

Y  <- c(15,  7, 36,  4, 16, 12, 41, 15)
N  <- c(4949, 3534, 12210, 344, 6178, 4883, 11256, 7125)
x1 <- c(-0.1, 0, 0.2, 0, 1, 1.1, 1.1, 1)
x2 <- c(2.2, 1.5, 4.5, 7.2, 4.5, 3.2, 9.1, 5.2)

glm(Y ~ offset(log(N)) + (x1 + x2), family=poisson) # two variables
# Coefficients:
# (Intercept)           x1           x2
#     -6.172       -0.380        0.109
#
# Degrees of Freedom: 7 Total (i.e. Null);  5 Residual
# Null Deviance:	    10.56
# Residual Deviance: 4.559 	AIC: 46.69
glm(Y ~ offset(log(N)) + I(x1+x2), family=poisson)  # one variable
# Coefficients:
# (Intercept)   I(x1 + x2)
#   -6.12652      0.04746
#
# Degrees of Freedom: 7 Total (i.e. Null);  6 Residual
# Null Deviance:	    10.56
# Residual Deviance: 8.001 	AIC: 48.13

Offset in Cox regression

An example from biospear::PCAlasso()

coxph(Surv(time, status) ~ offset(off.All), data = data)
# Call:  coxph(formula = Surv(time, status) ~ offset(off.All), data = data)
#
# Null model
#   log likelihood= -2391.736 
#   n= 500 

# versus without using offset()
coxph(Surv(time, status) ~ off.All, data = data)
# Call:
# coxph(formula = Surv(time, status) ~ off.All, data = data)
#
#          coef exp(coef) se(coef)    z    p
# off.All 0.485     1.624    0.658 0.74 0.46
#
# Likelihood ratio test=0.54  on 1 df, p=0.5
# n= 500, number of events= 438 
coxph(Surv(time, status) ~ off.All, data = data)$loglik
# [1] -2391.702 -2391.430    # initial coef estimate, final coef

Offset in linear regression

Overdispersion

https://en.wikipedia.org/wiki/Overdispersion

Var(Y) = phi * E(Y). If phi > 1, then it is overdispersion relative to Poisson. If phi <1, we have under-dispersion (rare).

Heterogeneity

The Poisson model fit is not good when residual deviance/df >> 1. The lack of fit may be due to missing data or covariates, or to overdispersion.

Subjects within each covariate combination still differ greatly.

Consider Quasi-Poisson or negative binomial.
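
A minimal sketch of checking for overdispersion: compare the Pearson chi-square statistic to the residual degrees of freedom (data simulated as negative binomial, so the Poisson fit is overdispersed):

set.seed(1)
x <- rnorm(200)
y <- rnbinom(200, size = 1, mu = exp(1 + 0.5 * x))  # overdispersed counts
fit <- glm(y ~ x, family = poisson())
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
# >> 1 here; consider family = quasipoisson() or MASS::glm.nb()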

Test of overdispersion or underdispersion in Poisson models

https://stats.stackexchange.com/questions/66586/is-there-a-test-to-determine-whether-glm-overdispersion-is-significant

Poisson

Negative Binomial

The mean of the Poisson distribution can itself be thought of as a random variable drawn from the gamma distribution thereby introducing an additional free parameter.
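
A sketch verifying the gamma-mixture construction: Poisson counts with gamma-distributed means match a negative binomial with the same size and mu:

set.seed(1)
n <- 1e5; size <- 2; mu <- 4
lambda <- rgamma(n, shape = size, scale = mu / size)  # random Poisson means
y1 <- rpois(n, lambda)                                # gamma mixture of Poissons
y2 <- rnbinom(n, size = size, mu = mu)                # direct negative binomial
c(mean(y1), var(y1))  # approximately mu and mu + mu^2/size = 12
c(mean(y2), var(y2))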

Binomial

Count data

Zero counts

Bias

Bias in Small-Sample Inference With Count-Data Models Blackburn 2019

Survival data analysis

See Survival data analysis

Logistic regression

Simulate binary data from the logistic model

https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression

set.seed(666)
x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
 
#now feed it to glm:
df = data.frame(y=y,x1=x1,x2=x2)
glm( y~x1+x2,data=df,family="binomial")

Building a Logistic Regression model from scratch

https://www.analyticsvidhya.com/blog/2015/10/basics-logistic-regression

Algorithm didn’t converge & probabilities 0/1

Prediction

Odds ratio

  • https://en.wikipedia.org/wiki/Odds_ratio. It seems a larger OR does not imply a smaller Fisher's exact p-value. See an example on Fig 4 here.
  • Odds ratio = exp(coefficient). For example, if the coefficient for a predictor variable in your logistic regression model is 0.5, the odds ratio for that variable would be: exp(0.5) = 1.64. This means that, for every unit increase in the predictor variable, the odds of the binary outcome occurring increase by a factor of 1.64. A larger odds ratio indicates a stronger association between the predictor variable and the binary outcome, while a smaller odds ratio indicates a weaker association.
  • why the odds ratio is exp(coefficient) in logistic regression? The odds ratio is the exponent of the coefficient in a logistic regression model because the logistic regression model is based on the logit function, which is the natural logarithm of the odds ratio. The logit function takes the following form: logit(p) = log(p/(1-p)), where p is the probability of the binary outcome occurring.
  • Clinical example: Imagine that you are conducting a study to investigate the association between body mass index (BMI) and the risk of developing type 2 diabetes. Fit a logistic regression using BMI as the covariate. Calculate the odds ratio for the BMI variable: exp(coefficient) = 1.64. This means that, for every unit increase in BMI, the odds of a patient developing type 2 diabetes increase by a factor of 1.64.
  • Probability vs. odds: Probability and odds differ in several ways. For example, probability (of an event) typically appears as a percentage, while odds can be expressed as a fraction or ratio (the ratio of the number of ways the event can occur to the number of ways it cannot). Another difference is that probability ranges between zero and one, while odds range from zero to infinity.
  • Calculate the odds ratio from the coefficient estimates; see this post.
    require(MASS)
    N  <- 100               # generate some data
    X1 <- rnorm(N, 175, 7)
    X2 <- rnorm(N,  30, 8)
    X3 <- abs(rnorm(N, 60, 30))
    Y  <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
    
    # dichotomize Y and do logistic regression
    Yfac   <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
    glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
    
    exp(cbind(coef(glmFit), confint(glmFit)))  
    

AUC

A small introduction to the ROCR package

        predict.glm()               ROCR::prediction()           ROCR::performance()
glmobj ---------------> predictTest ------------------> ROCRpred -------------------> AUC
        (newdata)                   (labels)
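
A minimal runnable sketch of this pipeline on simulated data:

library(ROCR)
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)
predictTest <- predict(fit, type = "response")
ROCRpred <- prediction(predictTest, labels = y)
performance(ROCRpred, measure = "auc")@y.values[[1]]  # the AUC
plot(performance(ROCRpred, "tpr", "fpr"))             # the ROC curve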

Gompertz function

Medical applications

RCT

Subgroup analysis

Other related keywords: recursive partitioning, randomized clinical trials (RCT)

Interaction analysis

Statistical Learning

LDA (Fisher's linear discriminant), QDA

Bagging

Chapter 8 of the book.

  • Bootstrap mean is approximately a posterior average.
  • Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
[math]\displaystyle{ \hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x). }[/math]
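
A minimal bagging sketch with regression trees (rpart assumed; simulated data): average the bootstrap predictions to get [math]\displaystyle{ \hat{f}_{bag}(x) }[/math]:

library(rpart)
set.seed(1)
n <- 200
x <- runif(n, -2, 2)
y <- sin(2 * x) + rnorm(n, sd = 0.3)
d <- data.frame(x, y)
grid <- data.frame(x = seq(-2, 2, length = 100))
B <- 50
preds <- replicate(B, {
  idx <- sample(n, replace = TRUE)                # a bootstrap sample
  predict(rpart(y ~ x, data = d[idx, ]), grid)    # one bootstrap fit
})
f_bag <- rowMeans(preds)                          # the bagging estimate
plot(x, y); lines(grid$x, f_bag, lwd = 2)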

Where Bagging Might Work Better Than Boosting

CLASSIFICATION FROM SCRATCH, BAGGING AND FORESTS 10/8

Boosting

AdaBoost

AdaBoost.M1 by Freund and Schapire (1997):

Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers [math]\displaystyle{ G_m(x), m=1,2,\dots,M. }[/math] The error rate of a classifier on the training sample is [math]\displaystyle{ \bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)). }[/math]

The predictions from all of them are combined through a weighted majority vote to produce the final prediction: [math]\displaystyle{ G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)]. }[/math] Here [math]\displaystyle{ \alpha_1,\alpha_2,\dots,\alpha_M }[/math] are computed by the boosting algorithm and weight the contribution of each respective [math]\displaystyle{ G_m(x) }[/math]. Their effect is to give higher influence to the more accurate classifiers in the sequence.
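
A compact AdaBoost.M1 sketch with decision stumps (rpart with maxdepth = 1; y coded as -1/1, simulated data):

library(rpart)
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y <- ifelse(x1 + x2 > 0, 1, -1)
d <- data.frame(y = factor(y), x1, x2)
M <- 20
w <- rep(1/n, n)                        # initial observation weights
alpha <- numeric(M); stumps <- vector("list", M)
for (m in 1:M) {
  stumps[[m]] <- rpart(y ~ x1 + x2, data = d, weights = w,
                       control = rpart.control(maxdepth = 1))
  pred <- ifelse(predict(stumps[[m]], d, type = "class") == "1", 1, -1)
  err <- sum(w * (pred != y)) / sum(w)  # weighted training error
  alpha[m] <- log((1 - err) / err)      # classifier weight
  w <- w * exp(alpha[m] * (pred != y))  # up-weight misclassified cases
  w <- w / sum(w)
}
vote <- rowSums(sapply(1:M, function(m)
  alpha[m] * ifelse(predict(stumps[[m]], d, type = "class") == "1", 1, -1)))
mean(sign(vote) == y)                   # training accuracy of G(x)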

Dropout regularization

DART: Dropout Regularization in Boosting Ensembles

Gradient boosting

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.

  • Gradient Descent in R by Econometric Sense. Example of using the trivial cost function 1.2 * (x-2)^2 + 3.2. R code is provided and visualization of steps is interesting! The unknown parameter is the learning rate.
    repeat until convergence {
      Xn+1 = Xn - α∇F(Xn) 
    }
    

    Where ∇F(x) would be the derivative for the cost function at hand and α is the learning rate.

The error function from a simple linear regression looks like

[math]\displaystyle{ Err(m,b) = \frac{1}{n}\sum_{i=1}^n (y_i - (m x_i + b))^2, }[/math]

We first compute the gradient for each parameter.

[math]\displaystyle{ \begin{align} \frac{\partial Err}{\partial m} &= \frac{2}{n} \sum_{i=1}^n -x_i(y_i - (m x_i + b)), \\ \frac{\partial Err}{\partial b} &= \frac{2}{n} \sum_{i=1}^n -(y_i - (m x_i + b)) \end{align} }[/math]

The gradient descent algorithm uses an iterative method to update the estimates using a tuning parameter called learning rate.

new_m = m_current - (learningRate * m_gradient)
new_b = b_current - (learningRate * b_gradient)

After each iteration, the gradient gets closer to zero. See Coding in R for the simple linear regression; a minimal sketch is given below.
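
A minimal sketch of these updates for simple linear regression (simulated data; the learning rate and iteration count are arbitrary choices):

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)
m <- 0; b <- 0; rate <- 0.05
for (i in 1:500) {
  resid <- y - (m * x + b)
  m_grad <- -2 * mean(x * resid)  # dErr/dm
  b_grad <- -2 * mean(resid)      # dErr/db
  m <- m - rate * m_grad
  b <- b - rate * b_grad
}
c(b, m)          # close to the least squares solution
coef(lm(y ~ x))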

Gradient descent vs Newton's method

Classification and Regression Trees (CART)

Construction of the tree classifier

  • Node proportion
[math]\displaystyle{ p(1|t) + \dots + p(6|t) = 1 }[/math] where [math]\displaystyle{ p(j|t) }[/math] denotes the node proportion (the class proportion of class j in node t). Here we assume there are 6 classes.
  • Impurity of node t
[math]\displaystyle{ i(t) }[/math] is a nonnegative function [math]\displaystyle{ \phi }[/math] of [math]\displaystyle{ p(1|t), \dots, p(6|t) }[/math] such that [math]\displaystyle{ \phi(1/6,1/6,\dots,1/6) }[/math] = maximum and [math]\displaystyle{ \phi(1,0,\dots,0)=0, \phi(0,1,0,\dots,0)=0, \dots, \phi(0,0,0,0,0,1)=0 }[/math]. That is, the node impurity is largest when all classes are equally mixed together in the node, and smallest when the node contains only one class.
  • Entropy impurity
[math]\displaystyle{ i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t). }[/math] (The Gini index, another common impurity measure, is [math]\displaystyle{ i(t) = 1 - \sum_{j=1}^6 p(j|t)^2. }[/math])
  • Goodness of the split s on node t
[math]\displaystyle{ \Delta i(s, t) = i(t) - p_L i(t_L) - p_R i(t_R). }[/math] where a proportion [math]\displaystyle{ p_L }[/math] of the cases in t go into the left node [math]\displaystyle{ t_L }[/math] and a proportion [math]\displaystyle{ p_R }[/math] go into the right node [math]\displaystyle{ t_R }[/math].

A tree was grown in the following way: At the root node [math]\displaystyle{ t_1 }[/math], a search was made through all candidate splits to find that split [math]\displaystyle{ s^* }[/math] which gave the largest decrease in impurity;

[math]\displaystyle{ \Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1). }[/math]
  • Class character of a terminal node was determined by the plurality rule. Specifically, if [math]\displaystyle{ p(j_0|t)=\max_j p(j|t) }[/math], then t was designated as a class [math]\displaystyle{ j_0 }[/math] terminal node.
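
A small sketch computing the impurity and the goodness of a candidate split for a two-class node (the counts are made up for illustration):

entropy <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }
gini    <- function(p) 1 - sum(p^2)
p_parent <- c(10, 6) / 16   # parent node: 10 class A, 6 class B
p_left   <- c(8, 1) / 9     # left child after the split
p_right  <- c(2, 5) / 7     # right child
entropy(p_parent) - (9/16) * entropy(p_left) - (7/16) * entropy(p_right)
# the decrease in impurity, Delta i(s, t); pick the split s maximizing it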

R packages

Partially additive (generalized) linear model trees

Supervised Classification, Logistic and Multinomial

Variable selection

Review

Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.

Variable selection and variable importance plot

Variable selection and cross-validation

Mallow Cp

Mallows's Cp addresses the issue of overfitting. The Cp statistic calculated on a sample of data estimates the mean squared prediction error (MSPE).

[math]\displaystyle{ E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2, }[/math]

The Cp statistic is defined as

[math]\displaystyle{ C_p={SSE_p \over S^2} - N + 2P. }[/math]
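
A hand-computation sketch of Cp, using the full-model variance estimate for [math]\displaystyle{ S^2 }[/math] (mtcars, with an arbitrary submodel):

fit_full <- lm(mpg ~ ., data = mtcars)
fit_sub  <- lm(mpg ~ wt + hp, data = mtcars)
s2 <- summary(fit_full)$sigma^2     # S^2 from the full model
sse_p <- sum(residuals(fit_sub)^2)  # SSE of the submodel
n <- nrow(mtcars)
p <- length(coef(fit_sub))          # parameters in the submodel
sse_p / s2 - n + 2 * p              # Mallows's Cp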

Variable selection for mode regression

http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Chen & Zhou, Journal of applied statistics ,June 2017

lmSubsets

lmSubsets: Exact variable-subset selection in linear regression. 2020

Permutation method

BASIC XAI with DALEX — Part 2: Permutation-based variable importance

Neural network

Support vector machine (SVM)

Quadratic Discriminant Analysis (qda), KNN

Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN

KNN

KNN Algorithm Machine Learning

Regularization

Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.

Regularization: Ridge, Lasso and Elastic Net from datacamp.com. Bias and variance trade-off in parameter estimates was used to lead to the discussion.

Regularized least squares

https://en.wikipedia.org/wiki/Regularized_least_squares. Ridge/lasso/elastic net regressions are special cases.

Ridge regression

Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.

ridge regression with glmnet

Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to a constraint [math]\displaystyle{ \sum|\beta_j|^2 \le t }[/math]. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator [math]\displaystyle{ \hat{\beta} = (X^TX + \lambda I)^{-1} X^T y }[/math] where [math]\displaystyle{ \lambda=\lambda(t) }[/math], a function of t, its variance is smaller than that of the OLS estimator.

The solution exists if [math]\displaystyle{ \lambda \gt 0 }[/math] even if [math]\displaystyle{ n \lt p }[/math].

Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.

Geometrically (think of the contour plot of the cost function), the L1 penalty (the sum of absolute values of the coefficients) gives some coefficients a positive probability of being exactly zero (a coefficient estimate hitting a corner of the diamond in the 2D case). For example, in the 2D case (x-axis = [math]\displaystyle{ \beta_0 }[/math], y-axis = [math]\displaystyle{ \beta_1 }[/math]), the L1 penalty region [math]\displaystyle{ |\beta_0| + |\beta_1| \le t }[/math] is a diamond whereas the L2 penalty region ([math]\displaystyle{ \beta_0^2 + \beta_1^2 \le t }[/math]) is a circle.
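
A quick visual sketch of the contrast: ridge shrinks all coefficient paths smoothly toward zero, while lasso paths hit exactly zero (glmnet, simulated data):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
op <- par(mfrow = c(1, 2))
plot(glmnet(x, y, alpha = 0), xvar = "lambda", main = "ridge")  # shrink only
plot(glmnet(x, y, alpha = 1), xvar = "lambda", main = "lasso")  # shrink to zero
par(op)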

Lasso/glmnet, adaptive lasso and FAQs

glmnet

Lasso logistic regression

https://freakonometrics.hypotheses.org/52894

Lagrange Multipliers

A Simple Explanation of Why Lagrange Multipliers Works

How to solve lasso/convex optimization

Quadratic programming

Constrained optimization

Jaya Package. Jaya Algorithm is a gradient-free optimization algorithm. It can be used for Maximization or Minimization of a function for solving both constrained and unconstrained optimization problems. It does not contain any hyperparameters.

Highly correlated covariates

1. Elastic net

2. Group lasso

Grouped data

Other Lasso

Comparison by plotting

If we are running simulation, we can use the DALEX package to visualize the fitting result from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.

Prediction

Prediction, Estimation, and Attribution Efron 2020

Postprediction inference/Inference based on predicted outcomes

Methods for correcting inference based on outcomes predicted by machine learning Wang 2020. postpi package.

SHAP/SHapley Additive exPlanation: feature importance for each class

Imbalanced/unbalanced Classification

See ROC.

Deep Learning

Tensor Flow (tensorflow package)

Biological applications

Machine learning resources

The Bias-Variance Trade-Off & "DOUBLE DESCENT" in the test error

https://twitter.com/daniela_witten/status/1292293102103748609 and an easy to read Thread Reader.

  • (Thread #17) The key point is with 20 DF, n=p, and there's exactly ONE least squares fit that has zero training error. And that fit happens to have oodles of wiggles.....
  • (Thread #18) but as we increase the DF so that p>n, there are TONS of interpolating least squares fits. The MINIMUM NORM least squares fit is the "least wiggly" of those zillions of fits. And the "least wiggly" among them is even less wiggly than the fit when p=n !!!
  • (Thread #19) "double descent" is happening b/c DF isn't really the right quantity for the x-axis: like, the fact that we are choosing the minimum norm least squares fit actually means that the spline with 36 DF is **less** flexible than the spline with 20 DF.
  • (Thread #20) if had used a ridge penalty when fitting the spline (instead of least squares)? Well then we wouldn't have interpolated training set, we wouldn't have seen double descent, AND we would have gotten better test error (for the right value of the tuning parameter!)
  • (Thread #21) When we use (stochastic) gradient descent to fit a neural net, we are actually picking out the minimum norm solution!! So the spline example is a pretty good analogy for what is happening when we see double descent for neural nets.

Survival data

Deep learning for survival outcomes Steingrimsson, 2020

Randomization inference

Randomization test

What is a Randomization Test?

Myths of randomisation

Myths of randomisation

Unequal probabilities

Sampling without replacement with unequal probabilities

Model selection criteria

All models are wrong

All models are wrong from George Box.

MSE

Akaike information criterion/AIC

[math]\displaystyle{ \mathrm{AIC} \, = \, 2k - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.
  • Smaller is better (error criteria)
  • Akaike proposed to approximate the expectation of the cross-validated log likelihood [math]\displaystyle{ E_{test}E_{train} [log L(x_{test}| \hat{\beta}_{train})] }[/math] by [math]\displaystyle{ log L(x_{train} | \hat{\beta}_{train})-k }[/math].
  • Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
  • AIC can be used to compare two models even if they are not hierarchically nested.
  • AIC() from the stats package.
  • broom::glance() was used.
  • Generally, resampling-based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion. Understanding the Bias-Variance Tradeoff & Accurately Measuring Model Prediction Error. A small usage sketch follows this list.
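
A small usage sketch comparing two nested models and checking AIC() against the definition (for lm, k counts the coefficients plus the error variance):

fit1 <- lm(mpg ~ wt, data = mtcars)
fit2 <- lm(mpg ~ wt + hp, data = mtcars)
AIC(fit1, fit2)                       # smaller is better
k <- length(coef(fit1)) + 1           # + 1 for sigma
2 * k - 2 * as.numeric(logLik(fit1))  # matches AIC(fit1)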

BIC

[math]\displaystyle{ \mathrm{BIC} \, = \, \ln(n) \cdot k - 2\ln(\hat L) }[/math], where k is the number of estimated parameters in the model.

Overfitting

AIC vs AUC

What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?

Roughly speaking:

  • AIC is telling you how good your model fits for a specific mis-classification cost.
  • AUC is telling you how good your model would work, on average, across all mis-classification costs.

Frank Harrell: AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized R^2 and AIC.

Variable selection and model estimation

Proper variable selection: Use only training data or full data?

  • training observations to perform all aspects of model-fitting—including variable selection
  • make use of the full data set in order to obtain more accurate coefficient estimates (This statement is arguable)

Cross-Validation

References:

R packages:

Bias–variance tradeoff

Data splitting

Split-Sample Model Validation

PRESS statistic (LOOCV) in regression

The PRESS statistic (predicted residual error sum of squares) [math]\displaystyle{ \sum_i (y_i - \hat{y}_{i,-i})^2 }[/math] provides another way to find the optimal model in regression. See the formula for the ridge regression case.
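
For a linear model, the leave-one-out residuals have a closed form via the hat matrix, so PRESS needs no refitting (a sketch on mtcars):

fit <- lm(mpg ~ wt + hp, data = mtcars)
h <- hatvalues(fit)
sum((residuals(fit) / (1 - h))^2)  # PRESS = sum of squared LOO residuals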

LOOCV vs 10-fold CV in classification

  • Background: Variance of mean for correlated data. If the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is
[math]\displaystyle{ \operatorname{Var}\left(\overline{X}\right) = \frac{\sigma^2}{n} + \frac{n - 1}{n}\rho\sigma^2. }[/math]
This implies that the variance of the mean increases with the average of the correlations.
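
A numeric check of this formula for equicorrelated variables (a sketch; assumes the mvtnorm package):

library(mvtnorm)
n <- 5; rho <- 0.6; sigma2 <- 2
S <- sigma2 * ((1 - rho) * diag(n) + rho)  # equicorrelation covariance matrix
xm <- rowMeans(rmvnorm(1e5, sigma = S))
var(xm)                                    # empirical
sigma2 / n + (n - 1) / n * rho * sigma2    # theoretical, 1.36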

Monte carlo cross-validation

This method creates multiple random splits of the dataset into training and validation data. See Wikipedia.

  • It is not creating replicates of CV samples.
  • As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation.

Difference between CV & bootstrapping

Differences between cross validation and bootstrapping to estimate the prediction error

  • CV tends to be less biased but K-fold CV has fairly large variance.
  • Bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic).
  • The 632 and 632+ rules methods have been adapted to deal with the bootstrap bias
  • Repeated CV does K-fold several times and averages the results similar to regular K-fold

.632 and .632+ bootstrap

[math]\displaystyle{ Err_{.632} = 0.368 \overline{err} + 0.632 Err_{boot(1)} }[/math]
[math]\displaystyle{ \hat{E}^*[\phi_{\mathcal{F}}(S)] = .368 \hat{E}[\phi_{f}(S)] + 0.632 \hat{E}[\phi_{f_b}(S_{-b})] }[/math]
where [math]\displaystyle{ \hat{E}[\phi_{f}(S)] }[/math] is the naive estimate of [math]\displaystyle{ \phi_f }[/math] using the entire dataset.

Create partitions for cross-validation

Stratified sampling: caret::createFolds()

Random sampling: sample()

  • cv.glmnet()
    sample(rep(seq(nfolds), length = N))  # a vector
    set.seed(1); sample(rep(seq(3), length = 20)) 
    # [1] 1 1 1 2 1 1 2 2 2 3 3 2 3 1 3 3 3 1 2 2
    
  • Another way is to use replace=TRUE in sample() (fold sizes are random rather than balanced, unlike the previous method)
    sample(1:nfolds, N, replace=TRUE) # a vector
    set.seed(1); sample(1:3, 20, replace=TRUE)
    # [1] 1 3 1 2 1 3 3 2 2 3 3 1 1 1 2 2 2 2 3 1
    table(.Last.value)
    # .Last.value
    # 1 2 3 
    # 7 7 6 
    
  • k-fold cross validation with modelr and broom
  • h2o package to split the merged training dataset into three parts
    n <- 42; nfold <- 5  # unequal partition
    folds <- split(sample(1:n), rep(1:nfold, length = n))  # a list
    sapply(folds, length)
    
  • Another simple example. Split the data into 70% training data and 30% testing data
    mysplit <- sample(c(rep(0, 0.7 * nrow(df)), rep(1, nrow(df) - 0.7 * nrow(df))))
    train <- df[mysplit == 0, ] 
    test <- df[mysplit == 1, ]  
    

Create training/testing data

  • ?createDataPartition.
  • caret createDataPartition returns more samples than expected. It is more complicated than it looks.
    set.seed(1)
    createDataPartition(rnorm(10), p=.3)
    # $Resample1
    # [1] 1 2 4 5
    
    set.seed(1)
    createDataPartition(rnorm(10), p=.5)
    # $Resample1
    # [1] 1 2 4 5 6 9
    
  • Stratified Sampling in R: A Practical Guide with Base R and dplyr
  • Stratified sampling: Stratified Sampling in R (With Examples), initial_split() from tidymodels. With a strata argument, the random sampling is conducted within the stratification variable. So it guaranteed each strata (stratify variable level) has observations in training and testing sets.
    > library(rsample) # or library(tidymodels)
    > table(mtcars$cyl)
     4  6  8 
    11  7 14
    > set.seed(22)
    > sp <- initial_split(mtcars, prop=.8, strata = cyl)
       # 80% training and 20% testing sets
    > table(training(sp)$cyl)
     4  6  8 
     8  5 11 
    > table(testing(sp)$cyl)
    4 6 8 
    3 2 3 
    > 8/11; 5/7; 11/14 # split by initial_split()
    [1] 0.7272727
    [1] 0.7142857
    [1] 0.7857143
    > 9/11; 6/7; 12/14 # if we try to increase 1 observation
    [1] 0.8181818
    [1] 0.8571429
    [1] 0.8571429
    > (8+5+11)/nrow(mtcars)
    [1] 0.75
    > (9+6+12)/nrow(mtcars)
    [1] 0.84375   # looks better
    
    > set.seed(22)
    > sp2 <- initial_split(mtcars, prop=.8)
    table(training(sp2)$cyl)
     4  6  8 
     8  7 10 
    > table(testing(sp2)$cyl)
    4 8 
    3 4 
     # not what we want since cyl "6" has no observations
    

Nested resampling

Nested resampling is needed when we want to tune a model using a grid search. The default settings of a model are likely not optimal for every data set. So an inner CV has to be performed with the aim of finding the best parameter set of a learner for each fold.

See a diagram at https://i.stack.imgur.com/vh1sZ.png

In BRB-ArrayTools -> class prediction with multiple methods, the alpha (significant level of threshold used for gene selection, 2nd option in individual genes) can be viewed as a tuning parameter for the development of a classifier.

Pre-validation/pre-validated predictor

  • Pre-validation and inference in microarrays Tibshirani and Efron, Statistical Applications in Genetics and Molecular Biology, 2002.
  • See glmnet vignette
  • http://www.stat.columbia.edu/~tzheng/teaching/genetics/papers/tib_efron.pdf#page=5. In each CV, we compute the estimate of the response. This estimate of the response will serve as a new predictor (pre-validated 'predictor' ) in the final fitting model.
  • P1101 of Sachs 2016. With pre-validation, instead of computing the statistic [math]\displaystyle{ \phi }[/math] for each of the held-out subsets ([math]\displaystyle{ S_{-b} }[/math] for the bootstrap or [math]\displaystyle{ S_{k} }[/math] for cross-validation), the fitted signature [math]\displaystyle{ \hat{f}(X_i) }[/math] is estimated for [math]\displaystyle{ X_i \in S_{-b} }[/math] where [math]\displaystyle{ \hat{f} }[/math] is estimated using [math]\displaystyle{ S_{b} }[/math]. This process is repeated to obtain a set of pre-validated 'signature' estimates [math]\displaystyle{ \hat{f} }[/math]. Then an association measure [math]\displaystyle{ \phi }[/math] can be calculated using the pre-validated signature estimates and the true outcomes [math]\displaystyle{ Y_i, i = 1, \ldots, n }[/math].
  • Another description from the paper The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection. The prevalidation method is a variant of cross-validation. We then use [math]\displaystyle{ (y_i, \hat{\eta}_i) }[/math] to compute the measures described above. The cross-validated linear predictor for each patient is derived independently of the observed response of the patient, and hence the “prevalidated” dataset can essentially be treated as a “new dataset.” Therefore, this procedure provides a valid assessment of the predictive performance of the model. To get stable results, we run 10x 10-fold cross-validation for real data analysis.
  • In CV, left-out samples = hold-out cases = test set

Custom cross validation

Cross validation vs regularization

When Cross-Validation is More Powerful than Regularization

Cross-validation with confidence (CVC)

JASA 2019 by Jing Lei, pdf, code

Correlation data

Cross-Validation for Correlated Data Rabinowicz, JASA 2020

Bias in Error Estimation

Bias due to unsupervised preprocessing

On the cross-validation bias due to unsupervised preprocessing 2022. Below I follow the practice from Biowulf to install Mamba. In this example, the 'project1' subfolder (2.0 GB) is located in '~/conda/envs' directory.

$ which python3
/usr/bin/python3

$ wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
$ bash Mambaforge-Linux-x86_64.sh -p /home/brb/conda -b
$ source ~/conda/etc/profile.d/conda.sh && source ~/conda/etc/profile.d/mamba.sh
$ mkdir -p ~/bin
$ cat <<'__EOF__' > ~/bin/myconda
__conda_setup="$('/home/$USER/conda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/$USER/conda/etc/profile.d/conda.sh" ]; then
        . "/home/$USER/conda/etc/profile.d/conda.sh"
    else
        export PATH="/home/$USER/conda/bin:$PATH"
    fi
fi
unset __conda_setup

if [ -f "/home/$USER/conda/etc/profile.d/mamba.sh" ]; then
    . "/home/$USER/conda/etc/profile.d/mamba.sh"
fi
__EOF__
$ source ~/bin/myconda

$ export MAMBA_NO_BANNER=1
$ mamba create -n project1 python=3.7 numpy scipy scikit-learn mkl-service mkl_random pandas matplotlib
$ mamba activate project1
$ which python  # /home/brb/conda/envs/project1/bin/python

$ git clone https://github.com/mosco/unsupervised-preprocessing.git
$ cd unsupervised-preprocessing/
$ python    # Ctrl+d to quit
$ mamba deactivate

Pitfalls of applying machine learning in genomics

Navigating the pitfalls of applying machine learning in genomics 2022

Bootstrap

See Bootstrap

Clustering

See Clustering.

Cross-sectional analysis

  • https://en.wikipedia.org/wiki/Cross-sectional_study. The opposite of cross-sectional analysis is longitudinal analysis.
  • Cross-sectional analysis refers to a type of research method in which data is collected at a single point in time from a group of individuals, organizations, or other units of analysis. This approach contrasts with longitudinal studies, which follow the same group of individuals or units over an extended period of time.
    • In a cross-sectional analysis, researchers typically collect data from a sample of individuals or units that are representative of the population of interest. This data can then be used to examine patterns, relationships, or differences among the units at a specific point in time.
    • Cross-sectional analysis is commonly used in fields such as sociology, psychology, public health, and economics to study topics such as demographics, health behaviors, income inequality, and social attitudes. While cross-sectional analysis can provide valuable insights into the characteristics of a population at a given point in time, it cannot establish causality or determine changes over time.

Mixed Effect Model

See Longitudinal analysis.

Entropy

[math]\displaystyle{ Entropy = \sum p(x) \log(1/p(x)) = \sum Surprise \cdot P(Surprise) }[/math]

Definition

The surprisal of an outcome with probability p is -log2(p); entropy is the expected surprisal. Higher entropy means an event is less predictable.

Some examples:

  • Fair 2-side die: Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
  • Fair 6-side die: Entropy = -6*1/6*log2(1/6) = 2.58
  • Weighted 6-side die: Consider pi=.1 for i=1,..,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (more predictable than a fair 6-side die).
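
A sketch verifying these numbers:

entropy <- function(p) -sum(p * log2(p))
entropy(rep(1/2, 2))           # fair 2-side die: 1
entropy(rep(1/6, 6))           # fair 6-side die: 2.58
entropy(c(rep(0.1, 5), 0.5))   # weighted 6-side die: 2.16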

Use

When entropy is applied to variable selection, we want to select the class variable that gives the largest entropy difference between the entropy without any class variable (computed from the response only) and the entropy with that class variable (the sum of the entropies within each class level), because this variable is the most discriminative and gives the largest information gain. For example,

  • entropy (without any class)=.94,
  • entropy(var 1) = .69,
  • entropy(var 2)=.91,
  • entropy(var 3)=.725.

We will choose variable 1 since it gives the largest gain (.94 - .69) compared to the other variables (.94 -.91, .94 -.725).

Why is picking the attribute with the most information gain beneficial? It reduces entropy, which increases predictability. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.

Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 creates an entropy .9 and another cut point c2 creates an entropy .1. We should choose c2.

Related

In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See wikipedia page about decision tree learning.

Ensembles

Bagging

Draw N bootstrap samples and summarize the results (averaging for regression problems, majority vote for classification problems). Bagging decreases variance without changing bias, so it does not help much with underfit or high-bias models.

Random forest

Boosting

Instead of selecting data points randomly with the bootstrap, boosting favors the misclassified points.

Algorithm:

  • Initialize the weights
  • Repeat
    • resample with respect to weights
    • retrain the model
    • recompute weights

Since boosting is inherently sequential while bagging can be run in parallel, bagging has an advantage over boosting when the data are very large.

Time series

p-values

p-values

Misuse of p-values

  • https://en.wikipedia.org/wiki/Misuse_of_p-values. The p-value does not indicate the size or importance of the observed effect.
  • Question: If we are fitting a multivariate regression and variable 1 ends with p-value .01 and variable 2 has p-value .001. How do we describe variable 2 is more significant than variable 1?
    • Answer: you can say that variable 2 has a smaller p-value than variable 1. A p-value is a measure of the strength of evidence against the null hypothesis. It is the probability of observing a test statistic as extreme or more extreme than the one calculated from your data, assuming the null hypothesis is true. The smaller the p-value, the stronger the evidence against the null hypothesis and in favor of the alternative hypothesis. In your example, variable 2 has a smaller p-value than variable 1, which means that there is stronger evidence against the null hypothesis for variable 2 than for variable 1. However, it is important to note that a smaller p-value does not necessarily mean that one variable has a stronger effect or is more important than the other. Instead of comparing p-values directly, it would be more appropriate to look at effect sizes and confidence intervals to determine the relative importance of each variable.
    • Effect Size: While a p-value tells you whether an effect exists, it does not convey the size of the effect. A p-value of 0.001 may be due to a larger effect size than one producing a p-value of 0.01, but this isn’t always the case. Effect size measures (like Cohen’s d for two means, Pearson’s r for two continuous variables, or Odds Ratio in logistic regression or contingency tables) are necessary to interpret the practical significance.
    • Practical Significance: Even if both p-values are statistically significant, the practical or clinical significance of the findings should be considered. A very small effect size, even with a p-value of 0.001, may not be practically important.
  • Question: do p-values show the relative importance of different predictors?
    • P-values can indicate the statistical significance of a predictor in a model, but they do not directly measure the relative importance of different predictors.
    • A p-value is a measure of the probability that the observed relationship between a predictor and the response variable occurred by chance under the null hypothesis. A smaller p-value suggests that it is less likely that the observed relationship occurred by chance, which often leads to the conclusion that the predictor is statistically significant.
    • However, p-values do not tell us about the size or magnitude of an effect, nor do they directly compare the effects of different predictors. Two predictors might both be statistically significant, but one might have a much larger effect on the response variable than the other (There are several statistical measures that can be used to assess the relative importance of predictors in a model: Standardized Coefficients, Partial Correlation Coefficients, Variable Importance in Projection (VIP), Variable Importance Measures in Tree-Based Models, LASSO (Least Absolute Shrinkage and Selection Operator) and Relative Weights Analysis).
    • Moreover, p-values are sensitive to sample size. With a large enough sample size, even tiny, unimportant differences can become statistically significant.
    • Therefore, while p-values are a useful tool in model analysis, they should not be used alone to determine the relative importance of predictors. Other statistical measures and domain knowledge should also be considered.

Distribution of p values in medical abstracts

nominal p-value and Empirical p-values

  • Nominal p-values are based on asymptotic null distributions
  • Empirical p-values are computed from simulations/permutations
  • What is the concepts of nominal and actual significance level?
    • The nominal significance level is the significance level a test is designed to achieve. This is very often 5% or 1%. Now in many situations the nominal significance level can't be achieved precisely. This can happen because the distribution is discrete and doesn't allow for a precise given rejection probability, and/or because the theory behind the test is asymptotic, i.e., the nominal level is only achieved for n → ∞.

(nominal) alpha level

Conventional methodology for statistical testing is, in advance of undertaking the test, to set a NOMINAL ALPHA CRITERION LEVEL (often 0.05). The outcome is classified as showing STATISTICAL SIGNIFICANCE if the actual ALPHA (probability of the outcome under the null hypothesis) is no greater than this NOMINAL ALPHA CRITERION LEVEL.

Normality assumption

Violating the normality assumption may be the lesser of two evils

Second-Generation p-Values

An Introduction to Second-Generation p-Values Blume et al, 2020

Small p-value due to very large sample size

Bayesian

  • Bayesian believers, who adhere to Bayesian statistics, often have a different perspective on hypothesis testing compared to frequentist statisticians. In Bayesian statistics, the focus is on estimating the probability of a hypothesis being true given the data, rather than on the probability of the data given a specific hypothesis (as in p-values).
  • Bayesian believers generally prefer using Bayesian methods, such as computing credible intervals or Bayes factors, which provide more directly interpretable results in terms of the probability of hypotheses. These methods can be seen as more informative than p-values, as they give a range of plausible values for the parameter of interest or directly compare the relative plausibility of different hypotheses.

T-statistic

See T-statistic.

ANOVA

See ANOVA.

Goodness of fit

Chi-square tests

Fitting distribution

Fitting distributions with R

Normality distribution check

Anderson-Darling Test in R (Quick Normality Check)

Kolmogorov-Smirnov test

Contingency Tables

How to Measure Contingency-Coefficient (Association Strength). gplots::balloonplot() and corrplot::corrplot().

What statistical test should I do

What statistical test should I do?

Graphically show association

  1. Bar Graphs: Bar graphs can be used to compare the frequency of different categories in two variables. Each bar represents a category, and the height of the bar represents its frequency. You can create side-by-side bar graphs or stacked bar graphs to compare frequencies across categories. See Contingency Table: Definition, Examples & Interpreting (row totals) and Two Different Categorical Variables (column totals).
  2. Mosaic Plots: A mosaic plot gives a visual representation of the relationship between two categorical variables. It's a rectangular grid that represents the total population, and it's divided into smaller rectangles that represent the categories of each variable. The size of each rectangle is proportional to the frequency of each category. See Visualizing Association With Mosaic Plots.
  3. Categorical Scatterplots: In seaborn, a Python data visualization library, there are categorical scatterplots that adjust the positions of points on the categorical axis with a small amount of random "jitter" or using an algorithm that prevents them from overlapping. See Visualizing categorical data.
  4. Contingency Tables: While not a graphical method, contingency tables are often used in conjunction with graphical methods. A contingency table displays how many individuals fall in each combination of categories for two variables.

Q: How to guess whether two variables are associated by looking at the counts in a 2x2 contingency table:

  • Compare observed counts with what the margins predict: if every cell count is close to (row total × column total) / grand total, there is little evidence of association; large departures from these expected counts suggest an association.
  • Compare the diagonal cells: if the counts in the diagonal cells (top left to bottom right) are high compared to the off-diagonal cells, it suggests a positive association between the two variables; high off-diagonal counts suggest a negative association. Equivalently, an odds ratio >1 indicates a positive association and <1 a negative association.
  • Be cautious with unbalanced margins: the row and column totals by themselves do not measure association, but very uneven margins produce small expected counts in some cells, which makes apparent patterns less reliable.

Q: When creating a barplot of percentages from a contingency table, should you calculate percentages by dividing counts by row totals or by column totals? A: It depends on the question you’re trying to answer (see the sketch after the two cases below). See Contingency Table: Definition, Examples & Interpreting.

  • Row Totals: If you’re interested in understanding the distribution of a variable within each row category, you would calculate percentages by dividing counts by row totals. This is often used when the row variable is the independent variable and you want to see how the column variable (dependent variable) is distributed within each level of the row variable.
  • Column Totals: If you’re interested in understanding the distribution of a variable within each column category, you would calculate percentages by dividing counts by column totals. This is often used when the column variable is the independent variable and you want to see how the row variable (dependent variable) is distributed within each level of the column variable.
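In R, prop.table() implements both conventions (a sketch with an assumed table):

tab <- matrix(c(20, 30, 40, 10), nrow = 2,
              dimnames = list(group = c("A", "B"), outcome = c("yes", "no")))
prop.table(tab, margin = 1)  # divide by row totals: each row sums to 1
prop.table(tab, margin = 2)  # divide by column totals: each column sums to 1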

Barplot with colors for a 2nd variable.

Measure the association in a contingency table

  • Phi coefficient: The Phi coefficient is a measure of association that is used for 2x2 contingency tables. It ranges from -1 to 1, with 0 indicating no association and values close to -1 or 1 indicating a strong association. The formula for Phi coefficient is: Phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)), where a, b, c, and d are the frequency counts in the four cells of the contingency table.
  • Cramer's V: Cramer's V is a measure of association that is used for contingency tables of any size. It ranges from 0 to 1, with 0 indicating no association and values close to 1 indicating a strong association. The formula for Cramer's V is: V = sqrt(Chi-Square / (n*(min(r,c)-1))), where Chi-Square is the Chi-Square statistic, n is the total sample size, and r and c are the number of rows and columns in the contingency table.
  • Odds ratio: The odds ratio is a measure of association that is commonly used in medical research and epidemiology. It compares the odds of an event occurring in one group compared to another group. The odds ratio can be calculated as: OR = (a/b) / (c/d), where a, b, c, and d are the frequency counts in the four cells of the contingency table. An odds ratio of 1 indicates no association, while values greater than 1 indicate a positive association and values less than 1 indicate a negative association.
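A by-hand sketch of the three measures for an assumed 2x2 table (cells a, b in the first row and c, d in the second):

a <- 20; b <- 30; c <- 40; d <- 10
phi <- (a*d - b*c) / sqrt((a+b) * (c+d) * (a+c) * (b+d))     # phi coefficient
tab  <- matrix(c(a, c, b, d), nrow = 2)                      # rows (a, b) and (c, d)
chi2 <- unname(chisq.test(tab, correct = FALSE)$statistic)
V    <- sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))        # Cramer's V (= |phi| for 2x2)
OR   <- (a/b) / (c/d)                                        # odds ratio
c(phi = phi, V = V, OR = OR)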

Odds ratio and Risk ratio

  • Odds ratio and Risk ratio/relative risk.
    • In practice the odds ratio is commonly used for case-control studies, as the relative risk cannot be estimated.
    • Relative risk is used in the statistical analysis of the data of ecological, cohort, medical and intervention studies, to estimate the strength of the association between exposures (treatments or risk factors) and outcomes.
  • Odds Ratio Interpretation Quick Guide
  • The odds ratio is often used to evaluate the strength of the association between two binary variables and to compare the risk of an event occurring between two groups.
    • An odds ratio greater than 1 indicates that the event is more likely to occur in the first group, while an odds ratio less than 1 indicates that the event is more likely to occur in the second group.
    • In general, a larger odds ratio indicates a stronger association between the two variables, while a smaller odds ratio indicates a weaker association.
  • The ratio of the odds of an event occurring in one group to the odds of it occurring in another group
                            Treatment  | Control   
    -------------------------------------------------
    Event occurs         |   A         |   B       
    -------------------------------------------------
    Event does not occur |   C         |   D       
    -------------------------------------------------
    Odds                 |   A/C       |   B/D
    -------------------------------------------------
    Risk                 |   A/(A+C)   |   B/(B+D)
    
    • Odds Ratio = (A / C) / (B / D) = (AD) / (BC)
    • Risk Ratio = (A / (A+C)) / (B / (B+D))
  • Real example. In a study published in the Journal of the American Medical Association, researchers investigated the association between the use of nonsteroidal anti-inflammatory drugs (NSAIDs) and the risk of developing gastrointestinal bleeding. Suppose the odds ratio is 2.5 and the risk ratio is 1.5. The interpretation of these results is as follows:
    • The odds ratio of 2.5 indicates that the odds of gastrointestinal bleeding are 2.5 times higher in the group of patients taking NSAIDs compared to the group of patients not taking NSAIDs.
    • The risk ratio of 1.5 indicates that the risk of gastrointestinal bleeding is 1.5 times higher in the group of patients taking NSAIDs compared to the group of patients not taking NSAIDs.
    • In this example, both the odds ratio and the risk ratio indicate an association between NSAID use and the risk of gastrointestinal bleeding. The odds ratio is larger than the risk ratio, which happens when the outcome is not rare in the study population; for rare outcomes the two measures are nearly equal.
  • What is the main difference in the interpretation of odds ratio and risk ratio?
    • Odds are a measure of the probability of an event occurring, expressed as the ratio of the number of ways the event can occur to the number of ways it cannot occur. For example, if the probability of an event occurring is 0.5 (or 50%), the odds of the event occurring would be 1:1 (or 1 to 1).
    • Risk is a measure of the probability of an event occurring, expressed as the ratio of the number of individuals who experience the event to the total number of individuals at risk. For example, if 10 out of 100 people experience an event, the risk of the event occurring would be 10%.
    • The main practical difference is that the odds ratio diverges from the risk ratio as the event becomes more common: for rare events the two measures are nearly equal, while for common events the odds ratio lies further from 1 than the risk ratio.
    • This means the odds ratio approximates the risk ratio well when the event is relatively rare (and it is the only estimable option in case-control designs), while the risk ratio is usually easier to interpret when the event is common and the study design allows estimating it.
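A short sketch with assumed counts, following the treatment/control layout above:

A <- 30; B <- 10   # event occurs (treatment, control)
C <- 70; D <- 90   # event does not occur
odds_ratio <- (A/C) / (B/D)                   # = (A*D) / (B*C)
risk_ratio <- (A/(A+C)) / (B/(B+D))
c(odds_ratio = odds_ratio, risk_ratio = risk_ratio)
# The two measures converge when the event is rare (A << C and B << D).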

Hypergeometric, One-tailed Fisher exact test

         drawn   | not drawn | 
-------------------------------------
white |   x      |           | m
-------------------------------------
black |  k-x     |           | n
-------------------------------------
      |   k      |           | m+n

For example, k=100, m=100, m+n=1000,

> 1 - phyper(10, 100, 10^3-100, 100, log.p=F)
[1] 0.4160339
> a <- dhyper(0:100, 100, 10^3-100, 100)
> cumsum(rev(a))
  [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116
  [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101  9.027724e-99  4.175767e-96
 [15]  1.644702e-93  5.572070e-91  1.638079e-88  4.210963e-86  9.530281e-84  1.910424e-81  3.410345e-79
 [22]  5.447786e-77  7.821658e-75  1.013356e-72  1.189000e-70  1.267638e-68  1.231736e-66  1.093852e-64
 [29]  8.900857e-63  6.652193e-61  4.576232e-59  2.903632e-57  1.702481e-55  9.240350e-54  4.650130e-52
 [36]  2.173043e-50  9.442985e-49  3.820823e-47  1.441257e-45  5.074077e-44  1.669028e-42  5.134399e-41
 [43]  1.478542e-39  3.989016e-38  1.009089e-36  2.395206e-35  5.338260e-34  1.117816e-32  2.200410e-31
 [50]  4.074043e-30  7.098105e-29  1.164233e-27  1.798390e-26  2.617103e-25  3.589044e-24  4.639451e-23
 [57]  5.654244e-22  6.497925e-21  7.042397e-20  7.198582e-19  6.940175e-18  6.310859e-17  5.412268e-16
 [64]  4.377256e-15  3.338067e-14  2.399811e-13  1.626091e-12  1.038184e-11  6.243346e-11  3.535115e-10
 [71]  1.883810e-09  9.442711e-09  4.449741e-08  1.970041e-07  8.188671e-07  3.193112e-06  1.167109e-05
 [78]  3.994913e-05  1.279299e-04  3.828641e-04  1.069633e-03  2.786293e-03  6.759071e-03  1.525017e-02
 [85]  3.196401e-02  6.216690e-02  1.120899e-01  1.872547e-01  2.898395e-01  4.160339e-01  5.550192e-01
 [92]  6.909666e-01  8.079129e-01  8.953150e-01  9.511926e-01  9.811343e-01  9.942110e-01  9.986807e-01
 [99]  9.998018e-01  9.999853e-01  1.000000e+00

# Density plot
plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')

File:Dhyper.svg

Moreover,

  1 - phyper(q=10, m, n, k)
= 1 - sum_{x=0}^{x=10} dhyper(x, m, n, k)
= 1 - sum(a[1:11]) # R's indexing starts at 1.

Another example is the data from the functional annotation tool in DAVID.

               | gene list | not gene list | 
-------------------------------------------------------
pathway        |   3  (q)  |               | 40 (m)
-------------------------------------------------------
not in pathway |  297      |               | 29960 (n)
-------------------------------------------------------
               |  300 (k)  |               | 30000

The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.

Fisher's exact test

Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) #  alternative = "two.sided" by default

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.488738 23.966741
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 1.973   Inf
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.9991
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio
  7.564602

Fisher's exact test in R: independence test for a small sample

From the documentation of fisher.test

Usage:
     fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
                 control = list(), or = 1, alternative = "two.sided",
                 conf.int = TRUE, conf.level = 0.95,
                 simulate.p.value = FALSE, B = 2000)
  • For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
  • For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
  • The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
  • Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.

Boschloo's test

https://en.wikipedia.org/wiki/Boschloo%27s_test

IID assumption

Ignoring the IID assumption isn’t a great idea

Chi-square independence test

  • https://en.wikipedia.org/wiki/Chi-squared_test.
    • Chi-Square = Σ[(O - E)^2 / E]
    • The expected counts are [math]\displaystyle{ E_{ij} = n_{i.} n_{.j} / n_{..} }[/math]
    • The Chi-Square test statistic follows a Chi-Square distribution with degrees of freedom equal to (r-1) x (c-1)
    • The Chi-Square test is generally a two-sided test, meaning that it tests for a significant difference between the observed and expected frequencies in both directions (i.e., either a greater than or less than difference).
  • Chi-square test of independence by hand
> chisq.test(matrix(c(14,0,4,10), nr=2), correct=FALSE)

	Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 10), nr = 2)
X-squared = 15.556, df = 1, p-value = 8.012e-05

# How about the case if expected=0 for some elements?
> chisq.test(matrix(c(14,0,4,0), nr=2), correct=FALSE)

	Pearson's Chi-squared test

data:  matrix(c(14, 0, 4, 0), nr = 2)
X-squared = NaN, df = 1, p-value = NA

Warning message:
In chisq.test(matrix(c(14, 0, 4, 0), nr = 2), correct = FALSE) :
  Chi-squared approximation may be incorrect
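The statistic can be verified by hand from the observed and expected counts:

O <- matrix(c(14, 0, 4, 10), nrow = 2)
E <- outer(rowSums(O), colSums(O)) / sum(O)  # expected counts n_i. * n_.j / n_..
sum((O - E)^2 / E)                           # 15.556, matching X-squared above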

Exploring the underlying theory of the chi-square test through simulation - part 2

The results of the Fisher exact test and the chi-square test can be quite different. Note that in the call below, chisq.test(x, y) cross-tabulates the two vectors (treating each as a factor) instead of testing the 2x4 table of counts, so it is not equivalent to chisq.test(Job).

# https://myweb.uiowa.edu/pbreheny/7210/f15/notes/9-24.pdf#page=4
R> Job <- matrix(c(16,48,67,21,0,19,53,88), nr=2, byrow=T)
R> dimnames(Job) <- list(A=letters[1:2],B=letters[1:4])
R> fisher.test(Job)

	Fisher's Exact Test for Count Data

data:  Job
p-value < 2.2e-16
alternative hypothesis: two.sided

R> chisq.test(c(16,48,67,21), c(0,19,53,88))

	Pearson's Chi-squared test

data:  c(16, 48, 67, 21) and c(0, 19, 53, 88)
X-squared = 12, df = 9, p-value = 0.2133

Warning message:
In chisq.test(c(16, 48, 67, 21), c(0, 19, 53, 88)) :
  Chi-squared approximation may be incorrect

Cochran-Armitage test for trend (2xk)
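In base R, prop.trend.test() carries out the Cochran-Armitage trend test; a sketch with assumed counts and equally spaced scores:

events <- c(10, 15, 25, 30)        # hypothetical event counts in k = 4 ordered groups
n      <- c(100, 100, 100, 100)    # group sizes
prop.trend.test(events, n, score = 1:4)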

PAsso: Partial Association between ordinal variables after adjustment

https://github.com/XiaoruiZhu/PAsso

Cochran-Mantel-Haenszel (CMH) & Association Tests for Ordinal Table
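Base R provides mantelhaen.test(); for example, on the built-in 2 x 2 x 6 UCBAdmissions array:

mantelhaen.test(UCBAdmissions)  # admission vs gender, stratified by department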

GSEA

See GSEA.

McNemar’s test on paired nominal data

https://en.wikipedia.org/wiki/McNemar%27s_test
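A sketch using the classic example from ?mcnemar.test (approval of the US president measured twice on the same respondents):

Performance <- matrix(c(794, 86, 150, 570), nrow = 2,
                      dimnames = list("1st Survey" = c("Approve", "Disapprove"),
                                      "2nd Survey" = c("Approve", "Disapprove")))
mcnemar.test(Performance)  # only the discordant cells (86 and 150) drive the test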

R

Contingency Tables In R. Two-Way Tables, Mosaic plots, Proportions of the Contingency Tables, Rows and Columns Totals, Statistical Tests, Three-Way Tables, Cochran-Mantel-Haenszel (CMH) Methods.

Case control study

Confidence vs Credibility Intervals

http://freakonometrics.hypotheses.org/18117

T-distribution vs normal distribution

set.seed(1); shapiro.test(rnorm(5000) )
#	Shapiro-Wilk normality test
# data:  rnorm(5000)
# W = 0.99957, p-value = 0.3352. --> fail to reject H0

set.seed(1234567); shapiro.test(rnorm(5000) )
# 	Shapiro-Wilk normality test
# data:  rnorm(5000)
# W = 0.99934, p-value = 0.06508 --> fail to reject H0, but close to 0.05

Power analysis/Sample Size determination

See Power.

Common covariance/correlation structures

See psu.edu. Assume covariance [math]\displaystyle{ \Sigma = (\sigma_{ij})_{p\times p} }[/math]

  • Diagonal structure: [math]\displaystyle{ \sigma_{ij} = 0 }[/math] if [math]\displaystyle{ i \neq j }[/math].
  • Compound symmetry: [math]\displaystyle{ \sigma_{ij} = \rho }[/math] if [math]\displaystyle{ i \neq j }[/math].
  • First-order autoregressive AR(1) structure: [math]\displaystyle{ \sigma_{ij} = \rho^{|i - j|} }[/math].
    rho <- .8
    p <- 5
    blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
  • Banded matrix: [math]\displaystyle{ \sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0 }[/math] and [math]\displaystyle{ \sigma_{ij}=0 }[/math] for [math]\displaystyle{ |i-j| \ge 3 }[/math].
  • Spatial Power
  • Unstructured Covariance
  • Toeplitz structure

To create a block-diagonal correlation matrix, use the "%x%" (Kronecker product) operator; see kronecker().

n.blocks <- 3                      # number of diagonal blocks (assumed for illustration)
covMat <- diag(n.blocks) %x% blockMat

Counter/Special Examples

Math myths

Uncorrelated does not imply independent

Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y. And if you know Y, you know the absolute value of X.

The covariance of X and Y is

[math]\displaystyle{ \operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0 \cdot E(Y) = E(X^3) = 0, }[/math]

because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.

This example shows how a linear correlation coefficient does not encapsulate anything about the quadratic dependence of Y upon X.
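A quick simulation confirms the zero correlation:

set.seed(1)
x <- rnorm(1e5)
y <- x^2            # perfectly dependent on x
cor(x, y)           # approximately 0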

Significant p value but no correlation

Post where p-value = 1.18e-06 but cor = 0.067. The p-value says nothing about the size of r; with a large sample, even a negligible correlation is statistically significant.
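A sketch reproducing the phenomenon with simulated data (assumed numbers):

set.seed(1)
n <- 5000
x <- rnorm(n)
y <- 0.07 * x + rnorm(n)   # true correlation is only about 0.07
cor.test(x, y)             # tiny p-value despite a negligible correlation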

Spearman vs Pearson correlation

Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation

Testing uses Student's t-distribution: cor.test() (t-distribution with n-2 d.f. for the Pearson correlation). The normality assumption is used in the test; for estimation, it affects unbiasedness and efficiency. See Sensitivity to the data distribution.

x <- 1:100
y <- exp(x)
cor(x, y, method = 'spearman')  # 1
cor(x, y, method = 'pearson')   # 0.25

How to know whether Pearson's or Spearman's correlation is better to use? & Spearman’s Correlation Explained. Spearman's 𝜌 is often preferred over the Pearson correlation because

  • it doesn't assume linear relationship between variables
  • it is resistant to outliers
  • it handles ordinal data that are not interval-scaled

Spearman vs Wilcoxon

According to this post:

  • the Wilcoxon test is used to compare a non-normal continuous variable across the groups of a categorical variable
  • Spearman's rho is used to compare two continuous (including ordinal) variables when one or both are not normally distributed

Spearman vs Kendall correlation

  • Kendall's tau coefficient (after the Greek letter τ) is a statistic used to measure the ordinal association between two measured quantities.
  • Spearman’s rho and Kendall’s tau from Statistical Odds & Ends
  • Kendall Tau or Spearman's rho?
  • Kendall’s Rank Correlation in R-Correlation Test
  • Kendall’s tau is also more robust (less sensitive) to ties and outliers than Spearman’s rho. However, if the data are continuous or nearly so, Spearman’s rho may be more appropriate.
  • Kendall’s tau is preferred when dealing with small samples. Pearson vs Spearman vs Kendall.
  • Interpretation of concordant and discordant pairs: Kendall’s tau quantifies the difference between the percentage of concordant and discordant pairs among all possible pairwise events, which can be a more direct interpretation in certain contexts
  • Although Kendall’s tau has a higher computational complexity (O(n^2)) than Spearman’s rho (O(n log n)), it can still be preferred in certain scenarios.

Pearson/Spearman/Kendall correlations
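All three coefficients are available through cor(); a sketch on one monotonic, nonlinear dataset:

x <- 1:100
y <- exp(x / 20)
c(pearson  = cor(x, y, method = "pearson"),
  spearman = cor(x, y, method = "spearman"),
  kendall  = cor(x, y, method = "kendall"))  # both rank-based measures equal 1 here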

Anscombe quartet

The four datasets share nearly identical summary statistics: the same mean of x, the same mean of y, the same variance of x, (almost) the same variance of y, the same correlation between x and y, and the same fitted linear regression, yet they look entirely different when plotted.

File:Anscombe quartet 3.svg
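The quartet ships with R as the anscombe data frame, so the claim is easy to check:

sapply(1:4, function(i) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  c(mean_x = mean(x), var_y = var(y), cor = cor(x, y),
    slope = unname(coef(lm(y ~ x))[2]))
})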

phi correlation for binary variables

https://en.wikipedia.org/wiki/Phi_coefficient. A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient.

set.seed(1)
data <- data.frame(x=sample(c(0,1), 100, replace = T), y= sample(c(0,1), 100, replace = T))
cor(data$x, data$y)
# [1] -0.03887781

library(psych)
psych::phi(table(data$x, data$y))
# [1] -0.04

The real meaning of spurious correlations

https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/

library(ggplot2)
library(dplyr)   # provides the %>% pipe used below
 
set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) + 
theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
 theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) + 
theme_bw() + geom_smooth(method = "lm") + 
scale_color_gradientn(colours = c("red", "white", "blue")) + 
labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")

A New Coefficient of Correlation

A New Coefficient of Correlation, Chatterjee, 2020, JASA
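A minimal sketch of Chatterjee's ξ for continuous data without ties (the formula below is the no-ties special case from the paper):

xi_cor <- function(x, y) {
  n <- length(x)
  r <- rank(y[order(x)])                # ranks of y after sorting by x
  1 - 3 * sum(abs(diff(r))) / (n^2 - 1)
}
set.seed(1)
x <- runif(200)
y <- (x - 0.5)^2             # deterministic, non-monotonic function of x
xi_cor(x, y)                 # close to 1: strong functional dependence
cor(x, y)                    # Pearson correlation is near 0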

Time series

Structural change

Structural Changes in Global Warming

AR(1) processes and random walks

Spurious correlations and random walks
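A sketch contrasting the two processes:

set.seed(1)
ar1  <- arima.sim(model = list(ar = 0.8), n = 200)  # stationary, mean-reverting AR(1)
walk <- cumsum(rnorm(200))                          # random walk: variance grows with t
plot.ts(cbind(ar1, walk))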

Measurement Error model

Polya Urn Model

The Pólya Urn Model: A simple Simulation of “The Rich get Richer”
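A minimal simulation of the urn (an assumed starting composition of one ball of each color):

set.seed(1)
urn <- c(red = 1, blue = 1)                  # start with one ball of each color
for (i in 1:1000) {
  draw <- sample(seq_along(urn), 1, prob = urn / sum(urn))
  urn[draw] <- urn[draw] + 1                 # return the ball plus one of the same color
}
urn / sum(urn)  # final proportions: early luck persists ("the rich get richer")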

Dictionary

Statistical guidance

Books, learning material

Social

JSM

Following

COPSS

COPSS Presidents' Award (考普斯會長獎)

United States National Academy of Sciences (NAS) / 美國國家科學院
