= Confusion matrix, Sensitivity/Specificity/Accuracy =
[https://en.wikipedia.org/wiki/Confusion_matrix Wikipedia]
{| border="1" style="border-collapse:collapse; text-align:center;"
|-
|                    ||  || colspan="2" | Predict  ||
|-
|                    ||  ||  1      ||    0      ||
|-
| rowspan="2" | True || 1 ||  TP    ||    FN    || style="color: white;background: red;"|Sens=TP/(TP+FN)=Recall=TPR <br/> FNR=FN/(TP+FN)
|-
|    0              ||  FP    ||    TN    || style="color: white;background: blue;"|Spec=TN/(FP+TN), 1-Spec=FPR
|-
|                    ||    ||  style="color: white;background: green;"|PPV=TP/(TP+FP)=Precision <br/> FDR=FP/(TP+FP) =1-PPV||  NPV=TN/(FN+TN) ||  N = TP + FP + FN + TN
|}
* <span style="color: red">Sensitivity 敏感度</span> = TP / (TP + FN) = Recall
* <span style="color: blue">Specificity 特異度</span> = TN / (TN + FP)
* Accuracy = (TP + TN) / N
* <span style="color: green">False discovery rate FDR = FP / (TP + FP)</span>
* False negative rate FNR = FN / (TP + FN)
* <span style="color: blue">False positive rate FPR = FP / (FP + TN) = 1 - Spec</span>
* <span style="color: red">True positive rate = TP / (TP + FN) = Sensitivity</span>
* [https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values <span style="color: green">Positive predictive value (PPV)</span>] = TP / # positive calls = TP / (TP + FP) = 1 - FDR = <span style="color: green">Precision</span>
* Negative predictive value (NPV) = TN /  # negative calls = TN / (FN + TN)
* Prevalence 盛行率 = (TP + FN) / N.
* Note that PPV & NPV can also be computed from sensitivity, specificity, and prevalence (see the sketch after this list):
** [https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values#cite_note-AltmanBland1994-2 PPV increases with the prevalence of the disease or condition].
** For example, in the extreme case where the prevalence is 1, the PPV is always 1.
::<math> \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence}+(1-\text{specificity}) \times (1-\text{prevalence})} </math>
::<math> \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{(1-\text{sensitivity}) \times \text{prevalence}+\text{specificity} \times (1-\text{prevalence})} </math>
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03626-y Prediction of heart disease and classifiers’ sensitivity analysis] Almustafa, 2020
* [https://www.fda.gov/regulatory-information/search-fda-guidance-documents/statistical-guidance-reporting-results-studies-evaluating-diagnostic-tests-guidance-industry-and-fda Positive percent agreement (PPA) and negative percent agreement (NPA)]
* [https://hutsons-hacks.info/confusiontabler-has-made-it-to-cran ConfusionTableR has made it to CRAN] 21/07/2021. [https://mran.microsoft.com/web/packages/ConfusionTableR/index.html ConfusionTableR]
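As a quick check of the two formulas above, here is a minimal R sketch (illustrative only; the helpers ppv_from_rates()/npv_from_rates() are made-up names and the counts are hypothetical) that recovers PPV and NPV from sensitivity, specificity, and prevalence and compares them with the direct count-based definitions.
<pre>
# Illustrative only: compute PPV/NPV two ways and check that they agree.
ppv_from_rates <- function(sens, spec, prev) {
  sens * prev / (sens * prev + (1 - spec) * (1 - prev))
}
npv_from_rates <- function(sens, spec, prev) {
  spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
}

# Counts from a hypothetical 2x2 confusion matrix
TP <- 30; FN <- 10; FP <- 20; TN <- 140
N    <- TP + FP + FN + TN
sens <- TP / (TP + FN)        # 0.75
spec <- TN / (TN + FP)        # 0.875
prev <- (TP + FN) / N         # 0.2

c(PPV_direct = TP / (TP + FP), PPV_from_rates = ppv_from_rates(sens, spec, prev))
# both 0.6
c(NPV_direct = TN / (TN + FN), NPV_from_rates = npv_from_rates(sens, spec, prev))
# both 0.9333
</pre>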
== caret::confusionMatrix ==
* [https://www.datatechnotes.com/2019/02/accuracy-metrics-in-classification.html Precision, Recall, Specificity, Prevalence, Kappa, F1-score check with R] using the [https://www.rdocumentation.org/packages/caret/versions/6.0-86/topics/confusionMatrix caret::confusionMatrix()] function. ''If there are only two factor levels, the first level will be used as the "positive" result.''
* One thing caret::confusionMatrix() can't do is ROC/AUC. For that, use [https://cran.r-project.org/web/packages/pROC/index.html pROC]::roc() and its plot function. [https://www.statology.org/auc-in-r/ How to Calculate AUC (Area Under Curve) in R].
<ul>
<li>[https://en.wikipedia.org/wiki/F-score#Dependence_of_the_F-score_on_class_imbalance F-score]/F1-score.
* F1 = 2 * Precision * Recall / (Precision + Recall) = 2 / (1/Precision + 1/Recall), the '''harmonic mean''' of precision and recall. See the quick check after this list.
* Because the F-score depends on the class ratio, comparing F-scores across problems with differing class ratios is problematic.
* [https://towardsdatascience.com/the-f1-score-bec2bbc38aa6 The F1 score]
* [https://stephenallwright.com/interpret-f1-score/ How to interpret F1 score (simply explained)]. It is a popular metric for classification models because it is usable for both balanced and imbalanced datasets and takes into account both the precision and the recall of the model. '''The F1 score tells you the model’s balanced ability to both capture positive cases (recall) and be accurate with the cases it does capture (precision).'''
* [https://www.statology.org/what-is-a-good-f1-score/ What is Considered a “Good” F1 Score?] It gives an example of an F1-score where all the predictions fall in one class ('''baseline model''') for comparison purposes.
* An example. '''confusionMatrix()''' assumes the rows are predicted classes and the columns are the truth; that's why I apply a transpose to my table object. The 'positive' class is the first factor level, which corresponds to the first row of the 2x2 table; if the levels are not specified, 'A' is assigned to the first row and 'B' to the second. For this case, accuracy (0.615) lies between sensitivity (0.429) and specificity (0.684), and the F1-score (0.375) lies between precision (0.333) and recall (0.429). The F1-score is also smaller than both sensitivity and specificity.
<pre>
# Raw counts: rows = true class, columns = predicted class
#               predicted
#          sensitive  resistant
# true
# sensitive     3          4
# resistant     6         13

library(caret)
xtab <- matrix(c(3, 6, 4, 13), nrow = 2)   # rows = true, columns = predicted
confusionMatrix(t(xtab))$byClass           # transpose so that rows = predicted
#        Sensitivity          Specificity      Pos Pred Value      Neg Pred Value
#          0.4285714            0.6842105            0.3333333            0.7647059
#          Precision              Recall                  F1          Prevalence
#          0.3333333            0.4285714            0.3750000            0.2692308
#      Detection Rate Detection Prevalence    Balanced Accuracy
#          0.1153846            0.3461538            0.5563910
confusionMatrix(t(xtab))$overall
#      Accuracy          Kappa  AccuracyLower  AccuracyUpper  AccuracyNull
#      0.6153846      0.1034483      0.4057075      0.7977398      0.7307692
# AccuracyPValue  McnemarPValue
#      0.9348215      0.7518296
confusionMatrix(t(xtab))$positive
# [1] "A"
</pre>
</ul>
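A quick hand check of the harmonic-mean formula above, reusing the 2x2 counts from the example (illustrative sketch):
<pre>
TP <- 3; FN <- 4; FP <- 6; TN <- 13
precision <- TP / (TP + FP)   # 0.3333
recall    <- TP / (TP + FN)   # 0.4286
2 * precision * recall / (precision + recall)
# [1] 0.375   -> matches confusionMatrix(t(xtab))$byClass["F1"]
</pre>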
== False positive rate vs false discovery rate ==
[https://stats.stackexchange.com/a/340079 FPR (false positive rate) vs FDR (false discovery rate)]
= ROC curve =
* Binary case:
** Y = true positive rate = sensitivity,
** X = false positive rate = 1 - specificity = 假陽性率
* Area under the curve (AUC), from Wikipedia: the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
::<math> A = \int_{\infty}^{-\infty} \text{TPR}(T) \, \text{FPR}'(T) \, dT = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(T' > T) f_1(T') f_0(T) \, dT' \, dT = P(X_1 > X_0) </math>
:where <math>X_1</math> is the score for a positive instance, <math>X_0</math> is the score for a negative instance, and <math>f_1</math> and <math>f_0</math> are the probability densities of the scores in the positive and negative groups.
* Interpretation of the AUC. A small toy example (n = 12 = 4 + 8) can be used to calculate the exact probability <math>P(X_1 > X_0)</math> over all 4*8 = 32 case-control pairs.
** It is a discrimination measure which tells us how well we can classify patients into two groups: those with and those without the outcome of interest.
** Since the measure is based on ranks, it is not sensitive to systematic errors in the calibration of the quantitative tests.
** The AUC can be defined as the probability that a randomly selected case has a higher test result than a randomly selected control.
** Plot of sensitivity/specificity (y-axis) vs cutoff points of the biomarker.
** The Mann-Whitney U test statistic (or Wilcoxon or Kruskal-Wallis test statistic) is equivalent to the AUC (Mason, 2002).
** The p-value of the Mann-Whitney U test can thus safely be used to test whether the AUC differs significantly from 0.5 (the AUC of an uninformative test).
* Calculate AUC by hand. AUC is equal to the probability that a true positive is scored greater than a true negative; see the sketch after the uROC() code below.
* See the uROC() function in <functions.R> from the supplementary material of the paper (access required) ''Bivariate Marker Measurements and ROC Analysis'', Wang 2012. Let <math>n_1</math> be the number of observations from X1 and <math>n_0</math> the number from X0, where X1 and X0 are the predicted values for data from groups 1 and 0. Then <math>TP_i = P(X_1 > X_{0i}) = \sum_j I(X_{1j} > X_{0i})/n_1</math> and <math>FP_i = P(X_0 > X_{0i}) = \sum_j I(X_{0j} > X_{0i})/n_0</math>. We can draw a scatter plot or smooth.spline() of TP (y-axis) vs FP (x-axis) to obtain the ROC curve.
<pre>
uROC <- function(marker, status) {   ### ROC function for a univariate marker ###
  x <- marker
  bad <- is.na(status) | is.na(x)
  status <- status[!bad]
  x <- x[!bad]
  if (sum(bad) > 0)
    cat(paste("\n", sum(bad), "records with missing values dropped. \n"))
  no_case <- sum(status == 1)
  no_control <- sum(status == 0)
  TP <- rep(0, no_control)
  FP <- rep(0, no_control)
  for (i in 1:no_control) {
    # proportion of cases scoring above the i-th control's value
    TP[i] <- sum(x[status == 1] > x[status == 0][i]) / no_case
    # proportion of controls scoring above the i-th control's value
    FP[i] <- sum(x[status == 0] > x[status == 0][i]) / no_control
  }
  list(TP = TP, FP = FP)
}
</pre>
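The 'calculate AUC by hand' bullet above can be made concrete with a short, self-contained sketch (illustrative; not from the Wang 2012 supplement): it computes <math>P(X_1 > X_0)</math> over all case-control pairs and checks it against the equivalent rank-based (Mann-Whitney) form.
<pre>
## Illustrative sketch: AUC as P(X1 > X0) vs. the rank-based (Mann-Whitney) form
set.seed(1)
x0 <- rnorm(8)            # scores for 8 controls
x1 <- rnorm(4, mean = 1)  # scores for 4 cases
n1 <- length(x1); n0 <- length(x0)

# Proportion of case-control pairs in which the case scores higher
auc_pairs <- mean(outer(x1, x0, ">"))

# Mann-Whitney U statistic divided by n1 * n0
r <- rank(c(x1, x0))
auc_rank <- (sum(r[1:n1]) - n1 * (n1 + 1) / 2) / (n1 * n0)

c(auc_pairs, auc_rank)    # identical when there are no ties
</pre>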

== partial AUC ==

== summary ROC ==

== Weighted ROC ==

== Adjusted AUC ==

== Difficult to compute for some models ==

== Optimal threshold ==

== ROC Curve AUC for Hypothesis Testing ==

== Challenges, issues ==
Methodological conduct of prognostic prediction models developed using machine learning in oncology: a systematic review, 2022. Key issues: class imbalance, data pre-processing, and hyperparameter tuning. twitter.

== Survival data ==
'Survival Model Predictive Accuracy and ROC Curves' by Heagerty & Zheng 2005
* Recall Sensitivity = <math>P(\hat{p}_i > c | Y_i=1)</math> and Specificity = <math>P(\hat{p}_i \le c | Y_i=0)</math>, where <math>Y_i</math> is a binary outcome, <math>\hat{p}_i</math> is a prediction, and <math>c</math> is a criterion for classifying the prediction as positive (<math>\hat{p}_i > c</math>) or negative (<math>\hat{p}_i \le c</math>).
* For survival data, we need to use a fixed time/horizon (t) to classify the data as either a case or a control. Following Heagerty and Zheng's definition in ''Survival Model Predictive Accuracy and ROC Curves'' (incident/dynamic) 2005, Sensitivity(c, t) = <math>P(M_i > c | T_i = t)</math> and Specificity(c, t) = <math>P(M_i \le c | T_i > t)</math>, where M is a marker value or <math>Z^T \beta</math>. Here sensitivity measures the expected fraction of subjects with a marker greater than c among the subpopulation of individuals who die at time t, while specificity measures the fraction of subjects with a marker less than or equal to c among those who survive beyond time t. A time-dependent ROC sketch is given after this list.
* The AUC measures the probability that the marker value for a randomly selected case exceeds the marker value for a randomly selected control.
* ROC curves are useful for comparing the discriminatory capacity of different potential biomarkers.
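A minimal sketch of a time-dependent ROC curve at a fixed horizon t, assuming the survivalROC package and its bundled mayo data (this uses the cumulative/dynamic definition, not Heagerty & Zheng's incident/dynamic estimator):
<pre>
library(survivalROC)
data(mayo)   # Mayo PBC data bundled with the package
fit <- survivalROC(Stime = mayo$time, status = mayo$censor,
                   marker = mayo$mayoscore5,
                   predict.time = 365, method = "KM")
fit$AUC      # time-dependent AUC at t = 365 days
plot(fit$FP, fit$TP, type = "l",
     xlab = "1 - Specificity(c, t)", ylab = "Sensitivity(c, t)")
abline(0, 1, lty = 2)
</pre>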


= Precision recall curve =
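A minimal, package-free sketch of how the curve is traced out (illustrative; simulated scores rather than a specific dataset): precision and recall are computed at every cutoff of the score.
<pre>
## Illustrative: precision-recall pairs over all cutoffs of a score
set.seed(1)
x0 <- rnorm(50); x1 <- rnorm(30, mean = 1)    # scores for negatives / positives
score <- c(x1, x0); truth <- rep(1:0, c(30, 50))
cuts  <- sort(unique(score), decreasing = TRUE)
prec <- sapply(cuts, function(c) sum(truth == 1 & score >= c) / sum(score >= c))
rec  <- sapply(cuts, function(c) sum(truth == 1 & score >= c) / sum(truth == 1))
plot(rec, prec, type = "s", xlab = "Recall", ylab = "Precision")
</pre>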

= Incidence, Prevalence =

https://www.health.ny.gov/diseases/chronic/basicstat.htm

= Jaccard index =

= Calculate area under curve by hand (using trapezoid), relation to concordance measure and the Wilcoxon–Mann–Whitney test =
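A small self-contained sketch of the equivalence (illustrative; simulated scores): the trapezoid-rule area over the empirical ROC points equals the concordance (Wilcoxon-Mann-Whitney) probability.
<pre>
set.seed(1)
x0 <- rnorm(50); x1 <- rnorm(30, mean = 1)    # control / case scores
cuts <- sort(c(-Inf, x0, x1, Inf), decreasing = TRUE)
tpr <- sapply(cuts, function(c) mean(x1 > c))
fpr <- sapply(cuts, function(c) mean(x0 > c))
# Trapezoid rule over the ROC points (fpr increases as the cutoff decreases)
auc_trap <- sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1)) / 2)
auc_wmw  <- mean(outer(x1, x0, ">"))          # concordance form
c(auc_trap, auc_wmw)                          # agree (up to tie handling)
</pre>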

= genefilter package and rowpAUCs function =
* rowpAUCs function in the genefilter package. The aim is to find potential biomarkers whose expression level is able to distinguish between two groups.
<pre>
# source("http://www.bioconductor.org/biocLite.R")
# biocLite("genefilter")
# (biocLite is deprecated; on current Bioconductor use BiocManager::install("genefilter"))
library(Biobase)  # provides the sample.ExpressionSet example data
data(sample.ExpressionSet)

library(genefilter)
r2 <- rowpAUCs(sample.ExpressionSet, "sex", p = 0.1)
plot(r2[1]) # first gene, asking specificity = .9

r2 <- rowpAUCs(sample.ExpressionSet, "sex", p = 1.0)
plot(r2[1]) # it won't show pAUC

r2 <- rowpAUCs(sample.ExpressionSet, "sex", p = .999)
plot(r2[1]) # pAUC is very close to AUC now
</pre>

= Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction =

= Performance evaluation =

= Youden's index/Youden's J statistic =
* Youden's index (J = sensitivity + specificity - 1); see the sketch below.
* rocbc package. Provides inferences and comparisons around the AUC, the Youden index, the sensitivity at a given specificity level (and vice versa), the optimal operating point of the ROC curve (in the Youden sense), and the Youden-based cutoff.
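A minimal sketch of picking the Youden-optimal cutoff by hand (illustrative; simulated marker values rather than a specific package):
<pre>
set.seed(1)
x0 <- rnorm(50); x1 <- rnorm(30, mean = 1)    # marker values: controls / cases
cuts <- sort(c(x0, x1))
sens <- sapply(cuts, function(c) mean(x1 > c))
spec <- sapply(cuts, function(c) mean(x0 <= c))
J <- sens + spec - 1                          # Youden's J at each cutoff
cuts[which.max(J)]                            # Youden-optimal cutoff
max(J)
# pROC offers the same via coords(roc_obj, "best", best.method = "youden")
</pre>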

= Some R packages =

== pROC ==

* https://cran.r-project.org/web/packages/pROC/index.html
* pROC: display and analyze ROC curves in R and S+
* ROC Curve and AUC in Machine learning and R pROC Package. Pay attention to the legacy.axes parameter.
<pre>
library(pROC)
data(aSAH)
str(aSAH[, c('outcome', "s100b")])
# 'data.frame':	113 obs. of  2 variables:
#  $ outcome: Factor w/ 2 levels "Good","Poor": 1 1 1 1 2 2 1 2 1 1 ...
#  $ s100b  : num  0.13 0.14 0.1 0.04 0.13 0.1 0.47 0.16 0.18 0.1 ...
roc.s100b <- roc(aSAH$outcome, aSAH$s100b)

roc(aSAH$outcome, aSAH$s100b,
    plot=TRUE,
    auc=TRUE,  # already the default
    col="green",
    lwd=4,
    legacy.axes=TRUE,
    main="ROC Curves")
# Data: aSAH$s100b in 72 controls (aSAH$outcome Good) < 41 cases (aSAH$outcome Poor).
# Area under the curve: 0.7314

auc(roc.s100b)
# Area under the curve: 0.7314

auc(aSAH$outcome, aSAH$s100b)
# Area under the curve: 0.7314
</pre>
* Note: in pROC::roc() or auc(), the response/truth is the 1st argument, while in caret::confusionMatrix() the truth/reference is the 2nd argument.
* Flipping the outcome labels does not affect the AUC, because roc() chooses the direction of the comparison automatically (see the direction argument and the "<" vs ">" in the printed output) unless it is specified explicitly.
<pre>
aSAH$outcome2 <- factor(ifelse(aSAH$outcome == "Good", "Poor", "Good"),
                        levels=c("Good", "Poor"))
roc(aSAH$outcome2, aSAH$s100b)
# Data: aSAH$s100b in 41 controls (aSAH$outcome2 Good) > 72 cases (aSAH$outcome2 Poor).
# Area under the curve: 0.7314
</pre>

= Cross-validation ROC =
* mean ROC curve
* ROC with cross-validation for linear regression in R

= Comparison of two AUCs =
* Statistical Assessments of AUC. This uses the pROC::roc.test() function; see the sketch below.
* prioritylasso. It uses roc(), auc(), roc.test(), and plot.roc() from the pROC package. The AUC calculated on the training data is optimistically biased, so the one based on test data should be reported.
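For instance, a minimal pROC sketch (assuming the aSAH example data shipped with pROC) comparing two markers on the same subjects with DeLong's test:
<pre>
library(pROC)
data(aSAH)
roc1 <- roc(aSAH$outcome, aSAH$s100b)
roc2 <- roc(aSAH$outcome, aSAH$ndka)
roc.test(roc1, roc2, method = "delong")   # DeLong's test for two (paired) ROC curves
</pre>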

= Assess risk of bias =
PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration, 2019. http://www.probast.org/

= Confidence interval of AUC =
How to get an AUC confidence interval. The pROC package was used; see the sketch below.
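A minimal sketch with pROC (assuming the aSAH example data again):
<pre>
library(pROC)
data(aSAH)
r <- roc(aSAH$outcome, aSAH$s100b)
ci.auc(r)                                        # DeLong-based CI (default)
ci.auc(r, method = "bootstrap", boot.n = 2000)   # bootstrap alternative
</pre>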

= DeLong test for comparing two ROC curves =

= AUC can be a misleading measure of performance =
AUC can be high while precision is low (i.e. the FDR is high). https://twitter.com/michaelhoffman/status/1398380674206285830?s=09

= Caveats and pitfalls of ROC analysis in clinical microarray research =
Caveats and pitfalls of ROC analysis in clinical microarray research (and how to avoid them), Berrar 2011

= Limitation in clinical data =

= Sample size =

= Picking a threshold based on model performance/utility =
Squeezing the Most Utility from Your Models

= Why does my ROC curve look like a V =
https://stackoverflow.com/a/42917384

= Unbalanced classes =

== Weights ==

== Metric ==

= Class comparison problem =
* compcodeR: RNAseq data simulation, differential expression analysis and performance comparison of differential expression methods
* Polyester: simulating RNA-seq datasets with differential transcript expression, github, HTML

= Reporting =

= Applications =

= Lessons =
* Unbalanced data: kNN or nearest centroid is better than the traditional methods
* Small sample size and large number of predictors: t-test can select predictors while lasso cannot