
ROC curve

  • Binary case:
    • Y = true positive rate = sensitivity,
    • X = false positive rate = 1-specificity = 假陽性率
  • Area under the curve (AUC), from Wikipedia: the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
[math]\displaystyle{ A = \int_{\infty}^{-\infty} \mbox{TPR}(T) \mbox{FPR}'(T) \, dT = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(T'\gt T)f_1(T') f_0(T) \, dT' \, dT = P(X_1 \gt X_0) }[/math]

where [math]\displaystyle{ X_1 }[/math] is the score for a positive instance, [math]\displaystyle{ X_0 }[/math] is the score for a negative instance, and [math]\displaystyle{ f_1 }[/math] and [math]\displaystyle{ f_0 }[/math] are the corresponding score densities for positive and negative instances.
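A quick empirical check of this interpretation in base R (made-up scores): the fraction of (positive, negative) pairs in which the positive instance scores higher estimates [math]\displaystyle{ P(X_1 \gt X_0) }[/math].

# empirical AUC as P(X1 > X0), using made-up scores
set.seed(1)
x1 <- rnorm(300, mean = 1)   # scores of positive instances
x0 <- rnorm(300, mean = 0)   # scores of negative instances

# proportion of (positive, negative) pairs ranked correctly (no ties here)
mean(outer(x1, x0, ">"))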

Survival data

'Survival Model Predictive Accuracy and ROC Curves' by Heagerty & Zheng 2005

  • Recall Sensitivity = [math]\displaystyle{ P(\hat{p}_i \gt c | Y_i=1) }[/math] and Specificity = [math]\displaystyle{ P(\hat{p}_i \le c | Y_i=0) }[/math], where [math]\displaystyle{ Y_i }[/math] is a binary outcome, [math]\displaystyle{ \hat{p}_i }[/math] is a prediction, and [math]\displaystyle{ c }[/math] is the cutoff for classifying a prediction as positive ([math]\displaystyle{ \hat{p}_i \gt c }[/math]) or negative ([math]\displaystyle{ \hat{p}_i \le c }[/math]).
  • For survival data, we need a fixed time/horizon (t) to classify subjects as either cases or controls. Following Heagerty and Zheng's incident/dynamic definition, Sensitivity(c, t) = [math]\displaystyle{ P(M_i \gt c | T_i = t) }[/math] and Specificity(c, t) = [math]\displaystyle{ P(M_i \le c | T_i \gt t) }[/math], where M is a marker value or [math]\displaystyle{ Z^T \beta }[/math]. Here sensitivity measures the expected fraction of subjects with a marker greater than c among the subpopulation of individuals who die at time t, while specificity measures the fraction of subjects with a marker less than or equal to c among those who survive beyond time t; see the sketch after this list.
  • The AUC measures the probability that the marker value for a randomly selected case exceeds the marker value for a randomly selected control.
  • ROC curves are useful for comparing the discriminatory capacity of different potential biomarkers.
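A minimal numerical sketch of these definitions in base R, assuming complete (uncensored) follow-up so that the conditional probabilities can be estimated by simple proportions; with censoring, dedicated estimators such as those in Heagerty's survivalROC or risksetROC packages are needed. The marker, event times, horizon, and cutoff below are all made up.

# toy illustration of incident/dynamic sensitivity and specificity,
# assuming complete (uncensored) follow-up; T = event time, M = marker
set.seed(1)
n <- 5000
M <- rnorm(n)                        # marker, e.g. Z' beta from a Cox model
T <- rexp(n, rate = exp(0.8 * M))    # higher marker -> earlier event

t      <- 1       # time/horizon of interest
cutoff <- 0       # marker cutoff c

# incident sensitivity P(M > c | T = t); "T = t" approximated by a small window
eps  <- 0.05
sens <- mean(M[abs(T - t) < eps] > cutoff)

# dynamic specificity P(M <= c | T > t)
spec <- mean(M[T > t] <= cutoff)

c(sensitivity = sens, specificity = spec)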

Confusion matrix, Sensitivity/Specificity/Accuracy

{| class="wikitable"
|-
|                    ||    || colspan="2" | Predict ||
|-
|                    ||    ||  1      ||    0      ||
|-
| rowspan="2" | True || 1 ||  TP    ||    FN    || Sens=TP/(TP+FN)=Recall <br/> FNR=FN/(TP+FN)
|-
|    0              ||  FP    ||    TN    || Spec=TN/(FP+TN), 1-Spec=FPR
|-
|                    ||    ||  PPV=TP/(TP+FP) <br/> FDR=FP/(TP+FP) ||  NPV=TN/(FN+TN) ||  N = TP + FP + FN + TN
|}
  • Sensitivity 敏感度 = TP / (TP + FN) = Recall
  • Specificity 特異度 = TN / (TN + FP)
  • Accuracy = (TP + TN) / N
  • False discovery rate FDR = FP / (TP + FP)
  • False negative rate FNR = FN / (TP + FN)
  • False positive rate FPR = FP / (FP + TN)
  • True positive rate = TP / (TP + FN)
  • Positive predictive value (PPV) = TP / # positive calls = TP / (TP + FP) = 1 - FDR
  • Negative predictive value (NPV) = TN / # negative calls = TN / (FN + TN)
  • Prevalence 盛行率 = (TP + FN) / N.
  • Note that PPV & NPV can also be computed from sensitivity, specificity, and prevalence (see the sketch below):
[math]\displaystyle{ \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence}+(1-\text{specificity}) \times (1-\text{prevalence})} }[/math]
[math]\displaystyle{ \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{(1-\text{sensitivity}) \times \text{prevalence}+\text{specificity} \times (1-\text{prevalence})} }[/math]
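A short base-R check of the quantities above, with made-up counts, including the PPV/NPV formulas based on sensitivity, specificity, and prevalence.

# made-up confusion-matrix counts
TP <- 90; FN <- 10; FP <- 190; TN <- 710
N  <- TP + FP + FN + TN

sens <- TP / (TP + FN)      # sensitivity = recall = TPR
spec <- TN / (TN + FP)      # specificity; FPR = 1 - spec
acc  <- (TP + TN) / N       # accuracy
ppv  <- TP / (TP + FP)      # precision; FDR = 1 - PPV
npv  <- TN / (FN + TN)
prev <- (TP + FN) / N       # prevalence

# PPV/NPV recovered from sensitivity, specificity, and prevalence
ppv2 <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv2 <- spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))

all.equal(c(ppv, npv), c(ppv2, npv2))   # TRUE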

Precision recall curve

Incidence, Prevalence

https://www.health.ny.gov/diseases/chronic/basicstat.htm

Calculate area under curve by hand (using trapezoid), relation to concordance measure and the Wilcoxon–Mann–Whitney test
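A base-R sketch with made-up data: compute the empirical ROC points, integrate with the trapezoid rule, and compare with the Wilcoxon-Mann-Whitney U statistic, which equals the concordance probability after dividing by the number of case-control pairs.

# ROC curve and AUC "by hand" via the trapezoid rule, compared to Wilcoxon-Mann-Whitney
set.seed(1)
y <- rep(0:1, each = 50)           # 0 = control, 1 = case
x <- rnorm(100, mean = y)          # cases tend to score higher

thr <- sort(unique(x), decreasing = TRUE)
tpr <- sapply(thr, function(s) mean(x[y == 1] >= s))
fpr <- sapply(thr, function(s) mean(x[y == 0] >= s))

# trapezoid rule over the ROC points, starting from the (0, 0) corner
FPR <- c(0, fpr); TPR <- c(0, tpr)
auc_trap <- sum(diff(FPR) * (head(TPR, -1) + tail(TPR, -1)) / 2)

# Wilcoxon-Mann-Whitney: U / (n1 * n0) is the concordance probability, i.e. the AUC
U <- wilcox.test(x[y == 1], x[y == 0])$statistic
auc_wmw <- as.numeric(U) / (sum(y == 1) * sum(y == 0))

c(trapezoid = auc_trap, wilcoxon = auc_wmw)   # identical when there are no ties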

genefilter package and rowpAUCs function

  • The rowpAUCs function in the genefilter package. The aim is to find potential biomarkers whose expression level can distinguish between two groups.
# old installation route (biocLite is deprecated; current Bioconductor
# releases use BiocManager::install("genefilter")):
# source("http://www.bioconductor.org/biocLite.R")
# biocLite("genefilter")
library(Biobase)  # provides the sample.ExpressionSet example data
data(sample.ExpressionSet)

library(genefilter)
# p = 0.1 restricts the partial AUC to the region with FPR <= 0.1 (specificity >= 0.9)
r2 = rowpAUCs(sample.ExpressionSet, "sex", p=0.1)
plot(r2[1]) # first feature; pAUC at specificity >= .9

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=1.0)
plot(r2[1]) # with p = 1 the pAUC is just the full AUC, so no separate pAUC region is shown

r2 = rowpAUCs(sample.ExpressionSet, "sex", p=.999)
plot(r2[1]) # the pAUC is now very close to the full AUC

Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction

http://circ.ahajournals.org/content/115/7/928

Performance evaluation

Some R packages

Comparison of two AUCs

  • Statistical Assessments of AUC. This uses the pROC::roc.test function.
  • prioritylasso. It uses roc(), auc(), roc.test(), and plot.roc() from the pROC package. The AUC computed on the training data is optimistically biased, so the estimate based on test data should be reported; see the sketch after this list.
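A minimal sketch of comparing two AUCs on test data with pROC (the outcome and the two sets of predictions below are made up; roc(), auc(), and roc.test() are pROC functions):

library(pROC)
set.seed(1)
y  <- rbinom(200, 1, 0.5)        # observed outcome on the test set
m1 <- y + rnorm(200)             # predictions from model 1
m2 <- y + rnorm(200, sd = 2)     # predictions from model 2 (noisier)

roc1 <- roc(y, m1)
roc2 <- roc(y, m2)
auc(roc1); auc(roc2)

# DeLong test for a difference between two correlated (paired) AUCs
roc.test(roc1, roc2, method = "delong", paired = TRUE)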

Confidence interval of AUC

How to get an AUC confidence interval; the pROC package was used.
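For example (made-up data; ci.auc() is from the pROC package):

library(pROC)
set.seed(1)
y     <- rbinom(200, 1, 0.5)
score <- y + rnorm(200)

r <- roc(y, score)
ci.auc(r)                         # DeLong-based 95% CI (the default)
ci.auc(r, method = "bootstrap")   # bootstrap alternative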

AUC can be a misleading measure of performance

The AUC can be high while precision is low (i.e. the FDR is high); see https://twitter.com/michaelhoffman/status/1398380674206285830?s=09.
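A small simulation of this phenomenon, assuming a rare outcome (about 1% prevalence): the ranking is good, so the AUC is high, but most positive calls at a fixed cutoff are false positives.

# high AUC but low precision under strong class imbalance (made-up data)
set.seed(1)
n <- 10000
y <- rbinom(n, 1, 0.01)                  # rare positives (~1% prevalence)
x <- 2 * y + rnorm(n)                    # scores separate the classes fairly well

auc <- mean(outer(x[y == 1], x[y == 0], ">"))   # concordance estimate of AUC

pred      <- as.integer(x > 1)           # positive calls above an arbitrary cutoff
tp        <- sum(pred == 1 & y == 1)
fp        <- sum(pred == 1 & y == 0)
precision <- tp / (tp + fp)              # PPV; FDR = 1 - precision

c(AUC = auc, precision = precision)      # AUC is high even though precision is low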

Picking a threshold based on model performance/utility

Squeezing the Most Utility from Your Models

Unbalanced classes

Statistics -> Imbalanced/unbalanced Classification. ROC curves are especially useful for unbalanced data, where the default 0.5 probability threshold may not be appropriate.

Class comparison problem

  • compcodeR: RNAseq data simulation, differential expression analysis and performance comparison of differential expression methods
  • Polyester: simulating RNA-seq datasets with differential transcript expression, github, HTML