Survival data


Calculating survival time in R

Base R and lubridate methods, by Emily C. Zabor.

Censoring

Sample schemes of incomplete data, Censoring in Clinical Trials: Review of Survival Analysis Techniques

  • Type I censoring: the censoring time is fixed
  • Type II censoring
  • Random censoring
    • Right censoring
    • Left censoring
  • Interval censoring
  • Truncation

The most common type is right censoring, which occurs when a participant does not have the event of interest during the study, so their last observed follow-up time is less than their time to event. This can happen when a participant drops out before the study ends or is event-free at the end of the observation period.

Definitions of common terms in survival analysis

  • Event: Death, disease occurrence, disease recurrence, recovery, or other experience of interest
  • Time: The time from the beginning of an observation period (such as surgery or beginning treatment) to (i) an event, or (ii) end of the study, or (iii) loss of contact or withdrawal from the study.
  • Censoring / Censored observation: If a subject does not have an event during the observation time, they are described as censored. The subject is censored in the sense that nothing is observed or known about that subject after the time of censoring. A censored subject may or may not have an event after the end of observation time.

In R, "status" should be called event status. status = 1 means event occurred. status = 0 means no event (censored). Sometimes the status variable has more than 2 states. We can uses "status != 0" to replace "status" in Surv() function.

  • status=0/1/2 for censored, transplant and dead in survival::pbc data.
  • status=0/1/2 for censored, relapse and dead in randomForestSRC::follic data.
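
A minimal sketch of this with the survival::pbc data (status 0/1/2), treating both transplant and death as events:

library(survival)
table(pbc$status)                        # 0 = censored, 1 = transplant, 2 = dead
head(Surv(pbc$time, pbc$status != 0))    # "+" marks censored (status == 0) subjects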

How to explore survival data

https://en.wikipedia.org/wiki/Survival_analysis#Survival_analysis_in_R

  • Create graph of length of time that each subject was in the study
library(survival)
# sort the aml data by time
aml <- aml[order(aml$time),]
with(aml, plot(time, type="h"))

File:Aml time.svg

  • Create the life table survival object
aml.survfit <- survfit(Surv(time, status == 1) ~ 1, data = aml)
summary(aml.survfit)
Call: survfit(formula = Surv(time, status == 1) ~ 1, data = aml)

 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5     23       2   0.9130  0.0588       0.8049        1.000
    8     21       2   0.8261  0.0790       0.6848        0.996
    9     19       1   0.7826  0.0860       0.6310        0.971
   12     18       1   0.7391  0.0916       0.5798        0.942
   13     17       1   0.6957  0.0959       0.5309        0.912
   18     14       1   0.6460  0.1011       0.4753        0.878
   23     13       2   0.5466  0.1073       0.3721        0.803
   27     11       1   0.4969  0.1084       0.3240        0.762
   30      9       1   0.4417  0.1095       0.2717        0.718
   31      8       1   0.3865  0.1089       0.2225        0.671
   33      7       1   0.3313  0.1064       0.1765        0.622
   34      6       1   0.2761  0.1020       0.1338        0.569
   43      5       1   0.2208  0.0954       0.0947        0.515
   45      4       1   0.1656  0.0860       0.0598        0.458
   48      2       1   0.0828  0.0727       0.0148        0.462
  • Kaplan-Meier curve for aml with the confidence bounds.
plot(aml.survfit, xlab = "Time", ylab="Proportion surviving")
  • Create aml life tables broken out by treatment (x, "Maintained" vs. "Not maintained")
surv.by.aml.rx <- survfit(Surv(time, status == 1) ~ x, data = aml)

summary(surv.by.aml.rx)
Call: survfit(formula = Surv(time, status == 1) ~ x, data = aml)

                x=Maintained 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    9     11       1    0.909  0.0867       0.7541        1.000
   13     10       1    0.818  0.1163       0.6192        1.000
   18      8       1    0.716  0.1397       0.4884        1.000
   23      7       1    0.614  0.1526       0.3769        0.999
   31      5       1    0.491  0.1642       0.2549        0.946
   34      4       1    0.368  0.1627       0.1549        0.875
   48      2       1    0.184  0.1535       0.0359        0.944

                x=Nonmaintained 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5     12       2   0.8333  0.1076       0.6470        1.000
    8     10       2   0.6667  0.1361       0.4468        0.995
   12      8       1   0.5833  0.1423       0.3616        0.941
   23      6       1   0.4861  0.1481       0.2675        0.883
   27      5       1   0.3889  0.1470       0.1854        0.816
   30      4       1   0.2917  0.1387       0.1148        0.741
   33      3       1   0.1944  0.1219       0.0569        0.664
   43      2       1   0.0972  0.0919       0.0153        0.620
   45      1       1   0.0000     NaN           NA           NA
  • Plot KM plot broken out by treatment
plot(surv.by.aml.rx, xlab = "Time", ylab="Survival",
     col=c("black", "red"), lty = 1:2, 
     main="Kaplan-Meier Survival vs. Maintenance in AML")
legend(100, .6, c("Maintained", "Not maintained"), 
     lty = 1:2, col=c("black", "red"))
  • Perform the log rank test using the R function survdiff().
surv.diff.aml <- survdiff(Surv(time, status == 1) ~ x, data=aml)
surv.diff.aml

Call:
survdiff(formula = Surv(time, status == 1) ~ x, data = aml)

                 N Observed Expected (O-E)^2/E (O-E)^2/V
x=Maintained    11        7    10.69      1.27       3.4
x=Nonmaintained 12       11     7.31      1.86       3.4

 Chisq= 3.4  on 1 degrees of freedom, p= 0.07

Summary statistics

  • Kaplan-Meier Method and Log-Rank Test
  • Statistics
    • Table of status vs treatment (with proportion)
    • Table of treatment vs training/test
  • Life table
    • summary(survfit(Surv(time, status) ~ 1)) or summary(survfit(Surv(time, status) ~ treatment))
    • KMsurv::lifetab()
  • Coxph function and visualize them using the ggforest package

Some public data

Package: data sets (sample size)

  • survival: pbc (418), ovarian (26), aml/leukemia (23), colon (1858), lung (228), veteran (137)
  • pec: GBSG2 (686), cost (518)
  • randomForestSRC: follic (541)
  • KMsurv: many data sets, e.g. tongue (80)
  • survivalROC: mayo (312)
  • survAUC: NA

Kaplan & Meier and Nelson-Aalen: survfit.formula(), Surv()

  • Landmarks
    • Kaplan-Meier: 1958
    • Nelson: 1969
  • Cox and Breslow: 1972 S(t) = exp(-Lambda(t))
    • Aalen: 1978 Lambda(t)
  • D distinct times [math]\displaystyle{ t_1 \lt t_2 \lt \cdots \lt t_D }[/math]. At time [math]\displaystyle{ t_i }[/math] there are [math]\displaystyle{ d_i }[/math] events. Let [math]\displaystyle{ Y_i }[/math] be the number of individuals who are at risk at time [math]\displaystyle{ t_i }[/math]. The quantity [math]\displaystyle{ d_i/Y_i }[/math] provides an estimate of the conditional probability that an individual who survives to just prior to time [math]\displaystyle{ t_i }[/math] experiences the event at time [math]\displaystyle{ t_i }[/math]. The KM estimator of the survival function and the Nelson-Aalen estimator of the cumulative hazard (their relationship is given below) are defined as follows (for [math]\displaystyle{ t_1 \le t }[/math]):
[math]\displaystyle{ \begin{align} \hat{S}(t) &= \prod_{t_i \le t} [1 - d_i/Y_i] \\ \hat{H}(t) &= \sum_{t_i \le t} d_i/Y_i \end{align} }[/math]
str(kidney)
'data.frame':	76 obs. of  7 variables:
$ id     : num  1 1 2 2 3 3 4 4 5 5 ...
$ time   : num  8 16 23 13 22 28 447 318 30 12 ...
$ status : num  1 1 1 0 1 1 1 1 1 1 ...
$ age    : num  28 28 48 48 32 32 31 32 10 10 ...
$ sex    : num  1 1 2 2 1 1 2 2 1 1 ...
$ disease: Factor w/ 4 levels "Other","GN","AN",..: 1 1 2 2 1 1 1 1 1 1 ...
$ frail  : num  2.3 2.3 1.9 1.9 1.2 1.2 0.5 0.5 1.5 1.5 ...
kidney[order(kidney$time), c("time", "status")]
kidney[kidney$time == 13, ] # one is dead and the other is alive
length(unique(kidney$time)) # 60

sfit <- survfit(Surv(time, status) ~ 1, data = kidney)

sfit
Call: survfit(formula = Surv(time, status) ~ 1, data = kidney)

      n  events  median 0.95LCL 0.95UCL 
     76      58      78      39     152 

str(sfit)
List of 13
$ n        : int 76
$ time     : num [1:60] 2 4 5 6 7 8 9 12 13 15 ...
$ n.risk   : num [1:60] 76 75 74 72 71 69 65 64 62 60 ...
$ n.event  : num [1:60] 1 0 0 0 2 2 1 2 1 2 ...
$ n.censor : num [1:60] 0 1 2 1 0 2 0 0 1 0 ...
$ surv     : num [1:60] 0.987 0.987 0.987 0.987 0.959 ...
$ type     : chr "right"
length(unique(kidney$time))  # [1] 60
all(sapply(sfit$time, function(tt) sum(kidney$time >= tt)) == sfit$n.risk) # TRUE
all(sapply(sfit$time, function(tt) sum(kidney$status[kidney$time == tt])) == sfit$n.event) # TRUE
all(sapply(sfit$time, function(tt) sum(1-kidney$status[kidney$time == tt])) == sfit$n.censor) #  TRUE
all(cumprod(1 - sfit$n.event/sfit$n.risk) == sfit$surv) #  FALSE
range(abs(cumprod(1 - sfit$n.event/sfit$n.risk) - sfit$surv))
# [1] 0.000000e+00 1.387779e-17

summary(sfit)
time n.risk n.event survival std.err lower 95% CI upper 95% CI
    2     76       1    0.987  0.0131      0.96155        1.000
    7     71       2    0.959  0.0232      0.91469        1.000
    8     69       2    0.931  0.0297      0.87484        0.991
 ...
  511      3       1    0.042  0.0288      0.01095        0.161
  536      2       1    0.021  0.0207      0.00305        0.145
  562      1       1    0.000     NaN           NA           NA
  • Understanding survival analysis: Kaplan-Meier estimate
  • Note that the KM estimate is a right-continuous step function, with the intervals closed at the left and open at the right. For [math]\displaystyle{ t \in [t_j, t_{j+1}) }[/math] for a certain j, we have [math]\displaystyle{ \hat{S}(t) = \prod_{i=1}^j (1-d_i/n_i) }[/math] where [math]\displaystyle{ d_i }[/math] is the number of people who have an event during the interval [math]\displaystyle{ [t_i, t_{i+1}) }[/math] and [math]\displaystyle{ n_i }[/math] is the number of people at risk just before the beginning of the interval [math]\displaystyle{ [t_i, t_{i+1}) }[/math].
  • The product-limit estimator can be constructed by using a reduced-sample approach. We can estimate the [math]\displaystyle{ P(T \gt t_i | T \ge t_i) = \frac{Y_i - d_i}{Y_i} }[/math] for [math]\displaystyle{ i=1,2,\cdots,D }[/math]. [math]\displaystyle{ S(t_i) = \frac{S(t_i)}{S(t_{i-1})} \frac{S(t_{i-1})}{S(t_{i-2})} \cdots \frac{S(t_2)}{S(t_1)} \frac{S(t_1)}{S(0)} S(0) = P(T \gt t_i | T \ge t_i) P(T \gt t_{i-1} | T \ge t_{i-1}) \cdots P(T\gt t_2|T \ge t_2) P(T\gt t_1 | T \ge t_1) }[/math] because S(0)=1 and, for a discrete distribution, [math]\displaystyle{ S(t_{i-1}) = P(T \gt t_{i-1}) = P(T \ge t_i) }[/math].
  • Self consistency. If we had no censored observations, the estimator of the survival function at a time t is the proportion of observations which are larger than t, that is, [math]\displaystyle{ \hat{S}(t) = \frac{1}{n}\sum I(X_i \gt t) }[/math].
  • Curves are plotted in the same order as they are listed by print (which gives a 1-line summary of each). For example, -1 < 1 and 'Maintained' < 'Nonmaintained'. That means the labels listed in the legend() command should be in the same order as the curves.
  • Kaplan and Meier is used to give an estimator of the survival function S(t)
  • Nelson-Aalen estimator is for the cumulative hazard H(t). Note that [math]\displaystyle{ 0 \le H(t) \lt \infty }[/math] and [math]\displaystyle{ H(t) \rightarrow \infty }[/math] as t goes to infinity. So there is a constraint on the hazard function, see Wikipedia.

Note that S(t) is related to H(t) by [math]\displaystyle{ H(t) = -ln[S(t)] }[/math] or [math]\displaystyle{ S(t) = exp[-H(t)] }[/math]. The two estimators are similar (see example 4.1A and 4.1B from Klein and Moeschberger).
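
A quick numerical check of this relationship (a sketch, using the aml data from the survival package): the Nelson-Aalen estimate is the cumulative sum of d_i/Y_i, and -log of the KM estimate stays close to it.

library(survival)
sf <- survfit(Surv(time, status) ~ 1, data = aml)
head(cbind(time = sf$time,
           neg.log.KM   = -log(sf$surv),
           Nelson.Aalen = cumsum(sf$n.event / sf$n.risk)))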

The Nelson-Aalen estimator has two primary uses in analyzing data

  1. Selecting between parametric models for the time to event
  2. Crude estimates of the hazard rate h(t). This is related to the estimation of the survival function in the Cox model. See 8.6 of Klein and Moeschberger.

The Kaplan–Meier estimator (the product limit estimator) is an estimator for estimating the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment.

Note that

  • The "+" sign in the KM curves means censored observations (this convention matches with the output of Surv() function) and a long vertical line (not '+') means there is a dead observation at that time.
> aml[1:5,]
  time status          x
1    9      1 Maintained
2   13      1 Maintained
3   13      0 Maintained
4   18      1 Maintained
5   23      1 Maintained
> Surv(aml$time, aml$status)[1:5,]
[1]  9  13  13+ 18  23 
  • If the last observation (longest survival time) is a death, the survival curve will go down to zero. Otherwise, the survival curve will remain flat after the last event time.

Usually the KM curve of the treatment group is higher than that of the control group.

The Y-axis (the probability that a member from a given population will have a lifetime exceeding time) is often called

  • Cumulative probability
  • Cumulative survival
  • Percent survival
  • Probability without event
  • Proportion alive/surviving
  • Survival
  • Survival probability

File:KMcurve.png, File:KMannotation.png, File:KMcurve cumhaz.png

> library(survival)
> str(aml$x)
 Factor w/ 2 levels "Maintained","Nonmaintained": 1 1 1 1 1 1 1 1 1 1 ...
> plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = TRUE) # a (small) subset, mark.time is used to show censored obs
> aml[7:17,]
   time status             x
7    31      1    Maintained
8    34      1    Maintained
9    45      0    Maintained
10   48      1    Maintained
11  161      0    Maintained
12    5      1 Nonmaintained
13    5      1 Nonmaintained
14    8      1 Nonmaintained
15    8      1 Nonmaintained
16   12      1 Nonmaintained
17   16      0 Nonmaintained
> legend(100, .9, c("Maintenance", "No Maintenance"), lty = 2:3) # lty: 2=dashed, 3=dotted
> title("Kaplan-Meier Curves\nfor AML Maintenance Study") 

# Cumulative hazard plot
# Lambda(t) = -log(S(t)); 
# see https://en.wikipedia.org/wiki/Survival_analysis
# http://statweb.stanford.edu/~olshen/hrp262spring01/spring01Handouts/Phil_doc.pdf
plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = T, fun="cumhaz", ylab="Cumulative Hazard")
# https://www.lexjansen.com/pharmasug/2011/CC/PharmaSUG-2011-CC16.pdf
mydata <- data.frame(time=c(3,6,8,12,12,21),status=c(1,1,0,1,1,1))
km <- survfit(Surv(time, status)~1, data=mydata)
plot(km, mark.time = T)
survest <- stepfun(km$time, c(1, km$surv))
plot(survest)
> str(km)
List of 13
 $ n        : int 6
 $ time     : num [1:5] 3 6 8 12 21
 $ n.risk   : num [1:5] 6 5 4 3 1
 $ n.event  : num [1:5] 1 1 0 2 1
 $ n.censor : num [1:5] 0 0 1 0 0
 $ surv     : num [1:5] 0.833 0.667 0.667 0.222 0
 $ type     : chr "right"
 $ std.err  : num [1:5] 0.183 0.289 0.289 0.866 Inf
 $ upper    : num [1:5] 1 1 1 1 NA
 $ lower    : num [1:5] 0.5827 0.3786 0.3786 0.0407 NA
 $ conf.type: chr "log"
 $ conf.int : num 0.95
> class(survest)
[1] "stepfun"  "function"
> survest
Step function
Call: stepfun(km$time, c(1, km$surv))
 x[1:5] =      3,      6,      8,     12,     21
6 plateau levels =      1, 0.83333, 0.66667,  ..., 0.22222,      0
> str(survest)
function (v)  
 - attr(*, "class")= chr [1:2] "stepfun" "function"
 - attr(*, "call")= language stepfun(km$time, c(1, km$surv))

File:Kmcurve_toy.svg

Multiple curves

Curves/groups are ordered. The first color in the palette is used to color the first level of the factor variable. This is the same idea as ggsurvplot in the survminer package. This affects parameters like col and lty in the plot() function. For example,

  • 1<2
  • 'c' < 't'
  • 'control' < 'treatment'
  • 'Control' < 'Treatment'
  • 'female' < 'male'.

For legend(), the first category in legend argument will appear at the top of the legend box.

library(RColorBrewer)
library(survival)
set1 = c(brewer.pal(9,"Set1"), brewer.pal(8, "Dark2"))

fit <- survfit(Surv(futime, fustat) ~ cut(age, quantile(age, seq(0,1,l=4)), include.lowest = TRUE), data = ovarian)
plot(fit, col = set1[3:1])
par(xpd=TRUE)
legend(x=800, y=1.1, bty="n", "Risk", cex=0.9, text.font=2)
legend(x=800, y=1.0, bty="n", text.col = set1[3:1], c("Low","Intermediate","High"), cex=0.9)

Estimating x-year survival

Survival Analysis in R. See the explanation there of why the “naive” estimate is wrong.

plot(survfit(Surv(time, status) ~ 1, data = lung))
summary(survfit(Surv(time, status) ~ 1, data = lung), times = c(200, 400, 600))
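
To pull out just the x-year estimates and their confidence limits (a sketch; summary.survfit() returns surv, lower and upper components):

library(survival)
s <- summary(survfit(Surv(time, status) ~ 1, data = lung), times = 365.25)
data.frame(time = s$time, surv = s$surv, lower = s$lower, upper = s$upper)   # 1-year survival and 95% CI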

Inverse Probability of Weighting

Inverse Probability of Censoring Weighted (IPCW)

The plots below show that by flipping the status variable, we can accurately recover the survival function of the censoring variable. See the R code here for superimposing the true exponential distribution on the KM plot of the censoring variable.

require(survival)
n = 10000
beta1 = 2; beta2 = -1
lambdaT = 1 # baseline hazard
lambdaC = 2  # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
# T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) # Wrong
T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))

# method 1: exponential censoring variable
C <- rweibull(n, shape=1, scale=lambdaC)   
time = pmin(T,C)  
status <- 1*(T <= C) 
mean(status)
summary(T)
summary(C)
par(mfrow=c(2,1), mar = c(3,4,2,2)+.1)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1), 
     ylab="Survival probability",
     main = 'Exponential censoring time')

# method 2: uniform censoring variable
C <- runif(n, 0, 21)
time = pmin(T,C)  
status <- 1*(T <= C) 
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1), 
     ylab="Survival probability",
     main = "Uniform censoring time")

File:Ipcw.svg

stepfun() and plot.stepfun()

Survival curves with number at risk at bottom: survminer package

R function survminer::ggsurvplot()

Paper examples

Questions:

  • How to remove tick mark on censored observations especially the case with a large sample size?

KMunicate

https://cran.r-project.org/web/packages/KMunicate/index.html

GGally package (ggplot object)

ggsurv() from the GGally package.

Advantage: the returned object class is c("gg", "ggplot"), while survminer::ggsurvplot() returns an object of class "ggsurvplot", "ggsurv", "list". GGally also has many more downloads than survminer.

It seems better to use order.legend = FALSE if we want the default color palette to follow the same order as the factor levels. For example

data(lung, package = "survival")
sf.sex <- survival::survfit(Surv(time, status) ~ sex, data = lung)
ggsurv(sf.sex)   # 2 = Salmon, 1 = Iris blue
                 # Colors are defined by the final survival time

ggsurv(sf.sex, order.legend = FALSE) # 1 = Salmon, 2 = Iris blue
                        # More consistent with what we expect
                        # Colors are defined by the levels

# More options
ggsurv(sf.sex, order.legend = FALSE, surv.col = scales::hue_pal()(2))

To combine multiple ggplot2 plots, use the ggpubr package; gridExtra has not been developed since 2017.

library(GGally)
library(survival)
data(lung, package = "survival")
sf.lung <- survfit(Surv(time, status) ~ sex, data = lung)
p1 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8, 
  xlab = "Time", ylab = "Survival", main = "Lower score")
p1 <- p1 + annotate("text", x=0, y=.25, hjust=0, label="zxcvb")
p2 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8, 
  xlab = "Time", ylab = "Survival", main = "High score")
p2 <- p2 + annotate("text", x=0, y=.25, hjust=0, label="asdfg")
# gridExtra::grid.arrange(p1, p2, ncol=2, nrow =1) # no common legend option
ggpubr::ggarrange(p1, p2,  common.legend = TRUE, legend = "right")
# return object class: "gg"   "ggplot"    "ggarrange"

Life table

Re-construct survival data from KM curves

reconstructKM package

Median survival and 95% CI

survfit(Surv(time, status) ~ 1, data). Note this is different from the "naive" estimate (the median survival time among patients who died).
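
A sketch with the lung data: the printed survfit object reports the median and its 95% CI, which can also be read from summary(fit)$table.

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
fit   # prints n, events, median, 0.95LCL, 0.95UCL
summary(fit)$table[c("median", "0.95LCL", "0.95UCL")]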

Alternatives to survival function plot

https://www.rdocumentation.org/packages/survival/versions/2.43-1/topics/plot.survfit The fun argument, a transformation of the survival curve

  • fun = "event" or "F": f(y) = 1-y; it calculates P(T < t). This is like a t-year risk (Blanche 2018).
  • fun = "cumhaz": cumulative hazard function (f(y) = -log(y)); it calculates H(t). See Intuition for cumulative hazard function.

Breslow estimate

Logrank/log-rank/log rank test

  • Logrank test is a hypothesis test to compare the survival distributions of two samples. The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time.

Survival curve with confidence interval

http://www.sthda.com/english/wiki/survminer-r-package-survival-data-analysis-and-visualization

Parametric models and survival function for censored data

Assume the CDF of survival time T is [math]\displaystyle{ F(\cdot) }[/math] and the CDF of the censoring time C is [math]\displaystyle{ G(\cdot) }[/math],

[math]\displaystyle{ \begin{align} P(T\gt t, \delta=1) &= \int_t^\infty (1-G(s))dF(s), \\ P(T\gt t, \delta=0) &= \int_t^\infty (1-F(s))dG(s) \end{align} }[/math]

R
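
A Monte Carlo check of the first identity (a sketch; here T ~ Exp(1), C ~ Exp(2), and delta = 1 when T <= C):

set.seed(1)
n <- 1e5; t0 <- 0.5
T <- rexp(n, 1); C <- rexp(n, 2)          # F = CDF of T, G = CDF of C
delta <- as.numeric(T <= C)
mean(T > t0 & delta == 1)                 # empirical P(T > t, delta = 1)
integrate(function(s) (1 - pexp(s, 2)) * dexp(s, 1), lower = t0, upper = Inf)$value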

Parametric models and likelihood function for uncensored data

plot.survfit()

  • Exponential. [math]\displaystyle{ T \sim Exp(\lambda) }[/math]. [math]\displaystyle{ H(t) = \lambda t. }[/math] and [math]\displaystyle{ ln(S(t)) = -H(t) = -\lambda t. }[/math]
  • Weibull. [math]\displaystyle{ T \sim W(\lambda,p). }[/math] [math]\displaystyle{ H(t) = \lambda^p t^p. }[/math] and [math]\displaystyle{ ln(-ln(S(t))) = ln(\lambda^p t^p)=const + p ln(t) }[/math].

http://www.math.ucsd.edu/~rxu/math284/slect4.pdf
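
As the Weibull bullet above suggests, ln(-ln S(t)) should be roughly linear in ln(t) under a Weibull model; a quick diagnostic sketch with the lung data:

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
plot(log(fit$time), log(-log(fit$surv)),
     xlab = "log(t)", ylab = "log(-log(S(t)))")   # roughly a straight line under a Weibull model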

See also accelerated life models where a set of covariates were used to model survival time.

Survival modeling

Accelerated life models - a direct extension of the classical linear model

http://data.princeton.edu/wws509/notes/c7.pdf and also Kalbfleish and Prentice (1980).

[math]\displaystyle{ log T_i = x_i' \beta + \epsilon_i }[/math] Therefore

  • [math]\displaystyle{ T_i = exp(x_i' \beta) T_{0i} }[/math]. So if there are two groups (x=1 and x=0), and [math]\displaystyle{ exp(\beta) = 2 }[/math], it means people in one group live twice as long as people in the other group.
  • [math]\displaystyle{ S_1(t) = S_0(t/ exp(x' \beta)) }[/math]. This explains the meaning of accelerated failure-time. Depending on the sign of [math]\displaystyle{ \beta' x }[/math], the time is either accelerated by a constant factor or degraded by a constant factor. If [math]\displaystyle{ exp(\beta)=2 }[/math], the probability that a member in group one (eg treatment) will be alive at age t is exactly the same as the probability that a member in group zero (eg control group) will be alive at age t/2.
  • The hazard function [math]\displaystyle{ \lambda_1(t) = \lambda_0(t/exp(x'\beta))/ exp(x'\beta) }[/math]. So if [math]\displaystyle{ exp(\beta)=2 }[/math], at any given age people in group one would be exposed to half the risk of people in group zero half their age.

In applications,

  • If the errors are normally distributed, then we obtain a log-normal model for the T. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.
  • If the errors have an extreme value distribution, then T has an exponential distribution. The hazard [math]\displaystyle{ \lambda }[/math] satisfies the log linear model [math]\displaystyle{ \log \lambda_i = x_i' \beta }[/math].

Proportional hazard models

Note that PH models are a type of multiplicative hazard rate model [math]\displaystyle{ h(x|Z) = h_0(x)c(\beta' Z) }[/math] where [math]\displaystyle{ c(\beta' Z) = \exp(\beta ' Z) }[/math].

Assumption: Survival curves for two strata (determined by the particular choices of values for covariates) must have hazard functions that are proportional over time (i.e. constant relative hazard over time); see Proportional hazards assumption meaning. The ratio of the hazard rates from two individuals with covariate values [math]\displaystyle{ Z }[/math] and [math]\displaystyle{ Z^* }[/math] is constant over time.

[math]\displaystyle{ \begin{align} \frac{h(t|Z)}{h(t|Z^*)} = \frac{h_0(t)\exp(\beta 'Z)}{h_0(t)\exp(\beta ' Z^*)} = \exp(\beta' (Z-Z^*)) \mbox{ independent of time} \end{align} }[/math]

Test the assumption

Cox Regression

Weibull and Exponential model to Cox model

In summary:

  • Weibull distribution (Klein) [math]\displaystyle{ h(t) = p \lambda (\lambda t)^{p-1} }[/math] and [math]\displaystyle{ S(t) = exp(-\lambda t^p) }[/math]. If p >1, then the risk increases over time. If p<1, then the risk decreases over time.
    • Note that the Weibull distribution has a different parametrization. See http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=2. [math]\displaystyle{ h(t) = \lambda^p p t^{p-1} }[/math] and [math]\displaystyle{ S(t) = exp(-(\lambda t)^p) }[/math]. R and Wikipedia also follow this parametrization, except that [math]\displaystyle{ h(t) = p t^{p-1}/\lambda^p }[/math] and [math]\displaystyle{ S(t) = exp(-(t/\lambda)^p) }[/math].
  • Exponential distribution [math]\displaystyle{ h(t) }[/math] = constant (independent of t). This is a special case of Weibull distribution (p=1).
  • The Weibull (and, as a special case, exponential) regression model is the only one that belongs to both the proportional hazards and the accelerated life families.
[math]\displaystyle{ \begin{align} \frac{h(x|Z_1)}{h(x|Z_2)} = \frac{h_0(x\exp(-\gamma' Z_1)) \exp(-\gamma ' Z_1)}{h_0(x\exp(-\gamma' Z_2)) \exp(-\gamma ' Z_2)} = \frac{(a/b)\left(\frac{x \exp(-\gamma ' Z_1)}{b}\right)^{a-1}\exp(-\gamma ' Z_1)}{(a/b)\left(\frac{x \exp(-\gamma ' Z_2)}{b}\right)^{a-1}\exp(-\gamma ' Z_2)} \quad \mbox{which is independent of time x} \end{align} }[/math]
Distribution: f(t) = h(t)*S(t); h(t); S(t); Mean

  • Exponential (Klein p37): [math]\displaystyle{ f(t) = \lambda \exp(-\lambda t) }[/math]; [math]\displaystyle{ h(t) = \lambda }[/math]; [math]\displaystyle{ S(t) = \exp(-\lambda t) }[/math]; mean [math]\displaystyle{ 1/\lambda }[/math]
  • Weibull (Klein, Bender, wikipedia): [math]\displaystyle{ f(t) = p\lambda t^{p-1}\exp(-\lambda t^p) }[/math]; [math]\displaystyle{ h(t) = p\lambda t^{p-1} }[/math]; [math]\displaystyle{ S(t) = exp(-\lambda t^p) }[/math]; mean [math]\displaystyle{ \frac{\Gamma(1+1/p)}{\lambda^{1/p}} }[/math]
  • Exponential (R): [math]\displaystyle{ f(t) = \lambda \exp(-\lambda t) }[/math] where [math]\displaystyle{ \lambda }[/math] is the rate; [math]\displaystyle{ h(t) = \lambda }[/math]; [math]\displaystyle{ S(t) = \exp(-\lambda t) }[/math]; mean [math]\displaystyle{ 1/\lambda }[/math]
  • Weibull (R, wikipedia): [math]\displaystyle{ f(t) = \frac{a}{b}\left(\frac{t}{b}\right)^{a-1} \exp(-(\frac{t}{b})^a) }[/math] where [math]\displaystyle{ a }[/math] is the shape and [math]\displaystyle{ b }[/math] is the scale; [math]\displaystyle{ h(t) = \frac{a}{b}\left(\frac{t}{b}\right)^{a-1} }[/math]; [math]\displaystyle{ S(t) = \exp(-(\frac{t}{b})^a) }[/math]; mean [math]\displaystyle{ b\Gamma(1+1/a) }[/math]
  • Accelerated failure-time model. Let [math]\displaystyle{ Y=\log(T)=\mu + \gamma'Z + \sigma W }[/math]. Then the survival function of [math]\displaystyle{ T }[/math] at the covariate Z,
[math]\displaystyle{ \begin{align} S_T(t|Z) &= P(T \gt t |Z) \\ &= P(Y \gt \ln t|Z) \\ &= P(\mu + \sigma W \gt \ln t-\gamma' Z | Z) \\ &= P(e^{\mu + \sigma W} \gt t\exp(-\gamma'Z) | Z) \\ &= S_0(t \exp(-\gamma'Z)). \end{align} }[/math]

where [math]\displaystyle{ S_0(t) }[/math] denote the survival function T when Z=0. Since [math]\displaystyle{ h(t) = -\partial \ln (S(t)) }[/math], the hazard function of T with a covariate value Z is related to a baseline hazard rate [math]\displaystyle{ h_0 }[/math] by (p56 Klein)

[math]\displaystyle{ \begin{align} h(t|Z) = h_0(t\exp(-\gamma' Z)) \exp(-\gamma ' Z) \end{align} }[/math]
> mean(rexp(1000)^(1/2))
[1] 0.8902948
> mean(rweibull(1000, 2, 1))
[1] 0.8856265

> mean((rweibull(1000, 2, scale=4)/4)^2)
[1] 1.008923

Graphical way to check Weibull, AFT, PH

http://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_9.pdf#page=40

Weibull is related to Extreme value distribution

Weibull distribution and bathtub

Weibull distribution and reliability

Survival Analysis – Fitting Weibull Models for Improving Device Reliability in R (simulation)

Optimisation of a Weibull survival model using Optimx()

Optimisation of a Weibull survival model using Optimx() in R

CDF follows Unif(0,1)

https://stats.stackexchange.com/questions/161635/why-is-the-cdf-of-a-sample-uniformly-distributed

Take the Exponential distribution for example

stem(pexp(rexp(1000)))
stem(pexp(rexp(10000)))

Another example comes from simulating survival times. Note that this is exactly the Bender et al 2005 approach. See also the simsurv (newer) and survsim (older) packages.

set.seed(100) 

#Define the following parameters outlined in the step: 
n = 1000 
beta_0 = 0.5
beta_1 = -1
beta_2 = 1 

b = 1.6 #This will be changed later as mentioned in Step 5 of documentation 

#Step 1
x_1<-rbinom(n, 1, 0.25)
x_2<-rbinom(n, 1, 0.7)

#Step 2 
U<-runif(n, 0,1)
T<-(-log(U)*exp(-(beta_0+beta_1*x_1+beta_2*x_2))) #Eqn (5) 

Fn <- ecdf(T) # https://stat.ethz.ch/R-manual/R-devel/library/stats/html/ecdf.html
# verify F(T) or 1-F(T) ~ U(0, 1)
hist(Fn(T))
# look at the plot of survival probability vs time
plot(T, 1 - Fn(T))

Simulate survival data

Note that status = 1 means an event (e.g. death) happened, i.e. Ti <= Ci. That is, the status variable used in R/S-Plus is the death indicator.

  • http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4
    y <- rexp(10)
    cen <- runif(10)
    status <- ifelse(cen < .7, 1, 0)
    
  • Inference on Selected Subgroups in Clinical Trials [math]\displaystyle{ \lambda(t) = \lambda_0(t) e^{\beta_i D} }[/math] for subgroup i=1,2, respectively where D is the treatment indicator and [math]\displaystyle{ \lambda_0(t) }[/math] is the baseline hazard function of Weibull(1,1). The subjects fall into one of the two subgroups with probability 0.5, and the treatment assignment is also random with equal probability. The response generated from the above model is then censored randomly from the right by a censoring variable C, where log(C) follows the uniform distribution on (-1.25, 1.00). The censoring rate is about 40% across different choices of [math]\displaystyle{ \beta_i }[/math] considered in this study.
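    A sketch of the setup just described (assumptions: the Weibull(1,1) baseline hazard is just Exp(1), i.e. h0(t) = 1, and the subgroup effects beta below are illustrative values, not the paper's):
    library(survival)
    set.seed(1)
    n <- 500
    beta <- c(0.5, -0.5)                       # illustrative subgroup-specific treatment effects
    grp <- rbinom(n, 1, 0.5) + 1               # subgroup 1 or 2, each with probability 0.5
    D <- rbinom(n, 1, 0.5)                     # random treatment assignment
    T <- rexp(n, rate = exp(beta[grp] * D))    # Weibull(1,1) baseline, i.e. Exp(1)
    C <- exp(runif(n, -1.25, 1))               # log(C) ~ Unif(-1.25, 1)
    time <- pmin(T, C); status <- as.numeric(T <= C)
    mean(status == 0)                          # censoring rate, roughly 40%
    dat <- data.frame(time, status, D, grp)
    coxph(Surv(time, status) ~ D, data = dat, subset = grp == 1)
    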
  • How much power/accuracy is lost by using the Cox model instead of the Weibull model when both models are correct? [math]\displaystyle{ h(t|x)=\lambda=e^{3x+1} = h_0(t)e^{\beta x} }[/math] where [math]\displaystyle{ h_0(t)=e^1, \beta=3 }[/math].
    Note that for the exponential distribution, larger rate/[math]\displaystyle{ \lambda }[/math] corresponds to a smaller mean. This relation matches with the Cox regression where a large covariate corresponds to a smaller survival time. So the coefficient 3 in myrates in the below example has the same sign as the coefficient (2.457466 for censored data) in the output of the Cox model fitting.
    n <- 30
    x <- scale(1:n, TRUE, TRUE) # create covariates (standardized)
                                # the original example does not work on large 'n'
    myrates <- exp(3*x+1)
    set.seed(1234)
    y <- rexp(n, rate = myrates) # generates the r.v.
    cen <- rexp(n, rate = 0.5 )  #  E(cen)=1/rate
    ycen <- pmin(y, cen)
    di <- as.numeric(y <= cen)
    survreg(Surv(ycen, di)~x, dist="weibull")$coef[2]  # -3.080125
    # library(flexsurvreg); flexsurvreg(Surv(ycen, di)~x, dist="weibull")
    coxph(Surv(ycen, di)~x)$coef  # 2.457466 
    
    # no censor
    survreg(Surv(y,rep(1, n))~x,dist="weibull")$coef[2]  # -3.137603
    survreg(Surv(y,rep(1, n))~x,dist="exponential")$coef[2]  # -3.143095
    coxph(Surv(y,rep(1, n))~x)$coef  # 2.717794 
    
    # See the pdf note for the rest of code
    
  • Intercept in survreg for the exponential distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=25.
    [math]\displaystyle{ \begin{align} \lambda = exp(-intercept) \end{align} }[/math]
    > futime <- rexp(1000, 5)
    > survreg(Surv(futime,rep(1,1000))~1,dist="exponential")$coef
    (Intercept) 
      -1.618263 
    > exp(1.618263)
    [1] 5.044321
    
  • Intercept and scale in survreg for a Weibull distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=28.
    [math]\displaystyle{ \begin{align} \gamma &= 1/scale \\ \alpha &= exp(-(Intercept)*\gamma) \end{align} }[/math]
    > survreg(Surv(futime,rep(1,1000))~1,dist="weibull")
    Call:
    survreg(formula = Surv(futime, rep(1, 1000)) ~ 1, dist = "weibull")
    
    Coefficients:
    (Intercept) 
      -1.639469 
    
    Scale= 1.048049 
    
    Loglik(model)= 620.1   Loglik(intercept only)= 620.1
    n= 1000 
    
  • rsurv() function from the ipred package
  • Use the Weibull distribution to model survival data. We assume the shape is constant across subjects and allow the scale to vary across subjects. For subject [math]\displaystyle{ i }[/math] with covariate [math]\displaystyle{ X_i }[/math], [math]\displaystyle{ \log(scale_i) }[/math] = [math]\displaystyle{ \beta ' X_i }[/math]. Note that if we want the [math]\displaystyle{ \beta }[/math] sign to be consistent with the Cox model, we should use [math]\displaystyle{ \log(scale_i) }[/math] = [math]\displaystyle{ -\beta ' X_i }[/math] instead.
  • http://sas-and-r.blogspot.com/2010/03/example-730-simulate-censored-survival.html. Assuming shape=1 in the Weibull distribution, then the hazard function can be expressed as a proportional hazard model
    [math]\displaystyle{ h(t|x) = 1/scale = \frac{1}{\lambda/e^{\beta 'x}} = \frac{e^{\beta ' x}}{\lambda} = h_0(t) \exp(\beta' x) }[/math]
    n = 10000
    beta1 = 2; beta2 = -1
    lambdaT = .002 # baseline hazard
    lambdaC = .004  # hazard of censoring
    set.seed(1234)
    x1 = rnorm(n,0)
    x2 = rnorm(n,0)
    # true event time
    T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) 
    # No censoring
    event2 <- rep(1, length(T))
    coxph(Surv(T, event2)~ x1 + x2)
    #        coef exp(coef) se(coef)      z      p
    # x1  1.99825   7.37613  0.01884 106.07 <2e-16
    # x2 -1.00200   0.36715  0.01267 -79.08 <2e-16
    #
    # Likelihood ratio test=15556  on 2 df, p=< 2.2e-16
    # n= 10000, number of events= 10000 
    
    # Censoring
    C = rweibull(n, shape=1, scale=lambdaC)   #censoring time
    time = pmin(T,C)  #observed time is min of censored and true
    event = time==T   # set to 1 if event is observed
    coxph(Surv(time, event)~ x1 + x2)
    #        coef exp(coef) se(coef)      z      p
    # x1  2.01039   7.46622  0.02250  89.33 <2e-16
    # x2 -0.99210   0.37080  0.01552 -63.95 <2e-16
    #
    # Likelihood ratio test=11321  on 2 df, p=< 2.2e-16
    # n= 10000, number of events= 6002
    mean(event)
    # [1] 0.6002
    
  • https://stats.stackexchange.com/a/135129 (Bender's inverse probability method). Let [math]\displaystyle{ h_0(t)=\lambda \rho t^{\rho - 1} }[/math] where shape 𝜌>0 and scale 𝜆>0. Following the inverse probability method, a realisation of 𝑇∼𝑆(⋅|𝐱) is obtained by computing [math]\displaystyle{ t = \left( - \frac{\log(v)}{\lambda \exp(x' \beta)} \right) ^ {1/\rho} }[/math] with 𝑣 a uniform variate on (0,1). Using results on transformations of random variables, one may notice that 𝑇 has a conditional Weibull distribution (given 𝐱) with shape 𝜌 and scale 𝜆exp(𝐱′𝛽).
    # N = sample size    
    # lambda = scale parameter in h0()
    # rho = shape parameter in h0()
    # beta = fixed effect parameter
    # rateC = rate parameter of the exponential distribution of censoring variable C
    
    simulWeib <- function(N, lambda, rho, beta, rateC)
    {
      # covariate --> N Bernoulli trials
      x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))
    
      # Weibull latent event times
      v <- runif(n=N)
      Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)
    
      # censoring times
      C <- rexp(n=N, rate=rateC)
    
      # follow-up times and event indicators
      time <- pmin(Tlat, C)
      status <- as.numeric(Tlat <= C)
    
      # data set
      data.frame(id=1:N,
                 time=time,
                 status=status,
                 x=x)
    }
    # Test
    set.seed(1234)
    betaHat <- rate <- rep(NA, 1e3)
    for(k in 1:1e3)
    {
      dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
      fit <- coxph(Surv(time, status) ~ x, data=dat)
      rate[k] <- mean(dat$status == 0)
      betaHat[k] <- fit$coef
    }
    mean(rate)
    # [1] 0.12287
    mean(betaHat)
    # [1] -0.6085473
    
  • Generating survival times to simulate Cox proportional hazards models Bender et al 2005 [math]\displaystyle{ T=H_0^{-1}[-\log(U) \exp(\beta' x)] }[/math] Bender2005.png, Bender2005table2.png
  • Simple example from glmnet
    library(glmnet)   # provides glmnet() and Cindex()
    set.seed(10101)
    N = 1000
    p = 30
    nzc = p/3
    x = matrix(rnorm(N * p), N, p)
    beta = rnorm(nzc)
    fx = x[, seq(nzc)] %*% beta/3
    hx = exp(fx)
    ty = rexp(N, hx)
    tcens = rbinom(n = N, prob = 0.3, size = 1)  # censoring indicator
    y = cbind(time = ty, status = 1 - tcens)  # y=Surv(ty,1-tcens) with library(survival)
    fit = glmnet(x, y, family = "cox")
    pred = predict(fit, newx = x)
    Cindex(pred, y)
    
  • A non-standard baseline hazard function [math]\displaystyle{ \lambda_0(t)=(t - .5)^2 }[/math] from the paper: A new nonparametric screening method for ultrahigh-dimensional survival data Liu 2018. The censoring time [math]\displaystyle{ C = \widetilde{C} \wedge \tau }[/math], where [math]\displaystyle{ \widetilde{C} }[/math] was generated from Unif (0, [math]\displaystyle{ \tau + 2 }[/math]) where [math]\displaystyle{ \tau }[/math] was chosen to yield the desirable censoring rates of 20% and 40%, respectively.
  • Regularization paths for Cox's proportional hazards model via coordinate descent. J Stat Software Simon et al 2011. Gsslasso Cox: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information by Tang 2019. See also Tian 2014 JASA p1525. X ~ standard Gaussian. True survival time exp(beta X + k · Z). Z ~ N(0,1), and k is chosen so that the signal-to-noise ratio is 3.0 or to induce a certain censoring rate. Censoring time C = exp (k · Z). The observed survival time T = min{Y, C}.
  • survParamSim: Parametric Survival Simulation with Parameter Uncertainty
  • vivaGen – a survival data set generator for software testing BMC Bioinformatics 2020
  • Simulating survival outcomes: setting the parameters for the desired distribution. simstudy, Follow-up: simstudy function for generating parameters for survival distribution package was used.

Warning on multiple rates

Search for the Vectorize() function on this page.

mean(rexp(1000, rate=2) )
# [1] 0.5258078
mean(rexp(1000, rate=1) )
# [1] 0.9712124

z = rexp(1000, rate=c(1, 2))
mean(z[seq(1, 1000, by=2)])
# [1] 1.041969
mean(z[seq(2, 1000, by=2)])
# [1] 0.5079594

Markov model

Fake Survival Data for the Disease Progression Model

Non-proportional hazards

Simulating time-to-event outcomes with non-proportional hazards

Standardize covariates

coxph() does not have an option to standardize covariates but glmnet() does.

library(glmnet)
library(survival)

N=1000;p=30
nzc=p/3
beta <- c(rep(1, 5), rep(-1, 5))

set.seed(1234)
  x=matrix(rnorm(N*p),N,p)
  x[, 1:5] <- x[, 1:5]*2
  x[, 6:10] <- x[, 6:10] + 2

  fx=x[,seq(nzc)] %*% beta
  hx=exp(fx)
  ty=rexp(N,hx)
  tcens <- rep(0,N)
  y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)

coxph(Surv(ty, 1-tcens) ~ x) %>% coef %>% head(10)
#         x1         x2         x3         x4         x5         x6         x7
#  0.6076146  0.6359927  0.6346022  0.6469274  0.6152082 -0.6614930 -0.5946101
#         x8         x9        x10
# -0.6726081 -0.6275205 -0.7073704

xscale <- scale(x, TRUE, TRUE) # standardize; roughly halves columns 1:5 (their sd is about 2)
coxph(Surv(ty, 1-tcens) ~ xscale) %>% coef %>% head(10) # coefficients of columns 1:5 roughly double
#    xscale1    xscale2    xscale3    xscale4    xscale5    xscale6    xscale7
#  1.2119940  1.2480628  1.2848646  1.2857796  1.1959619 -0.6431946 -0.5941309
#    xscale8    xscale9   xscale10
# -0.6723137 -0.6188384 -0.6793313

  set.seed(1)
  fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = TRUE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9351341  0.9394696  0.9187242  0.9418540  0.9111623 -0.9303783
# [7] -0.9271438 -0.9597583 -0.9493759 -0.9386065

  set.seed(1)
  fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = FALSE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9357171  0.9396877  0.9200247  0.9420215  0.9118803 -0.9257406
# [7] -0.9232813 -0.9554017 -0.9448827 -0.9356009  

  set.seed(1)
  fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = TRUE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

  set.seed(1)
  fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = FALSE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

Predefined censoring rates

Simulating survival data with predefined censoring rates for proportional hazards models

Cross validation

  • CVPL (cross-validated partial likelihood)
    • https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/cvpl (lower is better)
    • https://rdrr.io/cran/dynpred/man/CVPL.html. source code. 1. it does LOOCV so no need to set a random seed. 2. it seems the function does not include lasso/glmnet 3. the formula on pages 173-174 of the book Dynamic Prediction in Clinical Survival Analysis says the partial log likelihood should include the penalty term. 4. concordance measures like Harrell’s C-index are not appropriate because they only measure the discrimination and not the calibration. PS: I downloaded and looked at the chapter source code. It uses optL1() function from the penalized package to obtain cross validated log partial likelihood.
      R> library(dynpred)
      R> data(ova)
      R> CVPL(Surv(tyears, d) ~ 1, data = ova)
      [1] NA
      R> CVPL(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam,
         data = ova)
      [1] -1652.169
      R> coxph(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam, data = ova)$loglik[2] # No CV
      [1] -1374.717
      
    • optL1() from the penalized package. It seems the penalized package has its own sequence of lambdas, and these lambdas are totally different from the ones glmnet() creates, though the CV plot from each package shows a convex shape.
    • Gsslasso paper. CVPL does not include the penalty term.
    • https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=10 (larger is better)

Competing risk

Survival rate terminology

Time-dependent covariates

Books

Class notes

Cox proportional hazards model and the partial log-likelihood function

Let Yi denote the observed time (either censoring time or event time) for subject i, and let Ci be the indicator that the time corresponds to an event (i.e. if Ci = 1 the event occurred and if Ci = 0 the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

[math]\displaystyle{ \lambda(t|X) = \lambda_0(t)\exp(\beta_1X_1 + \cdots + \beta_pX_p) = \lambda_0(t)\exp(X \beta^\prime). }[/math]

This expression gives the hazard at time t for an individual with covariate vector (explanatory variables) X. Based on this hazard function, a partial likelihood (defined on hazard function) can be constructed from the datasets as

[math]\displaystyle{ L(\beta) = \prod\limits_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j}, }[/math]

where θj = exp(Xj β) and X1, ..., Xn are the covariate vectors for the n independently sampled individuals in the dataset (treated here as column vectors). This pdf or this note gives a toy example.

The corresponding log partial likelihood is

[math]\displaystyle{ \ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right). }[/math]

This function can be maximized over β to produce maximum partial likelihood estimates of the model parameters.

The partial score function is [math]\displaystyle{ \ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right), }[/math]

and the Hessian matrix of the partial log likelihood is

[math]\displaystyle{ \ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right). }[/math]

Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.

If X is age, then the coefficient is likely >0. If X is some (beneficial) treatment, then the coefficient is likely <0.

Get the partial likelihood of a Cox PH Model with new data

offset was used. See https://stackoverflow.com/questions/26721551/is-there-a-way-to-get-the-partial-likelihood-of-a-cox-ph-model-with-new-data-and

How to compute partial log-likelihood function in Cox proportional hazards model?

library(survival)
set.seed(1)
n <- 100
t <- rexp(n)
c <- rbinom(n, 1, .2) ## censoring indicator (independent process)
x <- rbinom(n, 1, exp(-t)) ## some arbitrary relationship btn x and t
betamax <- coxph(Surv(t, c) ~ x)
beta1 <- coxph(Surv(t, c) ~ x, init = c(1), control=coxph.control(iter.max=0))

betamax$loglik[2]  # [1]=initial, [2]=final
# [1] -52.81476
beta1$loglik[2]
# [1] -52.85067
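
The sum defining the log partial likelihood above can also be evaluated directly; a sketch (simulated continuous times so there are no ties, hence no Efron/Breslow correction to worry about):

library(survival)
set.seed(1)
n <- 50
x <- rnorm(n)
y <- rexp(n, exp(x))                    # continuous times, essentially no ties
d <- rbinom(n, 1, 0.7)                  # event indicator
beta <- 0.5
theta <- exp(beta * x)
ll <- sum(sapply(which(d == 1),
                 function(i) beta * x[i] - log(sum(theta[y >= y[i]]))))
fit <- coxph(Surv(y, d) ~ x, init = beta, control = coxph.control(iter.max = 0))
c(manual = ll, coxph = fit$loglik[2])   # should agree up to numerical tolerance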

Optimization

Optimisation of a Cox proportional hazard model using Optimx()

Compare the partial likelihood to the full likelihood

http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=10

z-column (Wald statistic) from R's coxph()

How exactly can the Cox-model ignore exact times?

The Cox model does not depend on the event times themselves; it only needs their ordering.

library(survival)
survfit(Surv(time, status) ~ x, data = aml) 
fit <- coxph(Surv(time, status) ~ x, data = aml)
coef(fit) # 0.9155326
min(diff(sort(unique(aml$time)))) # 1

# Shift survival time for some obs but keeps the same order
# make sure we choose obs with tied times (shifting the first 20 does not work but the first 21 does)
rbind(order(aml$time), sort(aml$time), aml$time[order(aml$time)])
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# [1,]   12   13   14   15    1   16    2    3   17     4     5    18    19     6    20     7
# [2,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
# [3,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
# [,17] [,18] [,19] [,20] [,21] [,22] [,23]
# [1,]    21     8    22     9    23    10    11
# [2,]    33    34    43    45    45    48   161
# [3,]    33    34    43    45    45    48   161

aml$time2 <- aml$time
aml$time2[order(aml$time)[1:21]] <- aml$time[order(aml$time)[1:21]] - .9
fit2 <- coxph(Surv(time2, status) ~ x, data = aml); fit2
coef(fit2) #      0.9155326
coef(fit) == coef(fit2) # TRUE

aml$time3 <- aml$time 
aml$time3[order(aml$time)[1:20]] <- aml$time[order(aml$time)[1:20]] - .9
fit3 <- coxph(Surv(time3, status) ~ x, data = aml); fit3
coef(fit3) #      0.8891567
coef(fit) == coef(fit3) # FALSE

Partial likelihood when there are ties; hypothesis testing: Likelihood Ratio Test, Wald Test & Score Test

http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=29

In R's coxph(): Nearly all Cox regression programs use the Breslow method by default, but not this one. The Efron approximation is used as the default here; it is more accurate when dealing with tied death times and is computationally as efficient.

http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xaghtmlnode28.html (include the case when there is a partition of parameters). The formulas for 3 tests are also available on Appendix B of Klein book.

The following code does not compare models. But since there is only one coefficient, the results are the same. If there is more than one variable, we can use anova(model1, model2) to run the LRT.

library(KMsurv)
# No ties. Section 8.2
data(btrial)
str(btrial)
# 'data.frame':	45 obs. of  3 variables:
# $ time : int  19 25 30 34 37 46 47 51 56 57 ...
# $ death: int  1 1 1 1 1 1 1 1 1 1 ...
# $ im   : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(btrial, death == 1)$time)
# death time is unique
coxph(Surv(time, death) ~ im, data = btrial)
#     coef exp(coef) se(coef)    z     p
# im 0.980     2.665    0.435 2.25 0.024
# Likelihood ratio test=4.45  on 1 df, p=0.03
# n= 45, number of events= 24 

# Ties, Section 8.3
data(kidney)
str(kidney)
# 'data.frame':	119 obs. of  3 variables:
# $ time : num  1.5 3.5 4.5 4.5 5.5 8.5 8.5 9.5 10.5 11.5 ...
# $ delta: int  1 1 1 1 1 1 1 1 1 1 ...
# $ type : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(kidney, delta == 1)$time)
# 0.5  1.5  2.5  3.5  4.5  5.5  6.5  8.5  9.5 10.5 11.5 15.5 16.5 18.5 23.5 26.5 
# 6    1    2    2    2    1    1    2    1    1    1    2    1    1    1    1 

# Default: Efron method
coxph(Surv(time, delta) ~ type, data = kidney)
# coef exp(coef) se(coef)     z    p
# type -0.613     0.542    0.398 -1.54 0.12
# Likelihood ratio test=2.41  on 1 df, p=0.1
# n= 119, number of events= 26 
summary(coxph(Surv(time, delta) ~ type, data = kidney))
# n= 119, number of events= 26 
# coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6126    0.5420   0.3979 -1.539    0.124
#
# exp(coef) exp(-coef) lower .95 upper .95
# type     0.542      1.845    0.2485     1.182
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.41  on 1 df,   p=0.1
# Wald test            = 2.37  on 1 df,   p=0.1
# Score (logrank) test = 2.44  on 1 df,   p=0.1

# Breslow method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "breslow"))
# n= 119, number of events= 26 
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6182    0.5389   0.3981 -1.553     0.12
#
#       exp(coef) exp(-coef) lower .95 upper .95
# type    0.5389      1.856     0.247     1.176
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.45  on 1 df,   p=0.1
# Wald test            = 2.41  on 1 df,   p=0.1
# Score (logrank) test = 2.49  on 1 df,   p=0.1

# Discrete/exact method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "exact"))
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6294    0.5329   0.4019 -1.566    0.117
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5329      1.877    0.2424     1.171
#
# Rsquare= 0.021   (max possible= 0.795 )
# Likelihood ratio test= 2.49  on 1 df,   p=0.1
# Wald test            = 2.45  on 1 df,   p=0.1
# Score (logrank) test = 2.53  on 1 df,   p=0.1

Hazard (function) and survival function

A hazard is the rate at which events happen, so that the probability of an event happening in a short time interval is the length of time multiplied by the hazard.

[math]\displaystyle{ h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T \lt t+\Delta t|T \geq t)}{\Delta t} = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln[S(t)] }[/math]

Therefore

[math]\displaystyle{ H(x) = \int_0^x h(u) du = -\ln[S(x)]. }[/math]

or

[math]\displaystyle{ S(x) = e^{-H(x)} }[/math]

Hazards may vary with time, while the assumption in proportional hazards models is that the ratio of hazards between groups is a constant proportion over time.

Examples:

  • If h(t)=c, S(t) is exponential. f(t) = c exp(-ct). The mean is 1/c.
  • If [math]\displaystyle{ \log h(t) = c + \rho t }[/math], S(t) is Gompertz distribution.
  • If [math]\displaystyle{ \log h(t)=c + \rho \log (t) }[/math], S(t) is Weibull distribution.
  • For Cox regression, the survival function can be shown to be [math]\displaystyle{ S(t|X) = S_0(t) ^ {\exp(X\beta)} }[/math].
[math]\displaystyle{ \begin{align} S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\ &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\ &= e^{-\int_0^t h_0(u) du \cdot exp(X \beta)} \\ &= S_0(t)^{exp(X \beta)} \end{align} }[/math]

Alternatively,

[math]\displaystyle{ \begin{align} S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\ &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\ &= e^{-H_0(t) \cdot exp(X \beta)} \end{align} }[/math]

where the cumulative baseline hazard at time t, [math]\displaystyle{ H_0(t) }[/math], is commonly estimated through the non-parametric Breslow estimator.
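
A sketch of both pieces in R (lung data assumed): basehaz() returns the Breslow-type cumulative baseline hazard, and survfit() on the coxph fit gives the subject-specific curve [math]\displaystyle{ S_0(t)^{\exp(X\beta)} }[/math].

library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
H0 <- basehaz(fit, centered = FALSE)    # cumulative baseline hazard H_0(t), Breslow type
head(H0)
nd <- data.frame(age = 60, sex = 1)     # a hypothetical new subject
plot(survfit(fit, newdata = nd), xlab = "Time", ylab = "Survival")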

Check the proportional hazard (constant HR over time) assumption by cox.zph()

library(survival)
res.cox <- coxph(Surv(time, status) ~ age + sex + wt.loss, data =  lung)
test.ph <- cox.zph(res.cox)
test.ph
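
A graphical companion to the score test (plot.cox.zph draws the scaled Schoenfeld residuals against time; a roughly flat smooth supports the assumption):

par(mfrow = c(1, 3))
plot(test.ph)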

strata

Stratification in the Cox model. In a stratified Cox model, each stratum is allowed its own baseline hazard function, while a single set of beta coefficients (hazard ratios) is fitted across all strata.

bladder1 <- bladder[bladder$enum < 5, ] 
o <- coxph(Surv(stop, event) ~ rx + size + number + strata(enum) , bladder1)
# the strata will not be a term in covariate in the model fitting
anova(o)

Sample size calculators

How many events are required to fit the Cox regression reliably?

If we have only 1 covariate and the covariate is continuous, we need at least 2 events (one for the baseline hazard and one for beta).

If the covariate is discrete, we need at least one event from (each of) two groups in order to fit the Cox regression reliably. For example, status=(0,0,0,1,0,1) with x=(0,0,1,1,2,2) works fine.

library(survival)
head(ovarian)
#   futime fustat     age resid.ds rx ecog.ps
# 1     59      1 72.3315        2  1       1
# 2    115      1 74.4932        2  1       1
# 3    156      1 66.4658        2  1       2
# 4    421      0 53.3644        2  2       1
# 5    431      1 50.3397        2  1       1
# 6    448      0 56.4301        1  1       2

ova <- ovarian # n=26
ova$time <- ova$futime
ova$status <- 0
ova$status[1:4] <- 1
coxph(Surv(time, status) ~ rx, data = ova) # OK
summary(survfit(Surv(time, status) ~ rx, data =ova))
#                 rx=1 
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2 
#     time  n.risk  n.event  survival  std.err lower 95% CI upper 95% CI 
# 421.0000 10.0000   1.0000    0.9000   0.0949       0.7320       1.0000 

# Suspicious Cox regression result due to 0 sample size in one group
ova$status <- 0
ova$status[1:3] <- 1
coxph(Surv(time, status) ~ rx, data = ova)
#         coef exp(coef)  se(coef) z p
# rx -2.13e+01  5.67e-10  2.32e+04 0 1
#
# Likelihood ratio test=4.41  on 1 df, p=0.04
# n= 26, number of events= 3 
# Warning message:
# In fitter(X, Y, strats, offset, init, control, weights = weights,  :
#   Loglik converged before variable  1 ; beta may be infinite. 

summary(survfit(Surv(time, status) ~ rx, data = ova))
#                rx=1 
# time n.risk n.event survival std.err lower 95% CI upper 95% CI
#   59     13       1    0.923  0.0739        0.789            1
#  115     12       1    0.846  0.1001        0.671            1
#  156     11       1    0.769  0.1169        0.571            1
#                rx=2 
# time n.risk n.event survival std.err lower 95% CI upper 95% CI

Extract p-values

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# method 1:
beta <- coef(fit)
se <- sqrt(diag(vcov(fit)))
1 - pchisq((beta/se)^2, 1)

# method 2: https://www.biostars.org/p/65315/
coef(summary(fit))[, "Pr(>|z|)"]

Expectation of life & expected future lifetime

  • The average lifetime is the same as the area under the survival curve.
[math]\displaystyle{ \begin{align} \mu &= \int_0^\infty t f(t) dt \\ &= \int_0^\infty S(t) dt \end{align} }[/math]

by integrating by parts, making use of the fact that -f(t) is the derivative of S(t), which has limits S(0)=1 and [math]\displaystyle{ S(\infty)=0 }[/math]. The average lifetime may not be estimable from the KM curve if there are censored observations that last beyond the last recorded death.

The expected future lifetime given survival to time [math]\displaystyle{ t_0 }[/math] is

[math]\displaystyle{ \frac{1}{S(t_0)} \int_0^{\infty} t\,f(t_0+t)\,dt = \frac{1}{S(t_0)} \int_{t_0}^{\infty} S(t)\,dt. }[/math]
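
Related to the area-under-the-curve interpretation above, the restricted mean survival time (the area under the KM curve up to a chosen horizon) can be requested from print.survfit(); a sketch with the lung data:

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
print(fit, rmean = 365.25)   # the rmean column is the area under the KM curve up to 1 year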

Hazard Ratio (exp(beta)) vs Relative Risk

  1. https://en.wikipedia.org/wiki/Hazard_ratio
  2. Hazard represents the instantaneous event rate, which means the probability that an individual would experience an event (e.g. death/relapse) at a particular given point in time after the intervention, assuming that this individual has survived to that particular point of time without experiencing any event. See an example here.
  3. Hazard ratio is a measure of the effect of an intervention on an outcome of interest over time. The hazard ratio is not computed at any one time point. See an example here.
  4. Since there is only one hazard ratio reported, it can only be interpreted if you assume that the population hazard ratio is consistent over time, and that any differences are due to random sampling. If two survival curves cross, the hazard ratios are certainly not consistent. See Hazard ratio from survival analysis including how the hazard ratio is computed.
  5. Hazard ratio = hazard in the intervention group / Hazard in the control group
  6. A hazard ratio is often reported as a “reduction in risk of death or progression”. This risk reduction is calculated as 1 minus the hazard ratio (exp(beta)); e.g., an HR of 0.84 corresponds to a 16% reduction in risk. See this video Interpreting Hazard Ratios and stackexchange.com.
  7. Hazard ratio and its confidence can be obtained in R by using the summary() method; e.g. fit <- coxph(Surv(time, status) ~ x); summary(fit)$conf.int; confint(fit)
  8. The coefficient beta represents the expected change in log hazard if X changes by one unit and all other variables are held constant in Cox models. See Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.
  9. Understanding the endpoints in oncology: overall survival, progression free survival, hazard ratio, censored value

Another example (John Fox, Cox Proportional-Hazards Regression for Survival Data) is assuming Y ~ age + prio + others.

  • If exp(beta_age) = 0.944, then an additional year of age reduces the hazard by a factor of 0.944 on average, i.e. by (1-0.944)*100 = 5.6 percent.
  • If exp(beta_prio) = 1.096, it means each prior conviction increases the hazard by a factor of 1.096, or 9.6 percent.

How do you explain the difference between hazard ratio and relative risk to a layman? from Quora.

See Using R for Biomedical Statistics for relative risk, odds ratio, et al.

Odds Ratio, Hazard Ratio and Relative Risk by Janez Stare

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by [math]\displaystyle{ e^\beta }[/math], where [math]\displaystyle{ \beta }[/math] is the estimate of treatment effect derived from the regression model. See here.

Compute hazard ratios from coxph() in R (hint: exp(beta)).

Prognostic index is defined on http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=2.

Basics of the Cox proportional hazards model. Good prognostic factor (b<0 or HR<1) and bad prognostic factor (b>0 or HR>1).

Variable selection: variables were retained in the prediction models if they had a hazard ratio of <0.85 or >1.15 (for binary variables) and were statistically significant at the 0.01 level. see Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study.

library(KMsurv)
# No ties. Section 8.2
data(btrial)
coxph(Surv(time, death) ~ im, data = btrial)
summary(coxph(Surv(time, death) ~ im, data = btrial))$conf.int
#     exp(coef) exp(-coef) lower .95 upper .95
# im  2.664988  0.3752362  1.136362  6.249912

So the hazard ratio and its 95% CI can be obtained from the 1st, 3rd and 4th elements of conf.int.

Hazard Ratio, confidence interval, Table 1

  • Google image: survival data cox model hazard ratio table 1
  • To get the 95% CI, use the summary() function
    > mod = coxph(Surv(time,status) ~ x, data = aml)
    > summary(mod)
      n= 23, number of events= 18 
    
                     coef exp(coef) se(coef)     z Pr(>|z|)  
    xNonmaintained 0.9155    2.4981   0.5119 1.788   0.0737 .
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    
                   exp(coef) exp(-coef) lower .95 upper .95
    xNonmaintained     2.498     0.4003    0.9159     6.813
    
    Concordance= 0.619  (se = 0.063 )
    Likelihood ratio test= 3.38  on 1 df,   p=0.07
    Wald test            = 3.2  on 1 df,   p=0.07
    Score (logrank) test = 3.42  on 1 df,   p=0.06
    
  • To report the HR in Table 1 for multiple variables, one can fit a separate univariate Cox regression for each variable; for example, this one uses lapply().
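
A minimal sketch of the lapply() approach (my own illustration, using the lung data and a few numeric covariates; a factor with several levels would need more than the first coefficient row):

library(survival)
vars <- c("age", "sex", "ph.ecog")
tab <- do.call(rbind, lapply(vars, function(v) {
  f <- as.formula(paste("Surv(time, status) ~", v))
  s <- summary(coxph(f, data = lung))
  data.frame(variable = v,
             HR    = s$conf.int[1, "exp(coef)"],
             lower = s$conf.int[1, "lower .95"],
             upper = s$conf.int[1, "upper .95"],
             p     = s$coefficients[1, "Pr(>|z|)"])
}))
tab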

Hazard Ratio and death probability

https://en.wikipedia.org/wiki/Hazard_ratio#The_hazard_ratio_and_survival

Suppose S0(t)=.2 (20% survived at time t) and the hazard ratio (hr) is 2 (one group has twice the hazard of dying compared with the reference group); then, assuming a Cox model,

  1. S1(t) = S0(t)^hr = 0.2^2 = 0.04 (4% survived at t; see the quick check after this list)
  2. The corresponding death probabilities are 0.8 and 0.96.
  3. If a subject is exposed to twice the risk of a reference subject at every age, then the probability that the subject will be alive at any given age is the square of the probability that the reference subject (covariates = 0) would be alive at the same age. See p10 of this lecture notes.
  4. exp(x*beta) is the relative risk associated with covariate value x.
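
A one-line check of the arithmetic in items 1-2 above:

S0 <- 0.2; hr <- 2
S1 <- S0^hr                           # 0.04, i.e. 4% survive in the higher-risk group
c(death0 = 1 - S0, death1 = 1 - S1)   # 0.80 and 0.96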

Hazard Ratio Forest Plot

The forest plot quickly summarizes the hazard ratios across multiple variables. If a line crosses the value 1.0, the corresponding hazard ratio is not significant and there is no clear advantage for either arm.

Hazard ratio forest plot: ggforest() from survminer
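
A minimal sketch of ggforest() (assuming the survminer package and the lung data; the covariates are chosen only for illustration):

library(survival)
library(survminer)
fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)
ggforest(fit, data = lung)   # one row per coefficient: HR with 95% CI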

Forest Plot for a Meta-analysis of Several Different Randomised Control Trials

Using R for Biomedical Statistics

Restricted mean survival time

  • survival::print.survfit(). How to compute the mean survival time.
    print(km, print.rmean=TRUE) # assume the longest survival time is the horizon
    print(km, print.rmean=TRUE, rmean=250)
    
  • survRM2 package
  • PWEALL:: rmsth()
    R> library(survRM2)
    R> D = rmst2.sample.data()
    R> nrow(D)
    [1] 312
    R> head(D[,1:3])
           time status arm
    1  1.095140      1   1
    2 12.320329      0   1
    3  2.770705      1   1
    4  5.270363      1   1
    5  4.117728      0   0
    6  6.852841      1   0
    R> time   = D$time
    R> status = D$status
    R> arm    = D$arm
    R> rmst2(time, status, arm, tau=10)
    
    The truncation time: tau = 10  was specified. 
    
    Restricted Mean Survival Time (RMST) by arm 
                  Est.    se lower .95 upper .95
    RMST (arm=1) 7.146 0.283     6.592     7.701
    RMST (arm=0) 7.283 0.295     6.704     7.863
    
    
    Restricted Mean Time Lost (RMTL) by arm 
                  Est.    se lower .95 upper .95
    RMTL (arm=1) 2.854 0.283     2.299     3.408
    RMTL (arm=0) 2.717 0.295     2.137     3.296
    
    
    Between-group contrast 
                           Est. lower .95 upper .95     p
    RMST (arm=1)-(arm=0) -0.137    -0.939     0.665 0.738
    RMST (arm=1)/(arm=0)  0.981     0.878     1.096 0.738
    RMTL (arm=1)/(arm=0)  1.050     0.787     1.402 0.738
    
    R> library(PWEALL)
    R> PWEALL::rmsth(time, status, tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.208579
    $var
    [1] 13.00232
    $vadd
    [1] 3.915123
    
    R> PWEALL::rmsth(time[arm == 0], status[arm ==0], tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.283416
    $var
    [1] 13.30564
    $vadd
    [1] 3.73545
    
    R> PWEALL::rmsth(time[arm == 1], status[arm ==1], tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.146493
    $var
    [1] 12.49073
    $vadd
    [1] 3.967705
    
  • surv2sampleComp, area under the survival curve: RMST (restricted mean survival time)
  • Clustered restricted mean survival time regression Chen, 2022

Piece-wise constant baseline hazard model, Poisson model and Breslow estimate

Estimate baseline hazard [math]\displaystyle{ h_0(t) }[/math], Breslow cumulative baseline hazard [math]\displaystyle{ H_0(t) }[/math], baseline survival function [math]\displaystyle{ S_0(t) }[/math] and the survival function [math]\displaystyle{ S(t) }[/math]

Google: how to estimate baseline hazard rate

basehaz(coxph(Surv(time,status)~1,data=aml)) 

This works because the (Breslow) baseline hazard estimator for a Cox model reduces to the Nelson-Aalen estimator when there are no covariates. You can also compute it from information returned by survfit():

fit <- survfit(Surv(time, status) ~ 1, data = aml)
cumsum(fit$n.event/fit$n.risk) # the Nelson-Aalen estimator for the times given by fit$times
-log(fit$surv)   # cumulative hazard

Manually compute

Breslow estimator of the baseline cumulative hazard rate reduces to the Nelson-Aalen estimator [math]\displaystyle{ \sum_{t_i \le t} \frac{d_i}{Y_i} }[/math] ([math]\displaystyle{ Y_i }[/math] is the number at risk at time [math]\displaystyle{ t_i }[/math]) when there are no covariates present; see p283 of Klein 2003.

[math]\displaystyle{ \begin{align} \hat{H}_0(t) &= \sum_{t_i \le t} \frac{d_i}{W(t_i;b)}, \\ W(t_i;b) &= \sum_{j \in R(t_i)} \exp(b' z_j) \end{align} }[/math]

where [math]\displaystyle{ t_1 \lt t_2 \lt \cdots \lt t_D }[/math] denote the distinct death times and [math]\displaystyle{ d_i }[/math] is the number of deaths at time [math]\displaystyle{ t_i }[/math]. The estimator of the baseline survival function [math]\displaystyle{ S_0(t) = \exp [-H_0(t)] }[/math] is given by [math]\displaystyle{ \hat{S}_0(t) = \exp [-\hat{H}_0(t)] }[/math]. Below we use the formula to compute the cumulative hazard (and survival function) and compare them with the result obtained using R's built-in functions. The following code is a modification of the snippet from the post Breslow cumulative hazard and basehaz().

bhaz <- function(beta, time, status, x) {
  # time can be duplicated
  # x (covariate) must be continuous
  data <- data.frame(time,status,x)
  data <- data[order(data$time), ]
  dt   <- unique(data$time)
  k    <- length(dt)
  risk <- exp(data.matrix(data[,-c(1:2)]) %*% beta)
  h    <- rep(0,k)
  
  for(i in 1:k) {
    h[i] <- sum(data$status[data$time==dt[i]]) / sum(risk[data$time>=dt[i]])          
  }
  
  return(data.frame(h, dt))
}

# Example 1 'ovarian' which has unique survival time
all(table(ovarian$futime) == 1) # TRUE

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
# 1. compute the cumulative baseline hazard 
# 1.1 manually using Breslow estimator at the observed time
h0 <- bhaz(fit$coef, ovarian$futime, ovarian$fustat, ovarian$age)
H0 <- cumsum(h0$h)
# 1.2 use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
cbind(H0, h0$dt, H0.bh)
range(abs(H0 - H0.bh$hazard)) # [1] 6.352747e-22 5.421011e-20

# 2. compute the estimation of the survival function
# 2.1 manually using Breslow estimator at t = observed time (one dim, sorted) 
#     and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
dim(S1) # row = times, col = age
# 2.2 How about considering times not at observed (e.g. h0$dt - 10)?
# Let's look at the hazard rate
newtime <- h0$dt - 10
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
dim(S2) # row = newtime, col = age

# 2.3 use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 2.117244e-17 6.544321e-12
# 2.4 How about considering times not at observed (e.g. h0$dt - 10)?
# Note that we cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000e+00 6.544321e-12
# Example 2 'kidney' which has duplicated time
fit <- coxph(Surv(time, status) ~ age, data = kidney)
# manually compute the breslow cumulative baseline hazard
#   at the observed time
h0 <- with(kidney, bhaz(fit$coef, time, status, age))
H0 <- cumsum(h0$h)
# use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
head(cbind(H0, h0$dt, H0.bh))
range(abs(H0 - H0.bh$hazard)) # [1] 0.000000000 0.005775414

# manually compute the estimation of the survival function
# at t = observed time (one dim, sorted) and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
dim(S1) # row = times, col = age
# How about considering times not at observed (h0$dt - 1)?
# Let's look at the hazard rate
newtime <- h0$dt - 1
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
dim(S2) # row = newtime, col = age

# use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = kidney$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 0.000000000 0.002783715
# How about considering times not at observed (h0$dt - 1)?
# We cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = kidney$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000000 0.002783715
  • basehaz() (an alias for survfit) from stackexchange.com and here. basehaz() has a parameter centered which by default is TRUE. Actually basehaz() gives cumulative hazard H(t). See here. That is, exp(-basehaz(fit)$hazard) is the same as summary(survfit(fit))$surv. basehaz() function is not useful.
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian) 
> fit
Call:
coxph(formula = Surv(futime, fustat) ~ age, data = ovarian)

      coef exp(coef) se(coef)    z      p
age 0.1616    1.1754   0.0497 3.25 0.0012

Likelihood ratio test=14.3  on 1 df, p=0.000156
n= 26, number of events= 12 

# Note the default 'centered = TRUE' for basehaz() 
> exp(-basehaz(fit)$hazard) # exp(-cumulative hazard)
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
> dim(ovarian)
[1] 26  6
> exp(-basehaz(fit)$hazard)[ovarian$fustat == 1]
 [1] 0.9880206 0.9738738 0.9545899 0.8973620 0.8243117 0.8243117 0.7750981
 [8] 0.7750981 0.5204807 0.5204807 0.5204807 0.5204807
> summary(survfit(fit))$surv 
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.7750981 0.7244924 0.6734146 0.5962187 0.5204807
> summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
          time=ovarian$futime[ovarian$fustat == 1])$surv
# Same result as above
> summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
                    time=ovarian$futime)$surv
 [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
 [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807

Predicted survival probability in Cox model: survfit.coxph(), plot.survfit() & summary.survfit( , times)

For theory, see section 8.6 Estimation of the survival function in Klein & Moeschberger.

For R, see Extract survival probabilities in Survfit by groups

plot.survfit(): fun="log" plots the log survival curve, fun="event" plots cumulative events, and fun="cumhaz" plots the cumulative hazard (f(y) = -log(y)).
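
A quick sketch of the fun argument, using the aml data:

library(survival)
fit <- survfit(Surv(time, status) ~ x, data = aml)
op <- par(mfrow = c(1, 3))
plot(fit, fun = "log",    main = "log survival")
plot(fit, fun = "event",  main = "cumulative events")
plot(fit, fun = "cumhaz", main = "cumulative hazard")
par(op)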

The plot function below will draw 4 curves: [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60)} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageII})} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIII})} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIV})} }[/math].

library(KMsurv) # Data package for Klein & Moeschberge
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)

# Figure 8.3 from Section 8.6
plot(survfit(coxobj, newdata = data.frame(age=rep(60, 4), stage=factor(1:4))), lty = 1:4)

# Estimated probability for a 60-year old for different stage patients
out <- summary(survfit(coxobj, data.frame(age = rep(60, 4), stage=factor(1:4))), times = 5)
out$surv
#  time n.risk n.event survival1 survival2 survival3 survival4
#    5     34      40     0.702     0.665      0.51     0.142
sum(larynx$time >=5) # n.risk
# [1] 34
sum(larynx$delta[larynx$time <=5]) # n.event
# [1] 40
sum(larynx$time >5) # Wrong
# [1] 31
sum(larynx$delta[larynx$time <5]) # Wrong
# [1] 39

# 95% confidence interval
out$lower
# 0.5707952 0.4864903 0.3539527 0.03691768
out$upper
# 0.8629482 0.9102532 0.7352413 0.548579

We need to pay attention when the number of covariates is large (and we don't want to spell out each covariate in the formula). The key is to create a data frame and use a dot (.) in the formula. This avoids the warning 'newdata' had XXX rows but variables found have YYY rows when running survfit(, newdata).

Another way is to use as.formula() if we don't want to create a new object; a sketch follows the code block below.

xsub <- data.frame(xtrain)
colnames(xsub) <- paste0("x", 1:ncol(xsub))

coxobj <- coxph(Surv(ytrain[, "time"], ytrain[, "status"]) ~ ., data = xsub)

newdata <- data.frame(xtest)
colnames(newdata) <- paste0("x", 1:ncol(newdata))

survprob <- summary(survfit(coxobj, newdata=newdata), 
                    times = 5)$surv[1, ]  
# since there is only 1 time point, we select the first row in surv (surv is a matrix with one row).
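
A sketch of the as.formula() alternative mentioned above, reusing the hypothetical xtrain/ytrain/xtest objects; this assumes the columns of xtrain already have syntactically valid names that match those of xtest.

f <- as.formula(paste("Surv(time, status) ~",
                      paste(colnames(xtrain), collapse = " + ")))
dat <- data.frame(time = ytrain[, "time"], status = ytrain[, "status"], xtrain)
coxobj <- coxph(f, data = dat)
survprob <- summary(survfit(coxobj, newdata = data.frame(xtest)), times = 5)$surv[1, ]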

The predictSurvProb() function from the pec package can also be used to extract survival probability predictions from various modeling approaches.
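
A sketch of predictSurvProb() (assuming the pec package and the lung data; for coxph the model must be fitted with x=TRUE, y=TRUE):

library(survival)
library(pec)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE, y = TRUE)
# survival probabilities at 180 and 365 days for the first 5 subjects
predictSurvProb(fit, newdata = lung[1:5, ], times = c(180, 365))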

Predicted survival probabilities from glmnet: c060/peperr, biospear packages and manual computation

## S3 method for class 'glmnet'
predictProb(object, response, x, times, complexity, ...)

set.seed(1234)
junk <- biospear::simdata(n=500, p=500, q.main = 10, q.inter = 0, 
                  prob.tt = .5, m0=1, alpha.tt=0, 
                  beta.main= -.5, b.corr = .7, b.corr.by=25, 
                  wei.shape = 1, recr=3, fu=2, timefactor=1)
summary(junk$time)
library(glmnet)
library(c060) # Error: object 'predictProb' not found
library(peperr)

y <- cbind(time=junk$time, status=junk$status)
x <- cbind(1, junk[, "treat", drop = FALSE])
names(x) <- c("inter", "treat")
x <- as.matrix(x)
cvfit <- cv.glmnet(x, y, family = "cox")
obj <- glmnet(x, y, family = "cox")
xnew <- matrix(c(0,0), nr=1)
predictProb(obj, y, xnew, times=1, complexity = cvfit$lambda.min)
# Error in exp(lp[response[, 1] >= t.unique[i]]) : 
#   non-numeric argument to mathematical function
# In addition: Warning message:
# In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
expSurv(res, traindata, method, ci.level = .95, boot = FALSE, nboot, smooth = TRUE,
  pct.group = 4, time, trace = TRUE, ncores = 1)
# S3 method for resexpSurv
predict(object, newdata, ...)
# continue the example
# BMsel() takes a little while
resBM <- biospear::BMsel(
    data = junk, 
    method = "lasso", 
    inter = FALSE, 
    folds = 5)
  
# Note: if we specify time = 5 in expSurv(), we will get an error message
# 'time' is out of the range of the observed survival time.
# Note: if we try to specify more than 1 time point, we will get the following msg
# 'time' must be an unique value; no two values are allowed.
esurv <- biospear::expSurv(
    res = resBM,
    traindata = junk,
    boot = FALSE,
    time = median(junk$time),
    trace = TRUE)
# debug(biospear:::plot.resexpSurv)
plot(esurv, method = "lasso")
# This is equivalent to doing the following
xx <- attributes(esurv)$m.score[, "lasso"]
wc <- order(xx); wgr <- 1:nrow(esurv$surv)
p1 <- plot(x = xx[wc], y = esurv$surv[wgr, "lasso"][wc], 
           xlab='prognostic score', ylab='survival prob')
# prognostic score beta*x in this case.
# ignore treatment effect and interactions
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# See page4 of the paper. Scaled scores were used in the plot
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) -  quantile(xxmy, .025))))
# [1] 1.500431e-09 1.465241e-06

h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
newtime <- median(junk$time)
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
# newx <- junk[ , names(resBM$lasso)]
# Compute the estimate of the survival probability at training x and time = median(junk$time)
# using Breslow method
S2 <- outer(exp(-H0),  exp(xxmy), "^") # row = newtime, col = newx
range(abs(esurv$surv[wgr, "lasso"] - S2))
# [1] 6.455479e-18 2.459136e-06
# My implementation of the prognostic plot
#   Note that the x-axis on the plot is based on prognostic scores beta*x, 
#   not on treatment modifying scores gamma*x as described in the paper.
#   Maybe it is because inter = FALSE in BMsel() we have used.
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob')  # cf p1

> names(esurv)
[1] "surv"  "lower" "upper"
> str(esurv$surv)
 num [1:500, 1:2] 0.772 0.886 0.961 0.731 0.749 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "lasso" "oracle"

esurv2 <- predict(esurv, newdata = junk)
esurv2$surv       # All zeros?

A possible bug with the sample data (interaction considered here, i.e. inter = TRUE)?

set.seed(123456)
resBM <-  BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))

esurv <- expSurv(
  res = resBM,
  traindata = Breast,
  boot = FALSE,
  smooth = TRUE,
  time = 4,
  trace = TRUE
)
Computation of the expected survival
Computation of analytical confidence intervals
Computation of smoothed B-splines
Error in cobs(x = x, y = y, print.mesg = F, print.warn = F, method = "uniform",  : 
  There is at least one pair of adjacent knots that contains no observation.

Loglikelihood

  • fit$loglik is a vector of length 2 (initial model, fitted model). So deviance can be calculated by -2*fit$loglik[2]; see here for an example from BhGLM package.
  • Use anova() (the anova.coxph method from the survival package) to do a likelihood ratio test. Note this function does not work on a glmnet object.
  • residuals.coxph Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
  • No deviance() on coxph object!
  • The logLik() function returns fit$loglik[2]; see the sketch after this list.
  • Gradient descent for the elastic net Cox-PH model
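
A small sketch of these points with the ovarian data:

library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
fit$loglik              # c(loglik of the null model, loglik of the fitted model)
logLik(fit)             # equals fit$loglik[2]
2 * diff(fit$loglik)    # likelihood ratio statistic against the null model
anova(fit)              # likelihood ratio test via the anova() method for coxph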

glmnet

[math]\displaystyle{ \begin{align} \mathrm{AIC} &= 2k - 2\ln(\hat L) \\ \mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1} \end{align} }[/math]
fit <- glmnet(x, y, family = "multinomial") 

tLL <- fit$nulldev - deviance(fit) # 2*(loglik - loglik_null), the log-likelihood ratio statistic
k <- fit$df
n <- fit$nobs
AICc <- -tLL+2*k+2*k*(k+1)/(n-k-1)
AICc
f <- glmnet(x = x, y = y, family = family)
f$aic <- deviance(f) + 2 * f$df
set.seed(10101)
N=1000;p=6
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(ty, 1 - tcens)
# Model 1: ~ x
# Model 2: ~ 1
# loglik  Chisq Df P(>|Chi|)  
# 1 -4095.2                      
# 2 -4102.7 15.151  6   0.01911 *

fit2 <- glmnet(x,y,family="cox", lambda=0) # lambda=0: unpenalized Cox fit
deviance(fit2)                             # 2*(loglike_sat - loglike)
# [1] 8190.313
coxnet.deviance(x=x, y=y, beta=coef(fit2)) # 2*(loglike_sat - loglike)
# [1] 8190.313   
# https://github.com/cran/glmnet/blob/master/R/coxnet.deviance.R#L79

assess.glmnet(fit2, x=x, y=y)      # returns deviance and c-index
fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # Log-Likelihood ratio statistic
# [1] 15.15097
1-pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446

Here is another example including a factor covariate:

library(KMsurv) # Data package for Klein & Moeschberge
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
coef(coxobj)
# age    stage2    stage3    stage4 
# 0.0190311 0.1400402 0.6423817 1.7059796
coxobj_small <- coxph(Surv(time, delta)~age, data = larynx)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(time, delta)
# Model 1: ~ age + stage
# Model 2: ~ age
# loglik  Chisq Df P(>|Chi|)   
# 1 -187.71                       
# 2 -195.55 15.681  3  0.001318 **
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Now let's look at the glmnet() function.
# It seems glmnet does not recognize factor covariates.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
coxobj2$nulldev - deviance(coxobj2)  # Log-Likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2) 
# [1] 13.08457
1-pchisq(deviance(coxobj1) - deviance(coxobj2) , coxobj2$df-coxobj1$df)
# [1] 0.0002977376

High dimensional data

https://cran.r-project.org/web/views/Survival.html

glmnet + Cox models

Error in glmnet: x should be a matrix with 2 or more columns

https://stackoverflow.com/questions/29231123/why-cant-pass-only-1-coulmn-to-glmnet-when-it-is-possible-in-glm-function-in-r

Error in coxnet: (list) object cannot be coerced to type 'double'

Fix: do not pass a data.frame as x; use a numeric matrix (e.g. via cbind() or as.matrix()) instead.
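
A sketch of the fix with the ovarian data (covariates chosen only for illustration):

library(survival)
library(glmnet)
y <- cbind(time = ovarian$futime, status = ovarian$fustat)
# glmnet(ovarian[, c("age", "resid.ds")], y, family = "cox")
#   fails: (list) object cannot be coerced to type 'double'
x <- with(ovarian, cbind(age, resid.ds))   # numeric matrix instead of data.frame
fit <- glmnet(x, y, family = "cox")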

Prediction

Prognostic index/risk scores

  • International Prognostic Index
  • In R,
    • coxph() defines risk score as exp(linear predictor).
    • the survC1 package defines the risk score as coxph's linear predictor; see the package author's paper in Stat in Med 2011. Some medical papers (such as this one) also define it this way.
  • Low scores correspond to the lowest predicted risk and high scores correspond to the greatest predicted risk.
  • The test data were first segregated into high-risk and low-risk groups by the median of training risk scores. Assessment of performance of survival prediction models for cancer prognosis
  • On the paper "The C-index is not proper for the evaluation of t-year predicted risk" Blanche et al 2018 defined the true t-year predicted risk by [math]\displaystyle{ P(T \le t | Z) = 1 - Survival }[/math]

linear.predictors component in coxph object

The $linear.predictors component is not [math]\displaystyle{ \beta' x }[/math]. It is [math]\displaystyle{ \beta' (x-\mu_x) }[/math]. See this post.

predict.coxph, prognostic index & risk score

  • predict.coxph() Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. Returned value: a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE.
predict(object, newdata,
    type=c("lp", "risk", "expected", "terms", "survival"),
    se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse,
    reference=c("strata", "sample"),  ...)

type:

library(survival)
fit <- coxph(Surv(time, status) ~ age , lung)
fit
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#       coef exp(coef) se(coef)    z     p
# age 0.0187      1.02   0.0092 2.03 0.042
#
# Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165 
fit$means
#      age 
# 62.44737 

# type = "lr" (Linear predictor)
as.numeric(predict(fit,type="lp"))[1:10]   
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
# [7]  0.10394626  0.16010680 -0.17685643 -0.02709500
0.0187 * (lung$age[1:10] - fit$means)
# [1]  0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579  0.21603421
# [7]  0.10383421  0.15993421 -0.17666579 -0.02706579
fit$linear.predictors[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
# [6]  0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500

# type = "risk" (Risk score)
> as.numeric(predict(fit,type="risk"))[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
 [8] 1.1736362 0.8379001 0.9732688
> exp((lung$age-mean(lung$age)) * 0.0187)[1:10]
 [1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
 [7] 1.1094165 1.1734337 0.8380598 0.9732972
> exp(fit$linear.predictors)[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
 [7] 1.1095408 1.1736362 0.8379001 0.9732688

threshold/cutoff

  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5882539/ An optimal threshold on the score to separate patients into low- and high-risk groups was determined using the MaxStat package to select the cutoff value producing the maximal log-rank score in the training cohort.
  • maxstat: Maximally Selected Rank Statistics (cf the matrixStats: Functions that Apply to Rows and Columns of Matrices (and to Vectors) package).
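
A minimal sketch of the maxstat interface (using lung and age purely as an illustration):

library(survival)
library(maxstat)
mt <- maxstat.test(Surv(time, status) ~ age, data = lung,
                   smethod = "LogRank", pmethod = "none")
mt$estimate   # cutpoint maximizing the log-rank statistic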

Survival risk prediction

  • Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data Simon 2011. The authors note that CV has been used for optimizing tuning parameters, but the available data are often too limited for an effective split into training & test sets.
    • The CV Kaplan-Meier curves are essentially unbiased and the separation between the curves gives a fair representation of the value of the expression profiles for predicting survival risk.
    • The log-rank statistic does not have the usual chi-squared distribution under the null hypothesis. This is because the data was used to create the risk groups.
    • Survival ROC curve can be used as a measure of predictive accuracy for the survival risk group model at a certain landmark time.
    • The ROC curve can be misleading. For example if re-substitution is used, the AUC can be very large.
    • The p-value for the significance of the test that AUC=.5 for the cross-validated survival ROC curve can be computed by permutations.
  • Measure of assessment for prognostic prediction
0/1 Survival
Sensitivity [math]\displaystyle{ P(Pred=1|True=1) }[/math] [math]\displaystyle{ P(\beta' X \gt c | T \lt t) }[/math]
Specificity [math]\displaystyle{ P(Pred=0|True=0) }[/math] [math]\displaystyle{ P(\beta' X \le c | T \ge t) }[/math]

Survival time prediction

Assessing the performance of prediction models

Hazard ratio

hazard.ratio() from the survcomp package

hazard.ratio(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
             method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

Odds ratio

Limitations of the Odds Ratio in Gauging the Performance of a Diagnostic, Prognostic, or Screening Marker Pepe 2004. C statistic measures discrimination ability better than relative risk does; see this paper.

D index

D.index() from the survcomp package

D.index(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
        method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

AUC

See ROC curve.

Comparison:

Definition Interpretation
Two class [math]\displaystyle{ P(Z_{case} \gt Z_{control}) }[/math] the probability that a randomly selected case will have a higher test result (marker value) than a randomly selected control. It represents a measure of concordance between the marker and the disease status. ROC curves are particularly useful for comparing the discriminatory capacity of different potential biomarkers. (Heagerty & Zheng 2005)
Survival data [math]\displaystyle{ P(\beta' Z_1 \gt \beta' Z_2|T_1 \lt T_2) }[/math] (Roughly speaking) the probability of concordance between predicted and observed responses. The probability that the predictions for a random pair of subjects are concordant with their outcomes. (Heagerty & Zheng 2005). (Precisely) fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model.

Page 95 of Heagerty and Zheng (2005) gives the following relationship for the C-statistic:

[math]\displaystyle{ C = P(M_j \gt M_k | T_j \lt T_k) = \int_t \mbox{AUC(t) w(t)} \; dt }[/math]

where M is the marker value and [math]\displaystyle{ w(t) = 2 \cdot f(t) \cdot S(t) }[/math]. When the interest is in the accuracy of a regression model we will use [math]\displaystyle{ M_i = Z_i^T \beta }[/math].

The time-dependent AUC is also related to time-dependent C-index. [math]\displaystyle{ C_\tau = P(M_j \gt M_k | T_j \lt T_k, T_j \lt \tau) = \int_t \mbox{AUC(t)} \mbox{w}_{\tau}(t) \; dt }[/math] where [math]\displaystyle{ w_\tau(t) = 2 \cdot f(t) \cdot S(t)/(1-S^2(\tau)) }[/math].

Concordance index/C-index/C-statistic interpretation and R packages

  • The area under the ROC curve (plot of sensitivity vs 1-specificity) is also called the C-statistic. It is a measure of discrimination that can be generalized to survival data (Harrell 1982 & 2001). The ROC curve is a function of the sensitivity and specificity for each value of the measure or model. (Nancy Cook, 2007)
    • The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
    • The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
    • Perfect discrimination corresponds to a c-statistic of 1 & is achieved if the scores for all the cases are higher than those for all the non-cases.
    • The c-statistic is the probability that the measure or predicted risk/risk score is higher for a case than for a noncase.
    • The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
    • C-statistic is a rank-based measure. The c-statistic describes how well models can rank order cases and noncases, but not a function of the actual predicted probabilities.
  • How to interpret the output for calculating concordance index (c-index)? [math]\displaystyle{ P(\beta' Z_1 \gt \beta' Z_2|T_1 \lt T_2) }[/math] where T is the survival time and Z is the covariates.
    • It is the fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model.
    • High values mean that your model predicts higher probabilities of survival for higher observed survival times.
    • The c index estimates the probability of concordance between predicted and observed responses. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
  • Drawback of C-statistics:
    • Even though rank indexes such as c are widely applicable and easily interpretable, they are not sensitive for detecting small differences in discrimination ability between two models. This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
    • If the model is correct, the likelihood based measures may be more sensitive in detecting differences in prediction ability, compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
  • What is Harrell’s C-index? C = #concordant pairs / (# concordant pairs + # discordant pairs)
  • http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
  • Concordance vignette from the survival package. It has a good summary of different ways (such as Kendall's tau and Somers' d) to calculate the concordance statistic. The concordance function in the survival package can be used with various types of models including logistic and linear regression.
  • Assessment of Discrimination in Survival Analysis (C-statistics, etc) webpage
  • 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical? (C-index/C-statistic: five different calculation methods and a comparison). The 5 functions are rcorrcens() from Hmisc, summary()$concordance from survival, survConcordance() from survival, concordance.index() from survcomp and cph() from rms.
  • Summary of R packages to compute C-statistic
Package Function New data?
survival summary(coxph(formula, data))$concordance["C"], Cindex() no, yes
survC1 Est.Cval() no
survAUC UnoC() yes
timeROC ? ?
compareC ? ?
survcomp concordance.index() ?
Hmisc rcorr.cens() no
pec cindex() yes
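
A quick sketch using two of the options from the table (ovarian data):

library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
summary(fit)$concordance["C"]   # Harrell's C from the survival package
# Hmisc expects a predictor where larger values mean longer survival,
# so the risk score (linear predictor) is negated
Hmisc::rcorr.cens(-fit$linear.predictors,
                  Surv(ovarian$futime, ovarian$fustat))["C Index"]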

Integrated Brier score (≈ "mean squared error" of prediction for survival data)

Assessment and comparison of prognostic classification schemes for survival data Graf et al Stat. Med. 1999 2529-45, Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times Gerds et al 2006.

  • Because point predictions of event-free times are almost inevitably inaccurate and unsatisfactory, the mean square error of prediction [math]\displaystyle{ \frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2 }[/math] is not considered. See Parkes 1972 or Henderson 2001.
  • Another approach is to predict the survival or event status [math]\displaystyle{ Y=I(T \gt \tau) }[/math] at a fixed time point [math]\displaystyle{ \tau }[/math] for a patient with X=x. This leads to the expected Brier score [math]\displaystyle{ E[(Y - \hat{S}(\tau|X))^2] }[/math] where [math]\displaystyle{ \hat{S}(\tau|X) }[/math] is the estimated event-free probabilities (survival probability) at time [math]\displaystyle{ \tau }[/math] for subject with predictor variable [math]\displaystyle{ X }[/math].
  • The time-dependent Brier score (without censoring)
[math]\displaystyle{ \begin{align} \mbox{Brier}(\tau) &= \frac{1}{n}\sum_1^n (I(T_i\gt \tau) - \hat{S}(\tau|X_i))^2 \end{align} }[/math]
  • The time-dependent Brier score (with censoring, C is the censoring variable)
[math]\displaystyle{ \begin{align} \mbox{Brier}(\tau) = \frac{1}{n}\sum_{i=1}^n\bigg[\frac{(\hat{S}(\tau|X_i))^2 I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}(\tau|X_i))^2 I(t_i \gt \tau)}{\hat{S}_C(\tau)}\bigg] \end{align} }[/math]

where [math]\displaystyle{ \hat{S}_C(t_i) = P(C \gt t_i) }[/math] is the Kaplan-Meier estimate of the censoring distribution, with [math]\displaystyle{ t_i }[/math] the observed time of patient i. The Brier score can be integrated over time [math]\displaystyle{ t \in [0, \tau] }[/math] with respect to some weight function W(t), for which a natural choice is [math]\displaystyle{ (1 - \hat{S}(t))/(1-\hat{S}(\tau)) }[/math]. The lower the integrated Brier score, the higher the prediction accuracy.

  • Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25% which corresponds to predicting 50% risk for everyone. See Evaluating Random Forests for Survival Analysis using Prediction Error Curves by Mogensen et al J. Stat Software 2012 (pec package). The paper has a good summary of different R package implementing Brier scores.

R function
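
One option is pec::pec(), which computes IPCW Brier score / prediction error curves; a sketch with the lung data (apparent error only, no cross-validation):

library(survival)
library(pec)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE, y = TRUE)
pe  <- pec(object = list(Cox = fit),
           formula = Surv(time, status) ~ 1,   # censoring weights from Kaplan-Meier
           data = lung, times = seq(0, 800, by = 50), exact = FALSE)
print(pe)               # Brier score of the Cox model and of the KM reference
crps(pe, times = 500)   # integrated Brier score over [0, 500]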

Papers on high dimensional covariates

  • Assessment of survival prediction models based on microarray data, Bioinformatics , 2007, vol. 23 (pg. 1768-74)
  • Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics , 2008, vol. 9 pg. 14

Kendall's tau, Goodman-Kruskal's gamma, Somers' d

C-statistics

  • For two groups data (one with event, one without), C-statistic has an intuitive interpretation: if two individuals are selected at random, one with the event and one without, then the C-statistic is the probability that the model predicts a higher risk for the individual with the event. Analysis of Biomarker Data: logs, odds ratios and ROC curves by Grund 2010
  • C-statistics is the probability of concordance between predicted and observed survival.
  • Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach Kang et al, Stat in Med, 2014. compareC package for comparing two correlated C-indices with right censored outcomes. Harrell’s Concordance. The s.e. of the Harrell's C-statistics can be estimated by the delta method. [math]\displaystyle{ \begin{align} C_H = \frac{\sum_{i,j}I(t_i \lt t_{j}) I(\hat{\beta} Z_i \gt \hat{\beta} Z_j) \delta_i}{\sum_{i,j} I(t_i \lt t_j) \delta_i} \end{align} }[/math] converges to a censoring-dependent quantity [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \text{min}(D_1,D_2)). }[/math] Here D is the censoring variable.
  • On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data by Uno et al 2011. Let [math]\displaystyle{ \tau }[/math] be a specified time point within the support of the censoring variable. [math]\displaystyle{ \begin{align} C(\tau) = \text{UnoC}(\hat{\pi}, \tau) = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i \gt \hat{\beta}'Z_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) \delta_i} \end{align} }[/math], a measure of the concordance between [math]\displaystyle{ \hat{\beta} Z_i }[/math] (the linear predictor) and the survival time. [math]\displaystyle{ \hat{S}_C(t) }[/math] is the Kaplan-Meier estimator for the censoring distribution/variable/time (cf event time); flipping the definition of [math]\displaystyle{ \delta_i }[/math]/considering failure events as "censored" observations and censored observations as "failures" and computing the KM as usual; see p207 of Satten 2001 and the source code from the kmcens() in survC1. Note that [math]\displaystyle{ C_\tau }[/math] converges to [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \tau). }[/math]
    • Uno's estimator does not require the fitted model to be correct. See also Table V in the simulation study where the true model is log-normal regression.
    • Uno's estimator is consistent for a population concordance measure that is free of censoring. See the coverage result in table IV and V from his simulation study. Other forms of C-statistic estimate population parameters that may depend on the current study-specific censoring distribution.
    • To accommodate discrete (tied) risk scores, survC1::Est.Cval() uses the formula [math]\displaystyle{ \begin{align} \frac{\sum_{i,i'}[ (\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i \gt \hat{\beta}'Z_{i'}) \delta_i + 0.5 \cdot (\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i = \hat{\beta}'Z_{i'}) \delta_i ]}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) \delta_i} \end{align} }[/math]. Note that pec::cindex() uses the same formula but survAUC::UnoC() does not.
    • If the specified [math]\displaystyle{ \tau }[/math] (tau) is 'too' large such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for [math]\displaystyle{ \hat{C}_\tau }[/math] can be quite large.
    • Uno mentioned from (page 95) Heagerty and Zheng 2005 that when T is right censoring, one would typically consider [math]\displaystyle{ C_\tau }[/math] with a fixed, prespecified follow-up period [math]\displaystyle{ (0, \tau) }[/math].
    • Uno also mentioned that when the data is right censored, the censoring variable D is usually shorter than that of the failure time T, the tail part of the estimated survival function of T is rather unstable. Thus we consider a truncated version of C.
    • Heagerty and Zheng (2005) p95 said [math]\displaystyle{ C_\tau }[/math] is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in [math]\displaystyle{ (0, \tau) }[/math].
    • real data 1: fit a Cox model. Get risk scores [math]\displaystyle{ \hat{\beta}'Z }[/math]. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of [math]\displaystyle{ C_\tau }[/math] for different [math]\displaystyle{ \tau }[/math]. Compare them with the conventional C-index procedure (Korn).
    • real data 1: compute [math]\displaystyle{ C_\tau }[/math] for a full model and a reduce model. Compute the difference of them ([math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} = .01 }[/math]) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). Though HDL is quite significant (p=0) with respect to the risk of CV disease but its incremental value evaluated via C-statistics is quite modest.
    • real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other is gene+age+ER. For each model we can calculate the point and interval estimates of [math]\displaystyle{ C_\tau }[/math] for different [math]\displaystyle{ \tau }[/math]s.
    • simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for [math]\displaystyle{ \tau }[/math] equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
  • Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination by Tripep et al 2010
  • Gonen and Heller 2005 concordance index for Cox models
    • [math]\displaystyle{ P(T_2\gt T_1|g(Z_1)\gt g(Z_2)) }[/math], Gonen and Heller's c statistic, which is independent of censoring.
    • GHCI() from survAUC package. Strangely only one parameter is needed. survAUC allows for testing data but CPE package does not have an option for testing data.
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age,
                    x=TRUE, y=TRUE, method="breslow", data=TR)
lpnew <- predict(train.fit, newdata=TE)      
survAUC::GHCI(lpnew) # .8515

lpnew2 <- predict(train.fit, newdata = TR)
survAUC::GHCI(lpnew2) # 0.8079495

CPE::phcpe(train.fit, CPE.SE = TRUE) 
# $CPE
# [1] 0.8079495
# $CPE.SE
# [1] 0.0670646

Hmisc::rcorr.cens(-TR$age, Surv(TR$futime, TR$fustat))["C Index"]
# 0.7654321 
Hmisc::rcorr.cens(TR$age, Surv(TR$futime, TR$fustat))["C Index"]
# 0.2345679 
  • Uno's C-statistics (2011) and some examples using different packages
    • C-statistic may or may not be a decreasing function of tau. However, AUC(t) may not be decreasing; see Fig 1 of Blanche et al 2018.
      library(survAUC); library(pec)
      set.seed(1234)
      dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001) # simulWeib() was defined above
      #     coef exp(coef) se(coef)     z      p
      # x -0.744     0.475    0.269 -2.76 0.0057
      TR <- dat[1:80,]
      TE <- dat[81:100,]
      train.fit  <- coxph(Surv(time, status) ~ x, data=TR)
      plot(survfit(Surv(time, status) ~ 1, data =TR))
      
      lpnew <- predict(train.fit, newdata=TE)
      Surv.rsp <- Surv(TR$time, TR$status)
      Surv.rsp.new <- Surv(TE$time, TE$status)              
      sapply(c(.25, .5, .75),
             function(qtl) UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=quantile(TR$time, qtl)))
      # [1] 0.2580193 0.2735142 0.2658271
      sapply(c(.25, .5, .75), 
             function(qtl) cindex( list(matrix( -lpnew, nrow = nrow(TE))), 
              formula = Surv(time, status) ~ x,
              data = TE, 
              eval.times = quantile(TR$time, qtl))$AppC$matrix)
      # [1] 0.5041490 0.5186850 0.5106746
    • Four elements are needed for computing truncated C-statistic using survAUC::UnoC. But it seems pec::cindex does not need the training data.
      • training data including covariates,
      • testing data including covariates,
      • predictor from new data,
      • truncation time/evaluation time/prediction horizon.
    • (From ?UnoC) Uno's estimator is based on inverse-probability-of-censoring weights and does not assume a specific working model for deriving the predictor lpnew. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
    • survAUC::UnoC(). The tau parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \tau). }[/math] Con: no confidence interval estimate for [math]\displaystyle{ C_\tau }[/math] nor [math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} }[/math]
    • pec::cindex(). At each timepoint of eval.times the c-index is computed using only those pairs where one of the event times is known to be earlier than this timepoint. If eval.times is missing or Inf then the largest uncensored event time is used. See a more general example from here
    • Est.Cval() from the survC1 package (the only package gives confidence intervals of C-statistic or deltaC, authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: Inf.Cval() can compute the confidence interval (perturbation-resampling based) of [math]\displaystyle{ C_\tau }[/math] & Inf.Cval.Delta() for the difference [math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} }[/math].
      library(survAUC)
      # require training and predict sets
      TR <- ovarian[1:16,]
      TE <- ovarian[17:26,]
      train.fit  <- coxph(Surv(futime, fustat) ~ age, data=TR)
      
      lpnew <- predict(train.fit, newdata=TE)
      Surv.rsp <- Surv(TR$futime, TR$fustat)
      Surv.rsp.new <- Surv(TE$futime, TE$fustat)              
      
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1) 
      # [1] 0.9761905
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
      # [1] 0.7308979
      # So the function UnoC() can obtain the exact result as Est.Cval().
      # Now try on a new data set. Question: why do we need Surv.rsp?
      UnoC(Surv.rsp, Surv.rsp.new, lpnew)
      # [1] 0.7333333
      UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
      # [1] 0.7333333
      
      library(pec)
      cindex( list(matrix( -lpnew, nrow = nrow(TE))), 
              formula = Surv(futime, fustat) ~ age,
              data = TE, eval.times = 365.25*2)$AppC
      # $matrix
      # [1] 0.7333333
      
      library(survC1)
      Est.Cval(cbind(TE, lpnew), tau = 365.25*2, nofit = TRUE)$Dhat
      # [1] 0.7333333
      
      # tau is mandatory (>0), no need to have training and predict sets
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
      # [1] 0.9761905
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
      # [1] 0.7308979
      
      svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
      par(mfrow=c(1,2))
      plot(TR$futime, train.fit$linear.predictors, main="training data", 
           xlab="time", ylab="predictor")
      mtext("C=.731 at t=2", 3)
      plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
      mtext("C=.733 at t=2", 3)
      dev.off()
      File:C stat scatter.svg
  • Assessing the prediction accuracy of a cure model for censored survival data with long-term survivors: Application to breast cancer data
  • The use of ROC for defining the validity of the prognostic index in censored data
  • Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction Cook 2007
  • Evaluating Discrimination of Risk Prediction Models: The C Statistic by Pencina et al, JAMA 2015
  • Blanche et al(2018) The c-index is not proper for the evaluation of t-year predicted risks
    • There is a bug on script line 154.
    • With a fixed prediction horizon, the concordance index can be higher for a misspecified model than for a correctly specified model. The time-dependent AUC does not have this problem.
    • (page 8) We now show that when a misspecified prediction model satisfies the ranking condition but the true distribution does not, then it is possible that the misspecified model achieves a misleadingly high c-index.
    • The traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of t-year survival. In contrast, measures of predicted error do not suffer from these limitations. See this paper The relationship between the C‐statistic and the accuracy of program‐specific evaluations by Wey et al 2018
    • Unfortunately, a drawback of Harrell’s c-index for the time to event and competing risk settings is that the measure does not provide a value specific to the time horizon of prediction (e.g., a 3-year risk). See this paper The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models by Kattan and Gerds 2018.
    • In Fig 1 the Y-axis is concordance (AUC/C) and the X-axis is time; the caption says the figure shows the ability of (some variable) to discriminate patients who will either die or be transplanted within the next t years from those who will be event-free at time t.
    • The [math]\displaystyle{ \tau }[/math] considered here is the maximal end of follow-up time
    • AUC (riskRegression::Score()), Uno-C (pec::cindex()), Harrell's C (Hmisc::rcorr.cens() for censored and summary(fit)$concordance for uncensored) are considered.
    • The C_IPCW(t) or C_Harrell(t) is obtained by artificially censoring the outcome at time t. So C_IPCW(t) is different from Uno's version.

C-statistic limitations

See the discussion section of The relationship between the C‐statistic and the accuracy of program‐specific evaluations by Wey 2018

  • Correctly specified models can have low or high C‐statistics. Thus, the C‐statistic cannot identify a correctly specified model.
  • the traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of, for example, 1‐year survival

Importantly, there exists no measure of risk discrimination or predicted error that can identify a correctly specified model, because they all depend on unknown characteristics of the data. For example, the C‐statistic depends on the variability in recipient‐level risk, while measures of squared error such as the Brier Score depend on residual variability.

Analysis of Biomarker Data: logs, odds ratios and ROC curves. This paper does not consider the survival time data. It has some summary about C-statistic (interpretation, warnings).

  • The C-statistic is relatively insensitive to the added contribution of a new marker when the two models, with and without biomarker, estimate risk on a continuous scale. In fact, many new biomarkers provide only minimal increase in the C-statistic when added to the Framingham model for CHD risk.
  • The classical C-statistic assumes that high sensitivity and high specificity are equally desirable. This is not always the case – for example, when screening the general population for a low-prevalence outcome requiring invasive follow-up, high specificity is important, while cancer screening in a high-risk group would emphasize high sensitivity.
  • To achieve a noticeable increase in the C-statistic, a biomarker must have a very strong independent association with the event risk (say ORs of 10 or higher per 1 SD increase).

C-statistic applications

  • Semiparametric Regression Analysis of Multiple Right- and Interval-Censored Events by Gao et al, JASA 2018
  • A c statistic of 0.7–0.8 is considered good, while >0.8 is considered excellent. See this 2018 paper.
  • The C statistic, also termed concordance statistic or c-index, is analogous to the area under the curve and is a global measure of model discrimination. Discrimination refers to the ability of a risk prediction model to separate patients who develop a health outcome from patients who do not develop a health outcome. Effectively, the C statistic is the probability that a model will result in a higher-risk score for a patient who develops the outcomes of interest compared with a patient who does not develop the outcomes of interest. See the paper JAMA 2018

C-statistic vs LRT comparing nested models

1. Binary data

# https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
set.seed(666)
x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
df = data.frame(y=y,x1=x1,x2=x2)
fit <- glm( y~x1+x2,data=df,family="binomial")
summary(fit) 
# Estimate Std. Error z value Pr(>|z|)    
# (Intercept)   0.9915     0.1185   8.367   <2e-16 ***
#   x1            2.2731     0.1789  12.709   <2e-16 ***
#   x2            3.1853     0.2157  14.768   <2e-16 ***
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for binomial family taken to be 1)
# 
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1,data=df,family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
# 
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16            
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2])) # p-value of the likelihood ratio test
# [1] 0

# Method 1: use ROC package to compute AUC
library(ROC)
set.seed(123)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca( truth=y, data=markers, rule=dxrule.sca )
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca( truth=y, data=markers2, rule=dxrule.sca )
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987

# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
# https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964
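
As a complement to the LRT above, the two correlated AUCs can be compared formally. A minimal sketch (assumption: the pROC package, reusing markers and markers2 from the code above) using DeLong's test via pROC::roc.test():

roc_full    <- pROC::roc(y, markers)     # model with x1 + x2
roc_reduced <- pROC::roc(y, markers2)    # model with x1 only
pROC::roc.test(roc_full, roc_reduced, method = "delong")
# Expect a tiny p-value here (x2 has a strong true effect); for weak added
# markers the AUC difference is typically much less sensitive than the LRT.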

2. Survival data

library(survival)
data(ovarian)
head(ovarian)
range(ovarian$futime) # [1]   59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))

coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)     z      p
# rx  -0.8040    0.4475   0.6320 -1.27 0.2034
# age  0.1473    1.1587   0.0461  3.19 0.0014
#
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12 

require(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau=365.25*1
Delta=Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119   0.611   1.077
# Model0 0.659 0.148   0.369   0.949
# Delta  0.185 0.197  -0.201   0.572
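
As a complement to the tau-restricted estimates from survC1 above, Harrell's C for each nested Cox model and the likelihood ratio test can be obtained directly (a minimal sketch; concordance() assumes a recent version of the survival package):

fit0 <- coxph(Surv(futime, fustat) ~ rx, data = ovarian)
fit1 <- coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
concordance(fit0)   # Harrell's C for the rx-only model
concordance(fit1)   # Harrell's C for the rx + age model
anova(fit0, fit1)   # likelihood ratio test for adding age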

Time dependent ROC curves

tdrocc() from the Bioconductor package survcomp computes a time-dependent ROC curve at a given time point.
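
A minimal sketch of the closely related time-dependent AUC, using riskRegression::Score() (mentioned earlier) on the ovarian Cox model; the time points chosen here are illustrative:

library(survival); library(riskRegression)
fit <- coxph(Surv(futime, fustat) ~ rx + age, data = ovarian, x = TRUE)
sc <- Score(list("Cox" = fit), formula = Surv(futime, fustat) ~ 1,
            data = ovarian, times = c(180, 365, 730), metrics = "auc")
sc$AUC$score   # AUC(t) at each requested horizon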

Calibration

Graphical calibration curves and the integrated calibration index (ICI) for survival models
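
A minimal sketch of a related bootstrap calibration curve via rms::calibrate() (this is not the ICI code from the paper; the lung data and the 1-year horizon are illustrative assumptions):

library(rms)       # also loads survival
f <- cph(Surv(time, status == 2) ~ age + sex, data = lung,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)
cal <- calibrate(f, cmethod = "KM", u = 365, m = 50, B = 120)
plot(cal)          # observed vs predicted 1-year survival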

Prognostic markers vs predictive markers (and other biomarkers)

Treatment Effect

  • Tian 2014: [math]\displaystyle{ P(T^1 \geq t_0|z) - P(T^{-1} \geq t_0|z) }[/math]
  • Bonetti 2000: Hazard ratio
  • Janes 2014: [math]\displaystyle{ \Delta(Y) = \rho_0(Y) - \rho_1(Y) = P(D=1|T=0, Y) - P(D=1|T=1, Y) }[/math]
    • Subjects with [math]\displaystyle{ \Delta(Y)\lt 0 }[/math] are called marker-negative; the standard/control treatment is favored.
    • Subjects with [math]\displaystyle{ \Delta(Y)\gt 0 }[/math] are called marker-positive; the new treatment is favored. The rule is to apply treatment only to marker-positive patients, and for this portion of patients the average benefit of treatment is [math]\displaystyle{ B_{pos} = E(\Delta(Y) | \Delta(Y) \gt 0) }[/math]. See p103 of the paper. A small simulation sketch follows after this list.
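
A minimal simulation sketch (all data and variable names hypothetical, not from Janes 2014): estimate Delta(Y) = P(D=1|T=0,Y) - P(D=1|T=1,Y) with one logistic model per randomized arm, then B_pos, the average benefit among marker-positive patients.

set.seed(1)
n   <- 2000
Y   <- rnorm(n)                                 # biomarker
trt <- rbinom(n, 1, 0.5)                        # randomized treatment T
p   <- plogis(-0.5 + 0.8 * Y - 1.2 * trt * Y)   # treatment lowers event risk only when Y > 0
D   <- rbinom(n, 1, p)                          # adverse event indicator
dat <- data.frame(D, Y, trt)
fit0 <- glm(D ~ Y, family = binomial, data = subset(dat, trt == 0))
fit1 <- glm(D ~ Y, family = binomial, data = subset(dat, trt == 1))
rho0 <- predict(fit0, newdata = dat, type = "response")   # P(D=1 | T=0, Y)
rho1 <- predict(fit1, newdata = dat, type = "response")   # P(D=1 | T=1, Y)
Delta <- rho0 - rho1                            # estimated treatment benefit given Y
mean(Delta[Delta > 0])                          # B_pos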

Subgroup identification

Identification of subgroups via partial linear regression modeling approach Zhou 2021

Some packages

personalized package

personalized: Estimation and Validation Methods for Subgroup Identification and Personalized Medicine. Subgroup Identification and Precision Medicine with the {personalized} R Package (youtube)

SurvMetrics

SurvMetrics: Predictive Evaluation Metrics in Survival Analysis

SurvBenchmark

SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data

Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect

Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect 2021

Quantifying treatment differences in confirmatory trials under non-proportional hazards

Quantifying treatment differences in confirmatory trials under non-proportional hazards

The source code is on GitHub.

Computation for gene expression (microarray) data

n <- 500
g <- 10000
y <- rexp(n)
status <- ifelse(runif(n) < .7, 1, 0)
x <- matrix(rnorm(n*g), nr=g)
treat <- rbinom(n, 1, .5)
# Method 1
system.time(for(i in 1:g) coxph(Surv(y, status) ~ x[i, ] + treat + treat:x[i, ]))
# 28 seconds 

# Method 2
system.time(apply(x, 1, function(z) coxph(Surv(y, status) ~ z + treat + treat:z)))
# 29 seconds

# Method 3 (Windows)
dyn.load("C:/Program Files (x86)/ArrayTools/Fortran/surv64.dll")			  
tme <- y
sorted <- order(tme)
stime <- as.double(tme[sorted])
sstat <- as.integer(status[sorted])
x1 <- x[,sorted]
imodel <- 1  # imodel=1, fit univariate gene expression. Return p-values vector.
nvar <- 1
system.time(outx1 <- .Fortran("coxfitc", as.integer(n), as.integer(g), as.integer(0),
                 stime, sstat, t(x1), as.double(0), as.integer(imodel), 
                 double(2*n+2*nvar*nvar+3*nvar), logdiff = double(g)))
# 1.69 seconds on R i386
# 0.79 seconds on R x64

# Method 4: GSA
library(GSA)
genenames=paste("g", 1:g, sep="")
# create some random gene sets
genesets=vector("list", 50)
for(i in 1:50){
  genesets[[i]]=paste("g", sample(1:g,size=30), sep="")
}
geneset.names=paste("set",as.character(1:50),sep="")
debug(GSA.func)
GSA.obj<-GSA(x,y, genenames=genenames, genesets=genesets,  
             censoring.status=status,
             resp.type="Survival", nperms=1)
Browse[3]> str(catalog.unique)
 int [1:1401] 7943 227 4069 3011 8402 1586 2443 2777 673 9021 ...
Browse[3]> system.time(cox.func(x[catalog.unique,], y, censoring.status, s0=0))
# 1.3 seconds
Browse[2]> system.time(cox.func(x, y, censoring.status, s0=0))
# 7.259 seconds
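
A minimal sketch (assumptions: a Unix-alike machine for mclapply with multiple cores; this is not part of the original benchmark): parallelize Method 2 across cores and keep only the per-gene interaction p-value.

library(parallel)
pvals <- unlist(mclapply(seq_len(g), function(i) {
  fit <- coxph(Surv(y, status) ~ x[i, ] * treat)
  # the last row of the coefficient table is the gene-by-treatment interaction
  tail(summary(fit)$coefficients[, "Pr(>|z|)"], 1)
}, mc.cores = 4))
length(pvals)   # one interaction p-value per gene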

Single gene vs multi-gene survival models

A comparative study of survival models for breast cancer prognostication revisited: the benefits of multi-gene models by Grzadkowski et al 2018. To assess the concordance of biomarker performance, the authors use the Concordance Correlation Coefficient (CCC) introduced by Lin (1989) and further amended in Lin (2000).
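
A minimal sketch of Lin's CCC (assumption: the DescTools package; this is not the authors' code), comparing two vectors of per-cohort performance scores for a pair of models:

library(DescTools)
set.seed(1)
perf_model_A <- runif(20, 0.60, 0.75)              # e.g., C-indices across 20 cohorts
perf_model_B <- perf_model_A + rnorm(20, 0, 0.02)  # a model with similar performance
CCC(perf_model_A, perf_model_B)$rho.c              # CCC estimate with 95% CI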

Random papers using C-index, AUC or Brier scores

R

More, Web tools

Others

Machine learning

mlr3proba: an R package for machine learning in survival analysis

Constrained randomization

Constrained randomization to evaluate the vaccine rollout in nursing homes

Principles and Practice of Clinical Research

Principles of Clinical Pharmacology

Principles of Clinical Pharmacology

Progressive disease, stable disease

STANDARD OF CARE

The therapy everyone would receive (which may be no therapy) if no biomarker were used. Cf. an experimental therapy whose effect might be related to the value of a continuous biomarker.

Breast cancer

HER2-positive breast cancer

TNBC (Triple-negative breast cancer)

https://en.wikipedia.org/wiki/TNBC