== Calculating survival time in R ==
* [https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html#Calculating_survival_times_-_base_R base R and lubridate] methods. Emily C. Zabor


=== Convert days to years or months ===
<pre>
# 365.25/12 = 30.4375
</pre>
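A minimal sketch of the conversion, assuming survival times are recorded in days (the variable names are made up for illustration):
<pre>
# Hypothetical vector of survival times recorded in days
os_days <- c(100, 365, 730, 1000)

os_years  <- os_days / 365.25    # average days per year
os_months <- os_days / 30.4375   # 365.25/12 = 30.4375 average days per month

round(os_years, 2)   # 0.27  1.00  2.00  2.74
round(os_months, 2)  # 3.29 11.99 23.98 32.85
</pre>
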
=== Overall survival, progression-free, recurrence-free survival ===
* [https://www.cancer.gov/about-cancer/diagnosis-staging/prognosis Understanding Statistics About Survival]
* [https://en.wikipedia.org/wiki/Survival_rate Survival rate].
* [https://en.wikipedia.org/wiki/Survival_rate#Overall_survival Overall survival]. It is common to use '''diagnostic date''' or '''treatment/intervention starting date''' as the starting point. It is usually used as an indication of how well a treatment works. [https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-023-07730-1 How is overall survival assessed in randomised clinical trials in cancer and are subsequent treatment lines considered? A systematic review]
** Event: if a patient died (any cause), T=(date of death) - (date of therapy start)
** Censored: if a patient was still alive at last follow-up, T=(date of last follow-up) - (date of therapy start)
* [https://en.wikipedia.org/wiki/Progression-free_survival Progression-free survival]
* '''Recurrence-free survival''' (RFS)
** Event: if a patient relapsed or died. T=(date of relapse or death, whichever comes first) - (date of therapy/treatment start)
** Censored: if a patient had not relapsed and was still alive at last follow-up. T=(date of last follow-up) - (date of therapy start)
* '''Disease-free survival''' (same as RFS)
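A minimal sketch of deriving the OS time and status from dates in base R (the data frame and column names below are hypothetical):
<pre>
# Hypothetical data; in practice these come from a clinical database
df <- data.frame(
  start_date   = as.Date(c("2020-01-01", "2020-03-15")),
  death_date   = as.Date(c("2021-06-30", NA)),          # NA = still alive
  last_fu_date = as.Date(c("2021-06-30", "2022-01-10"))
)

df$status     <- ifelse(is.na(df$death_date), 0, 1)     # 1 = died, 0 = censored
df$event_date <- as.Date(ifelse(df$status == 1, df$death_date, df$last_fu_date),
                         origin = "1970-01-01")
df$os_days    <- as.numeric(df$event_date - df$start_date)
df$os_months  <- df$os_days / 30.4375
</pre>
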
=== Progression-free interval ===
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-024-05897-1 A novel approach to the analysis of Overall Survival (OS) as response with Progression-Free Interval (PFI) as condition based on the RNA-seq expression data in The Cancer Genome Atlas (TCGA)]


== [https://en.wikipedia.org/wiki/Censoring_(statistics) Censoring] ==
** Cox and Breslow 1972: S(t) = exp(-Lambda(t))
** Aalen 1978: Lambda(t)
* https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932959/ A practical guide to understanding Kaplan-Meier curves] 2010
* D distinct times <math>t_1 < t_2 < \cdots < t_D</math>. At time <math>t_i</math> there are <math>d_i</math> events. Let <math>Y_i</math> be the number of individuals who are at risk at time <math>t_i</math>. The quantity <math>d_i/Y_i</math> provides an estimate of the conditional probability that an individual who survives to just prior to time <math>t_i</math> experiences the event at time <math>t_i</math>. The '''KM estimator of the survival function''' and the '''Nelson-Aalen estimator of the cumulative hazard''' (their relationship is given below) are defined as follows (for <math>t_1 \le t</math>):
: <math>
\hat{S}(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{Y_i}\right), \qquad
\tilde{H}(t) = \sum_{t_i \le t} \frac{d_i}{Y_i}
</math>
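A small numerical check of these formulas against survival::survfit(), using a tiny made-up data set:
<pre>
library(survival)
tm <- c(2, 3, 3, 5, 7, 8)   # follow-up times (made-up)
st <- c(1, 1, 0, 1, 0, 1)   # 1 = event, 0 = censored

fit <- survfit(Surv(tm, st) ~ 1)
summary(fit)$surv           # KM estimate at each event time

# By hand: product of (1 - d_i/Y_i) over the event times
# event times: 2 (d=1, Y=6), 3 (d=1, Y=5), 5 (d=1, Y=3), 8 (d=1, Y=1)
cumprod(1 - c(1/6, 1/5, 1/3, 1/1))
# 0.8333333 0.6666667 0.4444444 0.0000000
</pre>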


=== Continuous predictor ===
There could be several reasons why we might want to consider Kaplan-Meier (KM) curves using a continuous covariate:
* '''Visualizing Survival Differences''': KM curves can help visualize survival differences across different levels of a continuous covariate. For example, if the covariate is age, we might be interested in how survival probabilities differ across various age groups.
* '''Detecting Non-Proportional Hazards''': KM curves can help detect non-proportional hazards, which occur when the hazard ratios between groups change over time. This can be particularly useful when dealing with continuous covariates, as the relationship between the covariate and survival may not be constant over time.
* '''Understanding the Effect of Covariates''': KM curves can provide insights into the effect of continuous covariates on survival time. This can be useful in understanding the impact of treatment dosage, biomarker levels, or other continuous measures on patient survival.
* '''Developing Diagnostic Tools''': Some researchers have proposed methods to create KM-type curves for continuous covariates as diagnostic tools. These tools can help visualize the confounder-adjusted effect of continuous variables on a time-to-event outcome.
 
The Kaplan-Meier estimator is a (non-parametric) univariable method, meaning it approximates the survival function using at most one variable/predictor. When you have a continuous predictor, one common approach is to convert the continuous variable into a categorical variable by creating groups. This can be done by determining cut-points, such as using the median of the predictor as the group’s cut point.
 
However, this approach has its limitations. The choice of cut-point can greatly influence the results, and arbitrary cut-points may lead to loss of information. Moreover, this method does not adjust for possible confounders.
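A minimal sketch of the median-split approach described above, using the lung data (the cut point and grouping are for illustration only; see also the later section on creating 2 groups from a continuous variable):
<pre>
library(survival)
data(lung, package = "survival")

# Dichotomize a continuous predictor (age) at its median
lung$age_grp <- ifelse(lung$age >= median(lung$age), "age high", "age low")

fit <- survfit(Surv(time, status) ~ age_grp, data = lung)
plot(fit, lty = 1:2, xlab = "Days", ylab = "Survival probability")
legend("topright", legend = names(fit$strata), lty = 1:2)
</pre>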
 
=== Estimating x-year probability of survival ===
[https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html#Estimating_(x)-year_survival Survival Analysis in R]. See the explanation there of why '''the "naive" estimate is wrong''' when we ignore censoring: the correct estimate is 41% but the naive one is 47%.
<pre>
plot(survfit(Surv(time, status) ~ 1, data = lung))
</pre>


This is useful when we want to compare the difference in (overall) survival probability at (5) years based on (A model) (high/low risk groups were defined by the median of scores of the training data).
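A minimal sketch of extracting the x-year (here 1-year) estimate with summary(); this is the censoring-aware estimate quoted above:
<pre>
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
summary(fit, times = 365.25)   # 1-year survival: the 'survival' column is about 0.41
</pre>
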
=== xlab, ylab ===
* Survival probability, Time since diagnosis (year)
* OS probability, Time to death (months)
* RFS probability, Time to relapse or death (months)


=== Median survival and 95% CI ===
* [https://stats.stackexchange.com/a/19180 What happens if a survival curve doesn't reach 0.5?] It means you can't compute the median.


survfit(Surv(time, status) ~ 1, data). Note the [https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html#Estimating_median_survival_time '''"naive" estimate is wrong'''] (median survival time among patients who died). The correct estimate is 310 but the naive one is 226.
<pre>
R> survfit(Surv(time, status) ~ 1, data = lung) # correct
Call: survfit(formula = Surv(time, status) ~ 1, data = lung)
      n events median 0.95LCL 0.95UCL
[1,] 228    165    310    285    363
R> lung %>%
    filter(status == 2) %>%
    summarize(median_surv = median(time)) # wrong (naive: median among patients who died)
  median_surv
1         226
R> median(lung$time) # wrong
[1] 255.5
R> survfit(Surv(time, status) ~ x, data = aml)
Call: survfit(formula = Surv(time, status) ~ x, data = aml)
                 n events median 0.95LCL 0.95UCL
x=Maintained    11      7     31      18      NA
x=Nonmaintained 12     11     23       8      NA
# Extract the median survival time
R> library(survMisc)
R> fit <- survfit(Surv(time, status) ~ 1, data = lung)
R> median_survival_time <- median(fit)
50
310
</pre>
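To pull the median and its confidence interval programmatically, the quantile() method for survfit objects in the survival package can also be used; a minimal sketch:
<pre>
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
quantile(fit, probs = 0.5)    # list with $quantile, $lower, $upper (median = 310 here)
summary(fit)$table["median"]  # another way to extract the median
</pre>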


=== Restricted mean survival time ===
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-13-152 Restricted mean survival time: an alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome] Royston 2013
* [https://onbiostatistics.blogspot.com/2019/04/the-use-of-restricted-mean-survival.html The Use of Restricted Mean Survival Time (RMST) Method When Proportional Hazards Assumption is in Doubt]
** To estimate a treatment effect for a time-to-event endpoint, the hazard ratio (HR) is commonly used.
** The HR is often assumed to be constant over time (the proportional hazards assumption).
** Recently, this assumption has come into doubt.
** If the PH assumption does not hold, the interpretation of the HR can be difficult.
* RMST is defined as the area under the survival curve up to t* ('''truncated time''' or '''horizon'''), which should be pre-specified for a randomized trial. Uno 2014
<ul>
<li>[https://rdrr.io/cran/survival/man/print.survfit.html survival::print.survfit()]. [https://stackoverflow.com/a/43173569 How to compute the mean survival time].
<pre>
fit <- survfit(Surv(time, status == 1) ~ x, data = aml)
print(fit, print.rmean=TRUE) # assume the longest survival time is the horizon
#                  n events rmean* se(rmean) median 0.95LCL 0.95UCL
# x=Maintained    11      7   52.6     19.83     31      18      NA
# x=Nonmaintained 12     11   22.7      4.18     23       8      NA
#    * restricted mean with upper limit =  161
print(fit, print.rmean=TRUE, rmean=36)
#                  n events rmean* se(rmean) median 0.95LCL 0.95UCL
# x=Maintained    11      7   27.4      3.01     31      18      NA
# x=Nonmaintained 12     11   21.2      3.53     23       8      NA
#    * restricted mean with upper limit =  36

# To extract the RMST values
survival:::survmean(fit, rmean=36)[[1]][, "rmean"]
#    x=Maintained x=Nonmaintained
#        27.42500        21.15278
</pre>
</li>
<li>[https://cran.r-project.org/web/packages/survRM2/vignettes/survRM2-vignette3-2.html survRM2] package </li>
<li>[https://cran.r-project.org/web/packages/PWEALL/index.html PWEALL::rmsth()]
<pre>
R> library(survRM2)
R> D = rmst2.sample.data()
R> nrow(D)
[1] 312
R> head(D[,1:3])
      time status arm
1  1.095140      1   1
2 12.320329      0   1
3  2.770705      1   1
4  5.270363      1   1
5  4.117728      0   0
6  6.852841      1   0
R> time   = D$time
R> status = D$status
R> arm    = D$arm
R> rmst2(time, status, arm, tau=10)

The truncation time: tau = 10  was specified.

Restricted Mean Survival Time (RMST) by arm
              Est.    se lower .95 upper .95
RMST (arm=1) 7.146 0.283     6.592     7.701
RMST (arm=0) 7.283 0.295     6.704     7.863

Restricted Mean Time Lost (RMTL) by arm
              Est.    se lower .95 upper .95
RMTL (arm=1) 2.854 0.283     2.299     3.408
RMTL (arm=0) 2.717 0.295     2.137     3.296

Between-group contrast
                       Est. lower .95 upper .95     p
RMST (arm=1)-(arm=0) -0.137    -0.939     0.665 0.738
RMST (arm=1)/(arm=0)  0.981     0.878     1.096 0.738
RMTL (arm=1)/(arm=0)  1.050     0.787     1.402 0.738

R> library(PWEALL)
R> PWEALL::rmsth(time, status, tcut=10)
$tcut
[1] 10
$rmst
[1] 7.208579
$var
[1] 13.00232
$vadd
[1] 3.915123

R> PWEALL::rmsth(time[arm == 0], status[arm == 0], tcut=10)
$tcut
[1] 10
$rmst
[1] 7.283416
$var
[1] 13.30564
$vadd
[1] 3.73545

R> PWEALL::rmsth(time[arm == 1], status[arm == 1], tcut=10)
$tcut
[1] 10
$rmst
[1] 7.146493
$var
[1] 12.49073
$vadd
[1] 3.967705
</pre>
</li>
<li>[https://cran.r-project.org/web/packages/surv2sampleComp/index.html surv2sampleComp], [https://r-statistics-fan.hatenablog.com/entry/2014/08/04/225135 area under the survival curve: RMST (Restricted mean survival time)] (in Japanese) </li>
<li>[https://onlinelibrary.wiley.com/doi/abs/10.1002/bimj.202200002 Clustered restricted mean survival time regression] Chen, 2022 </li>
</ul>


=== Inverse Probability Weighting (IPW) ===
* https://en.wikipedia.org/wiki/Inverse_probability_weighting
* [https://www.tandfonline.com/doi/abs/10.1080/01621459.2019.1660173?journalCode=uasa20 Robust Inference Using Inverse Probability Weighting], [http://www.econ.cuhk.edu.hk/econ/images/content/news_event/seminars/2018-19_2ndTerm/XinweiMa-JMP.pdf pdf]
* [http://www.rebeccabarter.com/blog/2017-07-05-ip-weighting/ The intuition behind inverse probability weighting in causal inference]
* Idea:
** Inverse Probability Weighting (IPW) is a statistical technique used in '''causal inference''' to adjust for the '''bias introduced by non-random sampling or missing data'''. IPW is used to estimate the '''population average treatment effect''' from observational data by weighting the contribution of each individual in the sample based on their probability of receiving the treatment or being observed.
** The basic idea behind IPW is to use the observed covariates to infer the probabilities of treatment assignment or missing data, and then use these probabilities as weights to correct for the bias in the sample. By doing so, IPW allows estimation of treatment effects as if the sample were randomly assigned, and it provides a consistent estimate of the population average treatment effect under certain assumptions.
* Example:
** Suppose we want to study the effect of a new drug on blood pressure. We collect data from a sample of patients, but some of them do not take the drug as prescribed, and others drop out of the study before it ends. We want to use this sample to estimate the average treatment effect of the drug on blood pressure.
** To do this using IPW, we first need to estimate the probability of receiving the treatment (i.e., taking the drug as prescribed) and the probability of being observed (i.e., not dropping out of the study) for each patient. We can use logistic regression or other methods to estimate these probabilities based on the patient's covariates (e.g., age, sex, baseline blood pressure, etc.).
** Once we have these probabilities, we can use them as weights to adjust for the bias introduced by non-random treatment assignment and missing data. For each patient, we multiply their outcome (blood pressure) by the inverse of their probability of receiving the treatment and being observed, and then take the weighted average over the sample. This gives us an estimate of the average treatment effect of the drug on blood pressure that corrects for the bias introduced by non-random sampling and missing data.
* Numerical example
** Suppose we have a sample of 100 patients, and we observe the following: 1) 40 patients take the drug as prescribed and have a mean blood pressure reduction of 10 mmHg. 2) 30 patients do not take the drug as prescribed and have a mean blood pressure reduction of 5 mmHg. 3) 20 patients drop out of the study before it ends and have a mean blood pressure reduction of 7 mmHg. 4) 10 patients both take the drug as prescribed and complete the study, and have a mean blood pressure reduction of 12 mmHg.
** To estimate the average treatment effect of the drug on blood pressure using IPW, we first need to estimate the probability of receiving the treatment (i.e., taking the drug as prescribed) and the probability of being observed (i.e., not dropping out of the study) for each patient. For simplicity, let's assume that these probabilities are equal for all patients.
** Suppose these probabilities all equal 0.5. Then:
**# For the 40 patients who took the drug as prescribed: Weight = 1 / 0.5 = 2, weighted outcome = 2 * 10 = 20 per patient,
**# For the 30 patients who did not take the drug as prescribed: Weight = 1 / 0.5 = 2, weighted outcome = 2 * 5 = 10 per patient,
**# For the 20 patients who dropped out of the study: Weight = 1 / 0.5 = 2, weighted outcome = 2 * 7 = 14 per patient,
**# For the 10 patients who both took the drug as prescribed and completed the study: Weight = 1 / 0.5 = 2, weighted outcome = 2 * 12 = 24 per patient.
** IPW estimate of the average blood pressure reduction = (40*20 + 30*10 + 20*14 + 10*24) / (2*100) = 1620/200 = 8.1 mmHg. With equal weights this simply reduces to the overall mean; with estimated weights it corrects for the bias introduced by non-random treatment assignment and missing data.
** What is 0.5 when we calculate the weight in the above example?
*** The value of 0.5 used in the weight calculation represents '''the estimated probability of receiving the treatment''' (i.e., taking the drug as prescribed) and the probability of being observed (i.e., not dropping out of the study) for each patient.
*** For simplicity, the example assumes that these probabilities are equal for all patients and equal to 0.5. This is often not the case in real-world data, and these probabilities need to be estimated using methods such as '''logistic regression''' or a '''propensity score'''.
*** The weight for each patient is then calculated as the inverse of these probabilities: Weight = 1 / probability of receiving the treatment and being observed.
*** So, in the example, the weight for each patient is equal to 1 / 0.5 = 2. '''This weight represents the importance of each patient in the IPW estimate of the average treatment effect.'''
** The weights in IPW are usually obtained using one of the following methods:
**# '''Logistic Regression''': This is a common method for estimating the weights in IPW. We use logistic regression to estimate the probability of receiving the treatment or being observed as a function of the patient's covariates. The fitted probabilities from the logistic regression model are then used to calculate the weights for each patient.
**# '''Propensity Score''': The propensity score is defined as the probability of receiving the treatment given the patient's covariates. We can estimate the propensity score using logistic regression or other methods, and then use it to calculate the weights for each patient.
**# Weight Truncation: This is a method to stabilize the weights in IPW, especially when some of the weights are very large. Weight truncation involves replacing weights that are larger than a certain threshold with the threshold. This reduces the influence of outliers on the IPW estimate and helps to prevent over-fitting.
**# Other Methods: There are also other methods for estimating the weights in IPW, such as Bayesian hierarchical modeling and kernel density estimation. These methods are more complex but can provide more accurate and flexible estimates of the weights, especially when the relationship between the treatment and the covariates is non-linear.
<ul>
<li>Mathematical formula for IPW
* Let Y be the outcome of interest (e.g., a continuous or binary variable), T be the treatment indicator (e.g., 0 for control group and 1 for treatment group), X be a vector of covariates, and <math>W_i</math> be the weight for individual i. The IPW estimate of the average treatment effect (ATE) is given by: <math>
ATE = E[Y|T=1] - E[Y|T=0]
</math>
: where E[Y|T=1] and E[Y|T=0] are the expected values of Y for the treated and control groups, respectively. These expected values can be estimated using the '''weighted sample mean''' as follows
::<math>
\begin{align}
E[Y|T=1] &= \frac{1}{N_1} \sum_{i \in \text{treatment group}} W_i Y_i \\
E[Y|T=0] &= \frac{1}{N_0} \sum_{i \in \text{control group}} W_i Y_i
\end{align}
</math>
: where <math>N_1</math> and <math>N_0</math> are the number of individuals in the treatment and control groups, respectively, and <math>W_i</math> is the weight for individual <math>i</math>.
* The weights <math>W_i</math> are usually estimated using one of the methods discussed earlier (e.g., logistic regression, propensity score, etc.). '''The IPW estimate of the ATE is unbiased if the weights are correctly estimated and if the distribution of the covariates X is well balanced between the treatment and control groups.'''
* It is important to note that IPW requires careful estimation of the weights and assessment of the assumptions of the model. It is also sensitive to the choice of the covariates X and the model used to estimate the weights. Therefore, it is important to carefully evaluate the validity and robustness of the IPW estimate before drawing any conclusions.
</ul>
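A minimal sketch of the weighted-mean estimator above on simulated data (all names here are made up; the propensity scores come from a logistic regression):
<pre>
set.seed(1)
n  <- 500
x  <- rnorm(n)                              # a confounder
tr <- rbinom(n, 1, plogis(0.5 * x))         # treatment assignment depends on x
y  <- 1 + 2 * tr + x + rnorm(n)             # outcome; true ATE = 2

ps <- fitted(glm(tr ~ x, family = binomial))      # propensity scores
w  <- ifelse(tr == 1, 1 / ps, 1 / (1 - ps))       # IPW weights

# Weighted means of Y in each arm, then their difference (the ATE estimate)
weighted.mean(y[tr == 1], w[tr == 1]) - weighted.mean(y[tr == 0], w[tr == 0])
</pre>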


=== Inverse Probability of Censoring Weighting (IPCW) ===
* R packages
** [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L259 kmcens()] from the survC1 package
** [https://github.com/cran/TreatmentSelection/blob/master/R/timetoevent_subroutines.R get.censoring.weights()] from the TreatmentSelection package (not sure)
** [https://www.rdocumentation.org/packages/pec/versions/2020.11.17/topics/ipcw pec::ipcw()] Estimation of censoring probabilities
** [https://rdrr.io/cran/timeROC/man/timeROC.html timeROC] Time-dependent ROC curve estimation
** [https://www.jstatsoft.org/article/view/v043i13 ipw]: An R Package for Inverse Probability Weighting 2011
* [https://onlinelibrary.wiley.com/doi/pdf/10.1002/bimj.200610301 Consistent Estimation of the Expected Brier Score in General Survival Models with Right-Censored Event Times] Gerds et al 2006.
* Inverse probability weighting https://www.bmj.com/content/352/bmj.i189 Examples are considered.
* [https://onlinelibrary.wiley.com/doi/full/10.1111/j.0006-341X.2000.00779.x Correcting for Noncompliance and Dependent Censoring in an AIDS Clinical Trial with Inverse Probability of Censoring Weighted (IPCW) Log-Rank Tests] by James M. Robins, Biometrics 2000.
* [https://amstat.tandfonline.com/doi/abs/10.1198/000313001317098185#.WtO9eOjwb94 The Kaplan–Meier Estimator as an Inverse-Probability-of-Censoring Weighted Average] by Satten 2001. IPCW.
* IPCW https://www.math.leidenuniv.nl/scripties/MasterWillems.pdf#page=41
* [https://www.sciencedirect.com/science/article/abs/pii/S0010482519302082 ipcwswitch]: An R package for inverse probability of censoring weighting with an application to switches in clinical trials
* [https://pubmed.ncbi.nlm.nih.gov/29462286/ The c-index is not proper for the evaluation of $t$-year predicted risks] Blanche 2018. The IPCW c-index can be larger or smaller than the IPCW AUC(t) depending on t.
* [https://www.sciencedirect.com/science/article/pii/S1532046416000496 Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting] Vock 2016. Referred to in [https://stats.stackexchange.com/a/510985 Kaplan-Meier IPCW].

The plots below show that by flipping the status variable, we can accurately ''recover'' the survival function of the censoring variable. See [[R#Superimpose_a_density_plot_or_any_curves|the R code here]] for superimposing the true exponential distribution on the KM plot of the censoring variable.
{{Pre}}
require(survival)
n = 10000
beta1 = 2; beta2 = -1
lambdaT = 1 # baseline hazard
lambdaC = 2  # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
# T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) # Wrong
T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))

# method 1: exponential censoring variable
C <- rweibull(n, shape=1, scale=lambdaC)
time = pmin(T,C)
status <- 1*(T <= C)
mean(status)
summary(T)
summary(C)
par(mfrow=c(2,1), mar = c(3,4,2,2)+.1)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1),
    ylab="Survival probability",
    main = 'Exponential censoring time')

# method 2: uniform censoring variable
C <- runif(n, 0, 21)
time = pmin(T,C)
status <- 1*(T <= C)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1),
    ylab="Survival probability",
    main = "Uniform censoring time")
</pre>

[[:File:Ipcw.svg]]

<ul>
<li>Numerical example
* Suppose we have a sample of 100 patients and we are interested in '''estimating the mean survival time'''. We observe the survival times for 80 of the patients and 20 are censored, meaning that the event of interest (death in this case) has not occurred at the time of data collection.
* Let's assume that we have estimated the '''probability of remaining uncensored (i.e., of being observed) for each individual''' using a logistic regression model. The probabilities are given by:
<pre>
Individual 1: p_1 = 0.1
Individual 2: p_2 = 0.2
...
Individual 100: p_100 = 0.05
</pre>
* The IPCW weights for each individual are then calculated as the inverse of this probability of being observed:
<pre>
Individual 1: w_1 = 1 / p_1 = 1 / 0.1 = 10
Individual 2: w_2 = 1 / p_2 = 1 / 0.2 = 5
...
Individual 100: w_100 = 1 / p_100 = 1 / 0.05 = 20
</pre>
* The IPCW estimate of the mean survival time is then calculated as the weighted average of the survival times, where the weights are the IPCW weights:
<pre>
IPCW estimate = (w_1 * survival time of individual 1 + w_2 * survival time of individual 2 + ... + w_100 * survival time of individual 100) / (w_1 + w_2 + ... + w_100)
</pre>
* The IPCW estimate takes into account the probability of censoring for each individual, and it gives more weight to individuals who are at higher risk of censoring, which can help to reduce the bias in the estimated mean survival time.
</ul>
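A minimal sketch of turning the flipped-status KM fit into IPCW weights, following the simulation block above (Gfit, Gfun and w are hypothetical names; in practice G(t-) is often used and packages such as pec::ipcw() handle the details):
<pre>
# Estimate G(t) = P(C > t) with the Kaplan-Meier estimator of the censoring variable
Gfit <- survfit(Surv(time, status2) ~ 1)
Gfun <- stepfun(Gfit$time, c(1, Gfit$surv))  # right-continuous step function for G(t)

# Uncensored subjects get weight 1/G(T_i); censored subjects get weight 0
w <- ifelse(status == 1, 1 / Gfun(time), 0)
summary(w[status == 1])
</pre>
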
=== stepfun() and plot.stepfun() ===
* [https://www.r-bloggers.com/veterinary-epidemiologic-research-modelling-survival-data-non-parametric-analyses/ Draw cumulative hazards using stepfun()]
* For KM curve case, see an example [[#Kaplan_.26_Meier_and_Nelson-Aalen:_survfit.formula.28.29|above]].


=== GGally package (ggplot object) ===
[https://ggobi.github.io/ggally/reference/ggsurv.html ggsurv()] from the [https://cran.r-project.org/web/packages/GGally/ GGally] package. '''GGally has twice the downloads of survminer and more authors.'''


Advantage: the returned object has class c("gg", "ggplot"), while survminer::ggsurvplot() returns an object of class c("ggsurvplot", "ggsurv", "list").


It seems better to apply '''order.legend = FALSE''' if we want the default color palette to follow the same order as the levels. For example
<pre>
data(lung, package = "survival")
sf.sex <- survival::survfit(Surv(time, status) ~ sex, data = lung)
ggsurv(sf.sex)  # 2 = Salmon, 1 = Iris blue
                # Colors are defined by the final survival time

ggsurv(sf.sex, order.legend = FALSE) # 1 = Salmon, 2 = Iris blue
                        # More consistent with what we expect
                        # Colors are defined by the levels

# More options
ggsurv(sf.sex, order.legend = FALSE, surv.col = scales::hue_pal()(2))
</pre>


To combine multiple ggplot2 plots, use the '''ggpubr''' package; gridExtra has not been developed since 2017.
<pre>
library(GGally)
library(survival)
data(lung, package = "survival")
sf.lung <- survfit(Surv(time, status) ~ sex, data = lung)
p1 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8,
            xlab = "Time", ylab = "Survival", main = "Lower score")
p1 <- p1 + annotate("text", x=0, y=.25, hjust=0, label="zxcvb")
p2 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8,
            xlab = "Time", ylab = "Survival", main = "High score")
p2 <- p2 + annotate("text", x=0, y=.25, hjust=0, label="asdfg")

# gridExtra::grid.arrange(p1, p2, ncol=2, nrow =1) # no common legend option
ggpubr::ggarrange(p1, p2,  common.legend = TRUE, legend = "right")
# return object class: "gg"  "ggplot"    "ggarrange"
</pre>


=== Survival curves with number at risk at bottom: survminer package ===
R function survminer::ggsurvplot()
<ul>
<li>[https://github.com/rstudio/cheatsheets/blob/main/survminer.pdf survminer Cheatsheet] by RStudio. It includes KM curves (ggsurvplot), diagnostics (ggcoxdiagnostics) and a summary of the Cox model (ggforest).
<li>sthda
* [http://www.sthda.com/english/wiki/survival-analysis-basics Survival Analysis Basics]
* [http://www.sthda.com/english/wiki/survminer-r-package-survival-data-analysis-and-visualization survminer R package: Survival Data Analysis and Visualization]
* [http://www.sthda.com/english/articles/24-ggpubr-publication-ready-plots/81-ggplot2-easy-way-to-mix-multiple-graphs-on-the-same-page/#mix-table-text-and-ggplot ggpubr: Publication Ready Plots]
<li>[https://rpkgs.datanovia.com/survminer/reference/ggsurvplot.html ggsurvplot()]
* ggsurvplot_facet() - if we want to create KM curves based on subsets of the data (one plot)
* ggsurvplot_group_by() - if we want to create KM curves based on subsets of the data (separate plots)
* ggsurvplot_list() - if we want to create a list of KM curves (practical application?)
* ggsurvplot_combine() - if we want to combine OS and PFS, for example, in one plot
<li>[https://github.com/kassambara/survminer/issues/283 Error: object of type 'symbol' is not subsettable]. Use '''survminer::surv_fit()''' in lieu of survival::survfit()
* This is needed if we want to separate Surv() (formula) and survfit() into two statements; for instance, if we want to fit the same data with different formulas.
* [https://rpkgs.datanovia.com/survminer/reference/surv_fit.html surv_fit()]
<li>To save ggsurvplot(), use '''ggsave(FILE, res$plot)'''. To save arrange_ggsurvplots(), use '''ggsave(FILE, res)'''
<li>http://r-addict.com/2016/05/23/Informative-Survival-Plots.html
<li>[https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html#Add_the_numbers_at_risk_table Add the numbers at risk table]. '''cowplot::plot_grid()''' was used to combine the KM plot and risk table together.
<li>[https://dominicmagirr.github.io/post/adjusting-for-covariates-under-non-proportional-hazards/ Adjusting for covariates under non-proportional hazards]. '''break.x.by''' or '''break.time.by''' to control x axis breaks.
<syntaxhighlight lang='r'>
gp <- survminer::ggsurvplot(, risk.table = TRUE,
                      break.x.by = 6,  # if we use 'months' as time unit
                      legend.title = "",  # default is "Strata"
                      legend.labs = c("Male", "Female"), # c("Sex=Male", "Sex=Female")
                      palette = c("blue", "red"), # Change color palettes
                      conf.int = FALSE,
                      linetype = 1, # Or linetype = "strata"
                      xlab = "Time (months)",
                      ylab = "Overall survival",
                      surv.median.line = "hv", # Specify median survival
                      ggtheme = theme_bw(),
                      risk.table.fontsize = 4,
                      legend = c(0.8,0.8))
gp$plot <- gp$plot + scale_linetype_manual(values = c("solid", "solid"))
gp$plot <- gp$plot + annotate("text", x=75, y=1,
                                label=paste(pval,hrpe,hrci,sep="\n"),
                                cex=4, hjust=0, vjust=1)
gp$table <- gp$table +
            theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
</syntaxhighlight>
<pre>
library(survminer)
ggsurvplot(survo, risk.table = TRUE, pval=TRUE, pval.method = TRUE,
          palette = c("#F8766D", "#00BFC4")) # (Salmon, Iris Blue)
</pre>
<li>[https://stackoverflow.com/questions/59951004/arrange-ggsurv-plots-with-one-shared-legend Arrange ggsurv plots with one shared legend]. Note we can add a title to a corner of an individual plot by a trick '''ggsurvplot()$plot + labs(title = "A")''' .
<li><span style="color: red">Arranging Multiple ggsurvplots</span>
[https://rpkgs.datanovia.com/survminer/reference/arrange_ggsurvplots.html arrange_ggsurvplots()]. When I need to put two KM curves plot side by side using arrange_ggsurvplots(), some issues came out (these properties seem to inherit from [https://rdrr.io/cran/gridExtra/man/arrangeGrob.html arrangeGrob]):
* if I try it on a terminal the function will open two graph devices and the first one is blank? 
* if I try it on a terminal with ''print = FALSE'' option, it still open a blank graph window,
* '''if I try it in RStudio, the plot is not generated in RStudio but in a separate X window.''' It does not matter I am using macOS or Linux.
* if I just draw a plot from ggsurvplot(), the plot is drawn in RStudio as we want.
* ggpubr::ggarrange() is an alternative to arrange_ggsurvplots() but ggpubr::ggarrange() does not work with ggsurvplot() objects.
* survminer::ggsurvplot_combine() will put two curves in one plot.
* Solution: using the [https://cran.r-project.org/web/packages/patchwork/index.html patchwork] package. [https://github.com/kassambara/survminer/issues/421 A single legend for multiple ggsurvplots using arrange_ggsurvplot]
:<syntaxhighlight lang='r'>
library(patchwork)
res1 <- ggsurvplot()
...
res1$plot + res2$plot + res3$plot + res4$plot + plot_layout(nrow=2, byrow = FALSE)
</syntaxhighlight>
<li>[https://github.com/kassambara/survminer/issues/54 Add custom annotation to ggsurvplot]. However, even if I use the same x-value in ggsurvplot(pval.coord) and ggplot2::annotate(x), the texts are not aligned on the x-axis.
<pre>
ggsurv$plot <- ggsurv$plot +
              ggplot2::annotate("text",
                                x = 100, y = 0.2, # x and y coordinates of the text
                                label = "My label", size=1)
</pre>
</li>
<li>Use '''solid''' instead of ''dashed'' lines for median times. Modify the line [https://github.com/kassambara/survminer/blob/64de6746a7ee9cd758dc8d14ca88c8383e6851a6/R/ggsurvplot_core.R#L133 p <- .add_surv_median(p, fit, type = surv.median.line, fun = fun, data = data)] by adding linetype = "solid".
</ul>
Paper examples
* [https://www.nature.com/articles/nm.4466/figures/6 High-dimensional single-cell analysis predicts response to anti-PD-1 immunotherapy]

Questions:
* How to remove the tick marks on censored observations, especially in the case of a large sample size?

finalfit R package:
* [https://finalfit.org/reference/surv_plot.html surv_plot()] function

=== ggfortify ===
* [https://cran.r-project.org/web/packages/ggfortify/index.html ggfortify]: Data Visualization Tools for Statistical Analysis Results
* [https://rviews.rstudio.com/2022/09/06/deep-survival/ Beneath and Beyond the Cox Model]

=== ggsurvfit ===
[https://cran.r-project.org/web/packages/ggsurvfit/index.html ggsurvfit]: Easy and Flexible Time-to-Event Figures

=== KMunicate ===
https://cran.r-project.org/web/packages/KMunicate/index.html


=== Life table ===
* https://www.r-bloggers.com/veterinary-epidemiologic-research-modelling-survival-data-non-parametric-analyses/
* [https://www.rdocumentation.org/packages/KMsurv/versions/0.1-5/topics/lifetab lifetab()]

=== Re-construct survival data from KM curves ===
[https://cran.r-project.org/web/packages/reconstructKM/ reconstructKM] package

=== Calculation by hand ===
[https://statsandr.com/blog/what-is-survival-analysis/ What is survival analysis? Examples by hand and in R]

=== Compare the KM curve to the Cox model curve ===
* [https://stats.stackexchange.com/a/469975 Visually Comparing the Kaplan-Meier Curve to the Cox PH Model Curve]
* [https://youtu.be/K7bmmbD7KIg?t=700 Survival Analysis Part 3 | Kaplan Meier vs. Exponential vs. Cox Proportional Hazards (Pros & Cons)] (video)
* [https://dominicmagirr.github.io/post/2022-01-17-confidence-interval-for-a-survival-curve-based-on-a-cox-model/ Confidence interval for a survival curve based on a Cox model]

=== Publication examples ===
* [https://www.nature.com/articles/s41698-023-00402-y/figures/3 FACT cohort patient survival analyses stratified by HRD status]

== Alternatives to survival function plot ==
https://www.rdocumentation.org/packages/survival/versions/2.43-1/topics/plot.survfit
The '''fun''' argument, a transformation of the survival curve:
* fun = "event" or "F": f(y) = 1-y; it calculates P(T < t). This is like a t-year risk (Blanche 2018).
* fun = "cumhaz": cumulative hazard function (f(y) = -log(y)); it calculates H(t). See [https://stats.stackexchange.com/a/60250 Intuition for cumulative hazard function].


== Breslow estimate ==
* http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/viewer.htm#statug_lifetest_details03.htm
* The Breslow estimate is the exponentiation of the negative Nelson-Aalen estimate of the cumulative hazard function.
== Logrank/log-rank/log rank test ==
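A quick numerical check of this relationship (a minimal sketch with the ovarian data; in survfit(), stype = 2 requests the exponentiated Nelson-Aalen, i.e. Breslow/Fleming-Harrington, estimator):
<pre>
library(survival)
km <- survfit(Surv(futime, fustat) ~ 1, data = ovarian)             # Kaplan-Meier
fh <- survfit(Surv(futime, fustat) ~ 1, data = ovarian, stype = 2)  # Breslow / Fleming-Harrington
all.equal(fh$surv, exp(-fh$cumhaz))   # Breslow estimate = exp(-Nelson-Aalen cumulative hazard)
head(cbind(time = km$time, KM = km$surv, Breslow = fh$surv))
</pre>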
== Logrank/log-rank/log rank test ==
* [https://en.wikipedia.org/wiki/Logrank_test Logrank test] is a hypothesis test to compare the survival distributions of two samples. The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time.
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC403858/ Statistics Notes - The logrank test] 2004
** Calculations and an example data set are provided.
** It is also possible to test for a trend in survival across ordered groups.
** The logrank test is based on the same assumptions as the Kaplan-Meier survival curve - namely, that censoring is unrelated to prognosis, the survival probabilities are the same for subjects recruited early and late in the study, and the events happened at the times specified.
** The logrank test is most likely to detect a difference between groups when the risk of an event is consistently greater for one group than another. It is unlikely to detect a difference when survival curves cross, as can happen when comparing a medical with a surgical intervention.
* [https://web.stanford.edu/~lutian/coursepdf/unitweek3.pdf STAT331 Course notes]
* The '''score test''' from the Cox regression is also labeled as the logrank test. [https://stats.stackexchange.com/a/362383 Logrank p-value for >2 groups].
:<syntaxhighlight lang='r'>
?coxph
test1 <- list(time=c(4,3,1,1,2,2,3),
              status=c(1,1,1,0,1,1,0),
              x=c(0,2,1,1,1,0,0),
              sex=c(0,0,0,0,1,1,1))
summary(coxph(Surv(time, status) ~ x, test1) )
# Call:
# coxph(formula = Surv(time, status) ~ x, data = test1)
#
#  n= 7, number of events= 5
#
#    coef exp(coef) se(coef)    z Pr(>|z|)
# x 0.4608    1.5853  0.5628 0.819    0.413
#
#  exp(coef) exp(-coef) lower .95 upper .95
# x    1.585    0.6308    0.5261    4.777
#
# Concordance= 0.643  (se = 0.135 )
# Likelihood ratio test= 0.66  on 1 df,  p=0.4
# Wald test            = 0.67  on 1 df,  p=0.4
# Score (logrank) test = 0.71  on 1 df,  p=0.4
</syntaxhighlight>
<ul>
<li>[https://www.rdocumentation.org/packages/survival/versions/3.1-8/topics/survdiff survdiff], [http://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html Extract p-value from survdiff]
<pre>
sdf <- survdiff(Surv(time, status) ~ treatment, data=mydf)
pvalue <- 1 - pchisq(sdf$chisq, length(sdf$n) - 1)
</pre>
</li>
</ul>
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13102 On null hypotheses in survival analysis] Stensrud 2019
* [https://onlinelibrary.wiley.com/doi/full/10.1111/biom.12770 Efficiency of two sample tests via the restricted mean survival time for analyzing event time observations] Tian 2017
* [https://myweb.uiowa.edu/pbreheny/7210/f15/notes/9-24.pdf#page=7 Stratified log-rank tests]
* The survival package has a [https://www.rdocumentation.org/packages/survival/versions/3.1-8/topics/strata strata] function that we can use inside [https://www.rdocumentation.org/packages/survival/versions/3.1-8/topics/survdiff survdiff()]; see the sketch below.
** Differentiate '''group''' (the factor being compared) from '''strata''' (the factor being adjusted for).
** A '''strata''' term is useful when we suspect there is a confounding factor.
* [http://www.ms.uky.edu/~mai/research/LogRank2006.pdf Log-rank Test: When does it Fail and how to fix it]
* [https://web.stanford.edu/~lutian/coursepdf/survweek3.pdf Survival Analysis: Logrank Test]
* [https://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Survival/BS704_Survival5.html Comparing Survival Curves]
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.8750?af=R Survival analysis using a 5‐step stratified testing and amalgamation routine (5‐STAR) in randomized clinical trials] by Mehrotra 2020
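A minimal sketch of an unadjusted versus stratified log-rank test (using the lung data from the survival package and, purely for illustration, treating the enrolling institution as the stratification factor):
<pre>
library(survival)
survdiff(Surv(time, status) ~ sex, data = lung)                 # ordinary log-rank test
survdiff(Surv(time, status) ~ sex + strata(inst), data = lung)  # log-rank test stratified by institution
</pre>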


=== Logrank test vs Cox model ===
* [https://www.statalist.org/forums/forum/general-stata-discussion/general/1426667-logrank-test-vs-cox-model Logrank test vs Cox model].
** The Cox model relies on the proportional hazards assumption; the logrank test does not. If your data are not consistent with the proportional hazards assumption, then the Cox results may not be valid.
** (In that thread, the graph shown did not seem consistent with the proportional hazards assumption.)
* [https://en.wikipedia.org/wiki/Logrank_test#Relationship_to_other_statistics Logrank test relationship to other statistics] & assumptions from wikipedia.
* [https://stats.stackexchange.com/a/486810 The logrank test statistic is equivalent to the score of a Cox regression. Is there an advantage of using a logrank test over a Cox regression?] Since the log-rank test is a special case of the Cox model, it does not have fewer assumptions or more power. IMHO we no longer need to be using or teaching the log-rank test. Answered by Frank Harrell.
** [https://www.fharrell.com/post/logrank/ The log-rank Test Assumes More Than the Cox Model]. Numerical examples were given.
** I can confirm that the log-rank and Cox regression p-values are very close when using the median as a cutoff in one data set with 7288 proteins. The scatterplot shows both p-values fall on a 45-degree line, and the p-value distribution is roughly uniform.
* [https://journals.lww.com/anesthesia-analgesia/Fulltext/2021/04000/Kaplan_Meier_Curves,_Log_Rank_Tests,_and_Cox.7.aspx Kaplan-Meier Curves, Log-Rank Tests, and Cox Regression for Time-to-Event Data].
** The '''null hypothesis tested by the log-rank test''' is that the survival curves are identical over time; it thus compares the entire curves rather than the survival probability at a specific time point.
** The log-rank test assesses statistical significance but does not estimate an '''effect size'''.
** The Cox proportional hazards regression technique does not actually model the survival time or probability but the so-called '''hazard function''': the ''instantaneous risk of experiencing the event'' of interest at a certain time point.
** While the HR is not the same as a relative risk, it can for all practical purposes be interpreted as such. See [https://journals.lww.com/anesthesia-analgesia/Fulltext/2018/09000/Survival_Analysis_and_Interpretation_of.32.aspx Survival Analysis and Interpretation of Time-to-Event Data: The Tortoise and the Hare].
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC403858/ The logrank test] in BMJ, 2004
** The logrank test is based on the same assumptions as the Kaplan-Meier survival curve - namely, that censoring is unrelated to prognosis, the survival probabilities are the same for subjects recruited early and late in the study, and the events happened at the times specified. Deviations from these assumptions matter most if they are satisfied differently in the groups being compared, for example if censoring is more likely in one group than another.
** The logrank test is most likely to detect a difference between groups when the risk of an event is consistently greater for one group than another. It is unlikely to detect a difference when survival curves cross, as can happen when comparing a medical with a surgical intervention.
** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1065034/ Statistics review 12: Survival analysis]
** [https://www.sohu.com/a/311943302_655370 Survival analysis (3): when does the log-rank test fail?] (in Chinese). Wilcoxon test.

=== Two-sided vs one-sided p-value ===
The p-value that R (or SAS) returns is for a two-sided test. To obtain a one-sided p-value from this, simply divide the two-sided p-value by 2. [https://www.lexjansen.com/pharmasug/2021/AP/PharmaSUG-2021-AP-042.pdf Survival Statistics with PROC LIFETEST and PROC PHREG: Pitfall-Avoiding Survival Lessons for Programmers].
<pre>
> survdiff(Surv(futime, fustat) ~ rx, data = ovarian)
Call:
survdiff(formula = Surv(futime, fustat) ~ rx, data = ovarian)

      N Observed Expected (O-E)^2/E (O-E)^2/V
rx=1 13        7     5.23     0.596      1.06
rx=2 13        5     6.77     0.461      1.06

 Chisq= 1.1  on 1 degrees of freedom, p= 0.3
> pchisq(1.1, 1, lower.tail = F)
[1] 0.2942661
> pnorm(sqrt(1.1), 0, 1, lower.tail = F)
[1] 0.1471331
</pre>

== Survival curve with confidence interval ==
http://www.sthda.com/english/wiki/survminer-r-package-survival-data-analysis-and-visualization
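A minimal sketch with the survminer package (conf.int = TRUE adds the pointwise confidence band and pval = TRUE prints the log-rank p-value; the ovarian data are used only as an example):
<pre>
library(survival)
library(survminer)
fit <- survfit(Surv(futime, fustat) ~ rx, data = ovarian)
ggsurvplot(fit, data = ovarian, conf.int = TRUE, pval = TRUE)
</pre>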
== Parametric models and survival function for censored data ==
Assume the CDF of the survival time ''T'' is <math>F(\cdot)</math> and the CDF of the censoring time ''C'' is <math>G(\cdot)</math>. Then
: <math>
\begin{align}
P(T>t, \delta=1) &= \int_t^\infty (1-G(s))dF(s), \\
P(T>t, \delta=0) &= \int_t^\infty (1-F(s))dG(s)
\end{align}
</math>
* http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=23
* http://www.ms.uky.edu/~mai/sta635/LikelihoodCensor635.pdf#page=2 survival function of <math>f(T, \delta)</math>
* https://web.stanford.edu/~lutian/coursepdf/unit2.pdf#page=3 joint density of <math>f(T, \delta)</math>
* http://data.princeton.edu/wws509/notes/c7.pdf#page=6
* Special case: ''T'' follows a [https://en.wikipedia.org/wiki/Log-normal_distribution log-normal distribution] and ''C'' follows <math>U(0, \xi)</math>.
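A quick Monte Carlo sketch of the first identity under this special case (parameter values are arbitrary; <math>\delta=1</math> means the event was observed, i.e. <math>T \le C</math>):
<pre>
set.seed(1)
n <- 1e5; xi <- 5; t0 <- 1
T <- rlnorm(n)          # survival time, log-normal
C <- runif(n, 0, xi)    # censoring time, U(0, xi)
mean(T > t0 & T <= C)   # empirical P(T > t0, delta = 1)
# numerical value of the integral of (1 - G(s)) dF(s) over (t0, Inf)
integrate(function(s) (1 - punif(s, 0, xi)) * dlnorm(s), lower = t0, upper = Inf)$value
</pre>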
=== R ===
* [https://cran.r-project.org/web/packages/flexsurv/index.html flexsurv] package.
* [https://devinincerti.com/2019/06/18/parametric_survival.html Parametric survival modeling] which uses the '''flexsurv''' package.
* Used in [https://cran.rstudio.com/web/packages/simsurv/vignettes/simsurv_usage.html simsurv] package
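A minimal flexsurv sketch (Weibull fit to the ovarian data; plot() overlays the fitted parametric curve on the Kaplan-Meier estimate):
<pre>
library(survival)
library(flexsurv)
fit <- flexsurvreg(Surv(futime, fustat) ~ rx, data = ovarian, dist = "weibull")
fit        # shape/scale estimates and covariate effects
plot(fit)  # fitted survival curves on top of the KM estimate
</pre>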
 
== Parametric models and likelihood function for uncensored data ==
[https://stat.ethz.ch/R-manual/R-devel/library/survival/html/plot.survfit.html plot.survfit()]
* Exponential. <math> T \sim Exp(\lambda) </math>. <math>H(t) = \lambda t.</math> and <math>ln(S(t)) = -H(t) = -\lambda t.</math>
* Weibull. <math> T \sim W(\lambda,p).</math> <math>H(t) = \lambda^p t^p.</math> and <math>ln(-ln(S(t))) = ln(\lambda^p t^p)=const + p ln(t) </math>.


http://www.math.ucsd.edu/~rxu/math284/slect4.pdf
See also [http://data.princeton.edu/wws509/notes/c7.pdf#page=9 accelerated life models] where a set of covariates were used to model survival time.
* log-normal model
** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5680655/ Comparing of Cox model and parametric models in analysis of effective factors on event time of neuropathy in patients with type 2 diabetes]. According to AIC, '''log-normal''' model with the lowest Akaike's was the best-fitted model among Cox and parametric models.
** [https://stackoverflow.com/a/11105255 Generating/plotting a log-normal survival function]
** [https://statisticsglobe.com/log-normal-distribution-in-r-dlnorm-plnorm-qlnorm-rlnorm Log Normal Distribution in R (4 Examples) | dlnorm, plnorm, qlnorm & rlnorm Functions]
** [https://devinincerti.com/2019/06/18/parametric_survival.html Parametric survival modeling]
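A minimal sketch of that kind of AIC comparison among parametric models, using survreg() on the ovarian data (the set of candidate distributions here is just an illustration):
<pre>
library(survival)
dists <- c("exponential", "weibull", "lognormal")
fits  <- lapply(dists, function(d) survreg(Surv(futime, fustat) ~ rx, data = ovarian, dist = d))
setNames(sapply(fits, AIC), dists)   # smaller AIC indicates a better-fitting model
</pre>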
== Survival modeling ==
=== Accelerated life models - a direct extension of the classical linear model ===
http://data.princeton.edu/wws509/notes/c7.pdf and also Kalbfleish and Prentice (1980).
<math>
log T_i = x_i' \beta + \epsilon_i
</math>
Therefore
* <math>T_i = exp(x_i' \beta) T_{0i} </math>. So if there are two groups (x=1 and x=0), and <math>exp(\beta) = 2</math>, it means people in one group live twice as long as people in the other group.
* <math>S_1(t) = S_0(t/ exp(x' \beta))</math>. This explains the meaning of '''accelerated failure-time'''. '''Depending on the sign of <math>\beta' x</math>, the time is either accelerated by a constant factor or degraded by a constant factor'''. If <math>exp(\beta)=2</math>, the probability that a member in group one (eg treatment) will be alive at age t is exactly the same as the probability that a member in group zero (eg control group) will be alive at age t/2.
* The hazard function <math>\lambda_1(t) = \lambda_0(t/exp(x'\beta))/ exp(x'\beta) </math>. So if <math>exp(\beta)=2</math>, at any given age people in group one would be exposed to half the risk of people in group zero half their age.


In applications,
* If the errors are normally distributed, then we obtain a log-normal model for the T. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.
* If the errors have an extreme value distribution, then T has an exponential distribution. The hazard <math>\lambda</math> satisfies the log linear model <math>\log \lambda_i = x_i' \beta</math>.
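A small sketch of the accelerated failure-time interpretation above (Weibull AFT fit via survreg() on the ovarian data; here exp(coef) is a time ratio, not a hazard ratio):
<pre>
library(survival)
fit <- survreg(Surv(futime, fustat) ~ rx, data = ovarian, dist = "weibull")
exp(coef(fit))["rx"]   # estimated multiplicative effect on survival time per unit increase in rx
</pre>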
=== Proportional hazard models ===
Note PH models is a type of multiplicative hazard rate models <math>h(x|Z) = h_0(x)c(\beta' Z)</math> where <math>c(\beta' Z) = \exp(\beta ' Z)</math>.
Assumption: Survival curves for two strata (determined by the particular choices of values for covariates) must have '''hazard functions that are proportional over time''' (i.e. '''constant relative hazard over time'''). [https://stats.stackexchange.com/questions/24552/proportional-hazards-assumption-meaning Proportional hazards assumption meaning]. The ratio of the hazard rates of two individuals with covariate values <math>Z</math> and <math>Z^*</math> is a constant function of time.
: <math>
\begin{align}
\frac{h(t|Z)}{h(t|Z^*)} = \frac{h_0(t)\exp(\beta 'Z)}{h_0(t)\exp(\beta ' Z^*)} = \exp(\beta' (Z-Z^*)) \mbox{    independent of time}
\end{align}
</math>


Test the assumption; see [[#Check_the_proportional_hazard_.28constant_HR_over_time.29_assumption_by_cox.zph.28.29_-_Schoenfeld_Residuals|here]].
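For example (a minimal sketch; cox.zph() tests the proportional-hazards assumption through the scaled Schoenfeld residuals):
<pre>
library(survival)
fit <- coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
cox.zph(fit)         # one test per covariate plus a global test
plot(cox.zph(fit))   # a systematic trend over time suggests non-proportional hazards
</pre>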


== Weibull and Exponential model to Cox model ==
* https://socserv.socsci.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Cox-Regression.pdf. It also includes model diagnostic and all stuff is illustrated in R.
* http://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_9.pdf


In summary:
* Weibull distribution (Klein) <math>h(t) = p \lambda (\lambda t)^{p-1}</math> and <math>S(t) = exp(-\lambda t^p)</math>. If p >1, then the risk increases over time. If p<1, then the risk decreases over time.
** Note that Weibull distribution has a different parametrization. See http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=2. <math>h(t) = \lambda^p p t^{p-1}</math> and <math>S(t) = exp(-(\lambda t)^p)</math>. [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Weibull.html R] and [https://en.wikipedia.org/wiki/Weibull_distribution wikipedia] also follows this parametrization except that <math>h(t) = p t^{p-1}/\lambda^p</math> and <math>S(t) = exp(-(t/\lambda)^p)</math>.
* Exponential distribution <math>h(t)</math> = constant (independent of t). This is a special case of Weibull distribution (p=1).
* Weibull (and also exponential) <strike>distribution</strike> regression model is the only case which belongs to both the proportional hazards and the accelerated life families.
: <math>
\begin{align}
\frac{h(x|Z_1)}{h(x|Z_2)} = \frac{h_0(x\exp(-\gamma' Z_1)) \exp(-\gamma ' Z_1)}{h_0(x\exp(-\gamma' Z_2)) \exp(-\gamma ' Z_2)} = \frac{(a/b)\left(\frac{x \exp(-\gamma ' Z_1)}{b}\right)^{a-1}\exp(-\gamma ' Z_1)}{(a/b)\left(\frac{x \exp(-\gamma ' Z_2)}{b}\right)^{a-1}\exp(-\gamma ' Z_2)}  \quad \mbox{which is independent of time x}
\end{align}
</math>
* [https://en.wikipedia.org/wiki/Proportional_hazards_model#Specifying_the_baseline_hazard_function Using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards, and accelerated failure time models]
* If ''X'' is exponentially distributed with mean 1, then <math>b X^{1/a}</math> follows a Weibull distribution with shape <math>a</math> and scale <math>b</math>. See [https://en.wikipedia.org/wiki/Exponential_distribution Exponential distribution] and [https://en.wikipedia.org/wiki/Weibull_distribution Weibull distribution].
* [http://krex.k-state.edu/dspace/bitstream/handle/2097/8787/AngelaCrumer2011.pdf?sequence=3 Derivation] of mean and variance of Weibull distribution.


{| class="wikitable"
|-
! !! f(t)=h(t)*S(t) !! h(t) !! S(t) !! Mean
|-
| Exponential (Klein p37) || <math>\lambda \exp(-\lambda t)</math> || <math>\lambda</math> || <math>\exp(-\lambda t)</math> || <math>1/\lambda</math>
|-
 
| Weibull (Klein, Bender, [https://en.wikipedia.org/wiki/Weibull_distribution#Alternative_parameterizations wikipedia]) || <math>p\lambda t^{p-1}\exp(-\lambda t^p)</math> || <math>p\lambda t^{p-1}</math> || <math>exp(-\lambda t^p)</math> || <math>\frac{\Gamma(1+1/p)}{\lambda^{1/p}}</math>
|-
| Exponential ([https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Exponential.html R]) || <math>\lambda \exp(-\lambda t)</math>, <math>\lambda</math> is rate || <math>\lambda</math> || <math>\exp(-\lambda t)</math> || <math>1/\lambda</math>
|-
| Weibull ([https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Weibull.html R], [https://en.wikipedia.org/wiki/Weibull_distribution wikipedia]) || <math>\frac{a}{b}\left(\frac{t}{b}\right)^{a-1} \exp(-(\frac{t}{b})^a)</math>,<br/><math>a</math> is shape, and <math>b</math> is scale || <math>\frac{a}{b}\left(\frac{t}{b}\right)^{a-1}</math> || <math>\exp(-(\frac{t}{b})^a)</math> || <math>b\Gamma(1+1/a)</math>
|}
* Accelerated failure-time model. Let <math>Y=\log(T)=\mu + \gamma'Z + \sigma W</math>. Then the survival function of <math>T</math> at the covariate Z,
: <math>
\begin{align}
S_T(t|Z) &= P(T > t |Z) \\
        &= P(Y > \ln t|Z) \\
        &= P(\mu + \sigma W > \ln t-\gamma' Z | Z) \\
        &= P(e^{\mu + \sigma W} > t\exp(-\gamma'Z) | Z) \\
        &= S_0(t \exp(-\gamma'Z)).
\end{align}
</math>
where <math>S_0(t)</math> denote the survival function T when Z=0. Since <math>h(t) = -\partial \ln (S(t))</math>, the hazard function of T with a covariate value Z is related to a baseline hazard rate <math>h_0</math> by (p56 Klein)
: <math>
\begin{align}
h(t|Z) = h_0(t\exp(-\gamma' Z)) \exp(-\gamma ' Z)
\end{align}
</math>


{{Pre}}
> mean(rexp(1000)^(1/2))
[1] 0.8902948
> mean(rweibull(1000, 2, 1))
[1] 0.8856265


> mean((rweibull(1000, 2, scale=4)/4)^2)
[1] 1.008923
</pre>


=== Graphical way to check Weibull, AFT, PH ===
http://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_9.pdf#page=40
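A minimal graphical check in R: the complementary log-log plot of the Kaplan-Meier curves. Roughly straight lines are consistent with a Weibull model, and roughly parallel curves are consistent with proportional hazards:
<pre>
library(survival)
fit <- survfit(Surv(futime, fustat) ~ rx, data = ovarian)
plot(fit, fun = "cloglog")   # log(-log S(t)) versus log(t), one curve per group
</pre>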


=== Weibull is related to Extreme value distribution ===
* [https://www.itl.nist.gov/div898/handbook/apr/section1/apr163.htm Log(Weibull) = Extreme value]  
* [http://www.mathwave.com/articles/extreme-value-distributions.html Extreme Value Distributions] from mathwave.com
* [https://en.wikipedia.org/wiki/Generalized_extreme_value_distribution Generalized extreme value distribution] from wikipedia
* [https://www.rdocumentation.org/packages/EnvStats/versions/2.3.1/topics/EVD Density, distribution function, quantile function, and random generation for the (largest) extreme value distribution] from EnvStats R package
* [http://www.dataanalysisclassroom.com/lesson60/ Lesson 60 – Extreme value distributions in R]
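A small simulation sketch of the relationship: if ''T'' is Weibull with shape ''a'' and scale ''b'', then log(''T'') has a (minimum) extreme value distribution with location log(''b'') and scale 1/''a'' (the parameter values below are arbitrary):
<pre>
set.seed(1)
a <- 2; b <- 3
x <- log(rweibull(1e5, shape = a, scale = b))
c(empirical = mean(x), theory = log(b) - 0.5772/a)   # mean = location - (Euler gamma) * scale
c(empirical = sd(x),   theory = pi/(a*sqrt(6)))      # sd   = pi * scale / sqrt(6)
</pre>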


=== Weibull distribution and bathtub ===
* https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1740-9713.2018.01177.x by John Crocker
* https://www.sciencedirect.com/topics/materials-science/weibull-distribution
* https://en.wikipedia.org/wiki/Bathtub_curve
=== Weibull distribution and reliability ===
[https://www.r-bloggers.com/survival-analysis-fitting-weibull-models-for-improving-device-reliability-in-r/ Survival Analysis – Fitting Weibull Models for Improving Device Reliability in R] (simulation)


=== Optimisation of a Weibull survival model using Optimx() ===
[https://www.joshua-entrop.com/post/optim_weibull_reg/ Optimisation of a Weibull survival model using Optimx() in R]
== CDF follows Unif(0,1) ==
https://stats.stackexchange.com/questions/161635/why-is-the-cdf-of-a-sample-uniformly-distributed
Take the Exponential distribution for example
{{Pre}}
stem(pexp(rexp(1000)))
stem(pexp(rexp(10000)))
</pre>


Another example is from [https://github.com/faithghlee/SurvivalDataSimulation/blob/master/Simulation_Code.r simulating survival time]. Note that this is exactly [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.2059 Bender et al 2005] approach. See also the [https://cran.rstudio.com/web/packages/simsurv/index.html simsurv] (newer) and [https://cran.rstudio.com/web/packages/survsim/index.html survsim] (older) packages.
{{Pre}}
set.seed(100)

#Define the following parameters outlined in the step:
n = 1000
beta_0 = 0.5
beta_1 = -1
beta_2 = 1

b = 1.6 #This will be changed later as mentioned in Step 5 of documentation

#Step 1
x_1<-rbinom(n, 1, 0.25)
x_2<-rbinom(n, 1, 0.7)

#Step 2
U<-runif(n, 0,1)
T<-(-log(U)*exp(-(beta_0+beta_1*x_1+beta_2*x_2))) #Eqn (5)

Fn <- ecdf(T) # https://stat.ethz.ch/R-manual/R-devel/library/stats/html/ecdf.html
# verify F(T) or 1-F(T) ~ U(0, 1)
hist(Fn(T))
# look at the plot of survival probability vs time
plot(T, 1 - Fn(T))
</pre>
== Simulate survival data ==
Note that status = 1 means an event (e.g. death) happened; Ti <= Ci. That is, the status variable used in R/Splus means the death indicator.
<ul>
<li>http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4
{{Pre}}
y <- rexp(10)
cen <- runif(10)
status <- ifelse(cen < .7, 1, 0)
</pre>
</li>
<li>[https://amstat.tandfonline.com/doi/abs/10.1080/01621459.2020.1740096?journalCode=uasa20 Inference on Selected Subgroups in Clinical Trials] <math>\lambda(t) = \lambda_0(t) e^{\beta_i D}</math> for subgroup ''i=1,2'', respectively where ''D'' is the treatment indicator and <math>\lambda_0(t)</math> is the baseline hazard function of Weibull(1,1). The subjects fall into one of the two subgroups with probability 0.5, and the treatment assignment is also random with equal probability. The response generated from the above model is then censored randomly from the right by a censoring variable C, where log(C) follows the uniform distribution on (-1.25, 1.00). The censoring rate is about 40% across different choices of <math>\beta_i</math> considered in this study. </li>
<li>[http://www.ms.uky.edu/~mai/Rsurv.pdf#page=10 How much power/accuracy is lost by using the Cox model instead of Weibull model when both model are correct?] <math>h(t|x)=\lambda=e^{3x+1} = h_0(t)e^{\beta x}</math> where <math>h_0(t)=e^1, \beta=3</math>.
: '''Note that''' for the '''exponential''' distribution, larger rate/<math>\lambda</math> corresponds to a smaller mean. This relation matches with the Cox regression where a large covariate corresponds to a smaller survival time. So the coefficient 3 in myrates in the below example has the same sign as the coefficient (2.457466 for censored data) in the output of the Cox model fitting.
{{Pre}}
n <- 30
x <- scale(1:n, TRUE, TRUE) # create covariates (standardized)
                            # the original example does not work on large 'n'
myrates <- exp(3*x+1)
set.seed(1234)
y <- rexp(n, rate = myrates) # generates the r.v.
cen <- rexp(n, rate = 0.5 )  #  E(cen)=1/rate
ycen <- pmin(y, cen)
di <- as.numeric(y <= cen)
survreg(Surv(ycen, di)~x, dist="weibull")$coef[2]  # -3.080125
# library(flexsurvreg); flexsurvreg(Surv(ycen, di)~x, dist="weibull")
coxph(Surv(ycen, di)~x)$coef  # 2.457466
 
# no censor
survreg(Surv(y,rep(1, n))~x,dist="weibull")$coef[2]  # -3.137603
survreg(Surv(y,rep(1, n))~x,dist="exponential")$coef[2]  # -3.143095
coxph(Surv(y,rep(1, n))~x)$coef  # 2.717794
 
# See the pdf note for the rest of code
</pre> </li>
<li>Intercept in survreg for the exponential distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=25.
: <math>
\begin{align}
\lambda = exp(-intercept)
\end{align}
</math>
{{Pre}}
> futime <- rexp(1000, 5)
> survreg(Surv(futime,rep(1,1000))~1,dist="exponential")$coef
(Intercept)  
  -1.618263
> exp(1.618263)
[1] 5.044321
</pre> </li>
<li>Intercept and scale in survreg for a Weibull distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=28.
: <math>
\begin{align}
\gamma &= 1/scale \\
  \alpha &= exp(-(Intercept)*\gamma)
\end{align}
</math>
{{Pre}}
> survreg(Surv(futime,rep(1,1000))~1,dist="weibull")
Call:
survreg(formula = Surv(futime, rep(1, 1000)) ~ 1, dist = "weibull")

Coefficients:
(Intercept)
  -1.639469

Scale= 1.048049

Loglik(model)= 620.1   Loglik(intercept only)= 620.1
n= 1000
</pre> </li>
<li>rsurv() function from the [https://cran.r-project.org/web/packages/ipred/index.html ipred] package </li>
<li>[http://people.stat.sfu.ca/~raltman/stat402/402L32.pdf#page=4 Use Weibull distribution to model survival data]. We assume the shape is constant across subjects. We then allow the scale to vary across subjects. For subject <math>i</math> with covariate <math>X_i</math>, <math>\log(scale_i)</math> = <math>\beta ' X_i</math>. Note that if we want to make the <math>\beta</math> sign to be consistent with the Cox model, we want to use <math>\log(scale_i)</math> = <math>-\beta ' X_i</math> instead. </li>
<li>http://sas-and-r.blogspot.com/2010/03/example-730-simulate-censored-survival.html. Assuming shape=1 in the Weibull distribution, then the [[#Weibull_and_Exponential_model_to_Cox_model|hazard function]] can be expressed as a proportional hazard model
: <math>
 
h(t|x) = 1/scale = \frac{1}{\lambda/e^{\beta 'x}} = \frac{e^{\beta ' x}}{\lambda} = h_0(t) \exp(\beta' x)
</math>
{{Pre}}
n = 10000
beta1 = 2; beta2 = -1
lambdaT = .002 # baseline hazard
lambdaC = .004  # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))  
# No censoring
event2 <- rep(1, length(T))
coxph(Surv(T, event2)~ x1 + x2)
#        coef exp(coef) se(coef)     z      p
# x1  1.99825  7.37613  0.01884 106.07 <2e-16
# x2 -1.00200  0.36715  0.01267 -79.08 <2e-16
#
# Likelihood ratio test=15556  on 2 df, p=< 2.2e-16
# n= 10000, number of events= 10000


# Censoring
C = rweibull(n, shape=1, scale=lambdaC)  #censoring time
time = pmin(T,C)  #observed time is min of censored and true
event = time==T  # set to 1 if event is observed
coxph(Surv(time, event)~ x1 + x2)
#        coef exp(coef) se(coef)      z      p
# x1  2.01039  7.46622  0.02250  89.33 <2e-16
# x2 -0.99210  0.37080  0.01552 -63.95 <2e-16
#
# Likelihood ratio test=11321  on 2 df, p=< 2.2e-16
# n= 10000, number of events= 6002
mean(event)
# [1] 0.6002
</pre> </li>
<li>https://stats.stackexchange.com/a/135129 (Bender's inverse probability method). Let <math>h_0(t)=\lambda \rho t^{\rho - 1} </math> where shape 𝜌>0 and scale 𝜆>0. Following the inverse probability method, a realisation of 𝑇∼𝑆(⋅|𝐱) is obtained by computing <math> t = \left( - \frac{\log(v)}{\lambda \exp(x' \beta)} \right) ^ {1/\rho} </math> with 𝑣 a uniform variate on (0,1). Using results on transformations of random variables, one may notice that 𝑇 has a conditional Weibull distribution (given 𝐱) with shape 𝜌 and scale 𝜆exp(𝐱′𝛽).
{{Pre}}
# N = sample size
# lambda = scale parameter in h0()
# rho = shape parameter in h0()
# beta = fixed effect parameter
# rateC = rate parameter of the exponential distribution of censoring variable C

simulWeib <- function(N, lambda, rho, beta, rateC)
{
  # covariate --> N Bernoulli trials
  x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))

  # Weibull latent event times
  v <- runif(n=N)
  Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)

  # censoring times
  C <- rexp(n=N, rate=rateC)

  # follow-up times and event indicators
  time <- pmin(Tlat, C)
  status <- as.numeric(Tlat <= C)

  # data set
  data.frame(id=1:N,
             time=time,
             status=status,
             x=x)
}
# Test
set.seed(1234)
betaHat <- rate <- rep(NA, 1e3)
for(k in 1:1e3)
{
  dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
  fit <- coxph(Surv(time, status) ~ x, data=dat)
  rate[k] <- mean(dat$status == 0)
  betaHat[k] <- fit$coef
}
mean(rate)
# [1] 0.12287
mean(betaHat)
# [1] -0.6085473
</pre> </li>
<li>[https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.2059 Generating survival times to simulate Cox proportional hazards models] Bender et al 2005
<math>T=H_0^{-1}[-\log(U) \exp(\beta' x)]
</math> [[:File:Bender2005.png|Bender2005.png]], [[:File:Bender2005table2.png|Bender2005table2.png]]
* [https://cran.r-project.org/web/packages/survsim/index.html survsim] package and the [https://www.jstatsoft.org/article/view/v059i02 paper] on JSS. See [http://justanotherdatablog.blogspot.com/2015/08/survival-analysis-1.html this post]. [https://rviews.rstudio.com/2020/11/02/simulating-biologically-plausible-survival-data/ Biologically Plausible Fake Survival Data]
* [https://cran.rstudio.com/web/packages/simsurv/index.html simsurv] package (new, 2 vignettes).
* [https://stats.stackexchange.com/questions/65005/get-a-desired-percentage-of-censored-observations-in-a-simulation-of-cox-ph-mode Get a desired percentage of censored observations in a simulation of Cox PH Model]. The answer is based on Bender et al 2005. [http://onlinelibrary.wiley.com/doi/10.1002/sim.2059/epdf Generating survival times to simulate Cox proportional hazards models]. Statistics in Medicine 24: 1713–1723. The censoring time is fixed and the distribution of the censoring indicator is following the binomial. In fact, when we simulate survival data with a predefined censoring rate, we can pretend the survival time is already censored and only care about the censoring/status variable to make sure the censoring rate is controlled.
* (Search github) [https://github.com/faithghlee/SurvivalDataSimulation Using inverse CDF] <math> \lambda = exp(\beta' x), \; S(t)= \exp(-\lambda t) = \exp(-t e^{\beta' x}) \sim Unif(0,1) </math>
* [https://arxiv.org/pdf/1611.03063.pdf#page=17 Prediction Accuracy Measures for a Nonlinear Model and for Right-Censored Time-to-Event Data] Li and Wang </li>
<li>Simple example from [https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/Cindex glmnet]
<pre>
set.seed(10101)
N = 1000
p = 30
nzc = p/3
x = matrix(rnorm(N * p), N, p)
beta = rnorm(nzc)
fx = x[, seq(nzc)] %*% beta/3
hx = exp(fx)
ty = rexp(N, hx)
tcens = rbinom(n = N, prob = 0.3, size = 1)  # censoring indicator
y = cbind(time = ty, status = 1 - tcens)  # y=Surv(ty,1-tcens) with library(survival)
fit = glmnet(x, y, family = "cox")
pred = predict(fit, newx = x)
Cindex(pred, y)
</pre>
<li>A non-standard baseline hazard function <math>\lambda_0(t)=(t - .5)^2</math> from the paper: [https://www.sciencedirect.com/science/article/pii/S0167947317302219 A new nonparametric screening method for ultrahigh-dimensional survival data] Liu 2018. The censoring time <math>C = \widetilde{C} \wedge \tau</math>, where <math>\widetilde{C}</math> was generated from Unif (0, <math>\tau + 2</math>) where <math>\tau</math> was chosen to yield the desirable censoring rates of 20% and 40%, respectively. </li>
<li>[https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=8 Regularization paths for Cox's proportional hazards model via coordinate descent. J Stat Software] Simon et al 2011. [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2656-1#Sec8 Gsslasso Cox]: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information by Tang 2019. See also Tian 2014 JASA p1525. X ~ standard Gaussian. True survival time exp(beta X + k · Z). Z ~ N(0,1), and k is chosen so that the signal-to-noise ratio is 3.0 or to induce a certain censoring rate. Censoring time C = exp (k · Z). The observed survival time T = min{Y, C}. </li>
<li>[https://cran.r-project.org/web/packages/survParamSim/index.html survParamSim]: Parametric Survival Simulation with Parameter Uncertainty </li>
<li>[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-3478-x vivaGen – a survival data set generator for software testing] BMC Bioinformatics 2020 </li>
<li>[https://www.r-bloggers.com/2022/02/simulating-survival-outcomes-setting-the-parameters-for-the-desired-distribution/ Simulating survival outcomes: setting the parameters for the desired distribution]. [https://cran.r-project.org/web/packages/simstudy/index.html simstudy], [https://www.rdatagen.net/post/2022-02-22-follow-up-simstudy-function-for-generating-parameters-for-survival-distribution/ Follow-up: simstudy function for generating parameters for survival distribution] package was used. </li>
</ul>


=== Warning on multiple rates ===
Search for the Vectorize() function on this page: when a vector of rates is supplied, rexp() recycles the rates element-wise across the draws, as the example below shows.

<pre>
mean(rexp(1000, rate=2) )
# [1] 0.5258078
mean(rexp(1000, rate=1) )
# [1] 0.9712124

z = rexp(1000, rate=c(1, 2))
mean(z[seq(1, 1000, by=2)])
# [1] 1.041969
mean(z[seq(2, 1000, by=2)])
# [1] 0.5079594
</pre>


=== Markov model ===
[https://rviews.rstudio.com/2020/10/08/fake-data-for-the-illness-death-model/ Fake Survival Data for the Disease Progression Model]
=== Non-proportional hazards ===
[https://www.rdatagen.net/post/2022-03-29-simulating-non-proportional-hazards/ Simulating time-to-event outcomes with non-proportional hazards]
== Standardize covariates ==
coxph() does not have an option to standardize covariates but glmnet() does.
<pre>
library(glmnet)
library(survival)
library(dplyr)   # for the %>% pipe used below

N=1000; p=30
nzc=p/3
beta <- c(rep(1, 5), rep(-1, 5))

set.seed(1234)
x=matrix(rnorm(N*p),N,p)
x[, 1:5] <- x[, 1:5]*2
x[, 6:10] <- x[, 6:10] + 2

fx=x[,seq(nzc)] %*% beta
hx=exp(fx)
ty=rexp(N,hx)
tcens <- rep(0,N)
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)

coxph(Surv(ty, 1-tcens) ~ x) %>% coef %>% head(10)
#        x1        x2        x3        x4        x5        x6        x7
#  0.6076146  0.6359927  0.6346022  0.6469274  0.6152082 -0.6614930 -0.5946101
#        x8        x9       x10
# -0.6726081 -0.6275205 -0.7073704

xscale <- scale(x, TRUE, TRUE) # halve the covariate values
coxph(Surv(ty, 1-tcens) ~ xscale) %>% coef %>% head(10) # double the coef
#    xscale1    xscale2    xscale3    xscale4    xscale5    xscale6    xscale7
#  1.2119940  1.2480628  1.2848646  1.2857796  1.1959619 -0.6431946 -0.5941309
#    xscale8    xscale9   xscale10
# -0.6723137 -0.6188384 -0.6793313

set.seed(1)
fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = TRUE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9351341  0.9394696  0.9187242  0.9418540  0.9111623 -0.9303783
# [7] -0.9271438 -0.9597583 -0.9493759 -0.9386065

set.seed(1)
fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = FALSE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9357171  0.9396877  0.9200247  0.9420215  0.9118803 -0.9257406
# [7] -0.9232813 -0.9554017 -0.9448827 -0.9356009

set.seed(1)
fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = TRUE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

set.seed(1)
fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = FALSE)
* [https://stats.stackexchange.com/questions/65005/get-a-desired-percentage-of-censored-observations-in-a-simulation-of-cox-ph-mode Get a desired percentage of censored observations in a simulation of Cox PH Model]. The answer is based on Bender et al 2005. [http://onlinelibrary.wiley.com/doi/10.1002/sim.2059/epdf Generating survival times to simulate Cox proportional hazards models]. Statistics in Medicine 24: 1713–1723. The censoring time is fixed and the distribution of the censoring indicator is following the binomial. In fact, when we simulate survival data with a predefined censoring rate, we can pretend the survival time is already censored and only care about the censoring/status variable to make sure the censoring rate is controlled.
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
* (Search github) [https://github.com/faithghlee/SurvivalDataSimulation Using inverse CDF] <math> \lambda = exp(\beta' x), \; S(t)= \exp(-\lambda t) = \exp(-t e^{\beta' x}) \sim Unif(0,1) </math>
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
* [https://arxiv.org/pdf/1611.03063.pdf#page=17 Prediction Accuracy Measures for a Nonlinear Model and for Right-Censored Time-to-Event Data] Li and Wang </li>
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015
<li>Simple example from [https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/Cindex glmnet]
<pre>
set.seed(10101)
N = 1000
p = 30
nzc = p/3
x = matrix(rnorm(N * p), N, p)
beta = rnorm(nzc)
fx = x[, seq(nzc)] %*% beta/3
hx = exp(fx)
ty = rexp(N, hx)
tcens = rbinom(n = N, prob = 0.3, size = 1) # censoring indicator
y = cbind(time = ty, status = 1 - tcens) # y=Surv(ty,1-tcens) with library(survival)
fit = glmnet(x, y, family = "cox")
pred = predict(fit, newx = x)
Cindex(pred, y)
</pre> </li>
<li>A non-standard baseline hazard function <math>\lambda_0(t)=(t - .5)^2</math> from the paper: [https://www.sciencedirect.com/science/article/pii/S0167947317302219 A new nonparametric screening method for ultrahigh-dimensional survival data] Liu 2018. The censoring time <math>C = \widetilde{C} \wedge \tau</math>, where <math>\widetilde{C}</math> was generated from Unif (0, <math>\tau + 2</math>) where <math>\tau</math> was chosen to yield the desirable censoring rates of 20% and 40%, respectively. A small simulation sketch for this baseline hazard appears right after this list. </li>
<li>[https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=8 Regularization paths for Cox's proportional hazards model via coordinate descent. J Stat Software] Simon et al 2011. [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2656-1#Sec8 Gsslasso Cox]: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information by Tang 2019. See also Tian 2014 JASA p1525. X ~ standard Gaussian. True survival time exp(beta X + k · Z). Z ~ N(0,1), and k is chosen so that the signal-to-noise ratio is 3.0 or to induce a certain censoring rate. Censoring time C = exp (k · Z). The observed survival time T = min{Y, C}. </li>
<li>[https://cran.r-project.org/web/packages/survParamSim/index.html survParamSim]: Parametric Survival Simulation with Parameter Uncertainty </li>
<li>[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-3478-x vivaGen – a survival data set generator for software testing] BMC Bioinformatics 2020 </li>
<li>[https://www.r-bloggers.com/2022/02/simulating-survival-outcomes-setting-the-parameters-for-the-desired-distribution/ Simulating survival outcomes: setting the parameters for the desired distribution]. [https://cran.r-project.org/web/packages/simstudy/index.html simstudy], [https://www.rdatagen.net/post/2022-02-22-follow-up-simstudy-function-for-generating-parameters-for-survival-distribution/ Follow-up: simstudy function for generating parameters for survival distribution] package was used. </li>
</ul>
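For the Liu 2018 item above, the cumulative baseline hazard has a closed-form inverse, so event times can be generated directly by the inverse transform. This is only a rough sketch; the coefficient, covariate distribution and cutoff <math>\tau</math> below are made-up values, not the paper's settings.
<pre>
# lambda0(t) = (t - 0.5)^2  =>  H0(t) = ((t - 0.5)^3 + 0.125)/3
# Inverse transform: T = 0.5 + cuberoot(3*E - 0.125), with E = -log(U) * exp(-beta*x)
set.seed(1)
n    <- 1000
x    <- rnorm(n)
beta <- 0.5
E    <- -log(runif(n)) * exp(-beta * x)
Tlat <- 0.5 + sign(3*E - 0.125) * abs(3*E - 0.125)^(1/3)

tau    <- quantile(Tlat, 0.8)        # cutoff chosen ad hoc; the paper tunes tau
Ctilde <- runif(n, 0, tau + 2)
C      <- pmin(Ctilde, tau)          # C = min(Ctilde, tau) as in the paper
time   <- pmin(Tlat, C)
status <- as.numeric(Tlat <= C)
mean(status == 0)                    # realized censoring rate
library(survival)
coxph(Surv(time, status) ~ x)        # beta is recovered approximately
</pre>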
=== Age + gene expression ===
Simulate data in which the gene is significant in the ~ age + gene model but not significant in the ~ gene model.
<pre>
# Set seed for reproducibility
set.seed(123)
# Simulate data
n <- 200
age <- rnorm(n, mean = 50, sd = 10)  # Continuous variable for age
gene_expression <- rnorm(n, mean = 0, sd = 1)  # Continuous variable for gene expression


# Simulate survival data with a moderate effect of gene expression
time <- rexp(n, rate = 0.1 + 0.01 * age + 0.06 * gene_expression)
status <- sample(0:1, n, replace = TRUE, prob = c(0.3, 0.7))  # Censored status

# Create data frame
df <- data.frame(time, status, age, gene_expression)

# Fit Cox models
cox_model_1 <- coxph(Surv(time, status) ~ gene_expression, data = df)
cox_model_2 <- coxph(Surv(time, status) ~ age + gene_expression, data = df)

summary(cox_model_1)  # p(gene)=0.0675
summary(cox_model_2)  # p(gene)=0.0361, p(age)=0.0329
</pre>

To use Kaplan-Meier curves to show the relationship between gene expression and survival while adjusting for age, categorize both variables and plot by group.
<pre>
# Categorize age into two groups
df$age_group <- ifelse(df$age > median(df$age), "Older", "Younger")

# Categorize gene expression into two groups
df$gene_group <- ifelse(df$gene_expression > median(df$gene_expression), "High", "Low")

install.packages("survival")
install.packages("survminer")
library(survival)
library(survminer)

# KM
# 'gene_group' is the binary variable for high/low gene expression
km_fit <- survfit(Surv(time, status) ~ gene_group + age_group, data = df)
ggsurvplot(km_fit, data = df, pval = TRUE, risk.table = TRUE,
           legend.title = "Gene Expression & Age Group")
# Enhance readability
df$group <- with(df, interaction(gene_group, age_group))
levels(df$group) <- c("Low/Young", "Low/Old", "High/Young", "High/Old")
km_fit <- survfit(Surv(time, status) ~ group, data = df)
p <- ggsurvplot(km_fit, data = df, pval = TRUE, risk.table = FALSE,
           legend.title = "Groups",
           legend.labs = c("Low/Young", "Low/Old", "High/Young", "High/Old"))
p$plot <- p$plot + guides(colour = guide_legend(nrow = 4)) +
           theme(legend.position = "right")
p

# Cox regression
cox_fit <- coxph(Surv(time, status) ~ gene_group + age_group, data = df)
ggsurvplot(survfit(cox_fit), data = df, legend.title = "Adjusted for Age")
</pre>

=== Warning on multiple rates ===
Search the Vectorize() function in this page. Note that rexp() recycles a vector rate argument element-wise, so each observation gets its own rate rather than a draw from a mixture:
<pre>
mean(rexp(1000, rate=2) )
# [1] 0.5258078
mean(rexp(1000, rate=1) )
# [1] 0.9712124

z = rexp(1000, rate=c(1, 2))
mean(z[seq(1, 1000, by=2)])
# [1] 1.041969
mean(z[seq(2, 1000, by=2)])
# [1] 0.5079594
</pre>
=== Markov model ===
[https://rviews.rstudio.com/2020/10/08/fake-data-for-the-illness-death-model/ Fake Survival Data for the Disease Progression Model]

=== Non-proportional hazards ===
[https://www.rdatagen.net/post/2022-03-29-simulating-non-proportional-hazards/ Simulating time-to-event outcomes with non-proportional hazards]
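Besides the linked post, a quick way to produce non-proportional hazards in simulated data is to give two groups different Weibull shapes, so the hazard ratio changes over time. A minimal sketch (all values are made up), checked with cox.zph() from the survival package:
<pre>
library(survival)
set.seed(1)
n      <- 500
grp    <- rep(0:1, each = n/2)
time   <- c(rweibull(n/2, shape = 0.8, scale = 1),   # group 0
            rweibull(n/2, shape = 2.0, scale = 1))   # group 1
status <- rep(1, n)                                  # no censoring, to keep it simple
fit <- coxph(Surv(time, status) ~ grp)
cox.zph(fit)   # the proportional hazards test should reject here
</pre>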
== Standardize covariates ==
coxph() does not have an option to standardize covariates but glmnet() does.
<pre>
library(glmnet)
library(survival)
library(magrittr)  # for the %>% pipe

N=1000;p=30
nzc=p/3
beta <- c(rep(1, 5), rep(-1, 5))

set.seed(1234)
x=matrix(rnorm(N*p),N,p)
x[, 1:5] <- x[, 1:5]*2
x[, 6:10] <- x[, 6:10] + 2

fx=x[,seq(nzc)] %*% beta
hx=exp(fx)
ty=rexp(N,hx)
tcens <- rep(0,N)
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)

coxph(Surv(ty, 1-tcens) ~ x) %>% coef %>% head(10)
#         x1         x2         x3         x4         x5         x6         x7
#  0.6076146  0.6359927  0.6346022  0.6469274  0.6152082 -0.6614930 -0.5946101
#         x8         x9        x10
# -0.6726081 -0.6275205 -0.7073704

xscale <- scale(x, TRUE, TRUE) # halve the covariate values
coxph(Surv(ty, 1-tcens) ~ xscale) %>% coef %>% head(10) # double the coef
#    xscale1    xscale2    xscale3    xscale4    xscale5    xscale6    xscale7
#  1.2119940  1.2480628  1.2848646  1.2857796  1.1959619 -0.6431946 -0.5941309
#    xscale8    xscale9   xscale10
# -0.6723137 -0.6188384 -0.6793313

set.seed(1)
fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = TRUE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9351341  0.9394696  0.9187242  0.9418540  0.9111623 -0.9303783
# [7] -0.9271438 -0.9597583 -0.9493759 -0.9386065

set.seed(1)
fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = FALSE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9357171  0.9396877  0.9200247  0.9420215  0.9118803 -0.9257406
# [7] -0.9232813 -0.9554017 -0.9448827 -0.9356009

set.seed(1)
fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = TRUE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

set.seed(1)
fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = FALSE)
as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015
</pre>

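If a model is fit on standardized covariates but coefficients are wanted on the original scale, they can be divided by the covariate standard deviations. A small sketch continuing the objects above (this is a generic identity, not a glmnet option):
<pre>
# beta_original = beta_standardized / sd(x_j), because xscale = (x - mean)/sd
set.seed(1)
fit    <- cv.glmnet(xscale, y, family = "cox", nfolds = 10)
b_std  <- as.vector(coef(fit, s = "lambda.min"))
b_orig <- b_std / apply(x, 2, sd)
head(b_orig, 10)   # close to the coefficients from the unstandardized runs above
</pre>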
== Predefined censoring rates ==
[http://onlinelibrary.wiley.com/doi/10.1002/sim.7178/full Simulating survival data with predefined censoring rates for proportional hazards models]
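The linked paper works out the censoring distribution analytically. As a rough illustration of the same goal (not the paper's algorithm; the exponential model and the 30% target below are assumptions), one can solve for the rate of an exponential censoring distribution that gives a desired expected censoring proportion:
<pre>
library(survival)
set.seed(1)
n      <- 5000
x      <- rbinom(n, 1, 0.5)
beta   <- 0.7
lambda <- 0.1 * exp(beta * x)     # subject-specific event rates (exponential PH model)
Tlat   <- rexp(n, rate = lambda)  # latent event times

# If T ~ Exp(lambda) and C ~ Exp(rateC) are independent, P(censored) = rateC/(rateC + lambda)
target <- 0.30
f      <- function(rateC) mean(rateC / (rateC + lambda)) - target
rateC  <- uniroot(f, interval = c(1e-6, 100))$root

C      <- rexp(n, rate = rateC)
time   <- pmin(Tlat, C)
status <- as.numeric(Tlat <= C)
mean(status == 0)                 # close to 0.30
coxph(Surv(time, status) ~ x)     # beta is recovered approximately
</pre>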
== Cross validation ==
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.4780122407/epdf Cross validation in survival analysis] by Verweij & van Houwelingen, Stat in medicine 1993.
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.2353 Cross-Validated Cox Regression on Microarray Gene Expression Data] van Houwelingen HC, Bruinsma T, Hart AAM, van’t Veer LJ, Wessels LFA (2006). Statistics in Medicine, 25, 3201–3216
* Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data. Simon et al, Brief Bioinform. 2011
** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2837029/#B5 Testing the additional predictive value of high-dimensional molecular data]. '''the cross-validated probabilities are not independent from each other.'''
** [https://projecteuclid.org/download/pdfview_1/euclid.aoas/1215118532 A study of pre-validation]. Annals of Applied Statistics. 2008
<ul>
<li>CVPL (cross-validated partial likelihood)
<ul>
<li>https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/cvpl (lower is better)</li>
<li>https://rdrr.io/cran/dynpred/man/CVPL.html. [https://github.com/cran/dynpred/blob/master/R/CVPL.r source code]. 1. it does LOOCV so no need to set a random seed. 2. it seems the function does not include lasso/glmnet 3. the formula on pages 173-174 of the book [https://www.lumc.nl/org/bds/research/medische-statistiek/survival-analysis/MethodologicalResearch/dynamic-prediction/ Dynamic Prediction in Clinical Survival Analysis] says the partial log likelihood should include the penalty term. 4. concordance measures like Harrell’s C-index are not appropriate because they only measure the discrimination and not the calibration. PS: I downloaded and looked at the chapter source code. It uses the optL1() function from the penalized package to obtain the cross-validated '''log''' partial likelihood.
<pre>
R> library(dynpred)
R> data(ova)
R> CVPL(Surv(tyears, d) ~ 1, data = ova)
[1] NA
R> CVPL(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam,
   data = ova)
[1] -1652.169
R> coxph(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam, data = ova)$loglik[2] # No CV
[1] -1374.717
</pre></li>
<li>[https://www.rdocumentation.org/packages/penalized/versions/0.9-51/topics/Cross-validation%20in%20penalized%20generalized%20linear%20models optL1()] from the [https://cran.r-project.org/web/packages/penalized/ penalized] package. It seems the penalized package has its own sequence of lambdas, and these lambdas are totally different from the ones glmnet() creates, though the CV plot from each package shows a convex shape.</li>
<li>[https://pubmed.ncbi.nlm.nih.gov/30813883/ Gsslasso paper]. CVPL does not include the penalty term.</li>
<li>https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=10 (larger is better) </li>
</ul>
</ul>
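For intuition, the Verweij & van Houwelingen cross-validated partial likelihood can be written directly with coxph() and a leave-one-out loop. This is a plain sketch on made-up data, not the dynpred or penalized implementation:
<pre>
library(survival)
set.seed(1)
dat <- data.frame(time   = rexp(60),
                  status = rbinom(60, 1, 0.7),
                  x      = rnorm(60))

loglik_at <- function(beta, data) {
  # partial log-likelihood of 'data' evaluated at a fixed beta (no Newton steps)
  coxph(Surv(time, status) ~ x, data = data, init = beta,
        control = coxph.control(iter.max = 0))$loglik[2]
}

cvpl <- 0
for (i in seq_len(nrow(dat))) {
  beta_i <- coef(coxph(Surv(time, status) ~ x, data = dat[-i, ]))
  # subject i's contribution: l(all data; beta_(-i)) - l(data without i; beta_(-i))
  cvpl <- cvpl + loglik_at(beta_i, dat) - loglik_at(beta_i, dat[-i, ])
}
cvpl
</pre>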


== Competing risk and cumulative incidence ==
* https://www.mailman.columbia.edu/research/population-health-methods/competing-risk-analysis. Subjects can potentially experience '''more than one type of a certain event'''. For instance, if mortality is of research interest, then our observations – senior patients at an oncology department, could possibly die from heart attack or breast cancer, or even traffic accident. When only one of these different types of event can occur, we refer to these events as '''“competing events”''', in a sense that they compete with each other to deliver the event of interest, and the occurrence of one type of event will prevent the occurrence of the others.
* Page 61 of Klein and Moeschberger "Survival Analysis"
* [https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html Survival Analysis in R] Emily Zabor
* [https://argoshare.is.ed.ac.uk/healthyr_book/competing-risks-regression.html 10.9 Competing risks regression] from the ebook [https://argoshare.is.ed.ac.uk/healthyr_book/ R for Health Data Science].
* [https://www.nature.com/articles/1705727 Competing risk analysis using R: an easy guide for clinicians] 2007
* [https://www.nature.com/articles/bmt2009359 Regression modeling of competing risk using R: an in depth guide for clinicians] 2010
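A minimal sketch with the cmprsk package (the data, grouping and event codes below are all made up): cuminc() estimates cumulative incidence functions in the presence of competing events, and crr() fits the Fine-Gray subdistribution hazard model.
<pre>
library(cmprsk)
set.seed(1)
n       <- 300
grp     <- factor(rep(c("A", "B"), each = n/2))
ftime   <- rexp(n, rate = 0.2)
fstatus <- sample(0:2, n, replace = TRUE, prob = c(.2, .5, .3))  # 0=censored, 1=event, 2=competing

ci <- cuminc(ftime, fstatus, group = grp)   # cumulative incidence by group and cause
plot(ci)

covs <- model.matrix(~ grp)[, -1, drop = FALSE]
fg   <- crr(ftime, fstatus, cov1 = covs, failcode = 1, cencode = 0)
summary(fg)
</pre>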


== [https://en.wikipedia.org/wiki/Survival_rate Survival rate] terminology ==
* How is the overall survival measured?
** The length of time from either the date of '''diagnosis''' or the start of '''treatment''' for a disease, such as cancer, that patients diagnosed with the disease are still '''alive'''. In a clinical trial, measuring the overall survival is one way to see how well a new treatment works. [https://www.cancer.gov/publications/dictionaries/cancer-terms/def/overall-survival NCI Dictionary of Cancer Terms]
** Overall survival, or OS, or sometimes just “survival” is the percentage of people in a group who are alive after a length of time—usually a number of years.
* How is progression-free survival measured?
** The length of time during and after the '''treatment''' of a disease, such as cancer, that a patient lives with the disease but '''it does not get worse'''. In a clinical trial, measuring the progression-free survival is one way to see how well a new treatment works. [https://www.cancer.gov/publications/dictionaries/cancer-terms/def/progression-free-survival NCI Dictionary of Cancer Terms]
** Progression-free survival (PFS) was measured as the interval between the initiation of '''treatment''' until either disease recurrence or last documented follow-up of the patient if he/she remains disease-free.
* OS vs PFS
** [https://onbiostatistics.blogspot.com/2015/09/understanding-endpoint-in-oncology.html Understanding the endpoints in oncology: overall survival, progression free survival, hazard ratio, censored value]
** [https://www.jcancer.org/v10p3717.htm#B18 Relationship between Progression-free Survival and Overall Survival in Randomized Clinical Trials of Targeted and Biologic Agents in Oncology]
* [https://en.wikipedia.org/wiki/Progression-free_survival Progression-free survival (PFS), overall survival (OS)].
** PFS is the length of time during and after the '''treatment''' of a disease, such as cancer, that a patient lives with the '''disease but it does not get worse'''. See its use at the [https://www.cancer.gov/about-cancer/treatment/clinical-trials/nci-supported/nci-match NCI-MATCH] trial.
* [https://www.cancer.gov/publications/dictionaries/cancer-terms?cdrid=44023 Disease-free survival (DFS)]: the period after curative treatment ['''disease eliminated'''] when no disease can be detected
** DFS stands for disease-free survival, which measures the length of time that a patient survives without any signs or symptoms of the disease or cancer recurrence. It is calculated from the '''date of treatment initiation''' to the date of '''disease recurrence''' or death from any cause. DFS is often used as a secondary endpoint in clinical trials, especially in early-stage cancers where the primary goal of treatment is to achieve long-term remission.
** [https://vigortip.com/are-disease-free-survival-and-progression-free-survival-the-same/ What Is The Difference Between PFS And DFS?] Disease-free survival (DFS), also known as relapse-free survival (RFS), is often used as the primary endpoint in phase III trials of adjuvant therapy. Progression-free survival (PFS) is commonly used as the primary endpoint in phase III trials evaluating the treatment of metastatic cancer.
** The main difference between PFS and DFS is that PFS measures the time until the '''cancer progresses''', whereas DFS measures the time until the '''cancer recurs''' or returns after treatment. PFS is generally considered a more '''sensitive''' measure of treatment efficacy than DFS because it accounts for any disease progression, not just a recurrence. However, '''DFS may be more appropriate for patients with early-stage cancer who are at lower risk of disease progression but have a higher risk of disease recurrence'''.
* Time to progression: The length of time from the date of diagnosis or the start of treatment for a disease until the disease starts to get worse or spread to other parts of the body. In a clinical trial, measuring the time to progression is one way to see how well a new treatment works. Also called TTP.
* Metastasis-free survival (MFS) time: the period until metastasis is detected
* Event free survival (EFS)
* [http://www.cancer.net/navigating-cancer-care/cancer-basics/understanding-statistics-used-guide-prognosis-and-evaluate-treatment Understanding Statistics Used to Guide Prognosis and Evaluate Treatment] (DFS & PFS rate)
* '''Distant recurrence''' means the cancer has come back in another part of the body, [https://www.cancer.org/treatment/survivorship-during-and-after-treatment/long-term-health-concerns/recurrence/what-is-cancer-recurrence.html Three types of recurrence?], [https://www.cancer.gov/publications/dictionaries/cancer-terms/def/distant-recurrence NCI dictionaries]
* [https://www.researchgate.net/post/Are-the-disease-free-survival-and-recurrence-free-survival-the-same-definitions-in-oncology-studies Are the disease-free survival (DFS) and recurrence-free survival (RFS) the same definitions in oncology studies?]. [https://www.cancer.gov/publications/dictionaries/cancer-terms/def/relapse-free-survival Yes].
* [https://pubmed.ncbi.nlm.nih.gov/32955103/ What is the difference between overall survival (OS), recurrence-free survival (RFS) and time-to-recurrence?]
* Recurrence-free survival (RFS) vs progression-free survival (PFS)
** Recurrence-Free: This term is used when cancer has been eliminated or reduced to an undetectable level, and then returns after a period of time. Typically starts at a specific point such as '''the date of diagnosis''' or the '''end of treatment'''. The end time is usually the date of disease '''recurrence''' or the '''end of the follow-up period'''.
** Progression-Free: This term is used to describe the period of time during and after treatment when the disease does not get worse. It’s often used in situations where a tumor is present, as demonstrated by laboratory testing, radiologic testing, or clinically. Start Time is usually the time point when a patient starts their treatment. It could also be the date of diagnosis or randomization. End Time is the time point when disease progression is detected or death occurs. If neither of these events occur, then the end time might be the last follow-up visit or contact with the patient.
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7487279/ Predictors of Recurrence, and Progression-Free and Overall Survival following Open versus Robotic Radical Cystectomy: Analysis from the RAZOR Trial with a 3-Year Followup]


== Time-dependent covariates ==
<ul>
<li>[https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf Using Time Dependent Covariates and Time Dependent Coefficients in the Cox Model]
<li>[http://www.erikdrysdale.com/td_elnet/ Building an Elastic-Net Cox Model with Time-Dependent covariates]
<li>[https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html Survival Analysis in R] Emily Zabor
<li>Difference between a time-dependent and a time-independent covariate: in a Cox model the difference shows up in how the Surv() function is used.
<pre>
# Time-independent covariate
Surv(time, status)

# Time-dependent covariate
Surv(start, stop, status)
</pre>
Here, '''start and stop define an interval of time during which the covariates are assumed to be constant'''. This allows the covariates to change over time, as each subject can have multiple rows in the data corresponding to different time intervals.
<li>Example: Let’s say we’re studying the effect of a treatment on survival time in patients with a certain disease. We have a covariate that changes over time: the dosage of the treatment, which can be increased or decreased at different times for each patient. Our data might look something like this:
{| class="wikitable"
|-
! Patient ID !! Start Time !! Stop Time !! Status !! Dosage
|-
| 1 || 0 || 3 || 0 || 10
|-
| 1 || 3 || 6 || 1 || 20
|-
| 2 || 0 || 2 || 0 || 10
|-
| 2 || 2 || 5 || 0 || 15
|-
| 2 || 5 || 8 || 1 || 15
|}
Here, each row represents a time interval for a patient. The Start Time and Stop Time columns represent the beginning and end of the interval. The Status column indicates whether the event of interest (e.g., death) occurred at the end of the interval (1 if the event occurred, 0 otherwise). The Dosage column is our time-dependent covariate.
<li>A time-dependent covariate in a Cox model becomes a time-independent covariate under the special case where the covariate does not change over the duration of the study '''for any subject'''. In other words, if the value of the covariate is constant for each individual across all time points, it can be treated as a time-independent covariate. For example, consider a study investigating the effect of gender (a binary variable: male or female) on survival time. Since an individual’s gender does not change over time, it is a time-independent covariate. On the other hand, a variable like blood pressure, which can change at different time points for the same individual, would typically be considered a time-dependent covariate.
</ul>
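For a runnable illustration of the counting-process form Surv(start, stop, status), the Stanford heart transplant data shipped with the survival package already comes in that format, with transplant as a time-dependent covariate (only a sketch; not related to the toy dosage table above):
<pre>
library(survival)
head(heart)     # columns start, stop, event, age, year, surgery, transplant, id
fit <- coxph(Surv(start, stop, event) ~ age + transplant, data = heart)
summary(fit)
</pre>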


== Books ==
* [http://www.springer.com/us/book/9781441966452 Survival Analysis, A Self-Learning Text] by Kleinbaum, David G., Klein, Mitchel
* [http://www.springer.com/us/book/9783319312439 Applied Survival Analysis Using R] by Moore, Dirk F.
* [http://www.springer.com/us/book/9783319194240 Regression Modeling Strategies] by Harrell, Frank
* [http://www.springer.com/us/book/9781461413523 Regression Methods in Biostatistics] by Vittinghoff, E., Glidden, D.V., Shiboski, S.C., McCulloch, C.E.
* https://tbrieder.org/epidata/course_reading/e_tableman.pdf
* [https://www.wiley.com/en-us/Survival+Analysis%3A+Models+and+Applications-p-9780470977156 Survival Analysis: Models and Applications] by Xian Liu
* (online) [https://bookdown.org/mpfoley1973/survival/ Survival Analysis in R] Michael Foley. survminer::ggsurvplot(). Landmark analysis. Time-Dependent Covariates.

=== Class notes ===
* https://myweb.uiowa.edu/pbreheny/7210/f15/notes.html
* http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf
= [https://en.wikipedia.org/wiki/Proportional_hazards_model Cox proportional hazards model] and the partial log-likelihood function =

Let ''Y''<sub>''i''</sub> denote the observed time (either censoring time or event time) for subject ''i'', and let ''C''<sub>''i''</sub> be the indicator that the time corresponds to an event (i.e. if ''C''<sub>''i''</sub>&nbsp;=&nbsp;1 the event occurred and if ''C''<sub>''i''</sub>&nbsp;=&nbsp;0 the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

<math>
\lambda(t|X) = \lambda_0(t)\exp(\beta_1X_1 + \cdots + \beta_pX_p) = \lambda_0(t)\exp(X \beta^\prime).
</math>

This expression gives the hazard at time ''t'' for an individual with covariate vector (explanatory variables) ''X''. Based on this hazard function, a '''partial likelihood''' (defined on hazard function) can be constructed from the datasets as

<math>
L(\beta) = \prod\limits_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j},
</math>

where ''θ''<sub>''j''</sub>&nbsp;=&nbsp;exp(''X''<sub>''j'' </sub>''β''<sup>''′''</sup>) and ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> are the covariate vectors for the ''n'' independently sampled individuals in the dataset (treated here as column vectors). [http://psfaculty.ucdavis.edu/bsjjones/coxslides.pdf This pdf] or [http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=12 this note] give a toy example

The corresponding log partial likelihood is

<math>
\ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right).
</math>

This function can be maximized over ''β'' to produce maximum partial likelihood estimates of the model parameters.

The partial [[Score (statistics)|score function]] is
<math>
\ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right),
</math>

and the [[Hessian matrix]] of the partial log likelihood is

<math>
\ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right).
</math>

Using this score function and Hessian matrix, the partial likelihood can be maximized using the [[Newton's method|Newton-Raphson]] algorithm. The inverse of the Hessian matrix, evaluated at the estimate of ''β'', can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate [[standard error]]s for the regression coefficients.

If X is age, then the coefficient is likely >0. If X is some treatment, then the coefficient is likely <0.
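As a numerical check of the partial likelihood formula above, it can be evaluated by hand on a tiny simulated data set with no tied event times and compared with the value reported by coxph() (everything below is made up for illustration):
<pre>
library(survival)
set.seed(1)
n <- 8
d <- data.frame(time = rexp(n), status = c(1,0,1,1,0,1,1,0), x = rnorm(n))
fit  <- coxph(Surv(time, status) ~ x, data = d)
beta <- coef(fit)

theta <- exp(beta * d$x)
# sum over events i of [ x_i*beta - log( sum_{j: Y_j >= Y_i} theta_j ) ]
ll <- sum(sapply(which(d$status == 1), function(i) {
  beta * d$x[i] - log(sum(theta[d$time >= d$time[i]]))
}))
ll
fit$loglik[2]   # identical (no ties, so the tie-handling method does not matter)
</pre>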
=== Get the partial likelihood of a Cox PH Model with new data ===
offset was used. See https://stackoverflow.com/questions/26721551/is-there-a-way-to-get-the-partial-likelihood-of-a-cox-ph-model-with-new-data-and

[https://stats.stackexchange.com/a/187339 How to compute partial log-likelihood function in Cox proportional hazards model?]
<pre>
set.seed(1)
n <- 1000
t <- rexp(100)
c <- rbinom(100, 1, .2) ## censoring indicator (independent process)
x <- rbinom(100, 1, exp(-t)) ## some arbitrary relationship btn x and t
betamax <- coxph(Surv(t, c) ~ x)
beta1 <- coxph(Surv(t, c) ~ x, init = c(1), control=coxph.control(iter.max=0))

betamax$loglik[2]  # [1]=initial, [2]=final
# [1] -52.81476
beta1$loglik[2]
# [1] -52.85067
</pre>
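For the "new data" part, the idea in the linked stackoverflow answer is to pass the old model's linear predictor to a new fit as an offset, so nothing is re-estimated. A small sketch with simulated old/new data sets:
<pre>
library(survival)
set.seed(2)
old <- data.frame(time = rexp(100), status = rbinom(100, 1, .7), x = rnorm(100))
new <- data.frame(time = rexp(100), status = rbinom(100, 1, .7), x = rnorm(100))

fit <- coxph(Surv(time, status) ~ x, data = old)
lp  <- predict(fit, newdata = new, type = "lp")
# partial log-likelihood of the new data at the coefficient estimated on the old data
coxph(Surv(time, status) ~ offset(lp), data = new)$loglik
</pre>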


=== Implementing the Cox model ===
[https://medium.com/analytics-vidhya/implementing-the-cox-model-in-r-b1292d6ab6d2 Implementing the Cox model in R]

=== Optimization ===
[https://www.joshua-entrop.com/post/optim_cox/ Optimisation of a Cox proportional hazard model using Optimx()]
== Compare the partial likelihood to the full likelihood ==
http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=10
 
== z-column (Wald statistic) from R's coxph() ==
* https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Cox-Regression.pdf#page=6 The  ratio  of each  regression  coefficient  to  its standard error, a Wald statistic which is asymptotically standard normal under the hypothesis that the corresponding β is 0.
* http://dni-institute.in/blogs/cox-regression-interpret-result-and-predict/
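A quick check that the z column is simply coef/se(coef), using the lung data from the survival package:
<pre>
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
s   <- summary(fit)$coefficients
s[, "z"]                            # Wald statistics as reported
s[, "coef"] / s[, "se(coef)"]       # same values
2 * pnorm(-abs(s[, "z"]))           # the corresponding two-sided p-values
</pre>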
 
== How exactly can the Cox-model ignore exact times? ==
[https://stats.stackexchange.com/q/94025 The Cox model does not depend on the times itself, instead it only needs an ordering of the events].
 
{{Pre}}
library(survival)
survfit(Surv(time, status) ~ x, data = aml)
fit <- coxph(Surv(time, status) ~ x, data = aml)
coef(fit) # 0.9155326
min(diff(sort(unique(aml$time)))) # 1

# Shift survival time for some obs but keeps the same order
# make sure we choose obs (n=20 not works but n=21 works) with twins
rbind(order(aml$time), sort(aml$time), aml$time[order(aml$time)])
#      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# [1,]   12   13   14   15    1   16    2    3   17     4     5    18    19     6    20     7
# [2,]    5    5    8    8    9  12  13  13  16    18    23    23    27    28    30    31
# [3,]    5    5    8    8    9  12  13  13  16    18    23    23    27    28    30    31
# [,17] [,18] [,19] [,20] [,21] [,22] [,23]
# [1,]    21    8    22    9    23    10    11
# [2,]    33    34    43    45    45    48  161
# [3,]    33    34    43    45    45    48  161
 
aml$time2 <- aml$time
aml$time2[order(aml$time)[1:21]] <- aml$time[order(aml$time)[1:21]] - .9
fit2 <- coxph(Surv(time2, status) ~ x, data = aml); fit2
coef(fit2) #     0.9155326
coef(fit) == coef(fit2) # TRUE
 
aml$time3 <- aml$time
aml$time3[order(aml$time)[1:20]] <- aml$time[order(aml$time)[1:20]] - .9
fit3 <- coxph(Surv(time3, status) ~ x, data = aml); fit3
coef(fit3) #      0.8891567
coef(fit) == coef(fit3) # FALSE
</pre>
</pre>


== Partial likelihood when there are ties; hypothesis testing: Likelihood Ratio Test, Wald Test & Score Test ==
* http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=29
* http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xaghtmlnode28.html (includes the case where the parameters are partitioned). The formulas for the 3 tests are also available in Appendix B of the Klein book.

In R's coxph(): nearly all Cox regression programs use the ''Breslow'' method by default, but not this one. The '' '''Efron approximation''' '' is used as the default here; it is more accurate when dealing with tied death times and is just as efficient computationally.

The following code does not compare models, but since there is only one coefficient the results are the same. If there is more than one variable, we can use anova(model1, model2) to run the LRT; see the sketch after the code block below.
{{Pre}}
library(KMsurv)
# No ties. Section 8.2
data(btrial)
str(btrial)
# 'data.frame': 45 obs. of  3 variables:
# $ time : int  19 25 30 34 37 46 47 51 56 57 ...
# $ death: int  1 1 1 1 1 1 1 1 1 1 ...
# $ im  : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(btrial, death == 1)$time)
# death time is unique
coxph(Surv(time, death) ~ im, data = btrial)
#    coef exp(coef) se(coef)    z    p
# im 0.980    2.665    0.435 2.25 0.024
# Likelihood ratio test=4.45  on 1 df, p=0.03
# n= 45, number of events= 24
 
# Ties, Section 8.3
data(kidney)
str(kidney)
# 'data.frame': 119 obs. of  3 variables:
# $ time : num  1.5 3.5 4.5 4.5 5.5 8.5 8.5 9.5 10.5 11.5 ...
# $ delta: int  1 1 1 1 1 1 1 1 1 1 ...
# $ type : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(kidney, delta == 1)$time)
# 0.5  1.5  2.5  3.5  4.5  5.5  6.5  8.5  9.5 10.5 11.5 15.5 16.5 18.5 23.5 26.5  
# 6    1    2    2    2    1    1    2    1    1    1    2    1    1    1    1
 
# Default: Efron method
coxph(Surv(time, delta) ~ type, data = kidney)
# coef exp(coef) se(coef)    z    p
# type -0.613    0.542    0.398 -1.54 0.12
# Likelihood ratio test=2.41  on 1 df, p=0.1
# n= 119, number of events= 26
summary(coxph(Surv(time, delta) ~ type, data = kidney))
# n= 119, number of events= 26
# coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6126    0.5420  0.3979 -1.539    0.124
#
# exp(coef) exp(-coef) lower .95 upper .95
# type    0.542      1.845    0.2485    1.182
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02  (max possible= 0.827 )
# Likelihood ratio test= 2.41  on 1 df,  p=0.1
# Wald test            = 2.37  on 1 df,  p=0.1
# Score (logrank) test = 2.44  on 1 df,  p=0.1


# Breslow method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "breslow"))
# n= 119, number of events= 26
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6182    0.5389   0.3981 -1.553     0.12
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5389      1.856     0.247     1.176
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.45  on 1 df,   p=0.1
# Wald test            = 2.41  on 1 df,   p=0.1
# Score (logrank) test = 2.49  on 1 df,   p=0.1

# Discrete/exact method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "exact"))
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6294    0.5329   0.4019 -1.566    0.117
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5329      1.877    0.2424     1.171
#
# Rsquare= 0.021   (max possible= 0.795 )
# Likelihood ratio test= 2.49  on 1 df,   p=0.1
# Wald test            = 2.45  on 1 df,   p=0.1
# Score (logrank) test = 2.53  on 1 df,   p=0.1
</pre>


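As mentioned above, with more than one covariate the likelihood ratio test is obtained by comparing nested models with anova(). A minimal sketch (using the ovarian data from the survival package rather than the KMsurv examples above; the choice of covariates is purely illustrative):
<pre>
library(survival)
fit1 <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
fit2 <- coxph(Surv(futime, fustat) ~ age + rx, data = ovarian)
anova(fit1, fit2)  # LRT for adding rx to the model with age
anova(fit2)        # sequential (Type I) likelihood ratio tests for each term
</pre>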
== Hazard (function) and survival function ==
A hazard is the rate at which events happen, so that the probability of an event happening in a short time interval is the length of the interval multiplied by the hazard.

<math>
h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T < t+\Delta t|T \geq t)}{\Delta t} = \frac{f(t)}{S(t)} = -\frac{d}{dt} \ln[S(t)]
</math>

Therefore

<math>
H(x) = \int_0^x h(u) du = -\ln[S(x)],
</math>

or

<math>
S(x) = e^{-H(x)}.
</math>

Hazards (or probabilities of events) may vary with time, while the assumption in proportional hazards models is that the ratio of hazards between groups is a constant proportion over time.

Examples:
* If h(t)=c, S(t) is exponential. f(t) = c exp(-ct). The mean is 1/c.
* If <math>\log h(t) = c + \rho t</math>, S(t) is the Gompertz distribution.
* If <math>\log h(t)=c + \rho \log (t)</math>, S(t) is the Weibull distribution.
* For Cox regression, the [http://www.math.ucsd.edu/~rxu/math284/slect6.pdf survival function can be shown] to be <math>S(t|X) = S_0(t) ^ {\exp(X\beta)}</math>.
: <math>
\begin{align}
S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) \exp(X\beta) du} \\
  &= e^{-\int_0^t h_0(u) du \cdot \exp(X \beta)} \\
  &= S_0(t)^{\exp(X \beta)}
\end{align}
</math>
Alternatively,
: <math>
\begin{align}
S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\
  &= e^{-\int_0^t h_0(u) \exp(X\beta) du} \\
  &= e^{-H_0(t) \cdot \exp(X \beta)}
\end{align}
</math>
where the cumulative baseline hazard at time t, <math>H_0(t)</math>, is commonly estimated through the non-parametric Breslow estimator.
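A small numerical illustration of the constant-hazard (exponential) case in the list above; the rate c0 and the time grid are arbitrary choices for this sketch:
<pre>
# For h(t) = c0, H(t) = c0*t and S(t) = exp(-c0*t); the mean lifetime is 1/c0
c0 <- 0.2
tt <- seq(0, 30, by = 0.01)
H  <- c0 * tt                 # cumulative hazard
S  <- exp(-H)                 # survival function
sum(S) * 0.01                 # area under S(t) on [0, 30] ~ 4.99, close to 1/c0 = 5
</pre>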


== How to assess Cox model fit ==
* [https://www.coursera.org/lecture/survival-analysis-r-public-health/how-to-assess-cox-model-fit-gm9Hr Survival Analysis in R for Public Health] from Coursera
* Evaluating goodness-of-fit in comparison to a null model
* [https://www.jstor.org/stable/2288942 A Graphical Method for Assessing Goodness of Fit in Cox's Proportional Hazards Model] Arjas JASA 1988
* [https://scholarcommons.sc.edu/cgi/viewcontent.cgi?article=5318&context=etd Evaluation of Goodness-of-fit Tests for the Cox Proportional Hazards Model with Time-Varying Covariates]
* [https://pubmed.ncbi.nlm.nih.gov/30815387/ Assessment of the fitness of Cox and parametric regression models of survival distribution for Iranian breast cancer patients' data]
== Check the proportional hazard (constant HR over time) assumption by cox.zph() - Schoenfeld Residuals ==
<ul>
<li>It seems to be predictor specific.
<li>[https://rstudio-pubs-static.s3.amazonaws.com/300535_2a8382af47714d0aaa3f4cce9a7645a3.html Survival Analysis Tutorial] by Jacob Lindell and Joe Berry.
<li>[https://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_4.pdf Log-log Kaplan-Meier curves] and other methods.
<li>https://stats.idre.ucla.edu/other/examples/asa2/testing-the-proportional-hazard-assumption-in-cox-models/. If the predictor satisfies the proportional hazards assumption, the plot of the survival function versus survival time should show parallel curves, and the plot of log(-log(survival)) versus log(survival time) should show parallel lines. This method does not work well for continuous predictors, or for categorical predictors with many levels, because the graph becomes too cluttered.
<li>[http://www.sthda.com/english/wiki/cox-model-assumptions Methods to evaluate the validity of the Cox model assumptions], [https://www.emilyzabor.com/tutorials/survival_analysis_in_r_tutorial.html#Assessing_proportional_hazards Assessing proportional hazards]
<pre>
library(survival)
res.cox <- coxph(Surv(time, status) ~ age + sex + wt.loss, data = lung)

# A hypothesis test of whether the effect of each covariate differs according to time,
# and a global test of all covariates at once.
cz <- cox.zph(res.cox); cz

plot(cz)    # draws 3 plots, one for each variable;
            # y-axis is beta(t), x-axis is time
plot(cz[3]) # 3rd variable only
</pre>
<li>'''Complementary log-log survival curve'''. [https://rdrr.io/cran/survival/man/plot.survfit.html ?plot(survfit(formula), fun)]. For example fun='''"cloglog"''' will create a complementary log-log survival plot (log(-log(S(t))) on the y-axis with log(t) on the x-axis).
* If we don't like the log scale for the x-axis, we can write our own function; see [https://stackoverflow.com/q/22422687 R survival package; plotting log(-log(survival)) against log(time)]
* [https://bookdown.org/sestelo/sa_financial/how-to-evaluate-the-ph-assumption.html 3.6 How to evaluate the PH assumption?] from "A short course on Survival Analysis applied to the Financial Industry"
* [https://stat.ethz.ch/pipermail/r-help/2009-May/390854.html Use strata() in coxph()] to stratify on one covariate and plot the two baseline hazards.
<li>[https://stats.stackexchange.com/a/547137 Schoenfeld residuals - Plain English explanation, please!]  If the plot is reasonably '''flat''' over time, the PH assumption holds.
* The Schoenfeld Residuals Test is analogous to testing whether the '''slope of the scaled residuals on time is zero or not'''. If the plot of Schoenfeld residuals against time shows a '''non-random''' pattern, the PH assumption has been violated. See [https://www.mbaskool.com/business-concepts/statistics/8766-schoenfeld-residuals-test.html Schoenfeld Residuals Test - Meaning & Definition]
</li>
</ul>
* survival package
** [https://rdrr.io/cran/survival/man/cox.zph.html cox.zph()]
** [https://rdrr.io/cran/survival/man/residuals.coxph.html residuals()]
* [https://cran.r-project.org/web/packages/timereg/index.html timereg] package. Flexible Regression Models for Survival Data.
** [https://rdrr.io/cran/timereg/man/cum.residuals.html timereg::cum.residuals()]
* Draw a hazard rate plot. <math>
\begin{align}
\log(-\log(S(t))) = \log(-\log(S_0(t))) + \beta X
\end{align}
</math>
** [https://stats.stackexchange.com/a/34092 How to calculate predicted hazard rates from a Cox PH model?]. The hazard ratio should be constant; h(t|Z) / h(t|Z*) is independent of t.
** See an example of [https://www.theanalysisfactor.com/assumptions-cox-regression/ non-proportional hazards] where the '''KM curves cross'''.
** More graphical examples where log-log survival curves are not parallel. [https://bookdown.org/sestelo/sa_financial/how-to-evaluate-the-ph-assumption.html A short course on Survival Analysis applied to the Financial Industry].
** Two plots where one shows the assumption is violated. [https://stats.stackexchange.com/a/256696 Proportionality assumption in Cox Regression Model]
** Log cumulative hazard plot. [https://influentialpoints.com/Training/coxs_proportional_hazards_regression_model-principles-properties-assumptions.htm An example] where PH is not satisfied.
* Looking at the Kaplan-Meier curves (survival probability vs time). If the (discrete) predictor satisfies the proportional hazards assumption, the plot of the survival function versus survival time should show parallel curves, and the plot of log(-log(survival)) versus log(survival time) should show parallel lines. [https://stats.oarc.ucla.edu/other/examples/asa2/testing-the-proportional-hazard-assumption-in-cox-models/ UCLA].
* [https://doi.org/10.1111/biom.13137 An online updating approach for testing the proportional hazards assumption with streams of survival data] Xue 2019
* https://www.rdocumentation.org/packages/gof/versions/0.9.1/topics/cumres.coxph
* http://rstudio-pubs-static.s3.amazonaws.com/5043_145684af0d364175bf5e5e6bb792ca28.html
* [https://myweb.uiowa.edu/pbreheny/7210/f15/notes/11-10.pdf Residuals and model diagnostics] from the lecture notes of Patrick Breheny
* [https://mathweb.ucsd.edu/~rxu/math284/slect9.pdf Assessing the Fit of the Cox Model] from the lecture notes of Ronghui Xu
* Cumulative martingale residuals (Lin et al Biometrika 1993)
** [https://www4.stat.ncsu.edu/~lu/ST790/homework/Biometrika-1993-LIN-557-72.pdf Paper]
** http://publicifsv.sund.ku.dk/~pka/abgk04/ts-gof.pdf#page=10
** http://www.math.ucsd.edu/~rxu/math284/slect9.pdf#page=20
** http://biostat.mc.vanderbilt.edu/wiki/pub/Main/QingxiaChen/Ch11.pdf
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8762368/ Violations of proportional hazard assumption in Cox regression model of transcriptomic data in TCGA pan-cancer cohorts] 2022
* [https://www.tandfonline.com/doi/abs/10.1080/01621459.2022.2126362 Assumption-Lean Cox Regression] JASA 2022


== Strata, Stratification ==
* [https://myweb.uiowa.edu/pbreheny/7210/f15/notes/11-17.pdf Stratification in the Cox model] Patrick Breheny
* [https://stats.stackexchange.com/a/256256 stratification in cox model]. In a Cox model, stratification allows for as many different baseline hazard functions as there are strata. A common set of beta coefficients (hazard ratios) is then fitted across all strata.
* [https://courses.washington.edu/b515/l17.pdf#page=16 Stratification example]
<pre>
bladder1 <- bladder[bladder$enum < 5, ]
o <- coxph(Surv(stop, event) ~ rx + size + number + strata(enum), bladder1)
# the strata variable does not appear as a covariate in the fitted model
anova(o)
</pre>
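A quick way to look at the stratum-specific baseline hazards mentioned above (a sketch using the veteran data instead of bladder1; the choice of covariate and stratifying variable is only for illustration):
<pre>
library(survival)
fit <- coxph(Surv(time, status) ~ karno + strata(trt), data = veteran)
bh  <- basehaz(fit, centered = FALSE)   # one baseline cumulative hazard per stratum (bh$strata)
# cloglog plot of the two stratum curves (evaluated at the mean karno by default)
plot(survfit(fit), fun = "cloglog", col = 1:2,
     xlab = "log(t)", ylab = "log(-log(S(t)))")
</pre>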


== Sample size calculators ==
* [http://powerandsamplesize.com/Calculators/Test-Time-To-Event-Data/Cox-PH-Equivalence Calculate Sample Size Needed to Test Time-To-Event Data: Cox PH, Equivalence] including a reference
* http://www.sample-size.net/sample-size-survival-analysis/ including a reference
* [https://youtu.be/v18f-Jsqi4c?t=1309 Evolution of survival sample size methods] demonstrated by nQuery software. '''Sample size refers to the number of events, i.e. status=1 (not the number of observations)'''
* http://www.icssc.org/Documents/AdvBiosGoa/Tab%2026.00_SurvSS.pdf no reference
* [https://cran.r-project.org/web/packages/powerSurvEpi powerSurvEpi] R package
* [https://cran.r-project.org/web/packages/NPHMC/index.html NPHMC] R package (based on the Proportional Hazards Mixture Cure Model) and the [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3859312/ paper]
* [http://r.789695.n4.nabble.com/Power-calculation-for-survival-analysis-td3830031.html Hmisc::cpower()] function.


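As a quick cross-check of the calculators above, the required number of events for a two-arm comparison can be computed from Schoenfeld's approximation. The sketch below is not from any of the packages listed; the function name and the equal-allocation default are illustrative assumptions.
<pre>
# Schoenfeld (1983) approximation: required number of EVENTS for a two-arm trial
# with target hazard ratio 'hr' and allocation proportions p1 and p2.
nEventsSchoenfeld <- function(hr, alpha = 0.05, power = 0.8, p1 = 0.5, p2 = 1 - p1) {
  (qnorm(1 - alpha/2) + qnorm(power))^2 / (p1 * p2 * log(hr)^2)
}
nEventsSchoenfeld(hr = 0.7)  # ~ 247 events for 80% power at two-sided alpha = 0.05
</pre>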
=== How many events are required to fit the Cox regression reliably? ===
* The recommended number of events to fit a Cox regression model is typically guided by a rule of thumb: at least 10-20 events per predictor in the model; see [https://stats.stackexchange.com/a/559458 Survival analysis with rare events].
* If we have only 1 covariate and the covariate is continuous, we need at least 2 events (one for the baseline hazard and one for beta).
* If the covariate is discrete, we need at least one event from (each of) two groups in order to fit the Cox regression reliably. For example, status=(0,0,0,1,0,1) with x=(0,0,1,1,2,2) works fine; see the ovarian sketch below.
{{Pre}}
library(survival)
head(ovarian)
#   futime fustat     age resid.ds rx ecog.ps
# 1     59      1 72.3315        2  1       1
# 2    115      1 74.4932        2  1       1
# 3    156      1 66.4658        2  1       2
# 4    421      0 53.3644        2  2       1
# 5    431      1 50.3397        2  1       1
# 6    448      0 56.4301        1  1       2

ova <- ovarian # n=26
ova$time <- ova$futime
ova$status <- 0
ova$status[1:4] <- 1
coxph(Surv(time, status) ~ rx, data = ova) # OK
summary(survfit(Surv(time, status) ~ rx, data = ova))
#                 rx=1
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2
#     time  n.risk n.event survival std.err lower 95% CI upper 95% CI
# 421.0000 10.0000  1.0000   0.9000  0.0949       0.7320       1.0000

# Suspicious Cox regression result due to 0 events in one group
ova$status <- 0
ova$status[1:3] <- 1
coxph(Surv(time, status) ~ rx, data = ova)
#         coef exp(coef) se(coef) z p
# rx -2.13e+01  5.67e-10 2.32e+04 0 1
#
# Likelihood ratio test=4.41  on 1 df, p=0.04
# n= 26, number of events= 3
# Warning message:
# In fitter(X, Y, strats, offset, init, control, weights = weights,  :
#   Loglik converged before variable  1 ; beta may be infinite.

summary(survfit(Surv(time, status) ~ rx, data = ova))
#                 rx=1
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2   (no events, so no rows are printed)
</pre>


== Extract p-values ==
{{Pre}}
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# method 1:
beta <- coef(fit)
se <- sqrt(diag(vcov(fit)))
1 - pchisq((beta/se)^2, 1)

# method 2: https://www.biostars.org/p/65315/
coef(summary(fit))[, "Pr(>|z|)"]
</pre>
[https://www.r-bloggers.com/2016/12/cox-proportional-hazards-model/ More statistics] including the HR confidence intervals.

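A compact alternative (a sketch; it assumes the broom package is installed) that returns the p-values together with the hazard ratios and their confidence intervals in one data frame:
<pre>
library(survival)
library(broom)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
# method 3: one row per coefficient with exponentiated estimate (HR), 95% CI and p-value
broom::tidy(fit, exponentiate = TRUE, conf.int = TRUE)
</pre>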

== Expectation of life & expected future lifetime ==
* The average lifetime is the same as the area under the survival curve.
: <math>
\begin{align}
\mu &= \int_0^\infty t f(t) dt \\
  &= \int_0^\infty S(t) dt
\end{align}
</math>
by integrating by parts, making use of the fact that -f(t) is the derivative of S(t), which has limits S(0)=1 and <math>S(\infty)=0</math>. [https://stats.stackexchange.com/questions/186497/calculating-life-time-expectancy The average lifetime may not be bounded] with censored data: there may be censored observations that last beyond the last recorded death.
* The [https://en.wikipedia.org/wiki/Survival_analysis#Quantities_derived_from_the_survival_distribution expected future lifetime at a given time <math>t_0</math>]
:<math>\frac{1}{S(t_0)} \int_0^{\infty} t\,f(t_0+t)\,dt = \frac{1}{S(t_0)} \int_{t_0}^{\infty} S(t)\,dt,</math>
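A quick numerical illustration of "mean = area under the survival curve" (a sketch using the aml data; the restricted mean printed by survfit is the area under the KM curve up to the largest follow-up time, so the two numbers should essentially agree):
<pre>
library(survival)
km <- survfit(Surv(time, status) ~ 1, data = aml)
print(km, print.rmean = TRUE)   # "*rmean" = area under S(t) up to the largest time

# the same area computed by hand from the KM step function
tt <- c(0, km$time)             # left endpoints of the steps
ss <- c(1, km$surv)             # S(t) on each step
sum(diff(tt) * head(ss, -1))    # sum of rectangle areas
</pre>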


== Hazard Ratio (exp^beta) vs Relative Risk ==
# https://en.wikipedia.org/wiki/Hazard_ratio
# '''Hazard''' represents the '''instantaneous event rate''': the probability that an individual experiences an event (e.g. death/relapse) at a particular point in time after the intervention, given that this individual has survived to that point without experiencing any event. See an example [https://www.accessdata.fda.gov/drugsatfda_docs/nda/2014/204886Orig1s000MedR.pdf#page=15 here].
# '''Hazard ratio''' is a measure of '''an effect''' of '''an intervention''' on '''an outcome''' of interest ''over time''. The hazard ratio is not computed at any one time point. See an example [https://www.accessdata.fda.gov/drugsatfda_docs/nda/2014/204886Orig1s000MedR.pdf#page=15 here].
# Since there is only one hazard ratio reported, it can only be interpreted if you assume that the population hazard ratio is consistent over time, and that any differences are due to random sampling. If two survival curves cross, the hazard ratios are certainly not consistent. See [https://www.graphpad.com/support/faq/hazard-ratio-from-survival-analysis/ Hazard ratio from survival analysis] including how the hazard ratio is computed.
# Hazard ratio = hazard in the intervention group / hazard in the control group
# A hazard ratio is often reported as a “reduction in risk of death or progression” – this '''risk reduction''' is calculated as '''1 minus the Hazard Ratio (exp^beta)'''; e.g., an HR of 0.84 equals a 16% reduction in risk. See this video [https://youtu.be/z1b2hFzXsrU Interpreting Hazard Ratios] and [http://stats.stackexchange.com/questions/70741/how-to-interpret-a-hazard-ratio-from-a-continuous-variable-unit-of-difference stackexchange.com].
# If the hazard ratio for overall survival (OS) from initiation of therapy for patients with BRCAm vs BRCAwt is 0.81, this means that, at any given time point, the hazard of death (or event of interest) for patients with BRCAm is 0.81 times the hazard of death for patients with BRCAwt. In other words, patients with BRCAm have a 19% lower risk of death at any time point compared to patients with BRCAwt. [https://ascopubs.org/doi/pdf/10.1200/JCO.2022.40.16_suppl.e18802 Prevalence and prognosis of BRCAm, homologous recombination repair mutation (HRRm) or HR deficiency positive (HRD+) across tumor types].
#* [https://ovarianresearch.biomedcentral.com/articles/10.1186/s13048-016-0227-x BRCA-tested patients had a lower risk of death versus untested (HR 0.35, 95 % CI 0.17, 0.68, p = 0.001)]. “BRCA-tested patients” does not necessarily mean these patients have a BRCA mutation. It simply means these patients have undergone testing for BRCA mutations.
# The hazard ratio and its confidence interval can be obtained in R by using the '''summary()''' method; e.g. '''fit <- coxph(Surv(time, status) ~ x); summary(fit)$conf.int; confint(fit)'''
# The coefficient beta represents the expected change in '''log hazard''' if X changes by one unit and all other variables are held constant in Cox models. See [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.
# [https://onbiostatistics.blogspot.com/2015/09/understanding-endpoint-in-oncology.html Understanding the endpoints in oncology: overall survival, progression free survival, hazard ratio, censored value]

Another [https://socialsciences.mcmaster.ca/jfox/Books/Companion-1E/appendix-cox-regression.pdf example] (John Fox, Cox Proportional-Hazards Regression for Survival Data) assumes Y ~ age + prio + others.
* If exp(beta_age) = 0.944, an additional year of age '''reduces the hazard by a factor''' of .944 on average, or (1-.944)*100 = 5.6 '''percent'''.
* If exp(beta_prio) = 1.096, each prior conviction '''increases the hazard by a factor''' of 1.096, or 9.6 '''percent'''.

Interpretation of Hazard Ratio for Progression-Free Survival
* Assuming females are the reference group
* If exp(beta_sex) = 1.5, it suggests that males have a 50% higher '''risk''' of '''disease progression or death''' (whichever comes first) at any given time compared to females. In other words, males are 1.5 times more likely to experience disease progression or death compared to females, assuming all other variables in the model are held constant.
* If HR = 0.7, it suggests that males have a 30% lower '''risk''' of '''disease progression or death''' at any given time compared to females.

Interpretation of Hazard Ratio for Overall Survival
* Assuming females are the reference group
* If HR = 1.5, it suggests that males have a 50% higher '''risk''' of '''death''' at any given time compared to females.
* If HR = 0.7, it suggests that males have a 30% lower '''risk''' of '''death''' at any given time compared to females.

[https://www.quora.com/How-do-you-explain-the-difference-between-hazard-ratio-and-relative-risk-to-a-layman How do you explain the difference between hazard ratio and '''relative risk''' to a layman?] from Quora.

See [http://a-little-book-of-r-for-biomedical-statistics.readthedocs.io/en/latest/src/biomedicalstats.html Using R for Biomedical Statistics] for relative risk, odds ratio, et al.

[https://www.stat-d.si/mz/mz13.1/p4.pdf Odds Ratio, Hazard Ratio and Relative Risk] by Janez Stare

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by <math>e^\beta</math>, where <math>\beta</math> is the estimate of treatment effect derived from the regression model. See [https://en.wikipedia.org/wiki/Hazard_ratio#Definition_and_derivation here].

[http://stats.stackexchange.com/questions/26408/what-is-the-difference-between-a-hazard-ratio-and-the-ecoef-of-a-cox-equation?rq=1 Compute hazard ratios from coxph()] in R (Hint: exp(beta)).

'''Prognostic index''' is defined on http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=2.

[http://www.sthda.com/english/wiki/cox-proportional-hazards-model#basics-of-the-cox-proportional-hazards-model Basics of the Cox proportional hazards model]. Good prognostic factor (b<0 or HR<1) and bad prognostic factor (b>0 or HR>1).

Variable selection: variables were retained in the prediction models if they had a hazard ratio of <0.85 or >1.15 (for binary variables) and were statistically significant at the 0.01 level. See [http://www.bmj.com/content/357/bmj.j2497 Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study].

<pre>
library(KMsurv)
# No ties. Section 8.2
data(btrial)
coxph(Surv(time, death) ~ im, data = btrial)
summary(coxph(Surv(time, death) ~ im, data = btrial))$conf.int
#    exp(coef) exp(-coef) lower .95 upper .95
# im  2.664988  0.3752362  1.136362  6.249912
</pre>
So the hazard ratio and its 95% CI can be obtained from the 1st, 3rd and 4th elements of ''conf.int''.
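To echo the "1 minus the hazard ratio" risk-reduction wording above with actual numbers (a sketch using the aml data; here the HR happens to be above 1, so 1 - HR is negative, i.e. an increased hazard):
<pre>
library(survival)
fit <- coxph(Surv(time, status) ~ x, data = aml)
hr <- summary(fit)$conf.int[1, "exp(coef)"]                  # hazard ratio
ci <- summary(fit)$conf.int[1, c("lower .95", "upper .95")]  # 95% CI
hr; ci
(1 - hr) * 100   # "risk reduction" in percent; negative here means a higher hazard
</pre>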
 
=== Hazard Ratio, confidence interval, Table 1 ===
<ul>
<li>Google image: survival data cox model hazard ratio table 1
<li>To get the 95% CI, use the summary() function
<pre>
> mod = coxph(Surv(time, status) ~ x, data = aml)
> summary(mod)
  n= 23, number of events= 18

                 coef exp(coef) se(coef)     z Pr(>|z|)
xNonmaintained 0.9155    2.4981   0.5119 1.788   0.0737 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

               exp(coef) exp(-coef) lower .95 upper .95
xNonmaintained     2.498     0.4003    0.9159     6.813

Concordance= 0.619  (se = 0.063 )
Likelihood ratio test= 3.38  on 1 df,   p=0.07
Wald test            = 3.2   on 1 df,   p=0.07
Score (logrank) test = 3.42  on 1 df,   p=0.06
</pre>
Naive method (wrong) to calculate the hazard ratio:
<pre>
> with(aml, table(x, status))
               status
x               0  1
  Maintained    4  7
  Nonmaintained 1 11
> (11/12) / (7/11)  # hazard in the 2nd group / hazard in the 1st group
[1] 1.440476
</pre>
</li>
<li>To report the HR in table 1 for multiple variables, one must use univariate Cox regression; for example [http://sthda.com/english/wiki/cox-proportional-hazards-model this one] uses lapply().
<li>finalfit package. [https://cran.r-project.org/web/packages/finalfit/vignettes/survival.html Time-to-event (Survival)] vignette.
<pre>
library(finalfit) # finalfit()
library(survival)
library(dplyr)    # mutate(), %>%
library(forcats)  # fct_recode()

melanoma = boot::melanoma # F1 here for help page with data dictionary

melanoma = melanoma %>%
  mutate(
    # Overall survival
    status_os = ifelse(status == 2, 0, # "still alive"
            1), # "died of melanoma" or "died of other causes"
    sex = factor(sex) %>%
        fct_recode("Male" = "1",
                   "Female" = "0"),
    ulcer = factor(ulcer) %>%
        fct_recode("No" = "0",
                   "Yes" = "1")
  )

dependent_os = "Surv(time, status_os)"
explanatory = c("age", "sex", "thickness", "ulcer")

mykable = function(x){
    knitr::kable(x, row.names = FALSE, align = c("l", "l", "r", "r", "r", "r", "r", "r", "r"))
}

univariate_results <- melanoma %>%
    finalfit(dependent_os, explanatory)
univariate_results2 <- univariate_results[, -5] # exclude multivariate column

# Output to CSV
write.csv(univariate_results, file = "univariate_results.csv", row.names = FALSE)

# Install and load required packages
library(flextable)
library(officer)

# Convert to flextable
ft <- flextable::flextable(univariate_results2)

# Adjust the table style (optional)
ft <- ft %>%
  flextable::theme_booktabs() %>%
  flextable::autofit()

# Save as Word document
doc <- read_docx()
doc <- body_add_flextable(doc, value = ft)
print(doc, target = "univariate_results.docx")

# Hazard ratio plot
melanoma %>%
    hr_plot(dependent_os, explanatory)
</pre>
</li>
</ul>

=== Multivariate model ===
<ul>
<li>Variable order does not change the hazard ratios or the p-values
<pre>
R> data(cancer, package = 'survival') # load colon among others
R> colon$sex <- factor(colon$sex)
R> tmp2 <- coxph(Surv(time, status) ~ rx + sex + age + obstruct +
                perfor + nodes + differ + extent, data=colon)
R> tmp2
Call:
coxph(formula = Surv(time, status) ~ rx + sex + age + obstruct +
    perfor + nodes + differ + extent, data = colon)

               coef exp(coef) se(coef)      z        p
rxLev     -0.072841  0.929749 0.079231 -0.919   0.3579
rxLev+5FU -0.450133  0.637543 0.085975 -5.236 1.64e-07
sex1      -0.090141  0.913803 0.068075 -1.324   0.1855
age        0.002164  1.002166 0.002874  0.753   0.4516
obstruct   0.202638  1.224629 0.084372  2.402   0.0163
perfor     0.149875  1.161689 0.182766  0.820   0.4122
nodes      0.081185  1.084571 0.006698 12.120  < 2e-16
differ     0.146674  1.157977 0.070095  2.093   0.0364
extent     0.467536  1.596057 0.081726  5.721 1.06e-08

Likelihood ratio test=212.6  on 9 df, p=< 2.2e-16
n= 1776, number of events= 876
   (82 observations deleted due to missingness)

# Move 'nodes' to the last term
R> tmp3 <- coxph(Surv(time, status) ~ rx + sex + age + obstruct +
                perfor + differ + extent + nodes, data=colon)
R> tmp3
Call:
coxph(formula = Surv(time, status) ~ rx + sex + age + obstruct +
    perfor + differ + extent + nodes, data = colon)

               coef exp(coef) se(coef)      z        p
rxLev     -0.072841  0.929749 0.079231 -0.919   0.3579
rxLev+5FU -0.450133  0.637543 0.085975 -5.236 1.64e-07
sex1      -0.090141  0.913803 0.068075 -1.324   0.1855
age        0.002164  1.002166 0.002874  0.753   0.4516
obstruct   0.202638  1.224629 0.084372  2.402   0.0163
perfor     0.149875  1.161689 0.182766  0.820   0.4122
differ     0.146674  1.157977 0.070095  2.093   0.0364
extent     0.467536  1.596057 0.081726  5.721 1.06e-08
nodes      0.081185  1.084571 0.006698 12.120  < 2e-16

Likelihood ratio test=212.6  on 9 df, p=< 2.2e-16
n= 1776, number of events= 876
   (82 observations deleted due to missingness)
</pre>
<li>Univariate and multivariate models give different results
<pre>
R> coxph(Surv(time, status) ~ perfor, data = colon)
Call:
coxph(formula = Surv(time, status) ~ perfor, data = colon)

         coef exp(coef) se(coef)     z     p
perfor 0.2644    1.3026   0.1800 1.469 0.142

Likelihood ratio test=1.99  on 1 df, p=0.1583
n= 1858, number of events= 920
R> coxph(Surv(time, status) ~ age + perfor, data = colon)
Call:
coxph(formula = Surv(time, status) ~ age + perfor, data = colon)

            coef exp(coef) se(coef)      z     p
age    -0.002325  0.997678 0.002797 -0.831 0.406
perfor  0.259370  1.296113 0.180067  1.440 0.150

Likelihood ratio test=2.68  on 2 df, p=0.2621
n= 1858, number of events= 920
</pre>
</ul>

=== Restricted mean survival time ===
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-13-152 Restricted mean survival time: an alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome] Royston 2013
* [https://onbiostatistics.blogspot.com/2019/04/the-use-of-restricted-mean-survival.html The Use of Restricted Mean Survival Time (RMST) Method When Proportional Hazards Assumption is in Doubt]
** To estimate the treatment effect for time to event, the Hazard Ratio (HR) is commonly used.
** HR is often assumed to be constant over time (i.e., the proportional hazards assumption).
** Recently, there has been some doubt about this assumption.
** If the PH assumption does not hold, the interpretation of HR can be difficult.
* RMST is defined as the area under the survival curve up to t* ('''truncated time''' or '''horizon'''), which should be pre-specified for a randomized trial. Uno 2014
<ul>
<li>[https://rdrr.io/cran/survival/man/print.survfit.html survival::print.survfit()]. [https://stackoverflow.com/a/43173569 How to compute the mean survival time].
<pre>
print(km, print.rmean=TRUE) # assume the longest survival time is the horizon
print(km, print.rmean=TRUE, rmean=250)
</pre>
</li>
<li>[https://cran.r-project.org/web/packages/survRM2/vignettes/survRM2-vignette3-2.html survRM2] package </li>
<li>[https://cran.r-project.org/web/packages/PWEALL/index.html PWEALL::rmsth()]
<pre>
R> library(survRM2)
R> D = rmst2.sample.data()
R> nrow(D)
[1] 312
R> head(D[,1:3])
       time status arm
1  1.095140      1   1
2 12.320329      0   1
3  2.770705      1   1
4  5.270363      1   1
5  4.117728      0   0
6  6.852841      1   0
R> time   = D$time
R> status = D$status
R> arm    = D$arm
R> rmst2(time, status, arm, tau=10)

The truncation time: tau = 10  was specified.

Restricted Mean Survival Time (RMST) by arm
              Est.    se lower .95 upper .95
RMST (arm=1) 7.146 0.283     6.592     7.701
RMST (arm=0) 7.283 0.295     6.704     7.863

Restricted Mean Time Lost (RMTL) by arm
              Est.    se lower .95 upper .95
RMTL (arm=1) 2.854 0.283     2.299     3.408
RMTL (arm=0) 2.717 0.295     2.137     3.296

Between-group contrast
                       Est. lower .95 upper .95     p
RMST (arm=1)-(arm=0) -0.137    -0.939     0.665 0.738
RMST (arm=1)/(arm=0)  0.981     0.878     1.096 0.738
RMTL (arm=1)/(arm=0)  1.050     0.787     1.402 0.738

R> library(PWEALL)
R> PWEALL::rmsth(time, status, tcut=10)
$tcut
[1] 10
$rmst
[1] 7.208579
$var
[1] 13.00232
$vadd
[1] 3.915123

R> PWEALL::rmsth(time[arm == 0], status[arm == 0], tcut=10)
$tcut
[1] 10
$rmst
[1] 7.283416
$var
[1] 13.30564
$vadd
[1] 3.73545

R> PWEALL::rmsth(time[arm == 1], status[arm == 1], tcut=10)
$tcut
[1] 10
$rmst
[1] 7.146493
$var
[1] 12.49073
$vadd
[1] 3.967705
</pre>
</li>
<li>[https://cran.r-project.org/web/packages/surv2sampleComp/index.html surv2sampleComp], [https://r-statistics-fan.hatenablog.com/entry/2014/08/04/225135 Area under the survival curve: RMST (Restricted mean survival time)] </li>
<li>[https://onlinelibrary.wiley.com/doi/abs/10.1002/bimj.202200002 Clustered restricted mean survival time regression] Chen, 2022 </li>
</ul>


=== Hazard Ratio and death probability ===
https://en.wikipedia.org/wiki/Hazard_ratio#The_hazard_ratio_and_survival

Suppose ''S''<sub>0</sub>(t)=.2 (20% survived at time t) and the hazard ratio (hr) is 2 (a group has twice the chance of dying than a comparison group), then (Cox model is assumed)
# ''S''<sub>1</sub>(t)=''S''<sub>0</sub>(t)<sup>hr</sup> = .2<sup>2</sup> = .04 (4% survived at t)
# The corresponding death probabilities are 0.8 and 0.96.
# If a subject is exposed to twice the risk of a reference subject at every age, then the probability that the subject will be alive at any given age is the square of the probability that the reference subject (covariates = 0) would be alive at the same age. See [http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=10 p10 of these lecture notes].
# exp(x*beta) is the relative risk associated with covariate value x.
 
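A one-line check of the arithmetic above (plain R, no packages needed):
<pre>
S0 <- 0.2; hr <- 2
S1 <- S0^hr                          # survival in the higher-risk group: 0.04
c(death0 = 1 - S0, death1 = 1 - S1)  # death probabilities 0.80 and 0.96
</pre>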
=== Hazard Ratio Forest Plot ===
The forest plot quickly summarizes the hazard ratio data across multiple variables. If the line crosses the 1.0 value, the hazard ratio is not significant and there is no clear advantage for either arm.

See also [[Ggplot2#Dot_plot_.26_forest_plot|ggplot2 forest plot]]. survminer::ggforest(), survivalAnalysis::forest_plot() and [https://cran.r-project.org/web/packages/forestmodel/readme/README.html forestmodel::forest_model()].


== Estimate baseline hazard <math>h_0(t)</math>, Breslow cumulative baseline hazard <math>H_0(t)</math>, baseline survival function <math>S_0(t)</math> and the survival function <math>S(t)</math> ==
<pre>
Google: how to estimate baseline hazard rate
library(survival)
* survfit.object has print(), plot(), lines(), and points() methods. It returns a list with components
library(survivalAnalysis)
** n
library(survminer)
** time
data(cancer, package = 'survival') # load colon among others
** n.risk
colon$sex <- factor(colon$sex)
** n.event
** n.censor
** surv [S_0(t)]
** cumhaz [ same as -log(surv)]
** upper
** lower
** n.all
* Terry Therneau: [http://r.789695.n4.nabble.com/Is-the-output-of-survfit-coxph-survival-or-baseline-survival-td3861919.html The ''baseline survival'', which is the survival for a hypothetical subject with all covariates=0, may be useful mathematical shorthand when writing a book but I cannot think of a single case where the resulting curve would be of any practical interest in medical data.]
* http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=4 '''Breslow''' Estimator for '''cumulative''' baseline hazard at a time t and '''Kalbfleisch/Prentice''' Estimator
* When there are no covariates, the Breslow’s estimate reduces to the Fleming-Harrington (Nelson-Aalen) estimate, and K/P reduces to KM.
* [http://stats.stackexchange.com/questions/68737/how-to-estimate-baseline-hazard-function-in-cox-model-with-r stackexchange.com] and [https://stats.stackexchange.com/questions/36015/prediction-in-cox-regression/36077#36077 '''cumulative''' and non-cumulative baseline hazard]
* [http://grokbase.com/t/r/r-help/012p93znnh/r-newbie-cox-baseline-hazard (newbie) Cox Baseline Hazard] ''There are two methods of calculating the baseline survival, the default one gives the baseline hazard estimator you want. It is attributed to Aalen, Breslow, or Peto (see the next item).'' An example: https://stats.idre.ucla.edu/r/examples/asa/r-applied-survival-analysis-ch-2/.
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/survfit.coxph survfit.coxph](formula, newdata, type, ...)
** newdata: '''Default is the mean of the covariates used in the coxph fit'''
** type = "aalen", "efron", or "kalbfleisch-prentice". The default is to match the computation used in the Cox model. The Nelson-Aalen-Breslow estimate for ties='breslow', the Efron estimate for ties='efron' and the Kalbfleisch-Prentice estimate for a discrete time model ties='exact'. Variance estimates are the Aalen-Link-Tsiatis, Efron, and Greenwood. The default will be the Efron estimate for ties='efron' and the '''Aalen estimate''' otherwise.
<ul>
<li>[http://grokbase.com/t/r/r-help/04a5ydyst0/r-nelson-aalen-estimator-in-r Nelson-Aalen estimator in R]. The easiest way to get the Nelson-Aalen estimator is
{{Pre}}
basehaz(coxph(Surv(time,status)~1,data=aml))
</pre>
because the (Breslow) hazard estimator for a Cox model reduces to the Nelson-Aalen estimator when there are no covariates. You can also compute it from information returned by survfit().
{{Pre}}
fit <- survfit(Surv(time, status) ~ 1, data = aml)
cumsum(fit$n.event/fit$n.risk) # the Nelson-Aalen estimator for the times given by fit$times
-log(fit$surv)  # cumulative hazard
</pre>
</li>
</ul>


=== Manually compute ===
'''Breslow estimator of the baseline cumulative hazard rate''' reduces to the '''Nelson-Aalen''' estimator <math>\sum_{t_i \le t} \frac{d_i}{Y_i}</math> (<math>Y_i</math> is the number at risk at time <math>t_i</math>) when there are no covariates present; see p283 of Klein 2003.
: <math>
\begin{align}
\hat{H}_0(t) &= \sum_{t_i \le t} \frac{d_i}{W(t_i;b)}, \\
W(t_i;b) &= \sum_{j \in R(t_i)} \exp(b' z_j)
\end{align}
</math>
where <math> t_1 < t_2 < \cdots < t_D</math> denotes the distinct death times and <math>d_i</math> is the number of deaths at time <math>t_i</math>. The estimator of the baseline survival function <math>S_0(t) = \exp [-H_0(t)]</math> is given by <math>\hat{S}_0(t) = \exp [-\hat{H}_0(t)]</math>.


<ul>
<li>Below we use the formula to compute the cumulative hazard (and survival function) and compare them with the result obtained using R's built-in functions. The following code is a modification of the snippet from the post [https://stats.stackexchange.com/questions/46532/cox-baseline-hazard Breslow cumulative hazard and basehaz()].
{{Pre}}
bhaz <- function(beta, time, status, x) {
  # time can be duplicated
  # x (covariate) must be continuous
  data <- data.frame(time,status,x)
  data <- data[order(data$time), ]
  dt  <- unique(data$time)
  k    <- length(dt)
  risk <- exp(data.matrix(data[,-c(1:2)]) %*% beta)
  h    <- rep(0,k)
 
  for(i in 1:k) {
    h[i] <- sum(data$status[data$time==dt[i]]) / sum(risk[data$time>=dt[i]])         
  }
 
  return(data.frame(h, dt))
}


# Example 1 'ovarian' which has unique survival time
all(table(ovarian$futime) == 1) # TRUE

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
# 1. compute the cumulative baseline hazard
# 1.1 manually using Breslow estimator at the observed time
h0 <- bhaz(fit$coef, ovarian$futime, ovarian$fustat, ovarian$age)
H0 <- cumsum(h0$h)
# 1.2 use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
cbind(H0, h0$dt, H0.bh)
range(abs(H0 - H0.bh$hazard)) # [1] 6.352747e-22 5.421011e-20


# 2. compute the estimation of the survival function
# 2.1 manually using Breslow estimator at t = observed time (one dim, sorted)
#    and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
dim(S1) # row = times, col = age
# 2.2 How about considering times not at observed (e.g. h0$dt - 10)?
# Let's look at the hazard rate
newtime <- h0$dt - 10
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0), exp(coef(fit) * ovarian$age), "^")
dim(S2) # row = newtime, col = age


# 2.3 use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 2.117244e-17 6.544321e-12
# 2.4 How about considering times not at observed (e.g. h0$dt - 10)?
# Note that we cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000e+00 6.544321e-12
</pre>


{{Pre}}
# Example 2 'kidney' which has duplicated time
fit <- coxph(Surv(time, status) ~ age, data = kidney)
# manually compute the breslow cumulative baseline hazard
#  at the observed time
h0 <- with(kidney, bhaz(fit$coef, time, status, age))
H0 <- cumsum(h0$h)
# use basehaz (always compute at the observed time)
# since we consider the baseline, we need to use centered=FALSE
H0.bh <- basehaz(fit, centered = FALSE)
head(cbind(H0, h0$dt, H0.bh))
range(abs(H0 - H0.bh$hazard)) # [1] 0.000000000 0.005775414


# manually compute the estimation of the survival function
# at t = observed time (one dim, sorted) and observed age (another dim):
# S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
S1 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
dim(S1) # row = times, col = age
# How about considering times not at observed (h0$dt - 1)?
# Let's look at the hazard rate
newtime <- h0$dt - 1
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
S2 <- outer(exp(-H0), exp(coef(fit) * kidney$age), "^")
dim(S2) # row = newtime, col = age
 
# use summary() and survfit() to obtain the estimation of the survival
S3 <- summary(survfit(fit, data.frame(age = kidney$age)), times = h0$dt)$surv
dim(S3)  # row = times, col = age
range(abs(S1 - S3)) # [1] 0.000000000 0.002783715
# How about considering times not at observed (h0$dt - 1)?
# We cannot put times larger than the observed
S4 <- summary(survfit(fit, data.frame(age = kidney$age)), times = newtime)$surv
range(abs(S2 - S4)) # [1] 0.000000000 0.002783715
</pre>
 
<li>[https://stat.ethz.ch/R-manual/R-devel/library/survival/html/basehaz.html basehaz()] (an alias for survfit) from [http://stats.stackexchange.com/questions/25317/how-to-calculate-predicted-hazard-rates-from-a-cox-ph-model stackexchange.com] and [http://r.789695.n4.nabble.com/breslow-estimator-for-cumulative-hazard-function-td795277.html here]. basehaz() has a parameter ''centered'' which by default is TRUE. Actually basehaz() gives '''cumulative hazard H(t)'''. See [http://r.789695.n4.nabble.com/Baseline-survival-estimate-td965389.html here]. That is, exp(-basehaz(fit)$hazard) is the same as summary(survfit(fit))$surv. basehaz() function is not useful.
{{Pre}}
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)  
> fit
Call:
coxph(formula = Surv(futime, fustat) ~ age, data = ovarian)

      coef exp(coef) se(coef)    z      p
age 0.1616    1.1754   0.0497 3.25 0.0012

Likelihood ratio test=14.3  on 1 df, p=0.000156
n= 26, number of events= 12


# Note the default 'centered = TRUE' for basehaz()
> exp(-basehaz(fit)$hazard) # exp(-cumulative hazard)
[1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
[8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
> dim(ovarian)
[1] 26  6
> exp(-basehaz(fit)$hazard)[ovarian$fustat == 1]
[1] 0.9880206 0.9738738 0.9545899 0.8973620 0.8243117 0.8243117 0.7750981
[8] 0.7750981 0.5204807 0.5204807 0.5204807 0.5204807
> summary(survfit(fit))$surv
[1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
[8] 0.7750981 0.7244924 0.6734146 0.5962187 0.5204807
> summary(survfit(fit, data.frame(age=mean(ovarian$age))),  
          time=ovarian$futime[ovarian$fustat == 1])$surv
# Same result as above
> summary(survfit(fit, data.frame(age=mean(ovarian$age))),
                    time=ovarian$futime)$surv
[1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
[8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
[15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
[22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
</pre>
<li>[https://stats.stackexchange.com/a/288419 Calculating survival probability per person at time (t) from Cox PH]; a short sketch is given after this list.
</ul>
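A small sketch of the per-person idea from the link above (the '''ovarian''' data and t = 365 are arbitrary choices): the predicted probability of surviving past t for each subject is the subject-specific survival curve evaluated at t.
<pre>
library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# one predicted curve per row of 'newdata'; evaluate them at t = 365
sf <- survfit(fit, newdata = ovarian)
p365 <- summary(sf, times = 365)$surv   # rows = times, cols = subjects
as.numeric(p365)

# manual check: S(t|x) = S(t|xbar)^exp(beta*(x - xbar))
H <- basehaz(fit, centered = TRUE)      # cumulative hazard at the mean covariate
H365 <- H$hazard[max(which(H$time <= 365))]
exp(-H365) ^ exp(coef(fit) * (ovarian$age - fit$means))
</pre>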


== Predicted survival probability in Cox model: survfit.coxph(), plot.survfit() & summary.survfit( , times) ==
For theory, see section 8.6 Estimation of the survival function in Klein & Moeschberger. See the formula in [https://stats.stackexchange.com/a/36077 Prediction in Cox regression].


For R, see [https://stackoverflow.com/questions/26641178/extract-survival-probabilities-in-survfit-by-groups Extract survival probabilities in Survfit by groups]


[https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/plot.survfit plot.survfit()]: fun="log" plots the log survival curve, fun="event" plots cumulative events, and fun="cumhaz" plots the cumulative hazard (f(y) = -log(y)).

The plot function below will draw 4 curves: <math>S_0(t)^{\exp(\hat{\beta}_{age}*60)}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageII})}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIII})}</math>, <math>S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIV})}</math>.
{{Pre}}
library(KMsurv) # Data package for Klein & Moeschberger
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)

# Figure 8.3 from Section 8.6
plot(survfit(coxobj, newdata = data.frame(age=rep(60, 4), stage=factor(1:4))), lty = 1:4)

# Estimated probability for a 60-year old for different stage patients
out <- summary(survfit(coxobj, data.frame(age = rep(60, 4), stage=factor(1:4))), times = 5)
out$surv
#  time n.risk n.event survival1 survival2 survival3 survival4
#    5    34      40    0.702    0.665      0.51    0.142
sum(larynx$time >=5) # n.risk
# [1] 34
sum(larynx$delta[larynx$time <=5]) # n.event
# [1] 40
sum(larynx$time >5) # Wrong
# [1] 31
sum(larynx$delta[larynx$time <5]) # Wrong
# [1] 39
 
# 95% confidence interval
out$lower
# 0.5707952 0.4864903 0.3539527 0.03691768
out$upper
# 0.8629482 0.9102532 0.7352413 0.548579
</pre>


We need to pay attention when the number of covariates is large (and we don't want to spell out each covariate in the formula). The key is to create a data frame and use dot (.) in the formula. This fixes the warning message '' 'newdata' had XXX rows but variables found have YYY rows'' from running '''survfit(, newdata)'''.
Another way is to use [https://stackoverflow.com/questions/25313897/r-survival-analysis-coxph-call-multiple-column as.formula()] if we don't want to create a new object.
{{Pre}}
xsub <- data.frame(xtrain)
colnames(xsub) <- paste0("x", 1:ncol(xsub))

coxobj <- coxph(Surv(ytrain[, "time"], ytrain[, "status"]) ~ ., data = xsub)


newdata <- data.frame(xtest)
colnames(newdata) <- paste0("x", 1:ncol(newdata))

survprob <- summary(survfit(coxobj, newdata=newdata),
                    times = 5)$surv[1, ]
# since there is only 1 time point, we select the first row in surv (surv is a matrix with one row).
</pre>

The [https://www.rdocumentation.org/packages/pec/versions/2018.07.26/topics/predictSurvProb predictSurvProb()] function from the [https://www.rdocumentation.org/packages/pec/versions/2018.07.26 pec] package can also be used to extract survival probability predictions from various modeling approaches.
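A minimal sketch of pec::predictSurvProb() with a coxph model (the covariates and time points are arbitrary; x = TRUE is used so that pec can recover the design matrix):
<pre>
library(survival)
library(pec)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE)
predictSurvProb(fit, newdata = lung[1:5, ], times = c(180, 365))
# 5 x 2 matrix: one row per subject in 'newdata', one column per requested time
</pre>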
=== Visualizing the estimated distribution of survival times ===
survminer::ggsurvplot(); see [http://www.sthda.com/english/wiki/cox-proportional-hazards-model#visualizing-the-estimated-distribution-of-survival-times here].

=== Predicted survival probabilities from glmnet: c060/peperr, biospear packages and manual computation ===
* Terry Therneau: [http://r.789695.n4.nabble.com/Predict-in-glmnet-for-cox-family-td4706070.html The answer is that you cannot predict survival time, in general]
* https://rdrr.io/cran/c060/man/predictProb.glmnet.html
{{Pre}}
## S3 method for class 'glmnet'
predictProb(object, response, x, times, complexity, ...)


set.seed(1234)
junk <- biospear::simdata(n=500, p=500, q.main = 10, q.inter = 0,
                  prob.tt = .5, m0=1, alpha.tt=0,
                  beta.main= -.5, b.corr = .7, b.corr.by=25,
                  wei.shape = 1, recr=3, fu=2, timefactor=1)
summary(junk$time)
library(glmnet)
library(c060) # Error: object 'predictProb' not found
library(peperr)

y <- cbind(time=junk$time, status=junk$status)
x <- cbind(1, junk[, "treat", drop = FALSE])
names(x) <- c("inter", "treat")
x <- as.matrix(x)
cvfit <- cv.glmnet(x, y, family = "cox")
obj <- glmnet(x, y, family = "cox")
xnew <- matrix(c(0,0), nr=1)
predictProb(obj, y, xnew, times=1, complexity = cvfit$lambda.min)
# Error in exp(lp[response[, 1] >= t.unique[i]]) :
#  non-numeric argument to mathematical function
# In addition: Warning message:
# In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
</pre>
* https://www.rdocumentation.org/packages/biospear/versions/1.0.1/topics/expSurv and manual computation (search bhaz)
<pre>
expSurv(res, traindata, method, ci.level = .95, boot = FALSE, nboot, smooth = TRUE,
  pct.group = 4, time, trace = TRUE, ncores = 1)
# S3 method for resexpSurv
predict(object, newdata, ...)
</pre>
{{Pre}}
# continue the example
# BMsel() takes a little while
resBM <- biospear::BMsel(
    data = junk,
    method = "lasso",
    inter = FALSE,
    folds = 5)

# Note: if we specify time = 5 in expSurv(), we will get an error message
# 'time' is out of the range of the observed survival time.
# Note: if we try to specify more than 1 time point, we will get the following msg
# 'time' must be an unique value; no two values are allowed.
esurv <- biospear::expSurv(
    res = resBM,
    traindata = junk,
    boot = FALSE,
    time = median(junk$time),
    trace = TRUE)
# debug(biospear:::plot.resexpSurv)
plot(esurv, method = "lasso")
# This is equivalent to doing the following
xx <- attributes(esurv)$m.score[, "lasso"]
wc <- order(xx); wgr <- 1:nrow(esurv$surv)
p1 <- plot(x = xx[wc], y = esurv$surv[wgr, "lasso"][wc],
          xlab='prognostic score', ylab='survival prob')
# prognostic score beta*x in this cases.
# ignore treatment effect and interactions
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# See page4 of the paper. Scaled scores were used in the plot
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) -  quantile(xxmy, .025))))
# [1] 1.500431e-09 1.465241e-06


h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
newtime <- median(junk$time)
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
# newx <- junk[ , names(resBM$lasso)]
# Compute the estimate of the survival probability at training x and time = median(junk$time)
# using Breslow method
S2 <- outer(exp(-H0),  exp(xxmy), "^") # row = newtime, col = newx
range(abs(esurv$surv[wgr, "lasso"] - S2))
# [1] 6.455479e-18 2.459136e-06
# My implementation of the prognostic plot
#  Note that the x-axis on the plot is based on prognostic scores beta*x,
#  not on treatment modifying scores gamma*x as described in the paper.
#  Maybe it is because inter = FALSE in BMsel() we have used.
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob') # cf p1


> names(esurv)
[1] "surv"  "lower" "upper"
> str(esurv$surv)
num [1:500, 1:2] 0.772 0.886 0.961 0.731 0.749 ...
- attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "lasso" "oracle"


esurv2 <- predict(esurv, newdata = junk)
esurv2$surv       # All zeros?
</pre>
A possible bug with the sample data (interaction was considered here; inter = TRUE)?
 
{{Pre}}
set.seed(123456)
resBM <- BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))


esurv <- expSurv(
  res = resBM,
  traindata = Breast,
  boot = FALSE,
  smooth = TRUE,
  time = 4,
  trace = TRUE
)
Computation of the expected survival
Computation of analytical confidence intervals
Computation of smoothed B-splines
Error in cobs(x = x, y = y, print.mesg = F, print.warn = F, method = "uniform",  :
  There is at least one pair of adjacent knots that contains no observation.
</pre>


== Plot predictor vs HR ==
* https://github.com/tjbencomo/survival-talk-pntlab/blob/master/survival_talk.pdf
* https://www.imsbio.co.jp/RGM/R_rdfile?f=Greg/man/plotHR.Rd&d=R_CC which uses the [https://rdrr.io/cran/Greg/man/plotHR.html Greg::plotHR()] function.
* ggplot(Predict(modA_cph, age)) [https://thomaselove.github.io/432-notes/cox-regression-models-for-survival-data-example-2.html log relative hazard vs predictor]


== Loglikelihood ==
* fit$loglik is a vector of length 2 (initial model, fitted model). So deviance can be calculated by '''-2*fit$loglik[2]'''; see [https://github.com/nyiuab/BhGLM/blob/master/R/bcoxph.r#L402 here] for an example from the BhGLM package. A short sketch follows this list.
* Use '''survival::anova()''' command to do a likelihood ratio test. Note this function does not work on ''glmnet'' object.
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/residuals.coxph residuals.coxph] Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
* No deviance() on coxph object!
* [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/logLik.coxph.html logLik()] function will return fit$loglik[2]
* [http://www.erikdrysdale.com/cox_partiallikelihood/ Gradient descent for the elastic net Cox-PH model]
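A small sketch tying these together (the '''lung''' data and covariates are arbitrary choices):
<pre>
library(survival)
fit  <- coxph(Surv(time, status) ~ age + sex, data = lung)
fit0 <- coxph(Surv(time, status) ~ age, data = lung)

fit$loglik           # c(null model, fitted model) partial log-likelihoods
logLik(fit)          # equals fit$loglik[2]
-2 * fit$loglik[2]   # the deviance-style quantity mentioned above

anova(fit0, fit)     # likelihood ratio test for 'sex'
2 * (fit$loglik[2] - fit0$loglik[2])   # the same Chisq statistic by hand
</pre>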


=== glmnet ===
* It seems AIC does not require the assumption of nested models.
* https://en.wikipedia.org/wiki/Akaike_information_criterion, ([https://forvo.com/word/akaike/ akaike pronunciation in Japanese])
:<math>
\begin{align}
\mathrm{AIC} &= 2k - 2\ln(\hat L) \\
\mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1}
\end{align}
</math>
* [https://stats.stackexchange.com/questions/25817/is-it-possible-to-calculate-aic-and-bic-for-lasso-regression-models Is it possible to calculate AIC and BIC for lasso regression models?]. See the references about the degrees of freedom in Lasso regressions.
{{Pre}}
fit <- glmnet(x, y, family = "multinomial")


tLL <- fit$nulldev - deviance(fit) # 2*(loglik - null loglik)
k <- fit$df
n <- fit$nobs
AICc <- -tLL+2*k+2*k*(k+1)/(n-k-1)
AICc
</pre>
* https://github.com/nyiuab/BhGLM/blob/master/R/bmlasso.r#L143
<pre>
f <- glmnet(x = x, y = y, family = family)
f$aic <- deviance(f) + 2 * f$df
</pre>
* For ''glmnet'' object, see [https://rdrr.io/cran/glmnet/man/deviance.glmnet.html ?deviance.glmnet], ?coxnet.deviance and [https://stackoverflow.com/questions/40920051/r-getting-aic-bic-likelihood-from-glmnet R: Getting AIC/BIC/Likelihood from GLMNet]. An example with all continuous variables:
{{Pre}}
set.seed(10101)
N=1000;p=6
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(ty, 1 - tcens)
# Model 1: ~ x
# Model 2: ~ 1
# loglik  Chisq Df P(>|Chi|) 
# 1 -4095.2                     
# 2 -4102.7 15.151  6  0.01911 *


fit2 <- glmnet(x,y,family="cox", lambda=0) # ridge regression
deviance(fit2)                             # 2*(loglike_sat - loglike)
# [1] 8190.313
coxnet.deviance(x=x, y=y, beta=coef(fit2)) # 2*(loglike_sat - loglike)
# [1] 8190.313
# https://github.com/cran/glmnet/blob/master/R/coxnet.deviance.R#L79

assess.glmnet(fit2, x=x, y=y)     # returns deviance and c-index
fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # Log-Likelihood ratio statistic
# [1] 15.15097
1-pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446
</pre>
Here is another example including a factor covariate:
 
{{Pre}}
library(KMsurv) # Data package for Klein & Moeschberger
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
coef(coxobj)
#       age    stage2    stage3    stage4
# 0.0190311 0.1400402 0.6423817 1.7059796
coxobj_small <- coxph(Surv(time, delta)~age, data = larynx)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(time, delta)
# Model 1: ~ age + stage
# Model 2: ~ age
# loglik  Chisq Df P(>|Chi|)  
# 1 -187.71                     
# 2 -195.55 15.681  3  0.001318 **
#  ---
#  Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


# Now let's look at the glmnet() function.
# It seems glmnet does not recognize factor covariates.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
coxobj2$nulldev - deviance(coxobj2)  # Log-Likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2)
# [1] 13.08457
1-pchisq(deviance(coxobj1) - deviance(coxobj2) , coxobj2$df-coxobj1$df)
# [1] 0.0002977376
</pre>


== High dimensional data ==
https://cran.r-project.org/web/views/Survival.html


== glmnet + Cox models ==
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-017-0354-0 Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials] by Nils Ternès, Federico Rotolo and Stefan Michiels, BMC Medical Research Methodology, 2017 (open review available). The corresponding software '''biospear''' on [https://cran.microsoft.com/web/packages/biospear/index.html cran] and [https://www.rdocumentation.org/packages/biospear/versions/1.0.1 rdocumentation.org].
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03618-y Accounting for grouped predictor variables or pathways in high-dimensional penalized Cox regression models] Belhechmi 2020
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03791-0 Cancer prognosis prediction using somatic point mutation and copy number variation data: a comparison of gene-level and pathway-based models] Zheng 2020
* [http://r.789695.n4.nabble.com/Predict-in-glmnet-for-cox-family-td4706070.html Expected time of survival in glmnet for cox family]


=== Error in glmnet: x should be a matrix with 2 or more columns ===
https://stackoverflow.com/questions/29231123/why-cant-pass-only-1-coulmn-to-glmnet-when-it-is-possible-in-glm-function-in-r


=== Error in coxnet: (list) object cannot be coerced to type 'double' ===
Fix: do not use data.frame in X. Use cbind() instead, as in the sketch below.
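A minimal sketch of the fix (simulated data; the variable names are arbitrary):
<pre>
library(glmnet)
set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- cbind(time = rexp(100), status = rbinom(100, 1, 0.7))

# glmnet(data.frame(x1, x2), y, family = "cox")  # (list) object cannot be coerced ...
fit <- glmnet(cbind(x1, x2), y, family = "cox")  # a numeric matrix works
</pre>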


= Prediction =
== Prognostic index/risk scores ==
* [https://en.wikipedia.org/wiki/International_Prognostic_Index International Prognostic Index]
* In R,
** [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/predict.coxph.html coxph()] defines '''risk score''' as exp(linear predictor).
** [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L220 survC1] package defines '''risk score''' as coxph's linear predictor; see his [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3079915/ paper] on Stat in Med 2011. Some medical papers (such as [https://www.bmj.com/content/351/bmj.h3868 this one]) also define it in this way.
* Low scores correspond to the lowest predicted risk and high scores correspond to the greatest predicted risk.
* The test data were first segregated into high-risk and low-risk groups by the median of training risk scores; see the sketch below. [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis]
* On the paper "The C-index is not proper for the evaluation of t-year predicted risk" [https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy006/4864363 Blanche et al 2018] defined the true '''t-year predicted risk''' by <math>P(T \le t | Z) = 1 - Survival</math>
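A minimal sketch of the median-split workflow (the '''lung''' data, covariates and the train/test split are arbitrary choices):
<pre>
library(survival)
set.seed(1)
dat  <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
idx  <- sample(nrow(dat), 150)
train <- dat[idx, ]; test <- dat[-idx, ]

fit      <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = train)
lp_train <- predict(fit, type = "lp")                  # prognostic index
lp_test  <- predict(fit, newdata = test, type = "lp")
risk_test <- exp(lp_test)                              # coxph-style risk score

# split the test set at the median of the *training* scores
test$group <- ifelse(lp_test > median(lp_train), "high risk", "low risk")
survdiff(Surv(time, status) ~ group, data = test)      # log-rank comparison
</pre>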
 
=== linear.predictors component in coxph object ===
The $linear.predictors component is not <math>\beta' x</math>. It is <math>\beta' (x-\mu_x)</math>. See [http://r.789695.n4.nabble.com/coxph-linear-predictors-td3015784.html this post].
=== predict.coxph, prognostic index & risk score ===
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/predict.coxph predict.coxph()] Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. '''Returned value''': a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE.
{{Pre}}
predict(object, newdata,
     type=c("lp", "risk", "expected", "terms", "survival"),
     se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse,
     reference=c("strata", "sample"),  ...)
</pre>
type:
** "lp": linear predictor
** "risk": risk score exp(lp)
** "expected": the expected number of events given the covariates and follow-up time. The survival probability for a subject is equal to exp(-expected).
** "terms": the terms of the linear predictor.
* http://stats.stackexchange.com/questions/44896/how-to-interpret-the-output-of-predict-coxph. The '''$linear.predictors''' component represents <math>\beta (x - \bar{x})</math>. The risk score (type='risk') corresponds to <math>\exp(\beta (x-\bar{x}))</math>. '''Factors are converted to dummy predictors as usual'''; see [https://stackoverflow.com/questions/14921805/convert-a-factor-to-indicator-variables model.matrix].
* http://www.togaware.com/datamining/survivor/Lung1.html
{{Pre}}
library(survival)
fit <- coxph(Surv(time, status) ~ age , lung)
fit
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#       coef exp(coef) se(coef)   z    p
# age 0.0187      1.02  0.0092 2.03 0.042
#
# Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165
fit$means
#      age
# 62.44737
# type = "lr" (Linear predictor)
# prognostic score beta*x in this cases.
as.numeric(predict(fit,type="lp"))[1:10]  
# ignore treatment effect and interactions
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# [7]  0.10394626  0.16010680 -0.17685643 -0.02709500
# See page4 of the paper. Scaled scores were used in the plot
0.0187 * (lung$age[1:10] - fit$means)
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) quantile(xxmy, .025))))
# [1]  0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579 0.21603421
# [1] 1.500431e-09 1.465241e-06
# [7]  0.10383421  0.15993421 -0.17666579 -0.02706579
fit$linear.predictors[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
# [6]  0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500


# type = "risk" (Risk score)
h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
> as.numeric(predict(fit,type="risk"))[1:10]
newtime <- median(junk$time)
[1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
[8] 1.1736362 0.8379001 0.9732688
# newx <- junk[ , names(resBM$lasso)]
> exp((lung$age-mean(lung$age)) * 0.0187)[1:10]
# Compute the estimate of the survival probability at training x and time = median(junk$time)
[1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
# using Breslow method
[7] 1.1094165 1.1734337 0.8380598 0.9732972
S2 <- outer(exp(-H0),  exp(xxmy), "^") # row = newtime, col = newx
> exp(fit$linear.predictors)[1:10]
range(abs(esurv$surv[wgr, "lasso"] - S2))
[1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
# [1] 6.455479e-18 2.459136e-06
  [7] 1.1095408 1.1736362 0.8379001 0.9732688
# My implementation of the prognostic plot
</pre>
#  Note that the x-axis on the plot is based on prognostic scores beta*x,
#  not on treatment modifying scores gamma*x as described in the paper.
#  Maybe it is because inter = FALSE in BMsel() we have used.
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob') # cf p1


=== threshold/cutoff ===
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5882539/ An optimal threshold on the score to separate patients into low- and high-risk groups was determined using the MaxStat package to select the cutoff value producing the maximal log-rank score in the training cohort. A minimal sketch with '''maxstat''' is given below.
* [https://cran.r-project.org/web/packages/maxstat/index.html maxstat]: Maximally Selected Rank Statistics (cf the [https://cran.r-project.org/web/packages/matrixStats/index.html matrixStats]: Functions that Apply to Rows and Columns of Matrices (and to Vectors) package).
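A minimal maxstat sketch (the '''lung''' data and the score are arbitrary; note that testing the split on the same data that chose the cutoff is optimistic):
<pre>
library(survival)
library(maxstat)
dat <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
dat$score <- predict(coxph(Surv(time, status) ~ age + sex + ph.ecog, data = dat),
                     type = "lp")

# maximally selected log-rank statistic over candidate cutoffs of 'score'
ms <- maxstat.test(Surv(time, status) ~ score, data = dat,
                   smethod = "LogRank", pmethod = "Lau92")
ms$estimate                              # selected cutoff
dat$group <- ifelse(dat$score > ms$estimate, "high", "low")
</pre>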


== Survival risk prediction ==
* [https://brb.nci.nih.gov/techreport/Briefings.pdf Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data] Simon 2011. The authors have noted the CV has been used for optimization of tuning parameters but the data available are too limited for effective into training & test sets.
** The CV Kaplan-Meier curves are essentially unbiased and the separation between the curves gives a fair representation of the value of the expression profiles for predicting survival risk.
** The log-rank statistic does not have the usual chi-squared distribution under the null hypothesis. This is because the data was used to create the risk groups.
** Survival ROC curve can be used as a measure of predictive accuracy for the survival risk group model at a certain landmark time.
** The ROC curve can be misleading. For example if re-substitution is used, the AUC can be very large.
** The p-value for the significance of the test that AUC=.5 for the cross-validated survival ROC curve can be computed by permutations.
** Cross-validated estimates of survival risk discrimination can be pessimistically '''biased''' if the number of folds K is too small for the number of events, and the '''variance''' of the cross-validated risk group survival curves or time-dependent ROC curves will be large, particularly when K is large and the number of events is small. For example, for the null simulations of ''Figure 3'', there are several cases in which the cross-validated Kaplan–Meier curve for the low-risk group is below that for the high-risk group.
** (class data) For small sample sizes of fewer than '''50 cases''', they recommended use of leave-one-out cross-validation to minimize mean squared error of the estimate of prediction error.
** (survival data) Subramanian and Simon (Stat Med) recommended use of 5- or 10-fold cross-validation for a wide range of conditions.
** Fig 1: KM substitution. 10 null data.
** Fig 2: KM test data. 10 null data.
** Fig 3: KM 10-fold CV. One null data.
** Fig 4A: KM Shedden data resubstitution.
** Fig 4B: KM Shedden data. CV
** Fig 5A: Resubstitution time-dep ROC. Shedden.
** Fig 5B: CV time-dep ROC. Shedden.
** Fig 6A: KM clinical covariates only
** Fig 6B: KM combined
** Fig 7. Time-dep ROC from covariates only and combined.
* Measure of assessment for prognostic prediction
:{| class="wikitable"
!
! 0/1
! Survival
|-
| Sensitivity
| <math>P(Pred=1|True=1)</math>
| <math>P(\beta' X > c | T < t)</math>
|-
| Specificity
| <math>P(Pred=0|True=0)</math>
| <math>P(\beta' X \le c | T \ge t)</math>
|}
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.4106/full An evaluation of resampling methods for assessment of survival risk prediction in high-dimensional settings] Subramanian, et al 2010.
** The conditional probabilities can be estimated by Heagerty et al 2000 (R package [https://cran.r-project.org/web/packages/survivalROC/index.html survivalROC]). '''The AUC(t) can be used for comparing and assessing prognostic models (a measure of accuracy) for future samples.''' In the absence of an independent large dataset, an estimate for AUC(t) is obtained through resampling from the original sample <math>S_n</math>.
** Resubstitution estimate of AUC(t) (i.e. all observations were used for feature selection, model building as well as the estimation of accuracy) is too optimistic. So k-fold CV method is considered.
** There are two ways to compute k-fold CV estimate of AUC(t): the pooling strategy (used in the paper) and average strategy (AUC(t)s are first computed for each test set and are then averaged). In the pooling strategy, all the test set risk-score predictions are first collected and AUC(t) is calculated on this combined set.
** Conclusions: sample splitting and LOOCV have a higher mean square error than other methods. 5-fold or 10-fold CV provide a good balance between bias and variability for a wide range of data settings.
* [https://brb.nci.nih.gov/techreport/JNCI-NSLC-Signatures.pdf *Gene Expression–Based Prognostic Signatures in Lung Cancer: Ready for Clinical Use?] Subramanian, et al 2010.
* [https://academic.oup.com/bioinformatics/article/23/14/1768/188061/Assessment-of-survival-prediction-models-based-on Assessment of survival prediction models based on microarray data] Martin Schumacher, et al. 2007
* [http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0020108 Semi-Supervised Methods to Predict Patient Survival from Gene Expression Data]  Eric Bair , Robert Tibshirani, 2004
* Time dependent ROC curves for censored survival data and a diagnostic marker. Heagerty et al, Biometrics 2000
** [http://faculty.washington.edu/heagerty/Software/SurvROC/SurvivalROC/survivalROCdiscuss.pdf An introduction to survivalROC] by Saha, Heagerty. If the AUCs are computed at several time points, we can plot the AUCs vs time for different models (eg different covariates) and compare them to see which model performs better.
** The '''survivalROC''' package does not draw an ROC curve. It outputs FP (x-axis) and TP (y-axis). We can use basic R or ggplot to draw the curve.
** [https://www.rdocumentation.org/packages/survivalROC/versions/1.0.1/topics/survivalROC survivalROC()] calculates AUC at specified time by using NNE method (default). We can use the prognostic index as marker when there are more than one markers is used. Note that [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/AUC.uno survAUC::AUC.uno()] uses Uno (2007) to calculate FP and TP.
** [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html Assessment of Discrimination in Survival Analysis (C-statistics, etc)]
** [http://sachsmc.github.io/plotROC/ plotROC] package by Sachs for showing ROC curves from multiple time points on the same plot.
** [https://datascienceplus.com/time-dependent-roc-for-survival-prediction-models-in-r/ Time-dependent ROC for Survival Prediction Models in R]. It shows the effect of  the number of events and the selection of predict time. It also emphasized the '''survivalROC''' package implements the '''cumulative case/dynamic control ROC''' and the '''risksetROC''' package implements the '''incident case/dynamic control ROC'''.
** survivalROC怎么看最佳cut-off值?/ HOW to use the survivalROC to get optimal cut-off values? 最优的点应该就是斜率等于1的地方.
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-10-413 Survival prediction from clinico-genomic models - a comparative study] Hege M Bøvelstad, 2009
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data]. E. Graf, C. Schmoor, W. Sauerbrei, et al. 1999
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(20000229)19:4%3C453::AID-SIM350%3E3.0.CO;2-5/full What do we mean by validating a prognostic model?] Douglas G. Altman, Patrick Royston, 2000
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.3768/full On the prognostic value of survival models with application to gene expression signatures] T. Hielscher, M. Zucknick, W. Werft, A. Benner, 2000
* Accuracy of point predictions in survival analysis, Henderson et al, Statist Med, 2001.
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Hung-Chia Chen et al 2012
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.7342/abstract Accuracy of predictive ability measures for survival models] Flandre, Detsch and O'Quigley, 2017.
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006026 Association between expression of random gene sets and survival is evident in multiple cancer types and may be explained by sub-classification] Yishai Shimoni, PLOS 2018
* [http://www.bmj.com/content/bmj/357/bmj.j2497 Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study]
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006076 Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data] Ching et al 2018.
* [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-022-00124-y A scoping methodological review of simulation studies comparing statistical and machine learning approaches to risk prediction for time-to-event data] Smith, 2022
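A similar plot can be obtained with the survival package alone: fit the continuous predictor as a penalized spline and let termplot() draw the fitted log relative hazard. A minimal sketch; the lung data and the df value are arbitrary choices here, not from the links above.
{{Pre}}
library(survival)
# fit age as a penalized spline so the log hazard ratio may be nonlinear in age
fit <- coxph(Surv(time, status) ~ pspline(age, df = 4) + sex, data = lung)
# termplot() draws the fitted (centered) log relative hazard against age with a confidence band
termplot(fit, terms = 1, se = TRUE, ylab = "log hazard ratio")
</pre>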


== Loglikelihood ==
* fit$loglik is a vector of length 2 (initial model, fitted model). So deviance can be calculated by '''-2*fit$loglik[2]'''; see [https://github.com/nyiuab/BhGLM/blob/master/R/bcoxph.r#L402 here] for an example from the BhGLM package.
* Use '''survival::anova()''' command to do a likelihood ratio test. Note this function does not work on ''glmnet'' objects.
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/residuals.coxph residuals.coxph] Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
* No deviance() on coxph object!
* [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/logLik.coxph.html logLik()] function will return fit$loglik[2]
* [http://www.erikdrysdale.com/cox_partiallikelihood/ Gradient descent for the elastic net Cox-PH model]

=== glmnet ===
* It seems AIC does not require the assumption of nested models.
* https://en.wikipedia.org/wiki/Akaike_information_criterion, ([https://forvo.com/word/akaike/ akaike pronunciation in Japanese])
:<math>
\begin{align}
\mathrm{AIC} &= 2k - 2\ln(\hat L) \\
\mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1}
\end{align}
</math>
* [https://stats.stackexchange.com/questions/25817/is-it-possible-to-calculate-aic-and-bic-for-lasso-regression-models Is it possible to calculate AIC and BIC for lasso regression models?]. See the references about the degrees of freedom in Lasso regressions.
{{Pre}}
fit <- glmnet(x, y, family = "multinomial")
tLL <- fit$nulldev - deviance(fit) # ln(L)
k <- fit$df
n <- fit$nobs
AICc <- -tLL+2*k+2*k*(k+1)/(n-k-1)
AICc
</pre>
* https://github.com/nyiuab/BhGLM/blob/master/R/bmlasso.r#L143
<pre>
f <- glmnet(x = x, y = y, family = family)
f$aic <- deviance(f) + 2 * f$df
</pre>
* For ''glmnet'' objects, see [https://rdrr.io/cran/glmnet/man/deviance.glmnet.html ?deviance.glmnet], ?coxnet.deviance and [https://stackoverflow.com/questions/40920051/r-getting-aic-bic-likelihood-from-glmnet R: Getting AIC/BIC/Likelihood from GLMNet]. An example with all continuous variables:
{{Pre}}
library(survival); library(glmnet)   # for Surv()/coxph() and glmnet()
set.seed(10101)
N=1000;p=6
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~ 1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(ty, 1 - tcens)
# Model 1: ~ x
# Model 2: ~ 1
# loglik  Chisq Df P(>|Chi|)
# 1 -4095.2
# 2 -4102.7 15.151  6  0.01911 *

fit2 <- glmnet(x,y,family="cox", lambda=0) # ridge regression
deviance(fit2)                             # 2*(loglike_sat - loglike)
# [1] 8190.313
coxnet.deviance(x=x, y=y, beta=coef(fit2)) # 2*(loglike_sat - loglike)
# [1] 8190.313
# https://github.com/cran/glmnet/blob/master/R/coxnet.deviance.R#L79

assess.glmnet(fit2, x=x, y=y)     # returns deviance and c-index

fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # Log-Likelihood ratio statistic
# [1] 15.15097
1-pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446
</pre>
Here is another example including a factor covariate:
 
{{Pre}}
=== Concordance index/C-index/C-statistic interpretation and R packages ===
library(KMsurv) # Data package for Klein & Moeschberge
* The area under ROC curve (plot of sensitivity of 1-specificity) is also called C-statistic. It is a measure of discrimination generalized for survival data (Harrell 1982 & 2001). The ROC are functions of the sensitivity and specificity for each value of the measure of model. (Nancy Cook, 2007)
data(larynx)
** The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
larynx$stage <- factor(larynx$stage)
** The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
** Perfect discrimination corresponds to a c-statistic of 1 & is achieved if the scores for all the cases are higher than those for all the non-cases.
coef(coxobj)
** The c-statistic is the '''probability that the measure or predicted risk/risk score is higher for a case than for a noncase'''.  
# age    stage2    stage3    stage4
** The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
# 0.0190311 0.1400402 0.6423817 1.7059796
** C-statistic is a rank-based measure. The c-statistic describes how well models can rank order cases and noncases, but not a function of the actual predicted probabilities.
coxobj_small <- coxph(Surv(time, delta)~age, data = larynx)
* [https://stats.stackexchange.com/questions/29815/how-to-interpret-the-output-for-calculating-concordance-index-c-index?noredirect=1&lq=1 How to interpret the output for calculating concordance index (c-index)?] <math>
anova(coxobj, coxobj_small)
P(\beta' Z_1 > \beta' Z_2|T_1 < T_2)
# Analysis of Deviance Table
</math> where ''T'' is the survival time and ''Z'' is the covariates.
# Cox model: response is Surv(time, delta)
** It is the '''fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model'''.  
# Model 1: ~ age + stage
** High values mean that your model predicts higher probabilities of survival for higher observed survival times.
# Model 2: ~ age
** The c index estimates the '''probability of concordance between predicted and observed responses'''. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
# loglik  Chisq Df P(>|Chi|)  
* Drawback of C-statistics:
# 1 -187.71                     
** Even though rank indexes such as c are widely applicable and easily interpretable, '''they are not sensitive for detecting small differences in discrimination ability between two models.''' This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
# 2 -195.55 15.681  3  0.001318 **
** If the model is correct, the '''likelihood based measures may be more sensitive in detecting differences in prediction ability''', compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
---
* [https://statisticaloddsandends.wordpress.com/2019/10/26/what-is-harrells-c-index/ What is Harrell’s C-index?] '''C = #concordant pairs / (# concordant pairs + # discordant pairs)'''
#  Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*0.05 ‘.0.1 ‘ ’ 1
* http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
 
* [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf Concordance] vignette from the survival package. It has a good summary of different ways (such as Kendall's tau and Somers' d) to calculate the '''concordance statistic'''. The ''concordance'' function in the survival package can be used with various types of models including logistic and linear regression.
# Now let's look at the glmnet() function.
* <span style="color: magenta"> Assessment of Discrimination in Survival Analysis (C-statistics, etc) </span> [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html webpage]
# It seems glmnet does not recognize factor covariates.
* [http://gaodoris.blogspot.com/2012/10/5-ways-to-estimate-concordance-index.html 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical?], [http://zeegroom.com/2015/10/10/cindex/ C-index/C-statistic 计算的5种不同方法及比较]. The 5 functions are rcorrcens() from Hmisc, summary()$concordance from survival, survConcordance() from survival, concordance.index() from survcomp and cph() from rms.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
* Summary of R packages to compute C-statistic
coxobj2$nulldev - deviance(coxobj2)  # Log-Likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2)
# [1] 13.08457
1-pchisq(deviance(coxobj1) - deviance(coxobj2) , coxobj2$df-coxobj1$df)
# [1] 0.0002977376
</pre>
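For a low-dimensional coxph fit, the log-likelihood quantities discussed above can be checked directly; a small sketch (the lung data and covariates are placeholders):
{{Pre}}
library(survival)
fit0 <- coxph(Surv(time, status) ~ age, data = lung)
fit  <- coxph(Surv(time, status) ~ age + sex, data = lung)
fit$loglik                             # c(initial/null model, fitted model)
as.numeric(logLik(fit))                # equals fit$loglik[2]
AIC(fit)                               # -2*fit$loglik[2] + 2*length(coef(fit))
2 * (fit$loglik[2] - fit0$loglik[2])   # likelihood ratio statistic by hand
anova(fit0, fit)                       # the same test via survival::anova()
</pre>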
 
== High dimensional data ==
* https://cran.r-project.org/web/views/Survival.html
* [https://academic.oup.com/bioinformatics/article/40/3/btae132/7623091 Tutorial on survival modeling with applications to omics data] 2024, [https://ocbe-uio.github.io/survomics/survomics.html github].
 
== glmnet + Cox models ==
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-017-0354-0 Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials] by Nils Ternès, Federico Rotolo and Stefan Michiels, BMC Medical Research Methodology, 2017 (open review available). The corresponding software '''biospear''' on [https://cran.microsoft.com/web/packages/biospear/index.html cran] and  [https://www.rdocumentation.org/packages/biospear/versions/1.0.1 rdocumentation.org].
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03618-y Accounting for grouped predictor variables or pathways in high-dimensional penalized Cox regression models] Belhechmi 2020
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03791-0 Cancer prognosis prediction using somatic point mutation and copy number variation data: a comparison of gene-level and pathway-based models] Zheng 2020
* [http://r.789695.n4.nabble.com/Predict-in-glmnet-for-cox-family-td4706070.html Expected time of survival in glmnet for cox family]
 
=== Error in glmnet: x should be a matrix with 2 or more columns ===
https://stackoverflow.com/questions/29231123/why-cant-pass-only-1-coulmn-to-glmnet-when-it-is-possible-in-glm-function-in-r


=== Error in coxnet: (list) object cannot be coerced to type 'double' ===
Fix: do not use data.frame in X. Use cbind() instead.

: {| class="wikitable"
|+ Summary of R packages to compute the C-statistic
! Package
! Function
! New data?
|-
| survival
| summary(coxph(formula, data))$concordance["C"], Cindex()
| no, yes
|-
| survC1
| [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Est.Cval Est.Cval()]
| no
|-
| survAUC
| [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/UnoC UnoC()]
| yes
|-
| timeROC
| [https://cran.r-project.org/web/packages/timeROC/index.html ?]
| ?
|-
| compareC
| [https://cran.r-project.org/web/packages/compareC/index.html ?]
| ?
|-
| survcomp
| [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/concordance.index concordance.index()]
| ?
|-
| Hmisc
| [https://www.rdocumentation.org/packages/Hmisc/versions/4.2-0/topics/rcorr.cens rcorr.cens()]
| no
|-
| pec
| [https://www.rdocumentation.org/packages/pec/versions/2018.07.26/topics/cindex cindex()]
| yes
|}
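As a minimal illustration of the first row of the table, the survival package itself returns Harrell's C with a standard error (the lung data below is only a placeholder; the newdata argument is my understanding of concordance()):
{{Pre}}
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
concordance(fit)                    # Harrell-type C-statistic with standard error
summary(fit)$concordance            # same estimate from the model summary
concordance(fit, newdata = lung)    # can also be evaluated on (new) data
</pre>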


=== Integrated brier score (≈ "mean squared error" of prediction for survival data) ===
= Prediction =
[http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data] Graf et al Stat. Med. 1999 2529-45, [https://onlinelibrary.wiley.com/doi/pdf/10.1002/bimj.200610301 Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times] Gerds et al 2006.
== Prognostic factor, prognosis ==
* "Prognostic" refers to the ability to predict the likely outcome or course of a disease. In the context of medicine, prognosis is the prediction of the future course of a disease and the chances of recovery or survival. A prognosis can be based on a variety of factors, including the stage and grade of the disease, the patient's overall health, and the response to treatment.
* Prognostic factors are the characteristics of a patient or a disease that can be used to predict the outcome or course of the disease. These factors can include demographic information (such as age and gender), clinical information (such as the stage and grade of the disease), and laboratory test results.
* Prognostic factors are used to stratify patients into different prognostic groups, which can help guide treatment decisions and identify patients who may be at high risk for poor outcomes. For example, in cancer treatment, the stage of the cancer, the location of the cancer, and the patient's overall health are important prognostic factors that are used to determine the best course of treatment.
* It's worth noting that prognosis is not always certain, and unexpected events can happen that can change the course of the disease. Additionally, the effectiveness of treatment can change the prognosis for a patient. Prognosis is an estimation and it can change over time.
* '''Prognosis'''. Grade I carcinomas tend to have be less aggressive and have a better prognosis than higher grade carcinomas. They are also more often '''ER positive''', which is another feature associated with a more favorable prognosis. [https://pathology.jhu.edu/breast/staging-grade/ STAGING & GRADE] breast cancer.


* Because the point predictions of event-free times will almost inevitably given inaccurate and unsatisfactory result, the mean square error of prediction <math>\frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2</math> method will not be considered. See Parkes 1972 or [http://www.lcc.uma.es/~jja/recidiva/055.pdf Henderson] 2001.
== Prognostic index/risk scores ==
* Another approach is to predict the survival or event status <math>Y=I(T > \tau)</math> at a fixed time point <math>\tau</math> for a patient with X=x. This leads to the expected Brier score <math>E[(Y - \hat{S}(\tau|X))^2]</math> where <math>\hat{S}(\tau|X)</math> is the estimated event-free probabilities (survival probability) at time <math>\tau</math> for subject with predictor variable <math>X</math>.
* [https://en.wikipedia.org/wiki/International_Prognostic_Index International Prognostic Index]
* The time-dependent Brier score (without censoring)
* In R,
: <math>
** [https://stat.ethz.ch/R-manual/R-devel/library/survival/html/predict.coxph.html coxph()] defines '''risk score''' as exp(linear predictor).
\begin{align}
** [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L220 survC1] package defines '''risk score''' as coxph's linear predictor; see his [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3079915/ paper] on Stat in Med 2011. Some medical papers (such as [https://www.bmj.com/content/351/bmj.h3868 this one]) also defines it in this way.
  \mbox{Brier}(\tau) &= \frac{1}{n}\sum_1^n (I(T_i>\tau) - \hat{S}(\tau|X_i))^2 
* Low scores correspond to the lowest predicted risk and high scores correspond to the greatest predicted risk.
\end{align}
* The test data were first segregated into high-risk and low-risk groups by the median of training risk scores. [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis]
</math>
* On the paper "The C-index is not proper for the evaluation of t-year predicted risk" [https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy006/4864363 Blanche et al 2018] defined the true '''t-year predicted risk''' by <math>P(T \le t | Z) = 1 - Survival</math>
* The time-dependent Brier score (with censoring, C is the censoring variable)
: <math>
\begin{align}
  \mbox{Brier}(\tau) = \frac{1}{n}\sum_i^n\bigg[\frac{(\hat{S}_C(t_i))^2I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}_C(t_i))^2 I(t_i > \tau)}{\hat{S}_C(\tau)}\bigg]
\end{align}
</math>
where <math>\hat{S}_C(t_i) = P(C > t_i)</math>, the Kaplan-Meier estimate of the censoring distribution with <math>t_i</math> the survival time of patient ''i''.  
The integration of the Brier score can be done by over time <math>t \in [0, \tau]</math> with respect to some weight function W(t) for which a natual choice is <math>(1 - \hat{S}(t))/(1-\hat{S}(\tau))</math>. The lower the iBrier score, the larger the prediction accuracy is.
* Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25% which corresponds to predicting 50% risk for everyone. See [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194196/pdf/nihms-589222.pdf Evaluating Random Forests for Survival Analysis using Prediction Error Curves] by Mogensen et al J. Stat Software 2012 ([https://cran.r-project.org/web/packages/pec/index.html pec] package). The paper has a good summary of different R package implementing Brier scores.


R function
=== linear.predictors component in coxph object ===
* [https://www.rdocumentation.org/packages/pec/versions/2.5.4 pec] by Thomas A. Gerds. The plot.pec() can plot '''prediction error curves''' (defined by Brier score). See an example from [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841879/pdf/IJPH-45-239.pdf#page=5 this paper]. The .632+ bootstrap prediction error curves is from the paper [https://academic.oup.com/bioinformatics/article/25/7/890/211193#2275428 Boosting for high-dimensional time-to-event data with competing risks] 2009
The $linear.predictors component is not <math>\beta' x</math>. It is <math>\beta' (x-\mu_x)</math>. See [http://r.789695.n4.nabble.com/coxph-linear-predictors-td3015784.html this post].
* [https://www.rdocumentation.org/packages/peperr/versions/1.1-7 peperr] package. The package peperr is an early branch of pec.
* [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/sbrier.score2proba survcomp::sbrier.score2proba()].  
* [https://www.rdocumentation.org/packages/ipred/versions/0.9-5/topics/sbrier ipred::sbrier()]
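A minimal pec sketch for the integrated Brier score, under the assumption that the lung data, the time grid and the horizon are arbitrary choices; cens.model = "marginal" uses a Kaplan-Meier model for the censoring weights:
{{Pre}}
library(survival); library(pec)
# x = TRUE keeps the design matrix so pec can compute predicted survival probabilities
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE)
perr <- pec(object = list("Cox age+sex" = fit),
            formula = Surv(time, status) ~ 1, data = lung,
            cens.model = "marginal", times = seq(0, 800, 50), exact = FALSE)
crps(perr, times = 500)   # integrated Brier score up to t = 500 days
plot(perr)                # prediction error curves; the reference is the Kaplan-Meier model
</pre>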


Papers on high dimensional covariates
=== predict.coxph, prognostic index & risk score ===
* Assessment of survival prediction models based on microarray data, Bioinformatics , 2007, vol. 23 (pg. 1768-74)
* [https://www.rdocumentation.org/packages/survival/versions/2.41-2/topics/predict.coxph predict.coxph()] Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. '''Returned value''': a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE.
* Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics , 2008, vol. 9 pg. 14
{{Pre}}
 
predict(object, newdata,
=== Kendall's tau, Goodman-Kruskal's gamma, Somers' d ===
    type=c("lp", "risk", "expected", "terms", "survival"),
* https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient
    se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse,
* https://en.wikipedia.org/wiki/Goodman_and_Kruskal%27s_gamma
    reference=c("strata", "sample"), ...)
* https://en.wikipedia.org/wiki/Somers%27_D
</pre> type:
* [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf Survival package] has a good summary. Especially '''concordance = (d+1)/2'''.
** "lp": linear predictor
** "risk": risk score exp(lp)
** "expected": the expected number of events given the covariates and follow-up time. The survival probability for a subject is equal to exp(-expected).
** "terms": the terms of the linear predictor.  
* http://stats.stackexchange.com/questions/44896/how-to-interpret-the-output-of-predict-coxph. The '''$linear.predictors''' component represents <math>\beta (x - \bar{x})</math>. The risk score (type='risk') corresponds to <math>\exp(\beta (x-\bar{x}))</math>. '''Factors are converted to dummy predictors as usual'''; see [https://stackoverflow.com/questions/14921805/convert-a-factor-to-indicator-variables model.matrix].  
* http://www.togaware.com/datamining/survivor/Lung1.html
{{Pre}}
library(coxph)
fit <- coxph(Surv(time, status) ~ age , lung)
fit
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#      coef exp(coef) se(coef)    z    p
# age 0.0187      1.02  0.0092 2.03 0.042
#
# Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165
fit$means
#      age
# 62.44737


=== C-statistics ===
# type = "lr" (Linear predictor)
* For two groups data (one with event, one without), C-statistic has an intuitive interpretation: if two individuals are selected at random, one with the event and one without, then the C-statistic is '''the probability that the model predicts a higher risk for the individual with the event'''. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3157029/ Analysis of Biomarker Data: logs, odds ratios and ROC curves] by Grund 2010
as.numeric(predict(fit,type="lp"))[1:10] 
* C-statistics is the probability of concordance between predicted and observed survival.
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6370 Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach] Kang et al, Stat in Med, 2014. [https://cran.r-project.org/web/packages/compareC/index.html compareC] package for comparing two correlated C-indices with right censored outcomes. [https://support.sas.com/resources/papers/proceedings17/SAS0462-2017.pdf#page=13 Harrell’s Concordance]. The s.e. of the Harrell's C-statistics can be estimated by the delta method. <math>
# [7] 0.10394626  0.16010680 -0.17685643 -0.02709500
\begin{align}
0.0187 * (lung$age[1:10] - fit$means)
C_H = \frac{\sum_{i,j}I(t_i < t_{j}) I(\hat{\beta} Z_i > \hat{\beta} Z_j) \delta_i}{\sum_{i,j} I(t_i < t_j) \delta_i}
# [1] 0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579  0.21603421
\end{align}
# [7] 0.10383421  0.15993421 -0.17666579 -0.02706579
</math> converges to a censoring-dependent quantity <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \text{min}(D_1,D_2)).</math> Here ''D'' is the censoring variable.
fit$linear.predictors[1:10]
* [http://europepmc.org/articles/PMC3079915 On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data] by Uno et al 2011. Let <math>\tau</math> be a specified time point within the support of the censoring variable. <math>
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
\begin{align}
# [6] 0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500
C(\tau) = \text{UnoC}(\hat{\pi}, \tau)
 
        = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
# type = "risk" (Risk score)
\end{align}
> as.numeric(predict(fit,type="risk"))[1:10]
</math>, a measure of the concordance between <math>\hat{\beta} Z_i</math> (the linear predictor) and the survival time. <math>\hat{S}_C(t)</math> is the Kaplan-Meier estimator for the '''censoring distribution/variable/time''' (cf '''event time'''); flipping the definition of <math>\delta_i</math>/considering failure events as "censored" observations and censored observations as "failures" and computing the KM as usual; see p207 of [https://amstat.tandfonline.com/doi/abs/10.1198/000313001317098185#.WtS-pNPwY3F Satten 2001] and the [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L282 source code from the kmcens()] in survC1. Note that <math>C_\tau</math> converges to <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math>
[1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
** <span style="color: red">Uno's estimator does not require the fitted model to be correct </span>. See also table V in the simulation study where the true model is log-normal regression.
[8] 1.1736362 0.8379001 0.9732688
** <span style="color: red">Uno's estimator is consistent for a population concordance measure that is free of censoring</span>. See the coverage result in table IV and V from his simulation study. Other forms of C-statistic estimate population parameters that may depend on the current study-specific censoring distribution.
> exp((lung$age-mean(lung$age)) * 0.0187)[1:10]
** To accommodate discrete risk scores, in survC1::Est.Cval(), it is using the formula <math>.
[1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
\begin{align}
[7] 1.1094165 1.1734337 0.8380598 0.9732972
\frac{\sum_{i,i'}[ (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i +  0.5 * (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i = \hat{\beta}'Z_{i'}) \delta_i ]}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
> exp(fit$linear.predictors)[1:10]
\end{align}
  [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
</math>. '''Note that pec::cindex() is using the same formula but survAUC::UnoC() does not.'''
[7] 1.1095408 1.1736362 0.8379001 0.9732688
** If the specified <math>\tau</math> (tau) is 'too' large such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for <math>\hat{C}_\tau</math> can be quite large.
</pre>
** Uno mentioned from (page 95) Heagerty and Zheng 2005 that when T is right censoring, one would typically consider <math>C_\tau</math> with a fixed, prespecified follow-up period <math>(0, \tau)</math>.
** Uno also mentioned that when the data is right censored, the censoring variable ''D'' is usually shorter than that of the failure time ''T'', the tail part of the estimated survival function of T is rather unstable. Thus we consider a truncated version of C.
** Heagerty and Zheng (2005) p95 said '''<math>C_\tau</math> is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in <math>(0, \tau)</math>'''.  
** real data 1: fit a Cox model. Get risk scores <math>\hat{\beta}'Z</math>. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of <math>C_\tau</math> for different <math>\tau</math>. Compare them with the conventional C-index procedure (Korn).
** real data 1: compute <math>C_\tau</math> for a full model and a reduce model. Compute the difference of them (<math>C_\tau^{(A)} - C_\tau^{(B)} = .01</math>) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). '''Though HDL is quite significant (p=0) with respect to the risk of CV disease but its incremental value evaluated via C-statistics is quite modest.'''
** real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other is gene+age+ER. For each model we can calculate the point and interval estimates of <math>C_\tau</math> for different <math>\tau</math>s.
** simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for <math>\tau</math> equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
* [https://academic.oup.com/ndt/article/25/5/1399/1843002 Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination] by Tripep et al 2010
* '''Gonen and Heller''' 2005 concordance index for Cox models
** <math>P(T_2>T_1|g(Z_1)>g(Z_2))</math>. Gonen and Heller's c statistic which is independent of censoring.
** [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/GHCI GHCI()] from survAUC package. Strangely only one parameter is needed. survAUC allows for testing data but CPE package does not have an option for testing data.  
{{Pre}}
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age,
                    x=TRUE, y=TRUE, method="breslow", data=TR)
lpnew <- predict(train.fit, newdata=TE)     
survAUC::GHCI(lpnew) # .8515


lpnew2 <- predict(train.fit, newdata = TR)
=== threshold/cutoff ===
survAUC::GHCI(lpnew2) # 0.8079495
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5882539/  An optimal threshold on the score to separate patients into low- and high-risk groups was determined using the MaxStat package to select the cutoff value producing the maximal log-rank score in the training cohort.
* [https://cran.r-project.org/web/packages/maxstat/index.html maxstat]: Maximally Selected Rank Statistics (cf the [https://cran.r-project.org/web/packages/matrixStats/index.html matrixStats]: Functions that Apply to Rows and Columns of Matrices (and to Vectors) package).
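A short sketch of the maxstat idea, using age as the marker purely for illustration (in practice the marker would be the risk score from a fitted model):
{{Pre}}
library(survival); library(maxstat)
mt <- maxstat.test(Surv(time, status) ~ age, data = lung,
                   smethod = "LogRank", pmethod = "condMC", B = 999)
mt$estimate                                    # cutpoint maximizing the standardized log-rank statistic
lung$grp <- ifelse(lung$age >= mt$estimate, "high", "low")
plot(survfit(Surv(time, status) ~ grp, data = lung))
</pre>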


CPE::phcpe(train.fit, CPE.SE = TRUE)
== Survival risk prediction ==
# $CPE
* [https://brb.nci.nih.gov/techreport/Briefings.pdf Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data] Simon 2011. The authors have noted the CV has been used for optimization of tuning parameters but the data available are too limited for effective into training & test sets.
# [1] 0.8079495
** The CV Kaplan-Meier curves are essentially unbiased and the separation between the curves gives a fair representation of the value of the expression profiles for predicting survival risk.
# $CPE.SE
** The log-rank statistic does not have the usual chi-squared distribution under the null hypothesis. This is because the data was used to create the risk groups.
# [1] 0.0670646
** Survival ROC curve can be used as a measure of predictive accuracy for the survival risk group model at a certain landmark time.
 
** The ROC curve can be misleading. For example if re-substitution is used, the AUC can be very large.
Hmisc::rcorr.cens(-TR$age, Surv(TR$futime, TR$fustat))["C Index"]
** The p-value for the significance of the test that AUC=.5 for the cross-validated survival ROC curve can be computed by permutations.
# 0.7654321
** Cross-validated estimates of survival risk discrimination can be pessimistically '''biased''' if the number of folds K is too small for the number of events, and the '''variance''' of the cross-validated risk group survival curves or time-dependent ROC curves will be large, particularly when K is large and the number of events is small. For example, for the null simulations of ''Figure 3'', there are several cases in which the cross-validated Kaplan–Meier curve for the low-risk group is below that for the high-risk group.
Hmisc::rcorr.cens(TR$age, Surv(TR$futime, TR$fustat))["C Index"]
** (class data) For small sample sizes of fewer than '''50 cases''', they recommended use of leave-one-out cross-validation to minimize mean squared error of the estimate of prediction error.
# 0.2345679
** (survival data) Subramanian and Simon (Stat Med) recommended use of 5- or 10-fold cross-validation for a wide range of conditions.
</pre>
** Fig 1: KM substitution. 10 null data.
** Fig 2: KM test data. 10 null data.
** Fig 3: KM 10-fold CV. One null data.
** Fig 4A: KM Shedden data resubstitution.
** Fig 4B: KM Shedden data. CV
** Fig 5A: Resubstitution time-dep ROC. Shedden.
** Fig 5B: CV time-dep ROC. Shedden.
** Fig 6A: KM clinical covariates only
** Fig 6B: KM combined
** Fig 7. Time-dep ROC from covariates only and combined.
* Some cites: [https://www.pnas.org/doi/epdf/10.1073/pnas.1408792111 Automated identification of stratifying signatures in cellular subpopulations] Tibshirani 2014.
* Measure of assessment for prognostic prediction
:{| class="wikitable"
!
! 0/1
! Survival
|-
| Sensitivity
| <math>P(Pred=1|True=1)</math>
| <math>P(\beta' X > c | T < t)</math>
|-
| Specificity
| <math>P(Pred=0|True=0)</math>
| <math>P(\beta' X \le c | T \ge t)</math>
|}
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.4106/full An evaluation of resampling methods for assessment of survival risk prediction in high-dimensional settings] Subramanian, et al 2010.
** The conditional probabilities can be estimated by Heagerty et al 2000 (R package [https://cran.r-project.org/web/packages/survivalROC/index.html survivalROC]). '''The AUC(t) can be used for comparing and assessing prognostic models (a measure of accuracy) for future samples.''' In the absence of an independent large dataset, an estimate for AUC(t) is obtained through resampling from the original sample <math>S_n</math>.
** Resubstitution estimate of AUC(t) (i.e. all observations were used for feature selection, model building as well as the estimation of accuracy) is too optimistic. So the k-fold CV method is considered.
** There are two ways to compute the k-fold CV estimate of AUC(t): the pooling strategy (used in the paper) and the average strategy (AUC(t)s are first computed for each test set and are then averaged). In the pooling strategy, all the test set risk-score predictions are first collected and AUC(t) is calculated on this combined set.
** Conclusions: sample splitting and LOOCV have a higher mean square error than other methods. 5-fold or 10-fold CV provide a good balance between bias and variability for a wide range of data settings.
* [https://brb.nci.nih.gov/techreport/JNCI-NSLC-Signatures.pdf Gene Expression–Based Prognostic Signatures in Lung Cancer: Ready for Clinical Use?] Subramanian, et al 2010.
* [https://academic.oup.com/bioinformatics/article/23/14/1768/188061/Assessment-of-survival-prediction-models-based-on Assessment of survival prediction models based on microarray data] Martin Schumacher, et al. 2007
* [http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0020108 Semi-Supervised Methods to Predict Patient Survival from Gene Expression Data] Eric Bair, Robert Tibshirani, 2004
* Time dependent ROC curves for censored survival data and a diagnostic marker. Heagerty et al, Biometrics 2000
** [http://faculty.washington.edu/heagerty/Software/SurvROC/SurvivalROC/survivalROCdiscuss.pdf An introduction to survivalROC] by Saha, Heagerty. If the AUCs are computed at several time points, we can plot the AUCs vs time for different models (eg different covariates) and compare them to see which model performs better.
** The '''survivalROC''' package does not draw an ROC curve. It outputs FP (x-axis) and TP (y-axis). We can use basic R or ggplot to draw the curve.
** [https://www.rdocumentation.org/packages/survivalROC/versions/1.0.1/topics/survivalROC survivalROC()] calculates AUC at a specified time by using the NNE method (default). We can use the prognostic index as the marker when more than one marker is used. Note that [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/AUC.uno survAUC::AUC.uno()] uses Uno (2007) to calculate FP and TP.
** [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html Assessment of Discrimination in Survival Analysis (C-statistics, etc)]
** [http://sachsmc.github.io/plotROC/ plotROC] package by Sachs for showing ROC curves from multiple time points on the same plot.
** [https://datascienceplus.com/time-dependent-roc-for-survival-prediction-models-in-r/ Time-dependent ROC for Survival Prediction Models in R]. It shows the effect of the number of events and the selection of predict time. It also emphasized the '''survivalROC''' package implements the '''cumulative case/dynamic control ROC''' and the '''risksetROC''' package implements the '''incident case/dynamic control ROC'''.
** How to use survivalROC to get the optimal cut-off value (survivalROC怎么看最佳cut-off值)? The optimal point should be where the slope equals 1 (最优的点应该就是斜率等于1的地方).
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-10-413 Survival prediction from clinico-genomic models - a comparative study] Hege M Bøvelstad, 2009
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data]. E. Graf, C. Schmoor, W. Sauerbrei, et al. 1999
* [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(20000229)19:4%3C453::AID-SIM350%3E3.0.CO;2-5/full What do we mean by validating a prognostic model?] Douglas G. Altman, Patrick Royston, 2000
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.3768/full On the prognostic value of survival models with application to gene expression signatures] T. Hielscher, M. Zucknick, W. Werft, A. Benner, 2000
* Accuracy of point predictions in survival analysis, Henderson et al, Statist Med, 2001.
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Hung-Chia Chen et al 2012
* [http://onlinelibrary.wiley.com/doi/10.1002/sim.7342/abstract Accuracy of predictive ability measures for survival models] Flandre, Detsch and O'Quigley, 2017.
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006026 Association between expression of random gene sets and survival is evident in multiple cancer types and may be explained by sub-classification] Yishai Shimoni, PLOS 2018
* [http://www.bmj.com/content/bmj/357/bmj.j2497 Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study]
* [http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006076 Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data] Ching et al 2018.
* [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-022-00124-y A scoping methodological review of simulation studies comparing statistical and machine learning approaches to risk prediction for time-to-event data] Smith, 2022

== Survival time prediction ==
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-021-04103-w Survival time prediction by integrating cox proportional hazards network and distribution function network] Baek 2021

== Assessing the performance of prediction models ==
* [https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6246 Investigating the prediction ability of survival models based on both clinical and omics data: two case studies] by Riccardo De Bin, Statistics in Medicine 2014. (not useful)
* [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Chen et al, BMC Medical Research Methodology 2012
* [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.4242 A simulation study of predictive ability measures in a survival model I: Explained variation measures] Choodari‐Oskooei et al, Stat in Medicine 2011
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575184/ Assessing the performance of prediction models: a framework for some traditional and novel measures] by Ewout W. Steyerberg, Andrew J. Vickers, [...], and Michael W. Kattan, 2010.
* [https://academic.oup.com/bioinformatics/article/27/22/3206/194302 survcomp: an R/Bioconductor package for performance assessment and comparison of survival models] paper in 2011 and [http://bcb.dfci.harvard.edu/~aedin/courses/Bioconductor/survival.pdf Introduction to R and Bioconductor Survival analysis] where the survcomp package can be used. The summary here is based on this paper.
** [https://rdrr.io/bioc/survcomp/man/concordance.index.html concordance.index()]. Pencina, M. J. and D'Agostino, R. B. (2004) "Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation", Statistics in Medicine, 23, pages 2109–2123, 2004.
* [https://stats.stackexchange.com/questions/181634/how-to-compare-predictive-power-of-survival-models How to compare predictive power of survival models?]
* [https://stats.stackexchange.com/questions/17604/how-to-compare-harrell-c-index-from-different-models-in-survival-analysis How to compare Harrell C-index from different models in survival analysis?] and [https://stats.stackexchange.com/q/17648 Frank Harrell's comment]: Doing model comparison with LR statistics is more powerful than using methods that depend on an asymptotic distribution of the C-index.
* [https://www.acpjournals.org/doi/abs/10.7326/M22-0844?journalCode=aim Assessing performance and clinical usefulness in prediction models with survival outcomes: practical guidance for Cox proportional hazards models] 2022. [https://github.com/danielegiardiello/Prediction_performance_survival Source code].

=== Hazard ratio ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/hazard.ratio hazard.ratio()]
{{Pre}}
hazard.ratio(x, surv.time, surv.event, weights, strat, alpha = 0.05,
             method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)
</pre>

=== Odds ratio 優勢比/比值比/發生比 ===
* [https://zhuanlan.zhihu.com/p/377185606 RR值、OR值、HR值:临床研究中的3个“R”你都分清了吗?]: relative risk (RR), hazard ratio (HR) and odds ratio (OR) in clinical research.
* [https://blog.csdn.net/weixin_41858481/article/details/95773773 医学统计学中RR、OR和HR三个关于比值的概念] (the three ratio concepts RR, OR and HR in medical statistics)
* [https://zh.wikipedia.org/zh-tw/发生比 Odds (發生比)]; https://en.wikipedia.org/wiki/Odds_ratio covers relative risk and odds ratio.
* [https://academic.oup.com/aje/article/159/9/882/167475?login=true Limitations of the Odds Ratio in Gauging the Performance of a Diagnostic, Prognostic, or Screening Marker] Pepe 2004. C statistic measures discrimination ability better than relative risk does; see [https://www.nejm.org/doi/10.1056/NEJMoa055373 this paper].

=== D index ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/D.index D.index()]
{{Pre}}
D.index(x, surv.time, surv.event, weights, strat, alpha = 0.05,
        method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)
</pre>

=== AUC ===
See [[#ROC_curve_and_Brier_score|ROC curve]].

Comparison:
{| class="wikitable"
!
! Definition
! Interpretation
|-
| Two class
| <math> P(Z_{case} > Z_{control}) </math>
| the probability that a randomly selected '''case''' will have a higher test result (marker value) than a randomly selected '''control'''. It represents a measure of concordance between the marker and the disease status. ROC curves are particularly useful for comparing the discriminatory capacity of different potential biomarkers. (Heagerty &amp; Zheng 2005)
|-
| Survival data
| <math> P(\beta' Z_1 > \beta' Z_2|T_1 < T_2) </math>
| (Roughly speaking) the probability of concordance between predicted and observed responses. The probability that the predictions for a random pair of subjects are concordant with their outcomes. (Heagerty &amp; Zheng 2005). (Precisely) the fraction of pairs in your data where the observation with the higher survival time has the higher probability of survival predicted by your model.
|}

p95 of Heagerty and Zheng (2005) gave a relationship of the C-statistic:
<math>
C = P(M_j > M_k | T_j < T_k) = \int_t \mbox{AUC(t) w(t)} \; dt
</math>
where ''M'' is the marker value and <math>w(t) = 2 \cdot f(t) \cdot S(t) </math>. When the interest is in the accuracy of a regression model we will use <math>M_i = Z_i^T \beta</math>.

The time-dependent AUC is also related to the time-dependent C-index. <math> C_\tau = P(M_j > M_k | T_j < T_k, T_j < \tau) = \int_t \mbox{AUC(t)} \mbox{w}_{\tau}(t) \; dt  </math> where <math> w_\tau(t) = 2 \cdot f(t) \cdot S(t)/(1-S^2(\tau))</math>.

** Used by [https://bioconductor.org/packages/release/bioc/vignettes/simulatorZ/inst/doc/simulatorZ-vignette.pdf#page=11 simulatorZ] package
* '''Uno's C-statistics (2011)''' and some examples using different packages
** C-statistic may or may not be a decreasing function of '''tau'''. However, AUC(t) may not be decreasing; see Fig 1 of Blanche et al 2018. <syntaxhighlight lang='rsplus'>
library(survAUC); library(pec)
set.seed(1234)
dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001) # simulWeib was defined above
#    coef exp(coef) se(coef)    z      p
# x -0.744    0.475    0.269 -2.76 0.0057
TR <- dat[1:80,]
TE <- dat[81:100,]
train.fit  <- coxph(Surv(time, status) ~ x, data=TR)
plot(survfit(Surv(time, status) ~ 1, data =TR))

lpnew <- predict(train.fit, newdata=TE)
Surv.rsp <- Surv(TR$time, TR$status)
Surv.rsp.new <- Surv(TE$time, TE$status)
sapply(c(.25, .5, .75),
       function(qtl) UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=quantile(TR$time, qtl)))
# [1] 0.2580193 0.2735142 0.2658271
sapply(c(.25, .5, .75),
       function(qtl) cindex( list(matrix( -lpnew, nrow = nrow(TE))),
         formula = Surv(time, status) ~ x,
         data = TE,
         eval.times = quantile(TR$time, qtl))$AppC$matrix)
# [1] 0.5041490 0.5186850 0.5106746
</syntaxhighlight>
** Four elements are needed for computing the truncated C-statistic using survAUC::UnoC. But it seems pec::cindex does not need the training data.
*** training data including covariates,
*** testing data including covariates,
*** predictor from new data,
*** truncation time/evaluation time/prediction horizon.
** (From ?UnoC) Uno's estimator is based on '''inverse-probability-of-censoring weights''' and '''does not assume a specific working model for deriving the predictor lpnew'''. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
** [https://rdrr.io/cran/survAUC/man/UnoC.html survAUC::UnoC()]. The '''tau''' parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math> Con: no confidence interval estimate for <math>C_\tau</math> nor <math>C_\tau^{(A)} - C_\tau^{(B)}</math>
** [https://www.rdocumentation.org/packages/pec/versions/2.4.9/topics/cindex pec::cindex()]. At each timepoint of '''eval.times''' the c-index is computed using only those pairs where one of the event times is known to be earlier than this timepoint. If eval.times is missing or Inf then the '''largest uncensored''' event time is used. See a more general example from [https://github.com/tagteam/webappendix-cindex-not-proper/blob/bdc0a70778955f36aeb1d6566590a51d1913702f/R/cindex-t-year-risk-supplementary-material.R#L118 here]
** Est.Cval() from the [https://cran.r-project.org/web/packages/survC1/index.html survC1] package (the only package that gives confidence intervals of the C-statistic or deltaC, authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval Inf.Cval()] can compute the confidence interval (perturbation-resampling based) of <math>C_\tau</math> & [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval.Delta Inf.Cval.Delta()] for the difference <math>C_\tau^{(A)} - C_\tau^{(B)}</math>. <syntaxhighlight lang='rsplus'>
library(survAUC)
# require training and predict sets
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age, data=TR)

lpnew <- predict(train.fit, newdata=TE)
Surv.rsp <- Surv(TR$futime, TR$fustat)
Surv.rsp.new <- Surv(TE$futime, TE$fustat)

UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1)
# [1] 0.9761905
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
# [1] 0.7308979
# So the function UnoC() can obtain the exact result as Est.Cval().
# Now try on a new data set. Question: why do we need Surv.rsp?
UnoC(Surv.rsp, Surv.rsp.new, lpnew)
# [1] 0.7333333
UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
# [1] 0.7333333

library(pec)
cindex( list(matrix( -lpnew, nrow = nrow(TE))),
        formula = Surv(futime, fustat) ~ age,
        data = TE, eval.times = 365.25*2)$AppC
# $matrix
# [1] 0.7333333

library(survC1)
Est.Cval(cbind(TE, lpnew), tau = 365.25*2, nofit = TRUE)$Dhat
# [1] 0.7333333

# tau is mandatory (>0), no need to have training and predict sets
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
# [1] 0.9761905
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
# [1] 0.7308979

svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
par(mfrow=c(1,2))
plot(TR$futime, train.fit$linear.predictors, main="training data",
    xlab="time", ylab="predictor")
mtext("C=.731 at t=2", 3)
plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
mtext("C=.733 at t=2", 3)
dev.off()
</syntaxhighlight> [[:File:C stat scatter.svg]]
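A minimal sketch of computing a time-dependent AUC at a fixed horizon on the ovarian data with the survivalROC package (the one-year horizon, the age-only Cox model and method="KM" are arbitrary illustration choices, not from the original text):
{{Pre}}
library(survival)     # ovarian data, coxph()
library(survivalROC)  # survivalROC()

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
roc1yr <- survivalROC(Stime = ovarian$futime, status = ovarian$fustat,
                      marker = predict(fit, type = "lp"),
                      predict.time = 365.25, method = "KM")
roc1yr$AUC   # AUC(t) of the linear predictor at t = 1 year
</pre>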


Comparison of the C-statistic in the two-class and the survival settings:
{| class="wikitable"
! Setting
! Definition
! Interpretation
|-
| Two class
| <math> P(Z_{case} > Z_{control}) </math>
| the probability that a randomly selected '''case''' will have a higher test result (marker value) than a randomly selected '''control'''. It represents a measure of concordance between the marker and the disease status. ROC curves are particularly useful for comparing the discriminatory capacity of different potential biomarkers. (Heagerty &amp; Zheng 2005)
|-
| Survival data
| <math> P(\beta' Z_1 > \beta' Z_2|T_1 < T_2) </math>
| (Roughly speaking) the probability of concordance between predicted and observed responses; the probability that the predictions for a random pair of subjects are concordant with their outcomes. (Heagerty &amp; Zheng 2005). (Precisely) the fraction of pairs in the data where the observation with the longer survival time also has the higher predicted probability of survival.
|}

p95 of Heagerty and Zheng (2005) gives the relationship between the C-statistic and the time-dependent AUC:

<math>
C = P(M_j > M_k | T_j < T_k) = \int_t \mbox{AUC}(t) \, w(t) \, dt
</math>

where ''M'' is the marker value and <math>w(t) = 2 \cdot f(t) \cdot S(t) </math>. When the interest is in the accuracy of a regression model we use <math>M_i = Z_i^T \beta</math>.

The time-dependent AUC is also related to a time-dependent (truncated) C-index: <math> C_\tau = P(M_j > M_k | T_j < T_k, T_j < \tau) = \int_t \mbox{AUC}(t) \, \mbox{w}_{\tau}(t) \, dt  </math> where <math> w_\tau(t) = 2 \cdot f(t) \cdot S(t)/(1-S^2(\tau))</math>.

=== C-statistic limitations ===
See the discussion section of [https://onlinelibrary.wiley.com/doi/full/10.1111/ajt.15132 The relationship between the C‐statistic and the accuracy of program‐specific evaluations] by Wey 2018.
* '''Correctly specified models''' can have low or high C‐statistics. Thus, the C‐statistic cannot identify a correctly specified model.
* The traditional C‐statistic used for survival models is not guaranteed to identify the “best” model for estimating the risk of, for example, 1‐year survival.

Importantly, there exists no measure of risk discrimination or predicted error that can identify a correctly specified model, because they all depend on unknown characteristics of the data. For example, the C‐statistic depends on the variability in recipient‐level risk, while measures of squared error such as the Brier score depend on residual variability.

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3157029/ Analysis of Biomarker Data: logs, odds ratios and ROC curves]. This paper does not consider survival-time data, but it has a useful summary of the C-statistic (interpretation, warnings):
* The C-statistic is relatively '''insensitive''' to the added contribution of a new marker when the two models, with and without the biomarker, estimate risk on a continuous scale. In fact, many new biomarkers provide only a minimal increase in the C-statistic when added to the Framingham model for CHD risk.
* The classical C-statistic assumes that high sensitivity and high specificity are equally desirable. This is not always the case – for example, when screening the general population for a low-prevalence outcome requiring invasive follow-up, high specificity is important, while cancer screening in a high-risk group would emphasize high sensitivity.
* To achieve a noticeable increase in the C-statistic, a biomarker must have a very strong independent association with the event risk (say an OR of 10 or higher per 1 SD increase).
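A small simulation sketch of the insensitivity point above (binary outcome for simplicity; the effect sizes and seed are invented for illustration, so the printed numbers are not from the original text):
{{Pre}}
set.seed(1)
n  <- 2000
x1 <- rnorm(n)                                    # existing risk factor
x2 <- rnorm(n)                                    # new marker
y  <- rbinom(n, 1, plogis(-1 + 1.0*x1 + 0.5*x2))  # outcome depends on both
f0 <- glm(y ~ x1,      family = binomial)
f1 <- glm(y ~ x1 + x2, family = binomial)
c0 <- pROC::auc(pROC::roc(y, fitted(f0)))
c1 <- pROC::auc(pROC::roc(y, fitted(f1)))
c(C.without = c0, C.with = c1, delta = c1 - c0)   # the increase is typically modest
</pre>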


=== Integrated brier score (≈ "mean squared error" of prediction for survival data) ===
[http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19990915/30)18:17/18%3C2529::AID-SIM274%3E3.0.CO;2-5/full Assessment and comparison of prognostic classification schemes for survival data] Graf et al Stat. Med. 1999 2529-45, [https://onlinelibrary.wiley.com/doi/pdf/10.1002/bimj.200610301 Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times] Gerds et al 2006.
* Because point predictions of event-free times are almost inevitably inaccurate and unsatisfactory, the mean squared error of prediction <math>\frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2</math> is not considered further. See Parkes 1972 or [http://www.lcc.uma.es/~jja/recidiva/055.pdf Henderson] 2001.
* Another approach is to predict the survival or event status <math>Y=I(T > \tau)</math> at a fixed time point <math>\tau</math> for a patient with X=x. This leads to the expected Brier score <math>E[(Y - \hat{S}(\tau|X))^2]</math> where <math>\hat{S}(\tau|X)</math> is the estimated event-free probability (survival probability) at time <math>\tau</math> for a subject with predictor variable <math>X</math>.
* The time-dependent Brier score (without censoring)
: <math>
\begin{align}
  \mbox{Brier}(\tau) &= \frac{1}{n}\sum_1^n (I(T_i>\tau) - \hat{S}(\tau|X_i))^2
\end{align}
</math>
* The time-dependent Brier score (with censoring, ''C'' is the censoring variable); each squared residual of the predicted survival probability is weighted by the inverse probability of censoring:
: <math>
\begin{align}
  \mbox{Brier}(\tau) = \frac{1}{n}\sum_i^n\bigg[\frac{(\hat{S}(\tau|X_i))^2 \, I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}(\tau|X_i))^2 \, I(t_i > \tau)}{\hat{S}_C(\tau)}\bigg]
\end{align}
</math>
where <math>\hat{S}_C(t_i) = P(C > t_i)</math>, the Kaplan-Meier estimate of the censoring distribution with <math>t_i</math> the observed survival time of patient ''i''.
The Brier score can be integrated over time <math>t \in [0, \tau]</math> with respect to some weight function W(t), for which a natural choice is <math>(1 - \hat{S}(t))/(1-\hat{S}(\tau))</math>. The lower the integrated Brier score, the better the prediction accuracy.
* Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25%, which corresponds to predicting 50% risk for everyone. See [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194196/pdf/nihms-589222.pdf Evaluating Random Forests for Survival Analysis using Prediction Error Curves] by Mogensen et al J. Stat Software 2012 ([https://cran.r-project.org/web/packages/pec/index.html pec] package). The paper has a good summary of different R packages implementing Brier scores.

R functions
* [https://www.rdocumentation.org/packages/pec/versions/2.5.4 pec] by Thomas A. Gerds. The plot.pec() function can plot '''prediction error curves''' (defined by the Brier score); see an example from [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841879/pdf/IJPH-45-239.pdf#page=5 this paper]. The .632+ bootstrap prediction error curves are from the paper [https://academic.oup.com/bioinformatics/article/25/7/890/211193#2275428 Boosting for high-dimensional time-to-event data with competing risks] 2009.
* [https://www.rdocumentation.org/packages/peperr/versions/1.1-7 peperr] package. The package peperr is an early branch of pec.
* [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/sbrier.score2proba survcomp::sbrier.score2proba()].
* [https://www.rdocumentation.org/packages/ipred/versions/0.9-5/topics/sbrier ipred::sbrier()]

Papers on high dimensional covariates
* Assessment of survival prediction models based on microarray data, Bioinformatics, 2007, vol. 23 (pg. 1768-74)
* Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics, 2008, vol. 9 pg. 14

=== Kendall's tau, Goodman-Kruskal's gamma, Somers' d ===
* https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient
* https://en.wikipedia.org/wiki/Goodman_and_Kruskal%27s_gamma
* https://en.wikipedia.org/wiki/Somers%27_D
* The [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf survival package concordance vignette] has a good summary. Especially '''concordance = (d+1)/2''' where ''d'' is Somers' d.

=== C-statistic applications ===
* [https://www.tandfonline.com/doi/pdf/10.1080/01621459.2018.1482756 Semiparametric Regression Analysis of Multiple Right- and Interval-Censored Events] by Gao et al, JASA 2018
* A c statistic of 0.7–0.8 is considered good, while >0.8 is considered excellent. See [https://www.sciencedirect.com/science/article/pii/S0168827817322481#bb0090 this paper]. 2018
* The C statistic, also termed concordance statistic or c-index, is analogous to the area under the curve and is a global measure of model discrimination. Discrimination refers to the ability of a risk prediction model to separate patients who develop a health outcome from patients who do not develop a health outcome. Effectively, the C statistic is the probability that a model will result in a higher-risk score for a patient who develops the outcomes of interest compared with a patient who does not develop the outcomes of interest. See [https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2703140 the paper] JAMA 2018

=== C-statistic vs LRT comparing nested models ===
1. Binary data
{{Pre}}
# https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
set.seed(666)
x1 = rnorm(1000)           # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000, 1, pr)    # bernoulli response variable
df = data.frame(y=y, x1=x1, x2=x2)
fit <- glm( y~x1+x2, data=df, family="binomial")
summary(fit)
#              Estimate Std. Error z value Pr(>|z|)
# (Intercept)    0.9915     0.1185   8.367   <2e-16 ***
# x1             2.2731     0.1789  12.709   <2e-16 ***
# x2             3.1853     0.2157  14.768   <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1, data=df, family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
#
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2]))
# [1] 0

# Method 1: use ROC package to compute AUC
library(ROC)
set.seed(123)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca( truth=y, data=markers, rule=dxrule.sca )
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca( truth=y, data=markers2, rule=dxrule.sca )
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987


# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
# https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964
</pre>

2. Survival data
{{Pre}}
library(survival)
data(ovarian)
head(ovarian)
range(ovarian$futime) # [1]   59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))

coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)     z      p
# rx  -0.8040    0.4475   0.6320 -1.27 0.2034
# age  0.1473    1.1587   0.0461  3.19 0.0014
#
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12

require(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau=365.25*1
Delta=Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119   0.611   1.077
# Model0 0.659 0.148   0.369   0.949
# Delta  0.185 0.197  -0.201   0.572
</pre>

=== Concordance index/C-index/C-statistic interpretation and R packages ===
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9717?campaign=wolearlyview Pitfalls of the concordance index for survival outcomes] Hartman 2023
* The area under the ROC curve (a plot of sensitivity against 1-specificity) is also called the C-statistic. It is a measure of discrimination generalized for survival data (Harrell 1982 & 2001). The ROC curve is a function of the sensitivity and specificity at each value of the model's measure. (Nancy Cook, 2007)
** The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
** The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
** Perfect discrimination corresponds to a c-statistic of 1 and is achieved if the scores for all the cases are higher than those for all the non-cases.
** The c-statistic is the '''probability that the measure or predicted risk/risk score is higher for a case than for a noncase'''.
** The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
** The C-statistic is a rank-based measure. It describes how well models can rank order cases and noncases, but it is not a function of the actual predicted probabilities.
* [https://stats.stackexchange.com/questions/29815/how-to-interpret-the-output-for-calculating-concordance-index-c-index?noredirect=1&lq=1 How to interpret the output for calculating concordance index (c-index)?] <math>
P(\beta' Z_1 > \beta' Z_2|T_1 < T_2)
</math> where ''T'' is the survival time and ''Z'' is the covariates.
**  It is the '''fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model'''.
** High values mean that your model predicts higher probabilities of survival for higher observed survival times.
** The c index estimates the '''probability of concordance between predicted and observed responses'''. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
* Drawbacks of C-statistics:
** Even though rank indexes such as c are widely applicable and easily interpretable, '''they are not sensitive for detecting small differences in discrimination ability between two models.''' This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
** If the model is correct, the '''likelihood based measures may be more sensitive in detecting differences in prediction ability''', compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
* [https://statisticaloddsandends.wordpress.com/2019/10/26/what-is-harrells-c-index/ What is Harrell’s C-index?] '''C = #concordant pairs / (# concordant pairs + # discordant pairs)'''
* http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
* [https://cran.r-project.org/web/packages/survival/vignettes/concordance.pdf Concordance] vignette from the survival package. It has a good summary of different ways (such as Kendall's tau and Somers' d) to calculate the '''concordance statistic'''. The ''concordance'' function in the survival package can be used with various types of models including logistic and linear regression.
* <span style="color: magenta"> Assessment of Discrimination in Survival Analysis (C-statistics, etc) </span> [https://rstudio-pubs-static.s3.amazonaws.com/3506_36a9509e9d544386bd3e69de30bca608.html webpage]
* [http://gaodoris.blogspot.com/2012/10/5-ways-to-estimate-concordance-index.html 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical?], [https://blog.csdn.net/anshiquanshu/article/details/53438438 计算的5种不同方法及比较] (the five ways to compute it and how they compare). The 5 functions are rcorrcens() from Hmisc, summary()$concordance from survival, survConcordance() from survival, concordance.index() from survcomp and cph() from rms.
** The [https://rdocumentation.org/packages/survival/versions/3.5-5/topics/concordance timewt] option in the survival::concordance() function is only applicable to censored data. In this case '''the default corresponds to Harrell's C statistic''', which is closely related to the Gehan-Wilcoxon test; timewt="S" corresponds to the Peto-Wilcoxon, timewt="S/G" is suggested by Schemper, and timewt="n/G2" corresponds to Uno's C.
** Uno’s C-statistic, which is implemented in the UnoC() function in the survAUC package in R, is a '''censoring-adjusted''' concordance statistic. It is based on inverse-probability-of-censoring weights, which adjust for the fact that censored observations contribute less information to the concordance statistic than uncensored observations. This adjustment helps to '''reduce bias in the concordance statistic due to censoring'''. How these weights are applied: 1. For each observation, estimate the probability of remaining uncensored at its event time (from the Kaplan-Meier estimate of the censoring distribution). 2. Take the inverse of these probabilities to get the weights. 3. Apply these weights when calculating the concordance statistic.
* Summary of R packages to compute the C-statistic (a small cross-check of a few of these functions follows the table):
: {| class="wikitable"
! Package
! Function
! New data?
! Comparison
|-
| survival
| summary(coxph(formula, data))$concordance["C"], Cindex()
| no, yes
| no
|-
| survC1
| [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Est.Cval Est.Cval()]
| no
| Inf.Cval.Delta(, , , tau)
|-
| [https://cran.r-project.org/web/packages/survAUC/index.html survAUC]
| [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/UnoC UnoC()]
| yes
| no
|-
| [https://cran.r-project.org/web/packages/survivalROC/index.html survivalROC]
| survivalROC()
| no
| no
|-
| [https://cran.r-project.org/web/packages/timeROC/index.html timeROC]
| [https://cran.r-project.org/web/packages/timeROC/index.html ?]
| ?
| compare()
|-
| compareC
| [https://cran.r-project.org/web/packages/compareC/index.html ?]
| ?
| compareC()
|-
| survcomp
| [https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/concordance.index concordance.index()]
| ?
| cindex.comp()
|-
| Hmisc
| [https://www.rdocumentation.org/packages/Hmisc/versions/4.2-0/topics/rcorr.cens rcorr.cens()]
| no
| no
|-
| pec
| [https://www.rdocumentation.org/packages/pec/versions/2018.07.26/topics/cindex cindex()]
| yes
| see ?[https://www.rdocumentation.org/packages/pec/versions/2023.04.12/topics/cindex cindex doc] <BR>
with splitMethod parameter<BR>
Note it requires time t <BR>
See the warning that the C-stat evaluated at t is not proper
|}
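A quick cross-check of a few of the functions in the table on the same fitted Cox model (a sketch on the ovarian data; the packages handle ties and censoring weights differently, so small discrepancies are expected):
{{Pre}}
library(survival); library(Hmisc)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

summary(fit)$concordance   # survival package
concordance(fit)           # same statistic, newer interface
# Hmisc wants a predictor where larger values mean longer survival,
# hence the sign flip on the linear predictor
rcorr.cens(-predict(fit, type = "lp"),
           Surv(ovarian$futime, ovarian$fustat))["C Index"]
</pre>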


* [http://r.789695.n4.nabble.com/Comparing-differences-in-AUC-from-2-different-models-td858746.html Comparing differences in AUC from 2 different models]
=== Time dependent ROC curves ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/tdrocc tdrocc()]

=== Calibration ===
[https://onlinelibrary.wiley.com/doi/full/10.1002/sim.8570?campaign=wolearlyview Graphical calibration curves and the integrated calibration index (ICI) for survival models]

=== C-statistics ===
<ul>
<li>For two groups data (one with event, one without), C-statistic has an intuitive interpretation: if two individuals are selected at random, one with the event and one without, then the C-statistic is '''the probability that the model predicts a higher risk for the individual with the event'''. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3157029/ Analysis of Biomarker Data: logs, odds ratios and ROC curves] by Grund 2010
<li>C-statistics is the probability of concordance between predicted and observed survival.
<li>[https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6370 Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach] Kang et al, Stat in Med, 2014. [https://cran.r-project.org/web/packages/compareC/index.html compareC] package for comparing two correlated C-indices with right censored outcomes. [https://support.sas.com/resources/papers/proceedings17/SAS0462-2017.pdf#page=13 Harrell’s Concordance]. The s.e. of the Harrell's C-statistics can be estimated by the delta method. <math>
\begin{align}
C_H = \frac{\sum_{i,j}I(t_i < t_{j}) I(\hat{\beta} Z_i > \hat{\beta} Z_j) \delta_i}{\sum_{i,j} I(t_i < t_j) \delta_i}
\end{align}
</math> converges to a censoring-dependent quantity <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \text{min}(D_1,D_2)).</math> Here ''D'' is the censoring variable.
<li>[http://europepmc.org/articles/PMC3079915 On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data] by Uno et al 2011. Let <math>\tau</math> be a specified time point within the support of the censoring variable. <math>
\begin{align}
C(\tau) = \text{UnoC}(\hat{\pi}, \tau)
        = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
\end{align}
</math>, a measure of the concordance between <math>\hat{\beta} Z_i</math> (the linear predictor) and the survival time. <math>\hat{S}_C(t)</math> is the Kaplan-Meier estimator for the '''censoring distribution/variable/time''' (cf '''event time'''); flipping the definition of <math>\delta_i</math>/considering failure events as "censored" observations and censored observations as "failures" and computing the KM as usual; see p207 of [https://amstat.tandfonline.com/doi/abs/10.1198/000313001317098185#.WtS-pNPwY3F Satten 2001] and the [https://github.com/cran/survC1/blob/master/R/FUN-cstat-ver003b.R#L282 source code from the kmcens()] in survC1. Note that <math>C_\tau</math> converges to <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math>
* <span style="color: red">Uno's estimator does not require the fitted model to be correct </span>. See also table V in the simulation study where the true model is log-normal regression.
* <span style="color: red">Uno's estimator is consistent for a population concordance measure that is free of censoring</span>. See the coverage result in table IV and V from his simulation study. Other forms of C-statistic estimate population parameters that may depend on the current study-specific censoring distribution.
* To accommodate discrete risk scores, in survC1::Est.Cval(), it is using the formula <math>.
\begin{align}
\frac{\sum_{i,i'}[ (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i > \hat{\beta}'Z_{i'}) \delta_i +  0.5 * (\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) I(\hat{\beta}'Z_i = \hat{\beta}'Z_{i'}) \delta_i ]}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i < t_{i'}, t_i < \tau) \delta_i}
\end{align}
</math>. '''Note that pec::cindex() is using the same formula but survAUC::UnoC() does not.'''
* If the specified <math>\tau</math> (tau) is 'too' large such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for <math>\hat{C}_\tau</math> can be quite large.
* Uno mentioned from (page 95) Heagerty and Zheng 2005 that when T is right censoring, one would typically consider <math>C_\tau</math> with a fixed, prespecified follow-up period <math>(0, \tau)</math>.
* Uno also mentioned that when the data is right censored, the censoring variable ''D'' is usually shorter than that of the failure time ''T'', the tail part of the estimated survival function of T is rather unstable. Thus we consider a truncated version of C.
* Heagerty and Zheng (2005) p95 said '''<math>C_\tau</math> is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in <math>(0, \tau)</math>'''.
* real data 1: fit a Cox model. Get risk scores <math>\hat{\beta}'Z</math>. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of <math>C_\tau</math> for different <math>\tau</math>. Compare them with the conventional C-index procedure (Korn).
* real data 1: compute <math>C_\tau</math> for a full model and a reduce model. Compute the difference of them (<math>C_\tau^{(A)} - C_\tau^{(B)} = .01</math>) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). '''Though HDL is quite significant (p=0) with respect to the risk of CV disease but its incremental value evaluated via C-statistics is quite modest.'''
* real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other is gene+age+ER. For each model we can calculate the point and interval estimates of <math>C_\tau</math> for different <math>\tau</math>s.
* simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for <math>\tau</math> equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
<li>[https://academic.oup.com/ndt/article/25/5/1399/1843002 Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination] by Tripepi et al 2010
<li>'''Gonen and Heller''' 2005 concordance index for Cox models
* <math>P(T_2>T_1|g(Z_1)>g(Z_2))</math>. Gonen and Heller's  c statistic which is independent of censoring.
* [https://www.rdocumentation.org/packages/survAUC/versions/1.0-5/topics/GHCI GHCI()] from survAUC package. Strangely only one parameter is needed. survAUC allows for testing data but CPE package does not have an option for testing data.
{{Pre}}
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age,
                    x=TRUE, y=TRUE, method="breslow", data=TR)
lpnew <- predict(train.fit, newdata=TE)     
survAUC::GHCI(lpnew) # .8515
 
lpnew2 <- predict(train.fit, newdata = TR)
survAUC::GHCI(lpnew2) # 0.8079495
 
CPE::phcpe(train.fit, CPE.SE = TRUE)  
# $CPE
# [1] 0.8079495
# $CPE.SE
# [1] 0.0670646


Hmisc::rcorr.cens(-TR$age, Surv(TR$futime, TR$fustat))["C Index"]
# 0.7654321
Hmisc::rcorr.cens(TR$age, Surv(TR$futime, TR$fustat))["C Index"]
# 0.2345679
</pre>

== Prognostic markers vs predictive markers (and other biomarkers) ==
* [https://en.wikipedia.org/wiki/Gene_signature Types of gene signature]
* '''[https://en.wikipedia.org/wiki/Prognosis_marker Prognostic markers]''' (某種疾病的危險因子, i.e. risk factors for a disease) are biomarkers used to measure the progress of a disease in the patient sample. Prognostic markers are useful to stratify the patients into groups, guiding towards precise medicine discovery. ''Prognostic markers inform about the likely disease outcome independent of the treatment received''. See [http://europepmc.org/articles/PMC3888208 Statistical and practical considerations for clinical evaluation of predictive biomarkers] by Mei-Yin Polley et al 2013.
* '''Predictive markers/treatment selection markers''' provide information about likely outcomes with application of specific interventions. See [http://annals.org/aim/fullarticle/746812/measuring-performance-markers-guiding-treatment-decisions Measuring the performance of markers for guiding treatment decisions] by Janes, et al 2011.
* [https://academic.oup.com/jnci/article/107/8/djv157/951084 The Fundamental Difficulty With Evaluating the Accuracy of Biomarkers for Guiding Treatment] Janes 2015, JNCI
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6564 Designing a study to evaluate the benefit of a biomarker for selecting patient treatment] Janes 2015
* Used by the [https://bioconductor.org/packages/release/bioc/vignettes/simulatorZ/inst/doc/simulatorZ-vignette.pdf#page=11 simulatorZ] package
* [https://academic.oup.com/annonc/article/27/12/2160/2736334 Statistical controversies in clinical research: prognostic gene signatures are not (yet) useful in clinical practice] by Michiels 2016.
* The distinction is whether the biomarker's effect depends on the treatment (與treatment有沒有關係來區分):
** Prognostic biomarkers are used to identify patients who are likely to have a good or poor outcome, regardless of the treatment they receive. These biomarkers provide information about the natural history of the disease and can help clinicians predict the likelihood of disease progression, recurrence, or survival.
*** Characteristics of prognostic biomarkers: independent of treatment, disease-related, outcome-focused.
*** Examples of prognostic biomarkers include:
**** Tumor size and grade in cancer,
**** Blood pressure and lipid levels in cardiovascular disease,
**** HbA1c levels in diabetes.
** Predictive biomarkers, on the other hand, are used to identify patients who are likely to respond to a specific treatment or therapy. These biomarkers provide information about the likelihood of treatment success or failure and can help clinicians make informed decisions about treatment strategies.
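A small simulated sketch of how this distinction is usually checked in practice: fit a Cox model with a treatment-by-biomarker interaction, where the biomarker main effect speaks to prognosis and the interaction term to prediction (the effect sizes, censoring rate and seed below are invented for illustration):
{{Pre}}
library(survival)
set.seed(2)
n      <- 400
treat  <- rbinom(n, 1, 0.5)
marker <- rnorm(n)
# simulate a biomarker that is both prognostic (main effect) and predictive (interaction)
hazard <- exp(0.5*marker - 0.3*treat - 0.6*treat*marker)
time   <- rexp(n, rate = 0.1*hazard)
cens   <- rexp(n, rate = 0.05)
status <- as.numeric(time <= cens)
obs    <- pmin(time, cens)

fit <- coxph(Surv(obs, status) ~ treat*marker)
summary(fit)  # 'marker' row: prognostic effect; 'treat:marker' row: predictive effect
</pre>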
<li>'''Uno's C-statistics (2011)''' and some examples using different packages
* Diagnostic biomarker, prognostic biomarker and predictive biomarkers. Disease-related biomarkers and drug-related biomarkers. https://en.wikipedia.org/wiki/Biomarker_(medicine)
* C-statistic may or may not be a decreasing function of '''tau'''. However, AUC(t) may not be decreasing; see Fig 1 of Blanche et al 2018. <syntaxhighlight lang='rsplus'>
* Diagnostic biomarker, prognostic biomarker and predictive biomarkers. https://en.wikipedia.org/wiki/Cancer_biomarker
library(survAUC); library(pec)
* '''Diagnostic''' (確定是某種疾病): diagnose conditions, as in the case of identifying early stage cancers
set.seed(1234)
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.8091 Statistical methods for building better biomarkers of chronic kidney disease] by Pencina et al 2019.
dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001) # simulWebib was defined above
* '''Qualitative and quantitative interactions''' from [https://en.wikipedia.org/wiki/Interaction_(statistics)#Qualitative_and_quantitative_interactions Interaction (statistics)]
#    coef exp(coef) se(coef)    z      p
** A plot of [https://online.stat.psu.edu/stat509/node/120/ Interactions] from STAT 509 Design and Analysis of Clinical Trials.
# x -0.744    0.475    0.269 -2.76 0.0057
* [https://www.ncbi.nlm.nih.gov/books/NBK402284/ Understanding Prognostic versus Predictive Biomarkers] 2016. No author. Not a good example to use. x-axis is time? [[:File:Progpreg.png]]
TR <- dat[1:80,]
** Example of a biomarker that is '''prognostic''' but not '''predictive''' (fig1)
TE <- dat[81:100,]
** A biomarker that is both '''prognostic''' (negatively) and '''predictive''' (fig2)
train.fit  <- coxph(Surv(time, status) ~ x, data=TR)
** '''quantitative''' treatment-by-biomarker statistical interaction (fig3). Treatment effect 方向不變 但是程度不同 in two subsets.
plot(survfit(Surv(time, status) ~ 1, data =TR))
*'''qualitative''' treatment-by-biomarker interaction (fig4). There is a change in direction of treatment effects in two subsets (biomarker-positive/biomarker-negative). In particular, if the treatment has downsides such as toxicity or cost, it is compelling to consider treating only subjects with sufficiently large treatment effects (Vickers et al., 2007; Janes et al., 2013).
 
* From the paper "Case-Only Approach to Identifying Markers Predicting Treatment Effects on the Relative Risk Scale" by Dai, Biometrics 2018
lpnew <- predict(train.fit, newdata=TE)
** For most clinical applications, a '''qualitative''' marker-by- treatment interaction is desired: the marker is useful if it identifies subgroups with treatment effects opposite in sign from the overall treatment effect.
Surv.rsp <- Surv(TR$time, TR$status)
** However, there are clinical applications where a '''quantitative'''– but not qualitative– marker-by-treatment interaction may be sufficient for a marker to be useful for treatment selection.  
Surv.rsp.new <- Surv(TE$time, TE$status)            
** A limitation of the case-only approach is its requirement that the endpoint of RCT has to be binary,
sapply(c(.25, .5, .75),
* [https://ascopubs.org/doi/full/10.1200/JCO.2015.63.3651 Biomarker: Predictive or Prognostic?] by Karla V. Ballman, 2015. Lots of cites. [[:File:Zlj9991056240002.jpeg]]
      function(qtl) UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=quantile(TR$time, qtl)))
** A '''prognostic biomarker''' informs about a likely cancer outcome independent of treatment received. For a '''pure prognostic biomarker''', '''the difference between Treat and STD in biomarker-A group is similar to the difference between Treat and STD in biomarker-B group'''. ''In other words, if a biomarker is prognostic and treatment is efficacious, <span style="color: red">the treatment benefit is similar</span> for biomarker-positive and biomarker-negative patients.'' 比較 HR from 2 groups 就可確定是否 biomarker is prognostic 或是 test beta(biomarker) = 0 from fitting a Cox model with Treat + biomarker + Treat:biomarker. See Fig 1A or Fig 2A for a (pure) prognostic biomarker. Common examples include PSA level for prostate cancer and PIK3CA mutation status of tumors in women with HER2-positive metastatic breast cancer.
# [1] 0.2580193 0.2735142 0.2658271
** A biomarker is '''predictive''' if the treatment effect is different for biomarker-positive patients compared with biomarker-negative patients. (However, the paper never defines what is biomarker-positive; it could be samples with some biomarker >0 or over-expressed). Test beta(biomarker*Treat) =0 from fitting a Cox model with Treat + biomarker + Treat:biomaker 就可以知道是否 predictive biomarker.
sapply(c(.25, .5, .75),
***  A '''qualitative interaction  = pure predictive''' occurs when one biomarker group obtains benefit from treatment and the other group obtains no benefit (or is harmed) from treatment. The definition also appeared at [https://pubmed.ncbi.nlm.nih.gov/4027319/ Testing for qualitative interactions between treatment effects and patient subsets]. 計算 HR from two groups. 如果一組 HR=1 (or >1) 另一組 HR <1, 就是 qualitative interaction. See Fig 1C or Fig 2B.
      function(qtl) cindex( list(matrix( -lpnew, nrow = nrow(TE))),
*** Both groups derived benefit from the treatment, this is a '''quantitative interaction = both prognostic & predictive'''. See Fig 1B or Fig 2C.
        formula = Surv(time, status) ~ x,
** Fig2 A. A purely prognostic biomarker. X-axis is time. Left is biomarker-negative patients and RHS is biomarker-positive patients. ''Biomarker-positive patients have a better survival than Biomarker-negative patients independent of treatment group.''
        data = TE,
** Fig2 B. A purely predictive marker.
        eval.times = quantile(TR$time, qtl))$AppC$matrix)
** Fig2 C. '''A biomarker is both predictive and prognostic.''' This is also an example of a '''quantitative''' interaction.
# [1] 0.5041490 0.5186850 0.5106746
* [https://www.nxtbook.com/nxtbooks/gen/clinical_omics_issue11/index.php#/24 Prognostic vs. Predictive Biomarkers] from Clinical OMICs.
</syntaxhighlight>
* [https://onlinelibrary.wiley.com/doi/full/10.1002/bimj.201900171 The area between ROC curves, a non-parametric method to evaluate a biomarker for patient treatment selection] Blangero 2020. '''quantitative markers'''
* Four elements are needed for computing truncated C-statistic using survAUC::UnoC. But it seems pec::cindex does not need the training data.
* [https://academic.oup.com/bioinformatics/article/34/19/3365/4991984 Distinguishing prognostic and predictive biomarkers: an information theoretic approach] Sechidis 2018, Bioinformatics.
** training data including covariates,
** This uses a more mathematical way to discuss several issues.
** testing data including covariates,
* [https://www.sciencedirect.com/science/article/pii/S1574789107001020 Prognostic factors versus predictive factors: Examples from a clinical trial of erlotinib] Clark 2008.
** predictor from new data,
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03655-7 Construction and optimization of gene expression signatures for prediction of survival in two-arm clinical trials] by Joachim Theilhaber et al 2020
** truncation time/evaluation time/prediction horizon.
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537292/ Evidence for Treatment-by-Biomarker interaction for FDA-approved Oncology Drugs with Required Pharmacogenomic Biomarker Testing] 2017
* (From ?UnoC) Uno's estimator is based on '''inverse-probability-of-censoring weights''' and '''does not assume a specific working model for deriving the predictor lpnew'''. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889758/ Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials] Kang 2018. Simulations.
* [https://rdrr.io/cran/survAUC/man/UnoC.html survAUC::UnoC()]. The '''tau''' parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. <math> P(\beta'Z_1 > \beta' Z_2|T_1 < T_2, T_1 < \tau).</math> Con: no confidence interval estimate for <math>C_\tau</math> nor <math>C_\tau^{(A)} - C_\tau^{(B)}</math>
* [https://academic.oup.com/bioinformatics/article/34/19/3365/4991984 Distinguishing prognostic and predictive biomarkers: an information theoretic approach] Sechidis 2018
* [https://www.rdocumentation.org/packages/pec/versions/2.4.9/topics/cindex pec::cindex()]. At each timepoint of '''eval.times''' the c-index is computed using only those pairs where one of the event times is known to be earlier than this timepoint. If eval.times is missing or Inf then the '''largest uncensored''' event time is used. See a more general example from [https://github.com/tagteam/webappendix-cindex-not-proper/blob/bdc0a70778955f36aeb1d6566590a51d1913702f/R/cindex-t-year-risk-supplementary-material.R#L118 here]
* [https://onlinelibrary.wiley.com/doi/full/10.1002/pst.2002 On evaluating how well a biomarker can predict treatment response with survival data] Mboup 2020
* Est.Cval() from the [https://cran.r-project.org/web/packages/survC1/index.html survC1] package (the only package gives confidence intervals of C-statistic or deltaC, authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval Inf.Cval()] can compute the confidence interval (perturbation-resampling based) of <math>C_\tau</math> & [https://www.rdocumentation.org/packages/survC1/versions/1.0-2/topics/Inf.Cval.Delta Inf.Cval.Delta()] for the difference <math>C_\tau^{(A)} - C_\tau^{(B)}</math>. <syntaxhighlight lang='rsplus'>
* [https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1865167 Estimation of Optimal Individualized Treatment Rules Using a Covariate-Specific Treatment Effect Curve With High-Dimensional Covariates] Guo 2021, jasa
library(survAUC)
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9198 Robust method for optimal treatment decision making based on survival data] Fang 2021
# require training and predict sets
TR <- ovarian[1:16,]
TE <- ovarian[17:26,]
train.fit  <- coxph(Surv(futime, fustat) ~ age, data=TR)


=== Prognostic biomarkers ===
lpnew <- predict(train.fit, newdata=TE)
[https://translational-medicine.biomedcentral.com/articles/10.1186/s12967-021-03180-y Detecting prognostic biomarkers of breast cancer by regularized Cox proportional hazards models] Li 2021. '''prognostic risk score (PRS)''', '''training''', '''discovery dataset''', '''independent''', '''validation''', '''enrichment analysis''', '''C-index''', '''overlap''', '''GEO'''
Surv.rsp <- Surv(TR$futime, TR$fustat)
Surv.rsp.new <- Surv(TE$futime, TE$fustat)            


=== biospear package ===
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1)
Applications based on google scholar on [https://scholar.google.com/scholar?cites=2620077852245240945&as_sdt=20000005&sciodt=0,21&hl=en biospear package paper]:
# [1] 0.9761905
* [https://aacrjournals.org/mct/article/20/8/1454/673280/Gene-Expression-Signature-Correlates-with-Outcomes Gene Expression Signature Correlates with Outcomes in Metastatic Renal Cell Carcinoma Patients Treated with Everolimus Alone or with a Vascular Disrupting Agent] 2021
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2)
* [https://academic.oup.com/neuro-oncology/article/23/5/795/6046196 Transcription factor networks of oligodendrogliomas treated with adjuvant radiotherapy or observation inform prognosis] 2020. “biospear” package to generate a prognostic signature. We describe efforts to integrate clinical genomics to discover '''predictive bio-markers''' that would inform adjuvant treatment decisions in oligodendrogliomas ... A second prognostic signature for patients treated with observation alone was also developed, representing a '''predictive bio-marker''' for patients who would benefit from adjuvant radiotherapy.
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5)
# [1] 0.7308979
UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
# [1] 0.7308979
# So the function UnoC() can obtain the exact result as Est.Cval().
# Now try on a new data set. Question: why do we need Surv.rsp?
UnoC(Surv.rsp, Surv.rsp.new, lpnew)
# [1] 0.7333333
UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
# [1] 0.7333333


=== Treatment Effect ===
library(pec)
* Tian 2014: <math>P(T^1 \geq t_0|z) - P(T^{-1} \geq t_0|z) </math>
cindex( list(matrix( -lpnew, nrow = nrow(TE))),
* Bonetti 2000: Hazard ratio
        formula = Surv(futime, fustat) ~ age,
* Janes 2014: <math> \Delta(Y) = \rho_0(Y) - \rho_1(Y) = P(D=1|T=0, Y) - P(D=1|T=1, Y) </math>
        data = TE, eval.times = 365.25*2)$AppC
** Subjects with <math>\Delta(Y)<0</math> are called '''marker-negative'''; standard/controlled treatment is favored.
# $matrix
** Subjects with <math>\Delta(Y)>0</math> are called '''marker-positive'''; new treatment is favored. The rule is applying treatment onlyto marker-positive patients. And for this portion of patients, the average benefit of treatment is calculated by <math>B_{pos} = E(\Delta(Y) | \Delta(Y) >0)</math>. See p103 on the paper.
# [1] 0.7333333


=== Subgroup identification ===
library(survC1)
[https://onlinelibrary.wiley.com/doi/full/10.1002/bimj.202000331 Identification of subgroups via partial linear regression modeling approach] Zhou 2021
Est.Cval(cbind(TE, lpnew), tau = 365.25*2, nofit = TRUE)$Dhat
# [1] 0.7333333


== Some packages ==
# tau is mandatory (>0), no need to have training and predict sets
=== personalized package ===
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
[https://cran.r-project.org/web/packages/personalized/index.html personalized]: Estimation and Validation Methods for Subgroup Identification and Personalized Medicine. [https://youtu.be/XzoJe2mLj18 Subgroup Identification and Precision Medicine with the {personalized} R Package] (youtube)
# [1] 0.9761905
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
# [1] 0.7308979
Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
# [1] 0.7308979


=== SurvMetrics ===
svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
[https://cran.r-project.org/web/packages/SurvMetrics/index.html SurvMetrics]: Predictive Evaluation Metrics in Survival Analysis
par(mfrow=c(1,2))
plot(TR$futime, train.fit$linear.predictors, main="training data",
    xlab="time", ylab="predictor")
mtext("C=.731 at t=2", 3)
plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
mtext("C=.733 at t=2", 3)
dev.off()
</syntaxhighlight> [[:File:C stat scatter.svg]]
<li>Assessing the prediction accuracy of a cure model for censored survival data with long-term survivors: Application to breast cancer data
<li>The use of ROC for defining the validity of the prognostic index in censored data
<li>[http://circ.ahajournals.org/content/115/7/928 Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction] Cook 2007
<li>'''Evaluating Discrimination of Risk Prediction Models: The C Statistic''' by Pencina et al, JAMA 2015
<li>'''Blanche et al(2018)''' [https://academic.oup.com/biostatistics/advance-article-abstract/doi/10.1093/biostatistics/kxy006/4864363?redirectedFrom=fulltext The c-index is not proper for the evaluation of t-year predicted risks]
* There is a bug on script [https://github.com/tagteam/webappendix-cindex-not-proper/blob/master/R/cindex-t-year-risk-supplementary-material.R#L154 line 154].
* With a fixed prediction horizon, '''the concordance index can be higher for a misspecified model than for a correctly specified model'''. The time-dependent AUC does not have this problem.
* (page 8) ''We now show that when a misspecified prediction model satisfies the ranking condition but the true distribution does not, then it is possible that the misspecified model achieves a misleadingly high c-index.''
* The traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of t-year survival. In contrast, measures of predicted error do not suffer from these limitations. See this paper [https://onlinelibrary.wiley.com/doi/full/10.1111/ajt.15132 The relationship between the C‐statistic and the accuracy of program‐specific evaluations] by Wey et al 2018
* Unfortunately, a drawback of Harrell’s c-index for the time to event and competing risk settings is that the measure does not provide a value specific to the time horizon of prediction (e.g., a 3-year risk). See this paper [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-018-0029-2 The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models] by Kattan and Gerds 2018.
* In Fig 1 Y-axis is concordance (AUC/C) and X-axis is time, the caption said '''The ability of (some variable) to discriminate patients who will either die or be transplanted within the next t-years from those who will be event-free at time t'''.
* The <math>\tau</math> considered here is the maximal end of follow-up time
* AUC (riskRegression::Score()), Uno-C (pec::cindex()), Harrell's C (Hmisc::rcorr.cens() for censored and summary(fit)$concordance for uncensored) are considered.
* The C_IPCW(t) or C_Harrell(t) is obtained by artificially censoring the outcome at time t. So C_IPCW(t) is different from Uno's version.
</li>
</ul>
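Following the Blanche et al points above, a sketch that evaluates the same Cox model at a fixed one-year horizon both ways: a truncated/IPCW concordance via pec::cindex() and a time-dependent AUC via riskRegression::Score() (argument conventions are as I recall them, so check the package manuals):
{{Pre}}
library(survival); library(pec); library(riskRegression)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian, x = TRUE, y = TRUE)

# concordance restricted to pairs formed before t = 1 year
cindex(list(Cox = fit), formula = Surv(futime, fustat) ~ 1,
       data = ovarian, eval.times = 365.25)

# time-dependent AUC at the same horizon
Score(list(Cox = fit), formula = Surv(futime, fustat) ~ 1,
      data = ovarian, times = 365.25, metrics = "auc")
</pre>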


=== SurvBenchmark ===
[https://www.biorxiv.org/content/10.1101/2021.07.11.451967v1 SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data]

== Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect ==
[https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9132?campaign=wolearlyview Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect] 2021

== Quantifying treatment differences in confirmatory trials under non-proportional hazards ==
[https://arxiv.org/abs/1908.10502 Quantifying treatment differences in confirmatory trials under non-proportional hazards]

The source code is on [https://github.com/jjimenezm1989/Quantifying-treatment-differences-in-confirmatory-trials-under-non-proportional-hazards Github].

== Computation for gene expression (microarray) data ==
* [https://github.com/cran/survival survival] package (basic package, not designed for gene expression)
* [https://github.com/cran/GSA/blob/master/R/GSA.morefuns.R gsa] package
* [https://github.com/cran/samr/blob/master/R/samr.morefuns.R samr] package
* [https://github.com/cran/pamr/blob/master/R/pamr.survfuns.R pamr] package
* [http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4 (Bioconductor) genefilter], [https://github.com/Bioconductor/genefilter/blob/master/R/all.R source]. genefilter() & coxfilter(). apply() was used.
* [https://github.com/cran/survcomp/blob/master/R/logpl.R logpl()] from the [http://www.bioconductor.org/packages/release/bioc/vignettes/survcomp/inst/doc/survcomp.pdf#page=24 survcomp] package

Timing comparison of several ways to fit one Cox model per gene:


=== C-statistic vs LRT comparing nested models ===
1. Binary data
{{Pre}}
# https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
set.seed(666)
x1 = rnorm(1000)           # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
df = data.frame(y=y,x1=x1,x2=x2)
fit <- glm( y~x1+x2,data=df,family="binomial")
summary(fit)
#             Estimate Std. Error z value Pr(>|z|)
# (Intercept)   0.9915     0.1185   8.367   <2e-16 ***
# x1            2.2731     0.1789  12.709   <2e-16 ***
# x2            3.1853     0.2157  14.768   <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1,data=df,family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
#
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2]))
# [1] 0

# Method 1: use ROC package to compute AUC
library(ROC)
set.seed(123)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca( truth=y, data=markers, rule=dxrule.sca )
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca( truth=y, data=markers2, rule=dxrule.sca )
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987

# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
# https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964
</pre>

2. Survival data
{{Pre}}
library(survival)
data(ovarian)
head(ovarian)
range(ovarian$futime) # [1]  59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))
 
coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)    z      p
# rx  -0.8040    0.4475  0.6320 -1.27 0.2034
# age  0.1473    1.1587  0.0461  3.19 0.0014
#
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12
 
require(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau=365.25*1
Delta=Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119  0.611  1.077
# Model0 0.659 0.148  0.369  0.949
# Delta  0.185 0.197  -0.201  0.572
</pre>
 
* [http://r.789695.n4.nabble.com/Comparing-differences-in-AUC-from-2-different-models-td858746.html Comparing differences in AUC from 2 different models]
 
=== Time dependent ROC curves ===
[https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/tdrocc tdrocc()]
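A minimal sketch of a time-dependent ROC curve using the survivalROC package and its mayo data (an alternative illustration, not the survcomp::tdrocc() call above); the 5-year horizon is arbitrary:
{{Pre}}
library(survivalROC)
data(mayo)
roc5 <- survivalROC(Stime = mayo$time, status = mayo$censor,
                    marker = mayo$mayoscore5, predict.time = 365.25*5, method = "KM")
roc5$AUC    # time-dependent AUC at 5 years
plot(roc5$FP, roc5$TP, type = "l",
     xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)
</pre>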
 
=== Calibration ===
[https://onlinelibrary.wiley.com/doi/full/10.1002/sim.8570?campaign=wolearlyview Graphical calibration curves and the integrated calibration index (ICI) for survival models]
 
== Prognostic markers vs predictive markers (and other biomarkers) ==
* [https://en.wikipedia.org/wiki/Gene_signature Types of gene signature]
* Distinguished by whether or not the marker's effect is related to the treatment received:
** Prognostic biomarkers are used to identify patients who are likely to have a good or poor outcome, regardless of the treatment they receive. These biomarkers provide information about the natural history of the disease and can help clinicians predict the likelihood of disease progression, recurrence, or survival.
*** Characteristics of prognostic biomarkers:
**** Independent of treatment:
**** Disease-related:
**** Outcome-focused:
*** Examples of prognostic biomarkers include:
**** Tumor size and grade in cancer,
**** Blood pressure and lipid levels in cardiovascular disease,
**** HbA1c levels in diabetes.
** Predictive biomarkers, on the other hand, are used to identify patients who are likely to respond to a specific treatment or therapy. These biomarkers provide information about the likelihood of treatment success or failure and can help clinicians make informed decisions about treatment strategies.
*** Characteristics of predictive biomarkers:
**** Treatment-specific:
**** Response-focused:
**** Therapy-related:
*** Examples of predictive biomarkers include:
**** HER2 status in breast cancer for trastuzumab therapy,
**** EGFR mutations in non-small cell lung cancer for tyrosine kinase inhibitors,
**** PD-L1 expression in cancer for immune checkpoint inhibitors.
* '''[https://en.wikipedia.org/wiki/Prognosis_marker Prognostic markers]''' (risk factors for the course of a disease) are biomarkers used to measure the progress of a disease in the patient sample. Prognostic markers are useful to stratify the patients into groups, guiding towards precise medicine discovery. ''Prognostic markers inform about likely disease outcome independent of the treatment received''. See [http://europepmc.org/articles/PMC3888208 Statistical and practical considerations for clinical evaluation of predictive biomarkers] by Mei-Yin Polley et al 2013.
* '''Predictive marker/treatment selection markers''' provide information about likely outcomes with application of specific interventions. See [http://annals.org/aim/fullarticle/746812/measuring-performance-markers-guiding-treatment-decisions Measuring the performance of markers for guiding treatment decisions] by Janes, et al 2011.
* [https://academic.oup.com/jnci/article/107/8/djv157/951084 The Fundamental Difficulty With Evaluating the Accuracy of Biomarkers for Guiding Treatment] Janes 2015 , jnci
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6564 Designing a study to evaluate the benefit of a biomarker for selecting patient treatment] Janes 2015
* [https://academic.oup.com/annonc/article/27/12/2160/2736334 Statistical controversies in clinical research: prognostic gene signatures are not (yet) useful in clinical practice] by Michiels 2016.
* '''Prognostic (the likely course of the disease) ability''' = the capacity to predict the likely progression or outcome of a disease.
* Diagnostic biomarker, prognostic biomarker and predictive biomarkers. Disease-related biomarkers and drug-related biomarkers. https://en.wikipedia.org/wiki/Biomarker_(medicine)
* Diagnostic biomarker, prognostic biomarker and predictive biomarkers. https://en.wikipedia.org/wiki/Cancer_biomarker
* '''Diagnostic''' (confirming that a patient has a particular disease): diagnose conditions, as in the case of identifying early-stage cancers
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.8091 Statistical methods for building better biomarkers of chronic kidney disease] by Pencina et al 2019.
* '''Qualitative and quantitative interactions''' from [https://en.wikipedia.org/wiki/Interaction_(statistics)#Qualitative_and_quantitative_interactions Interaction (statistics)]
** A plot of [https://online.stat.psu.edu/stat509/node/120/ Interactions] from STAT 509 Design and Analysis of Clinical Trials.
* [https://www.ncbi.nlm.nih.gov/books/NBK402284/ Understanding Prognostic versus Predictive Biomarkers] 2016. No author. Not a good example to use. x-axis is time? [[:File:Progpreg.png]]
** Example of a biomarker that is '''prognostic''' but not '''predictive''' (fig1)
** A biomarker that is both '''prognostic''' (negatively) and '''predictive''' (fig2)
** '''quantitative''' treatment-by-biomarker statistical interaction (fig3). The direction of the treatment effect is unchanged, but its magnitude differs between the two subsets.
**  '''qualitative''' treatment-by-biomarker interaction (fig4). There is a change in direction of treatment effects in two subsets (biomarker-positive/biomarker-negative). In particular, if the treatment has downsides such as toxicity or cost, it is compelling to consider treating only subjects with sufficiently large treatment effects (Vickers et al., 2007; Janes et al., 2013).
* From the paper "Case-Only Approach to Identifying Markers Predicting Treatment Effects on the Relative Risk Scale" by Dai, Biometrics 2018
** For most clinical applications, a '''qualitative''' marker-by- treatment interaction is desired: the marker is useful if it identifies subgroups with treatment effects opposite in sign from the overall treatment effect.
** However, there are clinical applications where a '''quantitative'''– but not qualitative– marker-by-treatment interaction may be sufficient for a marker to be useful for treatment selection.
** A limitation of the case-only approach is its requirement that the endpoint of RCT has to be binary,
* [https://ascopubs.org/doi/full/10.1200/JCO.2015.63.3651 Biomarker: Predictive or Prognostic?] by Karla V. Ballman, 2015. Lots of cites. [[:File:Zlj9991056240002.jpeg]]
** A '''prognostic biomarker''' informs about a likely cancer outcome independent of treatment received. For a '''pure prognostic biomarker''', '''the difference between Treat and STD in biomarker-A group is similar to the difference between Treat and STD in biomarker-B group'''.  ''In other words, if a biomarker is prognostic and treatment is efficacious, <span style="color: red">the treatment benefit is similar</span> for biomarker-positive and biomarker-negative patients.'' Comparing the HRs from the 2 biomarker groups, or testing beta(biomarker) = 0 from fitting a Cox model with Treat + biomarker + Treat:biomarker, determines whether the biomarker is prognostic. See Fig 1A or Fig 2A for a (pure) prognostic biomarker. Common examples include PSA level for prostate cancer and PIK3CA mutation status of tumors in women with HER2-positive metastatic breast cancer.
** A biomarker is '''predictive''' if the treatment effect is different for biomarker-positive patients compared with biomarker-negative patients. (However, the paper never defines what biomarker-positive means; it could be samples with the biomarker > 0 or over-expressed.) Testing beta(biomarker:Treat) = 0 from fitting a Cox model with Treat + biomarker + Treat:biomarker tells us whether the biomarker is predictive (a minimal Cox-model sketch follows this list).
*** A '''qualitative interaction = pure predictive''' occurs when one biomarker group obtains benefit from treatment and the other group obtains no benefit (or is harmed) from treatment. The definition also appeared at [https://pubmed.ncbi.nlm.nih.gov/4027319/ Testing for qualitative interactions between treatment effects and patient subsets]. Compute the HR in each of the two groups; if one group has HR = 1 (or > 1) and the other has HR < 1, the interaction is qualitative. See Fig 1C or Fig 2B.
*** Both groups derived benefit from the treatment, this is a '''quantitative interaction = both prognostic & predictive'''. See Fig 1B or Fig 2C.
** Fig2 A. A purely prognostic biomarker. X-axis is time. Left is biomarker-negative patients and RHS is biomarker-positive patients. ''Biomarker-positive patients have a better survival than Biomarker-negative patients independent of treatment group.''
** Fig2 B. A purely predictive marker.
** Fig2 C. '''A biomarker is both predictive and prognostic.''' This is also an example of a '''quantitative''' interaction.
* [https://www.nxtbook.com/nxtbooks/gen/clinical_omics_issue11/index.php#/24 Prognostic vs. Predictive Biomarkers] from Clinical OMICs.
* [https://onlinelibrary.wiley.com/doi/full/10.1002/bimj.201900171 The area between ROC curves, a non-parametric method to evaluate a biomarker for patient treatment selection] Blangero 2020. '''quantitative markers'''
* [https://academic.oup.com/bioinformatics/article/34/19/3365/4991984 Distinguishing prognostic and predictive biomarkers: an information theoretic approach] Sechidis 2018, Bioinformatics.
** This uses a more mathematical way to discuss several issues.
* [https://www.sciencedirect.com/science/article/pii/S1574789107001020 Prognostic factors versus predictive factors: Examples from a clinical trial of erlotinib] Clark 2008.
* [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-020-03655-7 Construction and optimization of gene expression signatures for prediction of survival in two-arm clinical trials] by Joachim Theilhaber et al 2020
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537292/ Evidence for Treatment-by-Biomarker interaction for FDA-approved Oncology Drugs with Required Pharmacogenomic Biomarker Testing] 2017
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889758/ Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials] Kang 2018. Simulations.
* [https://onlinelibrary.wiley.com/doi/full/10.1002/pst.2002 On evaluating how well a biomarker can predict treatment response with survival data] Mboup 2020
* [https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1865167 Estimation of Optimal Individualized Treatment Rules Using a Covariate-Specific Treatment Effect Curve With High-Dimensional Covariates] Guo 2021, jasa
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9198 Robust method for optimal treatment decision making based on survival data] Fang 2021
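A minimal sketch (simulated data; my own illustration, not code from any paper above) of the two Cox-model tests described in the Ballman 2015 bullets: the biomarker main effect addresses the prognostic question, and the treatment-by-biomarker interaction addresses the predictive question.
{{Pre}}
library(survival)
set.seed(1)
n <- 400
treat  <- rbinom(n, 1, 0.5)                          # randomized treatment
marker <- rbinom(n, 1, 0.5)                          # 1 = biomarker-positive
haz    <- 0.1 * exp(-0.5*marker - 0.7*treat*marker)  # prognostic + predictive effects built in
tt     <- rexp(n, rate = haz)                        # event times
cc     <- rexp(n, rate = 0.05)                       # censoring times
time   <- pmin(tt, cc); status <- as.numeric(tt <= cc)
fit <- coxph(Surv(time, status) ~ treat * marker)
summary(fit)
# beta(marker)       -> prognostic effect (outcome differs by biomarker regardless of treatment)
# beta(treat:marker) -> predictive effect (treatment benefit differs between biomarker groups)
</pre>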
 
=== Prognostic biomarkers ===
[https://translational-medicine.biomedcentral.com/articles/10.1186/s12967-021-03180-y Detecting prognostic biomarkers of breast cancer by regularized Cox proportional hazards models] Li 2021. '''prognostic risk score (PRS)''', '''training''', '''discovery dataset''', '''independent''', '''validation''', '''enrichment analysis''', '''C-index''', '''overlap''', '''GEO'''
 
=== biospear package ===
Applications based on google scholar on [https://scholar.google.com/scholar?cites=2620077852245240945&as_sdt=20000005&sciodt=0,21&hl=en biospear package paper]:
* [https://aacrjournals.org/mct/article/20/8/1454/673280/Gene-Expression-Signature-Correlates-with-Outcomes Gene Expression Signature Correlates with Outcomes in Metastatic Renal Cell Carcinoma Patients Treated with Everolimus Alone or with a Vascular Disrupting Agent] 2021
* [https://academic.oup.com/neuro-oncology/article/23/5/795/6046196 Transcription factor networks of oligodendrogliomas treated with adjuvant radiotherapy or observation inform prognosis] 2020. “biospear” package to generate a prognostic signature. We describe efforts to integrate clinical genomics to discover '''predictive bio-markers''' that would inform adjuvant treatment decisions in oligodendrogliomas ... A second prognostic signature for patients treated with observation alone was also developed, representing a '''predictive bio-marker''' for patients who would benefit from adjuvant radiotherapy.
 
=== Treatment Effect ===
* Tian 2014: <math>P(T^1 \geq t_0|z) - P(T^{-1} \geq t_0|z) </math>
* Bonetti 2000: Hazard ratio
* Janes 2014: <math> \Delta(Y) = \rho_0(Y) - \rho_1(Y) = P(D=1|T=0, Y) - P(D=1|T=1, Y) </math>
** Subjects with <math>\Delta(Y)<0</math> are called '''marker-negative'''; standard/controlled treatment is favored.
** Subjects with <math>\Delta(Y)>0</math> are called '''marker-positive'''; the new treatment is favored. The rule is to apply treatment only to marker-positive patients, and for this portion of patients the average benefit of treatment is <math>B_{pos} = E(\Delta(Y) | \Delta(Y) >0)</math>. See p. 103 of the paper (a minimal sketch follows this list).
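A minimal sketch (simulated binary outcome; my own illustration of the Janes 2014 quantities above): estimate <math>\rho_0(Y)</math> and <math>\rho_1(Y)</math> with arm-specific logistic regressions, form <math>\Delta(Y)</math>, and average it over the marker-positive subjects to get <math>B_{pos}</math>.
{{Pre}}
set.seed(1)
n  <- 2000
Y  <- rnorm(n)                                   # marker
Tr <- rbinom(n, 1, 0.5)                          # randomized treatment
D  <- rbinom(n, 1, plogis(-1 + Y - 1.5*Tr*Y))    # adverse outcome
rho0 <- glm(D ~ Y, family = binomial, subset = (Tr == 0))  # estimates P(D=1 | T=0, Y)
rho1 <- glm(D ~ Y, family = binomial, subset = (Tr == 1))  # estimates P(D=1 | T=1, Y)
Delta <- predict(rho0, data.frame(Y), type = "response") -
         predict(rho1, data.frame(Y), type = "response")
mean(Delta > 0)                   # proportion of marker-positive subjects
B_pos <- mean(Delta[Delta > 0])   # average treatment benefit among marker-positives
B_pos
</pre>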
 
=== Subgroup identification ===
* STEPP/subpopulation treatment effect pattern plot: [https://ascopubs.org/doi/abs/10.1200/JCO.2009.27.9182?journalCode=jco Evaluation of treatment-effect heterogeneity using biomarkers measured on a continuous scale] Lazar 2010 ([https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2988642/ NIH pubmed]). [https://cran.r-project.org/web/packages/stepp/ CRAN] R package.
** Some random articles citing the above paper
** Some disadvantages are discussed in [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-022-01516-w Investigating treatment-effect modification by a continuous covariate in IPD meta-analysis: an approach using fractional polynomials] 2022
** [https://link.springer.com/article/10.1007/s12282-022-01364-y Prognostic values of clinical and molecular features in HER2 low-breast cancer with hormonal receptor overexpression: features of HER2-low breast cancer] 2022
** [https://link.springer.com/article/10.1186/s12885-021-08521-0 The platelet to lymphocyte ratio is a potential inflammatory marker predicting the effects of adjuvant chemotherapy in patients with stage II colorectal cancer] 2021
* [https://onlinelibrary.wiley.com/doi/full/10.1002/sim.7064 Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials] Lipkovich 2016
* [https://onlinelibrary.wiley.com/doi/full/10.1002/bimj.202000331 Identification of subgroups via partial linear regression modeling approach] Zhou 2021
 
== Some packages ==
=== personalized package ===
[https://cran.r-project.org/web/packages/personalized/index.html personalized]: Estimation and Validation Methods for Subgroup Identification and Personalized Medicine. [https://youtu.be/XzoJe2mLj18 Subgroup Identification and Precision Medicine with the {personalized} R Package] (youtube)
 
=== SurvMetrics ===
[https://cran.r-project.org/web/packages/SurvMetrics/index.html SurvMetrics]: Predictive Evaluation Metrics in Survival Analysis
 
=== SurvBenchmark ===
[https://www.biorxiv.org/content/10.1101/2021.07.11.451967v1 SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data]
 
== Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect ==
[https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9132?campaign=wolearlyview Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect] 2021
 
== Quantifying treatment differences in confirmatory trials under non-proportional hazards ==
[https://arxiv.org/abs/1908.10502 Quantifying treatment differences in confirmatory trials under non-proportional hazards]
 
The source code in [https://github.com/jjimenezm1989/Quantifying-treatment-differences-in-confirmatory-trials-under-non-proportional-hazards Github].
 
== Computation for gene expression (microarray) data ==
* [https://github.com/cran/survival survival] package (basic package, not designed for gene expression)
* [https://github.com/cran/GSA/blob/master/R/GSA.morefuns.R gsa] package
* [https://github.com/cran/samr/blob/master/R/samr.morefuns.R samr] package
* [https://github.com/cran/pamr/blob/master/R/pamr.survfuns.R pamr] package
* [http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4 (Bioconductor) genefilter], [https://github.com/Bioconductor/genefilter/blob/master/R/all.R source]. genefilter() & coxfilter(). apply() was used.
* [https://github.com/cran/survcomp/blob/master/R/logpl.R logpl()] from [http://www.bioconductor.org/packages/release/bioc/vignettes/survcomp/inst/doc/survcomp.pdf#page=24 survcomp] package
 
{{Pre}}
n <- 500
g <- 10000
y <- rexp(n)
status <- ifelse(runif(n) < .7, 1, 0)
x <- matrix(rnorm(n*g), nr=g)
treat <- rbinom(n, 1, .5)
# Method 1
system.time(for(i in 1:g) coxph(Surv(y, status) ~ x[i, ] + treat + treat:x[i, ]))
# 28 seconds
 
# Method 2
system.time(apply(x, 1, function(z) coxph(Surv(y, status) ~ z + treat + treat:z)))
# 29 seconds
 
# Method 3 (Windows)
dyn.load("C:/Program Files (x86)/ArrayTools/Fortran/surv64.dll")  
tme <- y
sorted <- order(tme)
stime <- as.double(tme[sorted])
sstat <- as.integer(status[sorted])
x1 <- x[,sorted]
imodel <- 1  # imodel=1, fit univariate gene expression. Return p-values vector.
nvar <- 1
system.time(outx1 <- .Fortran("coxfitc", as.integer(n), as.integer(g), as.integer(0),
                stime, sstat, t(x1), as.double(0), as.integer(imodel),
                double(2*n+2*nvar*nvar+3*nvar), logdiff = double(g)))
# 1.69 seconds on R i386
# 0.79 seconds on R x64
 
# method 4: GSA
genenames=paste("g", 1:g, sep="")
#create some random gene sets
genesets=vector("list", 50)
for(i in 1:50){
   genesets[[i]]=paste("g", sample(1:g,size=30), sep="")
}
geneset.names=paste("set",as.character(1:50),sep="")
debug(GSA.func)
GSA.obj<-GSA(x,y, genenames=genenames, genesets=genesets,   
             censoring.status=status,
             resp.type="Survival", nperms=1)
Browse[3]> str(catalog.unique)
  int [1:1401] 7943 227 4069 3011 8402 1586 2443 2777 673 9021 ...
Browse[3]> system.time(cox.func(x[catalog.unique,], y, censoring.status, s0=0))
# 1.3 seconds
Browse[2]> system.time(cox.func(x, y, censoring.status, s0=0))
# 7.259 seconds
</pre>
 
 
== Single gene vs multi-gene survival models ==
[https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2430-9 A comparative study of survival models for breast cancer prognostication revisited: the benefits of multi-gene models] by Grzadkowski et al 2018. To assess the concordance of biomarker performance, the authors use the '''Concordance Correlation Coefficient (CCC)''' as introduced by Lin (1989) and further amended in Lin (2000).
 
== Random papers using C-index, AUC or Brier scores ==
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841879/pdf/IJPH-45-239.pdf Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data] 2016. AUC, Brier scores and C-index were used.

=== Analysis of clinical prediction models registered with clinicaltrials.gov ===
* [https://www.jclinepi.com/article/S0895-4356%2824%2900188-4/fulltext?s=09 Planned but ever published? A retrospective analysis of clinical prediction model studies registered on clinicaltrials.gov since 2000], R code in [https://github.com/nicolemwhite/prediction_clinicaltrials Github].
 
== survex package ==
[https://medium.com/responsibleml/survex-model-agnostic-explainability-for-survival-analysis-94444e6ce83d survex: model-agnostic explainability for survival analysis]
 
== More, Web tools ==
* This pdf file from [http://data.princeton.edu/pop509/NonParametricSurvival.pdf data.princeton.edu] contains estimation, hypothesis testing, time varying covariates and baseline survival estimation.
* [http://www.petrkeil.com/?p=2425 Survival analysis: basic terms, the exponential model, censoring, examples in R and JAGS]
* [https://stats.stackexchange.com/questions/36015/prediction-in-cox-regression Survival analysis is not commonly used to predict future times to an event]. Cox model would require specification of the baseline hazard function.
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3774754/ SurvExpress]: An Online Biomarker Validation Tool and Database for Cancer Gene Expression Data Using Survival Analysis 2013. JSP, Javascript, MySQL, Ajax, R & Apache. Public datasets were obtained from GEO, TCGA, ArrayExpress.
 
= Others =
== Landmark analysis ==
* A landmark analysis for survival data designates a specific time point during the follow-up period, known as the '''landmark time''', and analyzes only those subjects who have survived until the landmark time (a minimal R sketch follows this list). [https://link.springer.com/article/10.1007/s12350-019-01624-z Landmark analysis: A primer].
** This method is often used to estimate survival probabilities in an '''unbiased''' way, conditional on the group membership of patients at the landmark time. A small number of index time points are chosen and survival analysis is done on only those subjects who remain event-free at the specified index times and for follow-up beyond the index times. [https://www.ahajournals.org/doi/pdf/10.1161/circoutcomes.110.957951 Landmark Analysis at the 25-Year Landmark Point] 2011 & [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-019-0057-6 A comparison of landmark methods and time-dependent ROC methods to evaluate the time-varying performance of prognostic markers for survival outcomes] 2019.
** Landmark analysis can help avoid certain types of '''bias''', such as the guarantee-time bias or the immortal time bias. It's particularly useful when patient predictions are needed at select times, and it facilitates evaluating trends in performance over time.
** In the context of survival data, which consist of a distinct start time and end time, landmark analysis provides a valuable tool for understanding and '''predicting future disease events'''. It's often used in clinical practice to guide medical decision-making.
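A minimal sketch of a landmark analysis (my own illustration with the lung data from the survival package; the 90-day landmark is arbitrary): keep only the subjects still event-free at the landmark and restart the clock there.
{{Pre}}
library(survival)
landmark <- 90                                   # landmark time in days (arbitrary)
lm_dat <- subset(lung, time >= landmark)         # subjects still at risk at the landmark
lm_dat$time2 <- lm_dat$time - landmark           # time measured from the landmark
fit <- survfit(Surv(time2, status) ~ sex, data = lm_dat)
plot(fit, col = 1:2, xlab = "Days since landmark", ylab = "Survival probability")
</pre>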
 
== TCGA data ==
* [https://ramaanathan.github.io/SurvivalAnalysis Survival Analysis of Breast Cancer Data from the TCGA Dataset]
* [https://bioconnector.github.io/workshops/r-survival.html Survival Analysis with R]
* [https://pubmed.ncbi.nlm.nih.gov/34488031/ SmulTCan]: A Shiny application for multivariable survival analysis of TCGA data with gene sets, 2021
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9051584/ GNOSIS]: an R Shiny app supporting cancer genomics survival analysis with cBioPortal, 2022
 
== Machine learning ==
* [https://academic.oup.com/bioinformatics/article/37/17/2789/6125361 mlr3proba: an R package for machine learning in survival analysis]
* [https://github.com/raphaels1/survivalmodels survivalmodels] Currently implemented are five neural networks from the Python packages pycox, DNNSurv, and the Akritas non-parametric conditional estimator. Further updates will include implementations of novel survival models.
* [https://www.biorxiv.org/content/10.1101/2022.10.25.513678v1 SurvivalML]: an integrative platform for the discovery and exploration of prognostic models in multi-center cancer cohorts 2022
 
== Constrained randomization ==
[https://www.rdatagen.net/post/2020-12-22-constrained-randomization-to-evaulate-the-vaccine-rollout-in-nursing-homes/ Constrained randomization to evaulate the vaccine rollout in nursing homes]
 
== Principles and Practice of Clinical Research ==
* [https://ocr.od.nih.gov/courses/ippcrRegistration.html Introduction to the Principles and Practice of Clinical Research (IPPCR)]
* [https://www.youtube.com/playlist?list=PLifjiEBb2Zq7ruvGhgyD2jDsgoaD3p6d4 Videos]
 
== Clinical trials ==
=== Statistical Thinking in Clinical Trials ===
 
=== Fundamental Statistical Concepts in Clinical Trials and Diagnostic Testing ===
[https://jnm.snmjournals.org/content/62/6/757.long JNM 2021]
 
=== Statistical Monitoring of Clinical Trials: A Unified Approach ===
[https://archive.org/details/statistical-monitoring-of-clinical-trials-a-unified-approach/page/252/mode/2up ebook] on archive.org.


== Principles of Clinical Pharmacology ==
== Progressive disease, stable disease ==
* [https://en.wikipedia.org/wiki/Response_evaluation_criteria_in_solid_tumors#Response_criteria RECIST/Response evaluation criteria in solid tumors]: 
** '''CR Complete response''': This is the best response. It means that all signs of the cancer have disappeared in the tests. There’s no evidence of disease present.
** PR Partial response: This means the cancer has significantly reduced in size but is still detectable.
** SD Stable disease: This means the cancer has neither grown nor shrunk. The disease is stable. SD may or may not be considered as a responder. In some cases, maintaining stable disease might be seen as a good response, especially for cancers that are typically very aggressive or hard to treat.
** '''PD Progressive disease''': This is the worst response. It means the cancer has grown or spread to other parts of the body.
* [https://www.verywellhealth.com/definition-of-stable-disease-2249195 Stable Disease in Cancer Treatment]. '''Stable disease''' is defined as being a little better than '''progressive disease''' (in which a tumor has increased in size by at least 20%) and a little worse than a '''partial response''' (wherein a tumor has shrunk by at least 50%).
* Ideally a drug trial will return results like CR or PR. Responses of SD or PD may indicate that a drug is not an effective treatment for cancer. https://callaix.com/recist

Censoring

Sample schemes of incomplete data, Censoring in Clinical Trials: Review of Survival Analysis Techniques

  • Type I censoring: the censoring time is fixed
  • Type II censoring
  • Random censoring
    • Right censoring
    • Left censoring
  • Interval censoring
  • Truncation

The most common is called right censoring and occurs when a participant does not have the event of interest during the study and thus their last observed follow-up time is less than their time to event. This can occur when a participant drops out before the study ends or when a participant is event free at the end of the observation period.
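A minimal sketch of how right-censored observations are represented by survival::Surv(): a "+" marks a censored time.

library(survival)
time   <- c(5, 8, 12, 20)
status <- c(1, 0, 1, 0)   # 1 = event observed, 0 = right-censored
Surv(time, status)
# [1]  5   8+ 12  20+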

Definitions of common terms in survival analysis

  • Event: Death, disease occurrence, disease recurrence, recovery, or other experience of interest
  • Time: The time from the beginning of an observation period (such as surgery or beginning treatment) to (i) an event, or (ii) end of the study, or (iii) loss of contact or withdrawal from the study.
  • Censoring / Censored observation: If a subject does not have an event during the observation time, they are described as censored. The subject is censored in the sense that nothing is observed or known about that subject after the time of censoring. A censored subject may or may not have an event after the end of observation time.

In R, "status" should be called event status. status = 1 means event occurred. status = 0 means no event (censored). Sometimes the status variable has more than 2 states. We can uses "status != 0" to replace "status" in Surv() function.

  • status=0/1/2 for censored, transplant and dead in survival::pbc data.
  • status=0/1/2 for censored, relapse and dead in randomForestSRC::follic data.
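A minimal sketch of the last point, using survival::pbc (status = 0 censored, 1 transplant, 2 dead):

library(survival)
fit_any   <- survfit(Surv(time, status != 0) ~ 1, data = pbc)  # transplant or death counted as the event
fit_death <- survfit(Surv(time, status == 2) ~ 1, data = pbc)  # death only counted as the event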

How to explore survival data

https://en.wikipedia.org/wiki/Survival_analysis#Survival_analysis_in_R

  • Create graph of length of time that each subject was in the study
library(survival)
# sort the aml data by time
aml <- aml[order(aml$time),]
with(aml, plot(time, type="h"))

File:Aml time.svg

  • Create the life table survival object
aml.survfit <- survfit(Surv(time, status == 1) ~ 1, data = aml)
summary(aml.survfit)
Call: survfit(formula = Surv(time, status == 1) ~ 1, data = aml)

 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5     23       2   0.9130  0.0588       0.8049        1.000
    8     21       2   0.8261  0.0790       0.6848        0.996
    9     19       1   0.7826  0.0860       0.6310        0.971
   12     18       1   0.7391  0.0916       0.5798        0.942
   13     17       1   0.6957  0.0959       0.5309        0.912
   18     14       1   0.6460  0.1011       0.4753        0.878
   23     13       2   0.5466  0.1073       0.3721        0.803
   27     11       1   0.4969  0.1084       0.3240        0.762
   30      9       1   0.4417  0.1095       0.2717        0.718
   31      8       1   0.3865  0.1089       0.2225        0.671
   33      7       1   0.3313  0.1064       0.1765        0.622
   34      6       1   0.2761  0.1020       0.1338        0.569
   43      5       1   0.2208  0.0954       0.0947        0.515
   45      4       1   0.1656  0.0860       0.0598        0.458
   48      2       1   0.0828  0.0727       0.0148        0.462
  • Kaplan-Meier curve for aml with the confidence bounds.
plot(aml.survfit, xlab = "Time", ylab="Proportion surviving")
  • Create aml life tables broken out by treatment (x, "Maintained" vs. "Not maintained")
surv.by.aml.rx <- survfit(Surv(time, status == 1) ~ x, data = aml)

summary(surv.by.aml.rx)
Call: survfit(formula = Surv(time, status == 1) ~ x, data = aml)

                x=Maintained 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    9     11       1    0.909  0.0867       0.7541        1.000
   13     10       1    0.818  0.1163       0.6192        1.000
   18      8       1    0.716  0.1397       0.4884        1.000
   23      7       1    0.614  0.1526       0.3769        0.999
   31      5       1    0.491  0.1642       0.2549        0.946
   34      4       1    0.368  0.1627       0.1549        0.875
   48      2       1    0.184  0.1535       0.0359        0.944

                x=Nonmaintained 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    5     12       2   0.8333  0.1076       0.6470        1.000
    8     10       2   0.6667  0.1361       0.4468        0.995
   12      8       1   0.5833  0.1423       0.3616        0.941
   23      6       1   0.4861  0.1481       0.2675        0.883
   27      5       1   0.3889  0.1470       0.1854        0.816
   30      4       1   0.2917  0.1387       0.1148        0.741
   33      3       1   0.1944  0.1219       0.0569        0.664
   43      2       1   0.0972  0.0919       0.0153        0.620
   45      1       1   0.0000     NaN           NA           NA
  • Plot KM plot broken out by treatment
plot(surv.by.aml.rx, xlab = "Time", ylab="Survival",
     col=c("black", "red"), lty = 1:2, 
     main="Kaplan-Meier Survival vs. Maintenance in AML")
legend(100, .6, c("Maintained", "Not maintained"), 
     lty = 1:2, col=c("black", "red"))
  • Perform the log rank test using the R function survdiff().
surv.diff.aml <- survdiff(Surv(time, status == 1) ~ x, data=aml)
surv.diff.aml

Call:
survdiff(formula = Surv(time, status == 1) ~ x, data = aml)

                 N Observed Expected (O-E)^2/E (O-E)^2/V
x=Maintained    11        7    10.69      1.27       3.4
x=Nonmaintained 12       11     7.31      1.86       3.4

 Chisq= 3.4  on 1 degrees of freedom, p= 0.07

Summary statistics

  • Kaplan-Meier Method and Log-Rank Test
  • Statistics
    • Table of status vs treatment (with proportion)
    • Table of treatment vs training/test
  • Life table
    • summary(survfit(Surv(time, status) ~ 1)) or summary(survfit(Surv(time, status) ~ treatment))
    • KMsurv::lifetab()
  • Fit a Cox model with the coxph() function and visualize the hazard ratios using survminer::ggforest() (a minimal sketch follows this list)
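A minimal sketch of the last bullet (assuming the survminer package is installed): fit a Cox model and draw a forest plot of the hazard ratios.

library(survival)
library(survminer)
fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)
ggforest(fit, data = lung)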

Some public data

package         | data (sample size)
----------------|--------------------------------------------------------------------------------------
survival        | pbc (418), ovarian (26), aml/leukemia (23), colon (1858), lung (228), veteran (137)
pec             | GBSG2 (686), cost (518)
randomForestSRC | follic (541)
KMsurv          | A LOT. tongue (80)
survivalROC     | mayo (312)
survAUC         | NA

Kaplan & Meier and Nelson-Aalen: survfit.formula(), Surv()

  • Landmarks
    • Kaplan-Meier: 1958
    • Nelson: 1969
    • Cox and Brewlow: 1972 S(t) = exp(-Lambda(t))
    • Aalen: 1978 Lambda(t)
  • https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator
  • A practical guide to understanding Kaplan-Meier curves 2010
  • D distinct times [math]\displaystyle{ t_1 \lt t_2 \lt \cdots \lt t_D }[/math]. At time [math]\displaystyle{ t_i }[/math] there are [math]\displaystyle{ d_i }[/math] events. Let [math]\displaystyle{ Y_i }[/math] be the number of individuals who are at risk at time [math]\displaystyle{ t_i }[/math]. The quantity [math]\displaystyle{ d_i/Y_i }[/math] provides an estimate of the conditional probability that an individual who survives to just prior to time [math]\displaystyle{ t_i }[/math] experiences the event at time [math]\displaystyle{ t_i }[/math]. The KM estimator of the survival function and the Nelson-Aalen estimator of the cumulative hazard (their relationship is given below) are define as follows ([math]\displaystyle{ t_1 \le t }[/math]):
<math>\begin{align} \hat{S}(t) &= \prod_{t_i \le t} [1 - d_i/Y_i] \\ \hat{H}(t) &= \sum_{t_i \le t} d_i/Y_i \end{align}</math>
str(kidney)
'data.frame':	76 obs. of  7 variables:
$ id     : num  1 1 2 2 3 3 4 4 5 5 ...
$ time   : num  8 16 23 13 22 28 447 318 30 12 ...
$ status : num  1 1 1 0 1 1 1 1 1 1 ...
$ age    : num  28 28 48 48 32 32 31 32 10 10 ...
$ sex    : num  1 1 2 2 1 1 2 2 1 1 ...
$ disease: Factor w/ 4 levels "Other","GN","AN",..: 1 1 2 2 1 1 1 1 1 1 ...
$ frail  : num  2.3 2.3 1.9 1.9 1.2 1.2 0.5 0.5 1.5 1.5 ...
kidney[order(kidney$time), c("time", "status")]
kidney[kidney$time == 13, ] # one is dead and the other is alive
length(unique(kidney$time)) # 60

sfit <- survfit(Surv(time, status) ~ 1, data = kidney)

sfit
Call: survfit(formula = Surv(time, status) ~ 1, data = kidney)

      n  events  median 0.95LCL 0.95UCL 
     76      58      78      39     152 

str(sfit)
List of 13
$ n        : int 76
$ time     : num [1:60] 2 4 5 6 7 8 9 12 13 15 ...
$ n.risk   : num [1:60] 76 75 74 72 71 69 65 64 62 60 ...
$ n.event  : num [1:60] 1 0 0 0 2 2 1 2 1 2 ...
$ n.censor : num [1:60] 0 1 2 1 0 2 0 0 1 0 ...
$ surv     : num [1:60] 0.987 0.987 0.987 0.987 0.959 ...
$ type     : chr "right"
length(unique(kidney$time))  # [1] 60
all(sapply(sfit$time, function(tt) sum(kidney$time >= tt)) == sfit$n.risk) # TRUE
all(sapply(sfit$time, function(tt) sum(kidney$status[kidney$time == tt])) == sfit$n.event) # TRUE
all(sapply(sfit$time, function(tt) sum(1-kidney$status[kidney$time == tt])) == sfit$n.censor) #  TRUE
all(cumprod(1 - sfit$n.event/sfit$n.risk) == sfit$surv) #  FALSE
range(abs(cumprod(1 - sfit$n.event/sfit$n.risk) - sfit$surv))
# [1] 0.000000e+00 1.387779e-17

summary(sfit)
time n.risk n.event survival std.err lower 95% CI upper 95% CI
    2     76       1    0.987  0.0131      0.96155        1.000
    7     71       2    0.959  0.0232      0.91469        1.000
    8     69       2    0.931  0.0297      0.87484        0.991
 ...
  511      3       1    0.042  0.0288      0.01095        0.161
  536      2       1    0.021  0.0207      0.00305        0.145
  562      1       1    0.000     NaN           NA           NA
  • Understanding survival analysis: Kaplan-Meier estimate
  • Note that the KM estimate is a right-continuous step function, with the intervals closed at left and open at right. For <math>t \in [t_j, t_{j+1})</math> for a certain j, we have <math>\hat{S}(t) = \prod_{i=1}^j (1-d_i/n_i)</math> where <math>d_i</math> is the number of people who have an event during the interval <math>[t_i, t_{i+1})</math> and <math>n_i</math> is the number of people at risk just before the beginning of the interval <math>[t_i, t_{i+1})</math>.
  • The product-limit estimator can be constructed by using a reduced-sample approach. We can estimate <math>P(T > t_i | T \ge t_i) = \frac{Y_i - d_i}{Y_i}</math> for <math>i=1,2,\cdots,D</math>. Then <math>S(t_i) = \frac{S(t_i)}{S(t_{i-1})} \frac{S(t_{i-1})}{S(t_{i-2})} \cdots \frac{S(t_2)}{S(t_1)} \frac{S(t_1)}{S(0)} S(0) = P(T > t_i | T \ge t_i) P(T > t_{i-1} | T \ge t_{i-1}) \cdots P(T > t_2|T \ge t_2) P(T > t_1 | T \ge t_1)</math> because S(0)=1 and, for a discrete distribution, <math>S(t_{i-1}) = P(T > t_{i-1}) = P(T \ge t_i)</math>.
  • Self consistency. If we had no censored observations, the estimator of the survival function at a time t is the proportion of observations which are larger than t, that is, <math>\hat{S}(t) = \frac{1}{n}\sum I(X_i > t)</math>.
  • Curves are plotted in the same order as they are listed by print (which gives a 1-line summary of each). For example, -1 < 1 and 'Maintained' < 'Nonmaintained'. That means the labels listed in the legend() command should have the same order as the curves.
  • Kaplan and Meier is used to give an estimator of the survival function S(t)
  • Nelson-Aalen estimator is for the cumulative hazard H(t). Note that <math>0 \le H(t) < \infty</math> and <math>H(t) \rightarrow \infty</math> as t goes to infinity. So there is a constraint on the hazard function, see Wikipedia.

Note that S(t) is related to H(t) by <math>H(t) = -\ln[S(t)]</math> or <math>S(t) = \exp[-H(t)]</math>. The two estimators are similar (see example 4.1A and 4.1B from Klein and Moeschberger).
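A small numerical check of this relationship (my own illustration), reusing the sfit object fitted to the kidney data above:

H_na <- cumsum(sfit$n.event / sfit$n.risk)   # Nelson-Aalen estimate of H(t)
H_km <- -log(sfit$surv)                      # -log of the Kaplan-Meier estimate
head(cbind(H_na, H_km))                      # similar, but not identical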

The Nelson-Aalen estimator has two primary uses in analyzing data

  1. Selecting between parametric models for the time to event
  2. Crude estimates of the hazard rate h(t). This is related to the estimation of the survival function in the Cox model. See 8.6 of Klein and Moeschberger.

The Kaplan–Meier estimator (the product limit estimator) is an estimator for estimating the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment.

Note that

  • The "+" sign in the KM curves means censored observations (this convention matches with the output of Surv() function) and a long vertical line (not '+') means there is a dead observation at that time.
> aml[1:5,]
  time status          x
1    9      1 Maintained
2   13      1 Maintained
3   13      0 Maintained
4   18      1 Maintained
5   23      1 Maintained
> Surv(aml$time, aml$status)[1:5,]
[1]  9  13  13+ 18  23 
  • If the last observation (longest survival time) corresponds to a death, the survival curve goes down to zero. Otherwise, the survival curve remains flat after the last event time.

Usually the KM curve of treatment group is higher than that of the control group.

The Y-axis (the probability that a member from a given population will have a lifetime exceeding time) is often called

  • Cumulative probability
  • Cumulative survival
  • Percent survival
  • Probability without event
  • Proportion alive/surviving
  • Survival
  • Survival probability

File:KMcurve.png, File:KMannotation.png, File:KMcurve cumhaz.png

> library(survival)
> str(aml$x)
 Factor w/ 2 levels "Maintained","Nonmaintained": 1 1 1 1 1 1 1 1 1 1 ...
> plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = TRUE) # a (small) subset, mark.time is used to show censored obs
> aml[7:17,]
   time status             x
7    31      1    Maintained
8    34      1    Maintained
9    45      0    Maintained
10   48      1    Maintained
11  161      0    Maintained
12    5      1 Nonmaintained
13    5      1 Nonmaintained
14    8      1 Nonmaintained
15    8      1 Nonmaintained
16   12      1 Nonmaintained
17   16      0 Nonmaintained
> legend(100, .9, c("Maintenance", "No Maintenance"), lty = 2:3) # lty: 2=dashed, 3=dotted
> title("Kaplan-Meier Curves\nfor AML Maintenance Study") 

# Cumulative hazard plot
# Lambda(t) = -log(S(t)); 
# see https://en.wikipedia.org/wiki/Survival_analysis
# http://statweb.stanford.edu/~olshen/hrp262spring01/spring01Handouts/Phil_doc.pdf
plot(leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml[7:17,] ) , 
      lty=2:3, mark.time = T, fun="cumhaz", ylab="Cumulative Hazard")
# https://www.lexjansen.com/pharmasug/2011/CC/PharmaSUG-2011-CC16.pdf
mydata <- data.frame(time=c(3,6,8,12,12,21),status=c(1,1,0,1,1,1))
km <- survfit(Surv(time, status)~1, data=mydata)
plot(km, mark.time = T)
survest <- stepfun(km$time, c(1, km$surv))
plot(survest)
> str(km)
List of 13
 $ n        : int 6
 $ time     : num [1:5] 3 6 8 12 21
 $ n.risk   : num [1:5] 6 5 4 3 1
 $ n.event  : num [1:5] 1 1 0 2 1
 $ n.censor : num [1:5] 0 0 1 0 0
 $ surv     : num [1:5] 0.833 0.667 0.667 0.222 0
 $ type     : chr "right"
 $ std.err  : num [1:5] 0.183 0.289 0.289 0.866 Inf
 $ upper    : num [1:5] 1 1 1 1 NA
 $ lower    : num [1:5] 0.5827 0.3786 0.3786 0.0407 NA
 $ conf.type: chr "log"
 $ conf.int : num 0.95
> class(survest)
[1] "stepfun"  "function"
> survest
Step function
Call: stepfun(km$time, c(1, km$surv))
 x[1:5] =      3,      6,      8,     12,     21
6 plateau levels =      1, 0.83333, 0.66667,  ..., 0.22222,      0
> str(survest)
function (v)  
 - attr(*, "class")= chr [1:2] "stepfun" "function"
 - attr(*, "call")= language stepfun(km$time, c(1, km$surv))

File:Kmcurve_toy.svg

Multiple curves

Curves/groups are ordered. The first color in the palette is used to color the first level of the factor variable. This is the same idea as ggsurvplot() in the survminer package. This affects parameters like col and lty in the plot() function. For example,

  • 1<2
  • 'c' < 't'
  • 'control' < 'treatment'
  • 'Control' < 'Treatment'
  • 'female' < 'male'.

For legend(), the first category in legend argument will appear at the top of the legend box.

library(RColorBrewer)
library(survival)
set1 = c(brewer.pal(9,"Set1"), brewer.pal(8, "Dark2"))

fit <- survfit(Surv(futime, fustat) ~ cut(age, quantile(age, seq(0,1,l=4))), data = ovarian) 
plot(fit, col = set1[3:1])
par(xpd=TRUE)
legend(x=800, y=1.1, bty="n", "Risk", cex=0.9, text.font=2)
legend(x=800, y=1.0, bty="n", text.col = set1[3:1], c("Low","Intermediate","High"), cex=0.9)

Continuous predictor

There could be several reasons why we might want to consider Kaplan-Meier (KM) curves using a continuous covariate:

  • Visualizing Survival Differences: KM curves can help visualize survival differences across different levels of a continuous covariate. For example, if the covariate is age, we might be interested in how survival probabilities differ across various age groups.
  • Detecting Non-Proportional Hazards: KM curves can help detect non-proportional hazards, which occur when the hazard ratios between groups change over time. This can be particularly useful when dealing with continuous covariates, as the relationship between the covariate and survival may not be constant over time.
  • Understanding the Effect of Covariates: KM curves can provide insights into the effect of continuous covariates on survival time. This can be useful in understanding the impact of treatment dosage, biomarker levels, or other continuous measures on patient survival.
  • Developing Diagnostic Tools: Some researchers have proposed methods to create KM-type curves for continuous covariates as diagnostic tools. These tools can help visualize the confounder-adjusted effect of continuous variables on a time-to-event outcome.

The Kaplan-Meier estimator is a (non-parametric) univariable method, meaning it approximates the survival function using at most one variable/predictor. When you have a continuous predictor, one common approach is to convert the continuous variable into a categorical variable by creating groups. This can be done by determining cut-points, such as using the median of the predictor as the group’s cut point.

However, this approach has its limitations. The choice of cut-point can greatly influence the results, and arbitrary cut-points may lead to loss of information. Moreover, this method does not adjust for possible confounders.
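A minimal sketch of the median cut-point approach described above (my own illustration, using survival::lung):

library(survival)
lung$age.grp <- ifelse(lung$age >= median(lung$age), "age >= median", "age < median")
fit <- survfit(Surv(time, status) ~ age.grp, data = lung)
plot(fit, col = 1:2, xlab = "Days", ylab = "Survival probability")
legend("topright", legend = names(fit$strata), col = 1:2, lty = 1)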

Estimating x-year probability of survival

Survival Analysis in R. See the explanation there for why the “naive” estimate that ignores censoring is wrong: the correct estimate is 41%, while the naive one is 47%.

plot(survfit(Surv(time, status) ~ 1, data = lung))
summary(survfit(Surv(time, status) ~ 1, data = lung), times = c(200, 400, 600))

This is useful when we want to compare the difference in (overall) survival probability at, say, 5 years between groups defined by a model (e.g., high/low risk groups defined by the median of the risk scores in the training data).

xlab, ylab

  • Survival probability, Time since diagnosis (year)
  • OS probability, Time to death (months)
  • RFS probability, Time to relapse or death (months)

Median survival and 95% CI

median survival definition

  • The length of time from either the date of diagnosis or the start of treatment for a disease, such as cancer, that half of the patients in a group of patients diagnosed with the disease are still alive. In a clinical trial, measuring the median survival is one way to see how well a new treatment works. Also called median overall survival. See cancer.gov
  • The middle point of longevity of a population: as many people live longer than the median as die earlier than the median (thefreedictionary)
  • Median Survival or Mean Survival: Which Measure Is the Most Appropriate for Patients, Physicians, and Policymakers?
  • The median survival time is the time point at which the probability of survival equals 50%. See GraphPad
  • The average (?) survival time, which we quantify using the median. Survival times are not expected to be normally distributed so the mean is not an appropriate summary.
  • What happens if a survival curve doesn't reach 0.5? It means you can't compute the median.

survfit(Surv(time, status) ~ 1, data). Note that a "naive" estimate, such as the median survival time computed only among patients who died (ignoring censoring), is wrong. The correct (Kaplan-Meier) estimate is 310, but the naive estimate is 226.

R> survfit(Surv(time, status) ~ 1, data = lung) # correct
Call: survfit(formula = Surv(time, status) ~ 1, data = lung)

       n events median 0.95LCL 0.95UCL
[1,] 228    165    310     285     363
R> lung %>% 
     filter(status == 1) %>% 
     summarize(median_surv = median(time)) # wrong
  median_surv
1         284
R> median(lung$time) # wrong
[1] 255.5

R> survfit(Surv(time, status) ~ 1, data = aml)
Call: survfit(formula = Surv(time, status) ~ 1, data = aml)

      n events median 0.95LCL 0.95UCL
[1,] 23     18     27      18      45

R> survfit(Surv(time, status) ~ x, data = aml)
Call: survfit(formula = Surv(time, status) ~ x, data = aml)

                 n events median 0.95LCL 0.95UCL
x=Maintained    11      7     31      18      NA
x=Nonmaintained 12     11     23       8      NA

# Extract the median survival time
R> library(survMisc)
R> fit <- survfit(Surv(time, status) ~ 1, data = lung) 
R> median_survival_time <- median(fit)
 50 
310 

Restricted mean survival time

  • survival::print.survfit(). How to compute the mean survival time.
    fit <- survfit(Surv(time, status == 1) ~ x, data = aml)
    print(fit, print.rmean=TRUE) # assume the longest survival time is the horizon
    #                  n events rmean* se(rmean) median 0.95LCL 0.95UCL
    # x=Maintained    11      7   52.6     19.83     31      18      NA
    # x=Nonmaintained 12     11   22.7      4.18     23       8      NA
    #     * restricted mean with upper limit =  161 
    print(fit, print.rmean=TRUE, rmean=250)
    #                  n events rmean* se(rmean) median 0.95LCL 0.95UCL
    # x=Maintained    11      7   27.4      3.01     31      18      NA
    # x=Nonmaintained 12     11   21.2      3.53     23       8      NA
    #     * restricted mean with upper limit =  36 
    
    # To extract the RMST values
    survival:::survmean(fit, rmean=36)[[1]][, "rmean"]
    #    x=Maintained x=Nonmaintained 
    #        27.42500        21.15278 
    
  • survRM2 package
  • PWEALL:: rmsth()
    R> library(survRM2)
    R> D = rmst2.sample.data()
    R> nrow(D)
    [1] 312
    R> head(D[,1:3])
           time status arm
    1  1.095140      1   1
    2 12.320329      0   1
    3  2.770705      1   1
    4  5.270363      1   1
    5  4.117728      0   0
    6  6.852841      1   0
    R> time   = D$time
    R> status = D$status
    R> arm    = D$arm
    R> rmst2(time, status, arm, tau=10)
    
    The truncation time: tau = 10  was specified. 
    
    Restricted Mean Survival Time (RMST) by arm 
                  Est.    se lower .95 upper .95
    RMST (arm=1) 7.146 0.283     6.592     7.701
    RMST (arm=0) 7.283 0.295     6.704     7.863
    
    
    Restricted Mean Time Lost (RMTL) by arm 
                  Est.    se lower .95 upper .95
    RMTL (arm=1) 2.854 0.283     2.299     3.408
    RMTL (arm=0) 2.717 0.295     2.137     3.296
    
    
    Between-group contrast 
                           Est. lower .95 upper .95     p
    RMST (arm=1)-(arm=0) -0.137    -0.939     0.665 0.738
    RMST (arm=1)/(arm=0)  0.981     0.878     1.096 0.738
    RMTL (arm=1)/(arm=0)  1.050     0.787     1.402 0.738
    
    R> library(PWEALL)
    R> PWEALL::rmsth(time, status, tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.208579
    $var
    [1] 13.00232
    $vadd
    [1] 3.915123
    
    R> PWEALL::rmsth(time[arm == 0], status[arm ==0], tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.283416
    $var
    [1] 13.30564
    $vadd
    [1] 3.73545
    
    R> PWEALL::rmsth(time[arm == 1], status[arm ==1], tcut=10)
    $tcut
    [1] 10
    $rmst
    [1] 7.146493
    $var
    [1] 12.49073
    $vadd
    [1] 3.967705
    
  • surv2sampleComp. The RMST (Restricted mean survival time) is the area under the survival curve.
  • Clustered restricted mean survival time regression Chen, 2022

Inverse Probability of Weighting

  • https://en.wikipedia.org/wiki/Inverse_probability_weighting
  • Robust Inference Using Inverse Probability Weighting, pdf
  • The intuition behind inverse probability weighting in causal inference
  • Idea:
    • Inverse Probability of Weighting (IPW) is a statistical technique used in causal inference to adjust for the bias introduced by non-random sampling or missing data. IPW is used to estimate the population average treatment effect from observational data, by weighting the contribution of each individual in the sample based on their probability of receiving the treatment or being observed.
    • The basic idea behind IPW is to use the observed covariates to infer the probabilities of treatment assignment or missing data, and then use these probabilities as weights to correct for the bias in the sample. By doing so, IPW allows for estimation of treatment effects as if the sample were randomly assigned, and it provides a consistent estimate of the population average treatment effect under certain assumptions.
  • Example:
    • Suppose we want to study the effect of a new drug on blood pressure. We collect data from a sample of patients, but some of them do not take the drug as prescribed, and others drop out of the study before it ends. We want to use this sample to estimate the average treatment effect of the drug on blood pressure.
    • To do this using IPW, we first need to estimate the probability of receiving the treatment (i.e., taking the drug as prescribed) and the probability of being observed (i.e., not dropping out of the study) for each patient. We can use logistic regression or other methods to estimate these probabilities based on the patient's covariates (e.g., age, sex, baseline blood pressure, etc.).
    • Once we have these probabilities, we can use them as weights to adjust for the bias introduced by non-random treatment assignment and missing data. For each patient, we multiply their outcome (blood pressure) by the inverse of their probability of receiving the treatment and being observed, and then take the weighted average over the sample. This gives us an estimate of the average treatment effect of the drug on blood pressure that corrects for the bias introduced by non-random sampling and missing data.
  • Numerical example
    • Suppose we have a sample of 100 patients, and we observe the following: 1) 40 patients take the drug as prescribed and have a mean blood pressure reduction of 10 mmHg. 2) 30 patients do not take the drug as prescribed and have a mean blood pressure reduction of 5 mmHg. 3) 20 patients drop out of the study before it ends and have a mean blood pressure reduction of 7 mmHg. 4) 10 patients both take the drug as prescribed and complete the study, and have a mean blood pressure reduction of 12 mmHg.
    • To estimate the average treatment effect of the drug on blood pressure using IPW, we first need to estimate the probability of receiving the treatment (i.e., taking the drug as prescribed) and the probability of being observed (i.e., not dropping out of the study) for each patient. For simplicity, let's assume that these probabilities are equal for all patients.
    • Suppose
      1. For the 40 patients who took the drug as prescribed: Weight = 1 / 0.5 = 2, weighted group total = 2 * 40 * 10 = 800,
      2. For the 30 patients who did not take the drug as prescribed: Weight = 1 / 0.5 = 2, weighted group total = 2 * 30 * 5 = 300,
      3. For the 20 patients who dropped out of the study: Weight = 1 / 0.5 = 2, weighted group total = 2 * 20 * 7 = 280,
      4. For the 10 patients who both took the drug as prescribed and completed the study: Weight = 1 / 0.5 = 2, weighted group total = 2 * 10 * 12 = 240
    • IPW estimate of the mean blood pressure reduction = (800 + 300 + 280 + 240) / (2 * 100) = 1620 / 200 = 8.1 mmHg. Because every patient gets the same weight (2) in this toy example, the weighted mean reduces to the ordinary sample mean; the weights only change the estimate when the estimated probabilities differ across patients. With probabilities estimated from covariates, the IPW estimate corrects for the bias introduced by non-random treatment assignment and missing data.
    • What is 0.5 when we calculate the weight in the above example?
      • In the numerical example above, the value of 0.5 used in the weight calculation represents the estimated probability of receiving the treatment (i.e., taking the drug as prescribed) and of being observed (i.e., not dropping out of the study) for each patient.
      • For simplicity, the example assumes that these probabilities are equal for all patients and equal to 0.5. This is rarely the case in real-world data, so these probabilities need to be estimated using methods such as logistic regression or propensity scores.
      • The weight for each patient is then calculated as the inverse of these probabilities: Weight = 1 / probability of receiving the treatment and being observed
      • So, in the example, the weight for each patient is equal to 1 / 0.5 = 2. This weight represents the importance of each patient in the IPW estimate of the average treatment effect.
    • The weights in IPW are usually obtained using one of the following methods:
      1. Logistic Regression: This is a common method for estimating the weights in IPW. We use logistic regression to estimate the probability of receiving the treatment or being observed as a function of the patient's covariates. The coefficients from the logistic regression model are then used to calculate the weights for each patient.
      2. Propensity Score: This is another common method for estimating the weights in IPW. The propensity score is defined as the probability of receiving the treatment given the patient's covariates. We can estimate the propensity score using logistic regression or other methods, and then use it to calculate the weights for each patient.
      3. Weight Truncation: This is a method to stabilize the weights in IPW, especially when some of the weights are very large. Weight truncation involves replacing the weights that are larger than a certain threshold with the threshold. This reduces the influence of outliers on the IPW estimate and helps to prevent over-fitting.
      4. Other Methods: There are also other methods for estimating the weights in IPW, such as the Bayesian Hierarchical Modeling and the Kernel Density Estimation. These methods are more complex but can provide more accurate and flexible estimates of the weights, especially when the relationship between the treatment and the covariates is non-linear.
  • Mathematical formula for IPW
    • Let Y be the outcome of interest (e.g., a continuous or binary variable), T be the treatment indicator (e.g., 0 for control group and 1 for treatment group), X be a vector of covariates, and W be the weight for each individual i. The IPW estimate of the average treatment effect (ATE) is given by:[math]\displaystyle{ ATE = E[Y|T=1] - E[Y|T=0] }[/math]
    where E[Y|T=1] and E[Y|T=0] are the expected values of Y for the treated and control groups, respectively. These expected values can be estimated using the weighted sample mean as follows
    [math]\displaystyle{ \begin{align} E[Y|T=1] &= \frac{1}{N_1} \sum_{i \in \text{treatment group}} W_i Y_i \\ E[Y|T=0] &= \frac{1}{N_0} \sum_{i \in \text{control group}} W_i Y_i \end{align} }[/math]
    where [math]\displaystyle{ N_1 }[/math] and [math]\displaystyle{ N_0 }[/math] are the number of individuals in the treatment and control groups, respectively, and [math]\displaystyle{ W_i }[/math] is the weight for individual [math]\displaystyle{ i }[/math].
    • The weights [math]\displaystyle{ W_i }[/math] are usually estimated using one of the methods discussed earlier (e.g., logistic regression, propensity score, etc.). The IPW estimate of the ATE is unbiased if the weights are correctly estimated and if the distribution of the covariates X is well-balanced between the treatment and control groups.
    • It is important to note that IPW is a complex method that requires careful estimation of the weights and assessment of the assumptions of the model. It is also sensitive to the choice of the covariates X and the model used to estimate the weights. Therefore, it is important to carefully evaluate the validity and robustness of the IPW estimate before drawing any conclusions.
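
A minimal R sketch of the idea above (not from the sources cited; the simulated data, variable names and the age-only propensity model are made up for illustration):

# Simulated example: treatment assignment depends on age, true treatment effect = 2
set.seed(1)
n   <- 5000
age <- rnorm(n, 50, 10)
trt <- rbinom(n, 1, plogis(-2 + 0.04 * age))   # P(treated) increases with age
y   <- 2 * trt + 0.1 * age + rnorm(n)          # outcome depends on treatment and age

ps <- glm(trt ~ age, family = binomial)$fitted.values   # estimated propensity score
w  <- ifelse(trt == 1, 1 / ps, 1 / (1 - ps))            # inverse-probability weights

mean(y[trt == 1]) - mean(y[trt == 0])   # naive difference, confounded by age
weighted.mean(y[trt == 1], w[trt == 1]) -
  weighted.mean(y[trt == 0], w[trt == 0])   # IPW contrast, close to the true effect of 2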

Inverse Probability of Censoring Weighting (IPCW)

The plots below show that by flipping the status variable we can accurately recover the survival function of the censoring variable. See the R code here for superimposing the true exponential distribution on the KM plot of the censoring variable.

require(survival)
n = 10000
beta1 = 2; beta2 = -1
lambdaT = 1 # baseline hazard
lambdaC = 2  # hazard of censoring
set.seed(1234)
x1 = rnorm(n,0)
x2 = rnorm(n,0)
# true event time
# T = rweibull(n, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) # Wrong
T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2))

# method 1: exponential censoring variable
C <- rweibull(n, shape=1, scale=lambdaC)   
time = pmin(T,C)  
status <- 1*(T <= C) 
mean(status)
summary(T)
summary(C)
par(mfrow=c(2,1), mar = c(3,4,2,2)+.1)
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1), 
     ylab="Survival probability",
     main = 'Exponential censoring time')

# method 2: uniform censoring variable
C <- runif(n, 0, 21)
time = pmin(T,C)  
status <- 1*(T <= C) 
status2 <- 1-status
plot(survfit(Surv(time, status2) ~ 1), 
     ylab="Survival probability",
     main = "Uniform censoring time")

File:Ipcw.svg

  • Numerical example
    • Suppose we have a sample of 100 patients and we are interested in estimating the mean survival time. We observe the survival times for 80 of the patients and 20 are censored, meaning that the event of interest (death in this case) has not occurred at the time of data collection.
    • Let's assume that we have estimated, for each individual, the probability of remaining uncensored (i.e., of being observed) using a logistic regression model. The probabilities are given by:
    Individual 1: p_1 = 0.1
    Individual 2: p_2 = 0.2
    ...
    Individual 100: p_100 = 0.05
    
    • The IPCW weights for each individual are then calculated as the inverse of the probability of being observed:
    Individual 1: w_1 = 1 / p_1 = 1 / 0.1 = 10
    Individual 2: w_2 = 1 / p_2 = 1 / 0.2 = 5
    ...
    Individual 100: w_100 = 1 / p_100 = 1 / 0.05 = 20
    
    • The IPCW estimate of the mean survival time is then calculated as the weighted average of the survival times, where the weights are the IPCW weights:
    IPCW estimate = (w_1 * survival time of individual 1 + w_2 * survival time of individual 2 + ... + w_100 * survival time of individual 100) / (w_1 + w_2 + ... + w_100)
    
    • The IPCW estimate takes into account the probability of censoring for each individual, and it gives more weight to individuals who are at higher risk of censoring, which can help to reduce the bias in the estimated mean survival time.
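
A minimal simulation sketch of this idea (the distributions below are made up; G_hat is the Kaplan-Meier estimate of the censoring distribution, and only uncensored subjects contribute, weighted by 1/G_hat(T_i)):

library(survival)
set.seed(1)
n <- 5000
Ttrue  <- rexp(n, rate = 1)      # true event times, E[T] = 1
C      <- rexp(n, rate = 0.3)    # censoring times
time   <- pmin(Ttrue, C)
status <- as.numeric(Ttrue <= C)

# Kaplan-Meier estimate of the censoring distribution G(t) = P(C > t)
Gfit <- survfit(Surv(time, 1 - status) ~ 1)
Ghat <- stepfun(Gfit$time, c(1, Gfit$surv))

w <- ifelse(status == 1, 1 / Ghat(time), 0)   # IPCW weights
sum(w * time) / n    # close to the true mean of 1
mean(time)           # naive mean of the observed times is biased downward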

stepfun() and plot.stepfun()

GGally package (ggplot object)

ggsurv() from the GGally package. GGally has about twice as many downloads as survminer and more authors.

Advantage: the returned object has class c("gg", "ggplot"), while survminer::ggsurvplot returns an object of class c("ggsurvplot", "ggsurv", "list").

It seems to be better to apply order.legend = FALSE if we want the default color palette to follow the same order as the factor levels. For example:

data(lung, package = "survival")
sf.sex <- survival::survfit(Surv(time, status) ~ sex, data = lung)
ggsurv(sf.sex)   # 2 = Salmon, 1 = Iris blue
                 # Colors are defined by the final survival time

ggsurv(sf.sex, order.legend = FALSE) # 1 = Salmon, 2 = Iris blue
                        # More consistent with what we expect
                        # Colors are defined by the levels

# More options
ggsurv(sf.sex, order.legend = FALSE, surv.col = scales::hue_pal()(2))

To combine multiple ggplot2 plots, use the ggpubr package; gridExtra has not been actively developed since 2017.

library(GGally)
library(survival)
data(lung, package = "survival")
sf.lung <- survfit(Surv(time, status) ~ sex, data = lung)
p1 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8, 
             xlab = "Time", ylab = "Survival", main = "Lower score")
p1 <- p1 + annotate("text", x=0, y=.25, hjust=0, label="zxcvb")
p2 <- ggsurv(sf.lung, plot.cens = FALSE, lty.est = c(1, 3), size.est = 0.8, 
             xlab = "Time", ylab = "Survival", main = "High score")
p2 <- p2 + annotate("text", x=0, y=.25, hjust=0, label="asdfg")

# gridExtra::grid.arrange(p1, p2, ncol=2, nrow =1) # no common legend option
ggpubr::ggarrange(p1, p2,  common.legend = TRUE, legend = "right")
# return object class: "gg"   "ggplot"    "ggarrange"

Survival curves with number at risk at bottom: survminer package

R function survminer::ggsurvplot()

  • survminer Cheatsheet by RStudio. It includes KM curves (ggsurvplot), diagnostics (ggcoxdiagnostics) and summary of Cox model (ggforest).
  • sthda
  • ggsurvplot()
    • ggsurvplot_facet() - if we want to create KM curves based on subsets of the data (one plot)
    • ggsurvplot_group_by() - if we want to create KM curves based on subsets of the data (separate plots)
    • ggsurvplot_list() - if we want to create a list of KM curves (practical application?)
    • ggsurvplot_combine() - if we want to combine OS and PFS for example in one plot
  • Error: object of type 'symbol' is not subsettable. Use survminer::surv_fit() in lieu of survival::survfit()
    • This is needed if we want to separate Surv() (formula) and survfit() in two statements. For instance, if we want to fit the same data with different formulas.
    • surv_fit()
  • To save ggsurvplot(), use ggsave(FILE, res$plot) . To save arrange_ggsurvplots(), use ggsave(FILE, res)
  • http://r-addict.com/2016/05/23/Informative-Survival-Plots.html
  • Add the numbers at risk table. cowplot::plot_grid() was used to combine the KM plot and risk table together.
  • Adjusting for covariates under non-proportional hazards. break.x.by or break.time.by to control x axis breaks.
    gp <- survminer::ggsurvplot(fit, risk.table = TRUE,  # 'fit' is a survfit object (placeholder)
                          break.x.by = 6,  # if we use 'months' as time unit
                          legend.title = "",  # default is "Strata"
                          legend.labs = c("Male", "Female"), # c("Sex=Male", "Sex=Female")
                          palette = c("blue", "red"), # Change color palettes
                          conf.int = FALSE,
                          linetype = 1, # Or linetype = "strata"
                          xlab = "Time (months)",
                          ylab = "Overall survival",
                          surv.median.line = "hv", # Specify median survival
                          ggtheme = theme_bw(),
                          risk.table.fontsize = 4,
                          legend = c(0.8,0.8))
    gp$plot <- gp$plot + scale_linetype_manual(values = c("solid", "solid"))
    gp$plot <- gp$plot + annotate("text", x=75, y=1, 
                                    label=paste(pval,hrpe,hrci,sep="\n"), 
                                    cex=4, hjust=0, vjust=1)
    gp$table <- gp$table + 
                theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
    library(survminer)
    ggsurvplot(survo, risk.table = TRUE, pval=TRUE, pval.method = TRUE, 
               palette = c("#F8766D", "#00BFC4")) # (Salmon, Iris Blue)
    
  • Arrange ggsurv plots with one shared legend. Note that we can add a title to a corner of an individual plot with the trick ggsurvplot()$plot + labs(title = "A").
  • Arranging Multiple ggsurvplots arrange_ggsurvplots(). When I need to put two KM curves plot side by side using arrange_ggsurvplots(), some issues came out (these properties seem to inherit from arrangeGrob):
    • if I run it in a terminal, the function opens two graphics devices and the first one is blank;
    • if I run it in a terminal with the print = FALSE option, it still opens a blank graphics window;
    • if I run it in RStudio, the plot is not generated inside RStudio but in a separate X window, regardless of whether I am on macOS or Linux;
    • if I just draw a single plot from ggsurvplot(), the plot is drawn in RStudio as expected.
    • ggpubr::ggarrange() is an alternative to arrange_ggsurvplots() but ggpubr::ggarrange() does not work with ggsurvplot() objects.
    • survminer::ggsurvplot_combine() will put two curves in one plot.
    • Solution: using the patchwork package. A single legend for multiple ggsurvplots using arrange_ggsurvplot
    library(patchwork)
    res1 <- ggsurvplot()
    ...
    res1$plot + res2$plot + res3$plot + res4$plot + plot_layout(nrow=2, byrow = FALSE)
  • Add a custom annotation to ggsurvplot. However, even if I use the same x-value in ggsurvplot(pval.coord) and ggplot2::annotate(x), the texts are not aligned on the x-axis.
    ggsurv$plot <- ggsurv$plot+ 
                  ggplot2::annotate("text", 
                                    x = 100, y = 0.2, # x and y coordinates of the text
                                    label = "My label", size=1)
    
  • Use solid instead of dashed lines for median survival times. Modify the line p <- .add_surv_median(p, fit, type = surv.median.line, fun = fun, data = data) by adding linetype = "solid".

Paper examples

Questions:

  • How to remove tick mark on censored observations especially the case with a large sample size?

finalfit R package:

ggfortify

ggsurvfit

ggsurvfit: Easy and Flexible Time-to-Event Figures
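
A minimal sketch of the ggsurvfit workflow (function names as documented in the package; check the current documentation for options):

library(survival)
library(ggsurvfit)
survfit2(Surv(time, status) ~ sex, data = lung) |>
  ggsurvfit() +
  add_confidence_interval() +
  add_risktable()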

KMunicate

https://cran.r-project.org/web/packages/KMunicate/index.html

Life table

Re-construct survival data from KM curves

reconstructKM package

Calculation by hand

What is survival analysis? Examples by hand and in R
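
A small by-hand check of the product-limit formula using the aml data from the survival package: the Kaplan-Meier estimate can be rebuilt from the numbers at risk and the numbers of events returned by survfit().

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = aml)
d <- fit$n.event   # events at each distinct time
Y <- fit$n.risk    # number at risk just before each time
cbind(time = fit$time, by.hand = cumprod(1 - d / Y), survfit = fit$surv)
# the last two columns agree: S(t) = prod_{t_i <= t} (1 - d_i / Y_i)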

Compare the KM curve to the Cox model curve

Publication examples

Alternatives to survival function plot

https://www.rdocumentation.org/packages/survival/versions/2.43-1/topics/plot.survfit The fun argument, a transformation of the survival curve

  • fun = "event" or "F": f(y) = 1-y; it calculates P(T < t). This is like a t-year risk (Blanche 2018).
  • fun = "cumhaz": cumulative hazard function (f(y) = -log(y)); it calculates H(t). See Intuition for cumulative hazard function.

Breslow estimate

Logrank/log-rank/log rank test

  • Logrank test is a hypothesis test to compare the survival distributions of two samples. The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time.
  • Statistics Notes - The logrank test 2004
    • Calculation and an example data are provided.
    • It is also possible to test for a trend in survival across ordered groups.
    • The logrank test is based on the same assumptions as the Kaplan Meier survival curve - namely, that censoring is unrelated to prognosis, the survival probabilities are the same for subjects recruited early and late in the study, and the events happened at the times specified.
    • The logrank test is most likely to detect a difference between groups when the risk of an event is consistently greater for one group than another. It is unlikely to detect a difference when survival curves cross, as can happen when comparing a medical with a surgical intervention.
?coxph
test1 <- list(time=c(4,3,1,1,2,2,3), 
              status=c(1,1,1,0,1,1,0), 
              x=c(0,2,1,1,1,0,0), 
              sex=c(0,0,0,0,1,1,1))
summary(coxph(Surv(time, status) ~ x, test1) )
# Call:
# coxph(formula = Surv(time, status) ~ x, data = test1)
#
#   n= 7, number of events= 5 
#
#     coef exp(coef) se(coef)     z Pr(>|z|)
# x 0.4608    1.5853   0.5628 0.819    0.413
#
#  exp(coef) exp(-coef) lower .95 upper .95
# x     1.585     0.6308    0.5261     4.777
#
# Concordance= 0.643  (se = 0.135 )
# Likelihood ratio test= 0.66  on 1 df,   p=0.4
# Wald test            = 0.67  on 1 df,   p=0.4
# Score (logrank) test = 0.71  on 1 df,   p=0.4

Logrank test vs Cox model

  • Logrank test vs Cox model.
    • The Cox model relies on the proportional hazards assumption. The logrank test does not. If your data are not consistent with the proportional hazards assumption, then the Cox results may not be valid.
    • the graph you show does not seem consistent with the proportional hazards assumption.
  • Logrank test relationship to other statistics & assumptions from wikipedia.
  • The logrank test statistic is equivalent to the score of a Cox regression. Is there an advantage of using a logrank test over a Cox regression? Since the log-rank test is a special case of the Cox model, it does not have fewer assumptions or more power. IMHO we no longer need to be using or teaching the log-rank test. Answered by Frank Harrell.
    • The log-rank Test Assumes More Than the Cox Model. Numerical examples were given.
    • I can confirm that the log-rank test and Cox regression p-values are very close when using the median as a cutoff, based on one dataset with 7,288 proteins. The scatterplot shows the two sets of p-values fall on a 45-degree line, and the p-value distribution looks roughly uniform.
  • Kaplan-Meier Curves, Log-Rank Tests, and Cox Regression for Time-to-Event Data.
    • The null hypothesis tested by the log-rank test is that the survival curves are identical over time; it thus compares the entire curves rather than the survival probability at a specific time point.
    • The log-rank test assesses statistical significance but does not estimate an effect size.
    • The Cox proportional hazards regression technique does not actually model the survival time or probability but the so-called hazard function. This function can be thought of as the instantaneous risk of experiencing the event of interest at a certain time point.
    • While the HR is not the same as a relative risk, it can for all practical purposes be interpreted as such. See Survival Analysis and Interpretation of Time-to-Event Data: The Tortoise and the Hare.
  • The logrank test in BMJ, 2004
    • The logrank test is based on the same assumptions as the Kaplan Meier survival curve—namely, that censoring is unrelated to prognosis, the survival probabilities are the same for subjects recruited early and late in the study, and the events happened at the times specified. Deviations from these assumptions matter most if they are satisfied differently in the groups being compared, for example if censoring is more likely in one group than another.
    • The logrank test is most likely to detect a difference between groups when the risk of an event is consistently greater for one group than another. It is unlikely to detect a difference when survival curves cross, as can happen when comparing a medical with a surgical intervention.
    • Statistics review 12: Survival analysis
    • Survival analysis (3): in what situations does the log-rank test fail? (in Chinese). Wilcoxon test
  • Visualize a survival estimate according to a continuous variable.
  • How to access the fit of a Cox regression?
  • Read the comment in the section Analyzing Continuous Variables Kaplan Meier Mistakes
    • Analyzing Continuous Variables. An optimal cutpoint is problematic because testing every cutpoint creates a multiple testing problem. Dichotomization causes loss of statistical power; using binary variables instead of continuous variables can triple the number of samples needed to detect an effect. Dichotomization also makes poor assumptions about the distribution of risk among patients.
    • Covariate Adjustment. Kaplan Meier is a univariate method. At a minimum the variable should be analyzed in a Cox model with other basic prognostic factors.
    • Added Value. AUC-ROC, the Likelihood Ratio Test, and R² .
  • An example
R> sdf <- survdiff(Surv(futime, fustat) ~ rx, data = ovarian)
R> sdf$chisq
[1] 1.06274
R> 1 - pchisq(sdf$chisq, length(sdf$n) - 1) 
[1] 0.3025911                                 <----------
R> fit <- coxph(Surv(futime, fustat) ~ rx, data = ovarian)
R> coef(summary(fit))[, "Pr(>|z|)"]
[1] 0.3096304
R> fit$score
[1] 1.06274
R> summary(fit)
Call:
coxph(formula = Surv(futime, fustat) ~ rx, data = ovarian)

  n= 26, number of events= 12 

      coef exp(coef) se(coef)      z Pr(>|z|)
rx -0.5964    0.5508   0.5870 -1.016     0.31

   exp(coef) exp(-coef) lower .95 upper .95
rx    0.5508      1.816    0.1743      1.74

Concordance= 0.608  (se = 0.07 )
Likelihood ratio test= 1.05  on 1 df,   p=0.3
Wald test            = 1.03  on 1 df,   p=0.3
Score (logrank) test = 1.06  on 1 df,   p=0.3  <--- df = model df

Two-sided vs one-sided p-value

The p-value that R (or SAS) returns is for a two-sided test. To obtain a one-sided p-value from this, simply divide the two-sided p-value by 2. Survival Statistics with PROC LIFETEST and PROC PHREG: Pitfall-Avoiding Survival Lessons for Programmers.

> survdiff(Surv(futime, fustat) ~ rx,data=ovarian)
Call:
survdiff(formula = Surv(futime, fustat) ~ rx, data = ovarian)

      N Observed Expected (O-E)^2/E (O-E)^2/V
rx=1 13        7     5.23     0.596      1.06
rx=2 13        5     6.77     0.461      1.06

 Chisq= 1.1  on 1 degrees of freedom, p= 0.3 
> pchisq(1.1, 1, lower.tail = F)           # two-sided p-value from the chi-square statistic
[1] 0.2942661
> pnorm(sqrt(1.1), 0, 1, lower.tail = F)   # one-sided p-value = two-sided / 2
[1] 0.1471331

Create 2 groups from a continuous variable

See case_when() or tidyverse

merged_data = merged_data %>% 
  mutate(group = case_when(
    KRAS_expression > quantile(KRAS_expression, 0.5) ~ 'KRAS_High',
    KRAS_expression < quantile(KRAS_expression, 0.5) ~ 'KRAS_Low',
    TRUE ~ NA_character_  # values exactly equal to the median become NA
  ))

fit = survfit(Surv(time, status) ~ group, data = merged_data)

Optimal cut-off
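
A minimal sketch using survminer::surv_cutpoint(), which searches for the cut point maximizing the standardized log-rank statistic (keep in mind the caveats about optimal cut points noted in the Kaplan Meier Mistakes item above); the lung data and the age variable are used only for illustration:

library(survival)
library(survminer)
res.cut <- surv_cutpoint(lung, time = "time", event = "status", variables = "age")
summary(res.cut)                     # estimated cut point
res.cat <- surv_categorize(res.cut)  # dichotomize at the cut point ("low"/"high")
fit <- survfit(Surv(time, status) ~ age, data = res.cat)
ggsurvplot(fit, data = res.cat, pval = TRUE)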

Survival curve with confidence interval

http://www.sthda.com/english/wiki/survminer-r-package-survival-data-analysis-and-visualization

Parametric models and survival function for censored data

Assume the CDF of survival time T is [math]\displaystyle{ F(\cdot) }[/math] and the CDF of the censoring time C is [math]\displaystyle{ G(\cdot) }[/math],

[math]\displaystyle{ \begin{align} P(T\gt t, \delta=1) &= \int_t^\infty (1-G(s))dF(s), \\ P(T\gt t, \delta=0) &= \int_t^\infty (1-F(s))dG(s) \end{align} }[/math]
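
A quick Monte-Carlo check of the first identity (a sketch; the exponential choices for F and G are arbitrary):

set.seed(1)
n <- 1e6
T <- rexp(n, rate = 1)       # F = Exp(1)
C <- rexp(n, rate = 0.5)     # G = Exp(0.5)
X <- pmin(T, C); delta <- as.numeric(T <= C)
t0 <- 0.7
mean(X > t0 & delta == 1)    # empirical P(observed time > t0 and the event is a death)
integrate(function(s) (1 - pexp(s, 0.5)) * dexp(s, 1),
          lower = t0, upper = Inf)$value   # = exp(-1.5 * t0) / 1.5, matches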

R

Parametric models and likelihood function for uncensored data

plot.survfit()

  • Exponential. [math]\displaystyle{ T \sim Exp(\lambda) }[/math]. [math]\displaystyle{ H(t) = \lambda t. }[/math] and [math]\displaystyle{ ln(S(t)) = -H(t) = -\lambda t. }[/math]
  • Weibull. [math]\displaystyle{ T \sim W(\lambda,p). }[/math] [math]\displaystyle{ H(t) = \lambda^p t^p. }[/math] and [math]\displaystyle{ ln(-ln(S(t))) = ln(\lambda^p t^p)=const + p ln(t) }[/math].

http://www.math.ucsd.edu/~rxu/math284/slect4.pdf

See also accelerated life models where a set of covariates were used to model survival time.

Survival modeling

Accelerated life models - a direct extension of the classical linear model

http://data.princeton.edu/wws509/notes/c7.pdf and also Kalbfleish and Prentice (1980).

[math]\displaystyle{ log T_i = x_i' \beta + \epsilon_i }[/math] Therefore

  • [math]\displaystyle{ T_i = exp(x_i' \beta) T_{0i} }[/math]. So if there are two groups (x=1 and x=0), and [math]\displaystyle{ exp(\beta) = 2 }[/math], it means that people in one group live twice as long as people in the other group.
  • [math]\displaystyle{ S_1(t) = S_0(t/ exp(x' \beta)) }[/math]. This explains the meaning of accelerated failure-time. Depending on the sign of [math]\displaystyle{ \beta' x }[/math], the time is either accelerated by a constant factor or degraded by a constant factor. If [math]\displaystyle{ exp(\beta)=2 }[/math], the probability that a member in group one (eg treatment) will be alive at age t is exactly the same as the probability that a member in group zero (eg control group) will be alive at age t/2.
  • The hazard function [math]\displaystyle{ \lambda_1(t) = \lambda_0(t/exp(x'\beta))/ exp(x'\beta) }[/math]. So if [math]\displaystyle{ exp(\beta)=2 }[/math], at any given age people in group one would be exposed to half the risk of people in group zero half their age.

In applications,

  • If the errors are normally distributed, then we obtain a log-normal model for the T. Estimation of this model for censored data by maximum likelihood is known in the econometric literature as a Tobit model.
  • If the errors have an extreme value distribution, then T has an exponential distribution. The hazard [math]\displaystyle{ \lambda }[/math] satisfies the log linear model [math]\displaystyle{ \log \lambda_i = x_i' \beta }[/math].
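
A small simulation sketch of the time-ratio interpretation above (the Weibull setup and the effect size 0.7 are made up): survreg() works on the log-time scale, so exp(coef) is the factor by which survival times are multiplied.

library(survival)
set.seed(1)
n  <- 2000
x  <- rbinom(n, 1, 0.5)
T0 <- rweibull(n, shape = 1.5, scale = 1)   # baseline survival times
T  <- exp(0.7 * x) * T0                     # group x=1 lives exp(0.7) ~ 2 times longer
fit <- survreg(Surv(T, rep(1, n)) ~ x, dist = "weibull")
coef(fit)["x"]        # close to 0.7
exp(coef(fit)["x"])   # estimated time ratio, close to exp(0.7) = 2.01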

Proportional hazard models

Note that PH models are a type of multiplicative hazard rate model [math]\displaystyle{ h(x|Z) = h_0(x)c(\beta' Z) }[/math] where [math]\displaystyle{ c(\beta' Z) = \exp(\beta ' Z) }[/math].

Assumption: Survival curves for two strata (determined by the particular choices of values for covariates) must have hazard functions that are proportional over time (i.e. constant relative hazard over time). Proportional hazards assumption meaning. The ratio of the hazard rates from two individuals with covariate values [math]\displaystyle{ Z }[/math] and [math]\displaystyle{ Z^* }[/math] is constant over time.

[math]\displaystyle{ \begin{align} \frac{h(t|Z)}{h(t|Z^*)} = \frac{h_0(t)\exp(\beta 'Z)}{h_0(t)\exp(\beta ' Z^*)} = \exp(\beta' (Z-Z^*)) \mbox{ independent of time} \end{align} }[/math]

Test the assumption; see here.
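
For example, survival::cox.zph() tests the PH assumption using scaled Schoenfeld residuals (a minimal sketch on the lung data):

library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
cox.zph(fit)         # per-covariate and global tests of proportional hazards
plot(cox.zph(fit))   # smoothed scaled Schoenfeld residuals vs time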

Weibull and Exponential model to Cox model

In summary:

  • Weibull distribution (Klein) [math]\displaystyle{ h(t) = p \lambda (\lambda t)^{p-1} }[/math] and [math]\displaystyle{ S(t) = exp(-\lambda t^p) }[/math]. If p >1, then the risk increases over time. If p<1, then the risk decreases over time.
    • Note that Weibull distribution has a different parametrization. See http://data.princeton.edu/pop509/ParametricSurvival.pdf#page=2. [math]\displaystyle{ h(t) = \lambda^p p t^{p-1} }[/math] and [math]\displaystyle{ S(t) = exp(-(\lambda t)^p) }[/math]. R and wikipedia also follows this parametrization except that [math]\displaystyle{ h(t) = p t^{p-1}/\lambda^p }[/math] and [math]\displaystyle{ S(t) = exp(-(t/\lambda)^p) }[/math].
  • Exponential distribution [math]\displaystyle{ h(t) }[/math] = constant (independent of t). This is a special case of Weibull distribution (p=1).
  • Weibull (and also exponential) distribution regression model is the only case which belongs to both the proportional hazards and the accelerated life families.
[math]\displaystyle{ \begin{align} \frac{h(x|Z_1)}{h(x|Z_2)} = \frac{h_0(x\exp(-\gamma' Z_1)) \exp(-\gamma ' Z_1)}{h_0(x\exp(-\gamma' Z_2)) \exp(-\gamma ' Z_2)} = \frac{(a/b)\left(\frac{x \exp(-\gamma ' Z_1)}{b}\right)^{a-1}\exp(-\gamma ' Z_1)}{(a/b)\left(\frac{x \exp(-\gamma ' Z_2)}{b}\right)^{a-1}\exp(-\gamma ' Z_2)} \quad \mbox{which is independent of time x} \end{align} }[/math]
Distribution | f(t)=h(t)*S(t) | h(t) | S(t) | Mean
Exponential (Klein p37) | [math]\displaystyle{ \lambda \exp(-\lambda t) }[/math] | [math]\displaystyle{ \lambda }[/math] | [math]\displaystyle{ \exp(-\lambda t) }[/math] | [math]\displaystyle{ 1/\lambda }[/math]
Weibull (Klein, Bender, wikipedia) | [math]\displaystyle{ p\lambda t^{p-1}\exp(-\lambda t^p) }[/math] | [math]\displaystyle{ p\lambda t^{p-1} }[/math] | [math]\displaystyle{ exp(-\lambda t^p) }[/math] | [math]\displaystyle{ \frac{\Gamma(1+1/p)}{\lambda^{1/p}} }[/math]
Exponential (R), [math]\displaystyle{ \lambda }[/math] is rate | [math]\displaystyle{ \lambda \exp(-\lambda t) }[/math] | [math]\displaystyle{ \lambda }[/math] | [math]\displaystyle{ \exp(-\lambda t) }[/math] | [math]\displaystyle{ 1/\lambda }[/math]
Weibull (R, wikipedia), [math]\displaystyle{ a }[/math] is shape and [math]\displaystyle{ b }[/math] is scale | [math]\displaystyle{ \frac{a}{b}\left(\frac{t}{b}\right)^{a-1} \exp(-(\frac{t}{b})^a) }[/math] | [math]\displaystyle{ \frac{a}{b}\left(\frac{t}{b}\right)^{a-1} }[/math] | [math]\displaystyle{ \exp(-(\frac{t}{b})^a) }[/math] | [math]\displaystyle{ b\Gamma(1+1/a) }[/math]
  • Accelerated failure-time model. Let [math]\displaystyle{ Y=\log(T)=\mu + \gamma'Z + \sigma W }[/math]. Then the survival function of [math]\displaystyle{ T }[/math] at the covariate Z,
[math]\displaystyle{ \begin{align} S_T(t|Z) &= P(T \gt t |Z) \\ &= P(Y \gt \ln t|Z) \\ &= P(\mu + \sigma W \gt \ln t-\gamma' Z | Z) \\ &= P(e^{\mu + \sigma W} \gt t\exp(-\gamma'Z) | Z) \\ &= S_0(t \exp(-\gamma'Z)). \end{align} }[/math]

where [math]\displaystyle{ S_0(t) }[/math] denotes the survival function of T when Z=0. Since [math]\displaystyle{ h(t) = -\frac{\partial}{\partial t} \ln S(t) }[/math], the hazard function of T with a covariate value Z is related to a baseline hazard rate [math]\displaystyle{ h_0 }[/math] by (p56 Klein)

[math]\displaystyle{ \begin{align} h(t|Z) = h_0(t\exp(-\gamma' Z)) \exp(-\gamma ' Z) \end{align} }[/math]
> # If T ~ Exp(1), then T^(1/a) ~ Weibull(shape=a, scale=1); both means below estimate Gamma(1+1/2) ~ 0.886
> mean(rexp(1000)^(1/2))
[1] 0.8902948
> mean(rweibull(1000, 2, 1))
[1] 0.8856265

> # Conversely, if W ~ Weibull(shape=a, scale=b), then (W/b)^a ~ Exp(1), whose mean is 1
> mean((rweibull(1000, 2, scale=4)/4)^2)
[1] 1.008923

Graphical way to check Weibull, AFT, PH

http://stat.ethz.ch/education/semesters/ss2011/seminar/contents/handout_9.pdf#page=40

Weibull is related to Extreme value distribution

Weibull distribution and bathtub

Weibull distribution and reliability

Survival Analysis – Fitting Weibull Models for Improving Device Reliability in R (simulation)

Optimisation of a Weibull survival model using Optimx()

Optimisation of a Weibull survival model using Optimx() in R

CDF follows Unif(0,1)

https://stats.stackexchange.com/questions/161635/why-is-the-cdf-of-a-sample-uniformly-distributed

Take the Exponential distribution for example

stem(pexp(rexp(1000)))
stem(pexp(rexp(10000)))

Another example comes from simulating survival times. Note that this is exactly the approach of Bender et al 2005. See also the simsurv (newer) and survsim (older) packages.

set.seed(100) 

#Define the following parameters outlined in the step: 
n = 1000 
beta_0 = 0.5
beta_1 = -1
beta_2 = 1 

b = 1.6 #This will be changed later as mentioned in Step 5 of documentation 

#Step 1
x_1<-rbinom(n, 1, 0.25)
x_2<-rbinom(n, 1, 0.7)

#Step 2 
U<-runif(n, 0,1)
T<-(-log(U)*exp(-(beta_0+beta_1*x_1+beta_2*x_2))) #Eqn (5) 

Fn <- ecdf(T) # https://stat.ethz.ch/R-manual/R-devel/library/stats/html/ecdf.html
# verify F(T) or 1-F(T) ~ U(0, 1)
hist(Fn(T))
# look at the plot of survival probability vs time
plot(T, 1 - Fn(T))

Simulate survival data

Note that status = 1 means an event (e.g. death) happened, i.e. Ti <= Ci. That is, the status variable used in R/S-Plus is the death indicator.

  • http://www.bioconductor.org/packages/release/bioc/manuals/genefilter/man/genefilter.pdf#page=4
    y <- rexp(10)
    cen <- runif(10)
    status <- ifelse(cen < .7, 1, 0)
    
  • Inference on Selected Subgroups in Clinical Trials [math]\displaystyle{ \lambda(t) = \lambda_0(t) e^{\beta_i D} }[/math] for subgroup i=1,2, respectively where D is the treatment indicator and [math]\displaystyle{ \lambda_0(t) }[/math] is the baseline hazard function of Weibull(1,1). The subjects fall into one of the two subgroups with probability 0.5, and the treatment assignment is also random with equal probability. The response generated from the above model is then censored randomly from the right by a censoring variable C, where log(C) follows the uniform distribution on (-1.25, 1.00). The censoring rate is about 40% across different choices of [math]\displaystyle{ \beta_i }[/math] considered in this study.
  • How much power/accuracy is lost by using the Cox model instead of the Weibull model when both models are correct? [math]\displaystyle{ h(t|x)=\lambda=e^{3x+1} = h_0(t)e^{\beta x} }[/math] where [math]\displaystyle{ h_0(t)=e^1, \beta=3 }[/math].
    Note that for the exponential distribution, a larger rate/[math]\displaystyle{ \lambda }[/math] corresponds to a smaller mean. This relation matches the Cox regression, where a larger covariate value corresponds to a shorter survival time. So the coefficient 3 in myrates in the example below has the same sign as the coefficient (2.457466 for censored data) in the output of the Cox model fit.
    n <- 30
    x <- scale(1:n, TRUE, TRUE) # create covariates (standardized)
                                # the original example does not work on large 'n'
    myrates <- exp(3*x+1)
    set.seed(1234)
    y <- rexp(n, rate = myrates) # generates the r.v.
    cen <- rexp(n, rate = 0.5 )  #  E(cen)=1/rate
    ycen <- pmin(y, cen)
    di <- as.numeric(y <= cen)
    survreg(Surv(ycen, di)~x, dist="weibull")$coef[2]  # -3.080125
    # library(flexsurvreg); flexsurvreg(Surv(ycen, di)~x, dist="weibull")
    coxph(Surv(ycen, di)~x)$coef  # 2.457466 
    
    # no censor
    survreg(Surv(y,rep(1, n))~x,dist="weibull")$coef[2]  # -3.137603
    survreg(Surv(y,rep(1, n))~x,dist="exponential")$coef[2]  # -3.143095
    coxph(Surv(y,rep(1, n))~x)$coef  # 2.717794 
    
    # See the pdf note for the rest of code
    
  • Intercept in survreg for the exponential distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=25.
    [math]\displaystyle{ \begin{align} \lambda = exp(-intercept) \end{align} }[/math]
    > futime <- rexp(1000, 5)
    > survreg(Surv(futime,rep(1,1000))~1,dist="exponential")$coef
    (Intercept) 
      -1.618263 
    > exp(1.618263)
    [1] 5.044321
    
  • Intercept and scale in survreg for a Weibull distribution. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf#page=28.
    [math]\displaystyle{ \begin{align} \gamma &= 1/scale \\ \alpha &= exp(-(Intercept)*\gamma) \end{align} }[/math]
    > survreg(Surv(futime,rep(1,1000))~1,dist="weibull")
    Call:
    survreg(formula = Surv(futime, rep(1, 1000)) ~ 1, dist = "weibull")
    
    Coefficients:
    (Intercept) 
      -1.639469 
    
    Scale= 1.048049 
    
    Loglik(model)= 620.1   Loglik(intercept only)= 620.1
    n= 1000 
    
  • rsurv() function from the ipred package
  • Use Weibull distribution to model survival data. We assume the shape is constant across subjects. We then allow the scale to vary across subjects. For subject [math]\displaystyle{ i }[/math] with covariate [math]\displaystyle{ X_i }[/math], [math]\displaystyle{ \log(scale_i) }[/math] = [math]\displaystyle{ \beta ' X_i }[/math]. Note that if we want the sign of [math]\displaystyle{ \beta }[/math] to be consistent with the Cox model, we should use [math]\displaystyle{ \log(scale_i) }[/math] = [math]\displaystyle{ -\beta ' X_i }[/math] instead.
  • http://sas-and-r.blogspot.com/2010/03/example-730-simulate-censored-survival.html. Assuming shape=1 in the Weibull distribution, then the hazard function can be expressed as a proportional hazard model
    [math]\displaystyle{ h(t|x) = 1/scale = \frac{1}{\lambda/e^{\beta 'x}} = \frac{e^{\beta ' x}}{\lambda} = h_0(t) \exp(\beta' x) }[/math]
    n = 10000
    beta1 = 2; beta2 = -1
    lambdaT = .002 # baseline hazard
    lambdaC = .004  # hazard of censoring
    set.seed(1234)
    x1 = rnorm(n,0)
    x2 = rnorm(n,0)
    # true event time
    T = Vectorize(rweibull)(n=1, shape=1, scale=lambdaT*exp(-beta1*x1-beta2*x2)) 
    # No censoring
    event2 <- rep(1, length(T))
    coxph(Surv(T, event2)~ x1 + x2)
    #        coef exp(coef) se(coef)      z      p
    # x1  1.99825   7.37613  0.01884 106.07 <2e-16
    # x2 -1.00200   0.36715  0.01267 -79.08 <2e-16
    #
    # Likelihood ratio test=15556  on 2 df, p=< 2.2e-16
    # n= 10000, number of events= 10000 
    
    # Censoring
    C = rweibull(n, shape=1, scale=lambdaC)   #censoring time
    time = pmin(T,C)  #observed time is min of censored and true
    event = time==T   # set to 1 if event is observed
    coxph(Surv(time, event)~ x1 + x2)
    #        coef exp(coef) se(coef)      z      p
    # x1  2.01039   7.46622  0.02250  89.33 <2e-16
    # x2 -0.99210   0.37080  0.01552 -63.95 <2e-16
    #
    # Likelihood ratio test=11321  on 2 df, p=< 2.2e-16
    # n= 10000, number of events= 6002
    mean(event)
    # [1] 0.6002
    
  • https://stats.stackexchange.com/a/135129 (Bender's inverse probability method). Let [math]\displaystyle{ h_0(t)=\lambda \rho t^{\rho - 1} }[/math] where shape [math]\displaystyle{ \rho \gt 0 }[/math] and scale [math]\displaystyle{ \lambda \gt 0 }[/math]. Following the inverse probability method, a realisation of [math]\displaystyle{ T \sim S(\cdot|x) }[/math] is obtained by computing [math]\displaystyle{ t = \left( - \frac{\log(v)}{\lambda \exp(x' \beta)} \right) ^ {1/\rho} }[/math] with [math]\displaystyle{ v }[/math] a uniform variate on (0,1). Using results on transformations of random variables, one may notice that [math]\displaystyle{ T }[/math] has a conditional Weibull distribution (given [math]\displaystyle{ x }[/math]) with shape [math]\displaystyle{ \rho }[/math] and scale [math]\displaystyle{ \lambda \exp(x' \beta) }[/math].
    # N = sample size    
    # lambda = scale parameter in h0()
    # rho = shape parameter in h0()
    # beta = fixed effect parameter
    # rateC = rate parameter of the exponential distribution of censoring variable C
    
    simulWeib <- function(N, lambda, rho, beta, rateC)
    {
      # covariate --> N Bernoulli trials
      x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))
    
      # Weibull latent event times
      v <- runif(n=N)
      Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)
    
      # censoring times
      C <- rexp(n=N, rate=rateC)
    
      # follow-up times and event indicators
      time <- pmin(Tlat, C)
      status <- as.numeric(Tlat <= C)
    
      # data set
      data.frame(id=1:N,
                 time=time,
                 status=status,
                 x=x)
    }
    # Test
    set.seed(1234)
    betaHat <- rate <- rep(NA, 1e3)
    for(k in 1:1e3)
    {
      dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
      fit <- coxph(Surv(time, status) ~ x, data=dat)
      rate[k] <- mean(dat$status == 0)
      betaHat[k] <- fit$coef
    }
    mean(rate)
    # [1] 0.12287
    mean(betaHat)
    # [1] -0.6085473
    
  • Generating survival times to simulate Cox proportional hazards models, Bender et al 2005: [math]\displaystyle{ T=H_0^{-1}[-\log(U) \exp(-\beta' x)] }[/math]. Bender2005.png, Bender2005table2.png
  • Simple example from glmnet
    set.seed(10101)
    N = 1000
    p = 30
    nzc = p/3
    x = matrix(rnorm(N * p), N, p)
    beta = rnorm(nzc)
    fx = x[, seq(nzc)] %*% beta/3
    hx = exp(fx)
    ty = rexp(N, hx)
    tcens = rbinom(n = N, prob = 0.3, size = 1)  # censoring indicator
    y = cbind(time = ty, status = 1 - tcens)  # y=Surv(ty,1-tcens) with library(survival)
    fit = glmnet(x, y, family = "cox")
    pred = predict(fit, newx = x)
    Cindex(pred, y)
    
  • A non-standard baseline hazard function [math]\displaystyle{ \lambda_0(t)=(t - .5)^2 }[/math] from the paper: A new nonparametric screening method for ultrahigh-dimensional survival data Liu 2018. The censoring time [math]\displaystyle{ C = \widetilde{C} \wedge \tau }[/math], where [math]\displaystyle{ \widetilde{C} }[/math] was generated from Unif (0, [math]\displaystyle{ \tau + 2 }[/math]) where [math]\displaystyle{ \tau }[/math] was chosen to yield the desirable censoring rates of 20% and 40%, respectively.
  • Regularization paths for Cox's proportional hazards model via coordinate descent. J Stat Software Simon et al 2011. Gsslasso Cox: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information by Tang 2019. See also Tian 2014 JASA p1525. X ~ standard Gaussian. True survival time Y = exp(beta X + k · Z). Z ~ N(0,1), and k is chosen so that the signal-to-noise ratio is 3.0 or to induce a certain censoring rate. Censoring time C = exp(k · Z). The observed survival time T = min{Y, C}.
  • survParamSim: Parametric Survival Simulation with Parameter Uncertainty
  • vivaGen – a survival data set generator for software testing BMC Bioinformatics 2020
  • Simulating survival outcomes: setting the parameters for the desired distribution. simstudy, Follow-up: simstudy function for generating parameters for survival distribution package was used.

Age + gene expression

Simulate a data such as gene is significant in ~ age + gene model, but insignificant in ~ gene model.

# Set seed for reproducibility
set.seed(123)

# Simulate data
n <- 200
age <- rnorm(n, mean = 50, sd = 10)  # Continuous variable for age
gene_expression <- rnorm(n, mean = 0, sd = 1)  # Continuous variable for gene expression

# Simulate survival data with a moderate effect of gene expression
time <- rexp(n, rate = 0.1 + 0.01 * age + 0.06 * gene_expression)
status <- sample(0:1, n, replace = TRUE, prob = c(0.3, 0.7))  # Censored status

# Create data frame
df <- data.frame(time, status, age, gene_expression)

# Fit Cox models
cox_model_1 <- coxph(Surv(time, status) ~ gene_expression, data = df)
cox_model_2 <- coxph(Surv(time, status) ~ age + gene_expression, data = df)

summary(cox_model_1)  # p(gene)=0.0675
summary(cox_model_2)  # p(gene)=0.0361, p(age)=0.0329

To use Kaplan-Meier curves to show the relationship between gene expression and survival while adjusting for age.

# Categorize age into two groups
df$age_group <- ifelse(df$age > median(df$age), "Older", "Younger")

# Categorize gene expression into two groups
df$gene_group <- ifelse(df$gene_expression > median(df$gene_expression), "High", "Low")

install.packages("survival")
install.packages("survminer")
library(survival)
library(survminer)

# KM
# 'gene_group' is the binary variable for high/low gene expression created above
km_fit <- survfit(Surv(time, status) ~ gene_group + age_group, data = df)
ggsurvplot(km_fit, data = df, pval = TRUE, risk.table = TRUE, 
           legend.title = "Gene Expression & Age Group")
# Enhance readability
df$group <- with(df, interaction(gene_group, age_group))
# interaction() orders levels alphabetically: "High.Older", "Low.Older",
# "High.Younger", "Low.Younger"; relabel in that order
levels(df$group) <- c("High/Old", "Low/Old", "High/Young", "Low/Young")
km_fit <- survfit(Surv(time, status) ~ group, data = df)
p <- ggsurvplot(km_fit, data = df, pval = TRUE, risk.table = FALSE, 
           legend.title = "Groups", 
           legend.labs = c("High/Old", "Low/Old", "High/Young", "Low/Young"))
p$plot <- p$plot + guides(colour = guide_legend(nrow = 4)) + 
          theme(legend.position = "right")
p

# Cox regression
cox_fit <- coxph(Surv(time, status) ~ gene_group + age_group, data = df)
ggsurvplot(survfit(cox_fit), data = df, pval = TRUE, legend.title = "Adjusted for Age")

Warning on multiple rates

Search for the Vectorize() function on this page.

mean(rexp(1000, rate=2) )
# [1] 0.5258078
mean(rexp(1000, rate=1) )
# [1] 0.9712124

z = rexp(1000, rate=c(1, 2))  # the rate vector is recycled: odd indices use rate 1, even indices rate 2
mean(z[seq(1, 1000, by=2)])
# [1] 1.041969
mean(z[seq(2, 1000, by=2)])
# [1] 0.5079594

Markov model

Fake Survival Data for the Disease Progression Model

Non-proportional hazards

Simulating time-to-event outcomes with non-proportional hazards

Standardize covariates

coxph() does not have an option to standardize covariates but glmnet() does.

library(glmnet)
library(survival)

N=1000;p=30
nzc=p/3
beta <- c(rep(1, 5), rep(-1, 5))

set.seed(1234)
  x=matrix(rnorm(N*p),N,p)
  x[, 1:5] <- x[, 1:5]*2
  x[, 6:10] <- x[, 6:10] + 2

  fx=x[,seq(nzc)] %*% beta
  hx=exp(fx)
  ty=rexp(N,hx)
  tcens <- rep(0,N)
  y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)

coxph(Surv(ty, 1-tcens) ~ x) %>% coef %>% head(10)
#         x1         x2         x3         x4         x5         x6         x7
#  0.6076146  0.6359927  0.6346022  0.6469274  0.6152082 -0.6614930 -0.5946101
#         x8         x9        x10
# -0.6726081 -0.6275205 -0.7073704

xscale <- scale(x, TRUE, TRUE) # standardize; columns 1-5 have sd = 2, so scaling halves their values
coxph(Surv(ty, 1-tcens) ~ xscale) %>% coef %>% head(10) # coefficients of x1-x5 roughly double
#    xscale1    xscale2    xscale3    xscale4    xscale5    xscale6    xscale7
#  1.2119940  1.2480628  1.2848646  1.2857796  1.1959619 -0.6431946 -0.5941309
#    xscale8    xscale9   xscale10
# -0.6723137 -0.6188384 -0.6793313

  set.seed(1)
  fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = TRUE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9351341  0.9394696  0.9187242  0.9418540  0.9111623 -0.9303783
# [7] -0.9271438 -0.9597583 -0.9493759 -0.9386065

  set.seed(1)
  fit=cv.glmnet(x,y,family="cox", nfolds=10, standardize = FALSE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  0.9357171  0.9396877  0.9200247  0.9420215  0.9118803 -0.9257406
# [7] -0.9232813 -0.9554017 -0.9448827 -0.9356009  

  set.seed(1)
  fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = TRUE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

  set.seed(1)
  fit=cv.glmnet(xscale,y,family="cox", nfolds=10, standardize = FALSE)
  as.vector(coef(fit, s = "lambda.min"))[seq(nzc)]
# [1]  1.8652889  1.8436015  1.8601198  1.8719515  1.7712951 -0.9046420
# [7] -0.9263966 -0.9593383 -0.9362407 -0.9014015

Predefined censoring rates

Simulating survival data with predefined censoring rates for proportional hazards models

Cross validation

  • CVPL (cross-validated partial likelihood)
    • https://www.rdocumentation.org/packages/survcomp/versions/1.22.0/topics/cvpl (lower is better)
    • https://rdrr.io/cran/dynpred/man/CVPL.html. source code. 1. it does LOOCV so no need to set a random seed. 2. it seems the function does not include lasso/glmnet 3. the formula on pages 173-174 of the book Dynamic Prediction in Clinical Survival Analysis says the partial log likelihood should include the penalty term. 4. concordance measures like Harrell’s C-index are not appropriate because they only measure the discrimination and not the calibration. PS: I downloaded and looked at the chapter source code. It uses optL1() function from the penalized package to obtain cross validated log partial likelihood.
      R> library(dynpred)
      R> data(ova)
      R> CVPL(Surv(tyears, d) ~ 1, data = ova)
      [1] NA
      R> CVPL(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam,
         data = ova)
      [1] -1652.169
      R> coxph(Surv(tyears, d) ~ Karn + Broders + FIGO + Ascites + Diam, data = ova)$loglik[2] # No CV
      [1] -1374.717
      
    • optL1() from the penalized package. It seems the penalized package has its own sequence of lambdas and these lambdas are totally different from glmnet() has created though the CV plot from each package shows a convex shape.
    • Gsslasso paper. CVPL does not include the penalty term.
    • https://web.stanford.edu/~hastie/Papers/v39i05.pdf#page=10 (larger is better)

Competing risk and cumulative incidence

Survival rate terminology

  • Disease-free survival (DFS): the period after curative treatment [disease eliminated] when no disease can be detected
    • DFS stands for disease-free survival, which measures the length of time that a patient survives without any signs or symptoms of the disease or cancer recurrence. It is calculated from the date of treatment initiation to the date of disease recurrence or death from any cause. DFS is often used as a secondary endpoint in clinical trials, especially in early-stage cancers where the primary goal of treatment is to achieve long-term remission.
    • What Is The Difference Between PFS And DFS? Disease-free survival (DFS), also known as relapse-free survival (RFS), is often used as the primary endpoint in phase III trials of adjuvant therapy. Progression-free survival (PFS) is commonly used as the primary endpoint in phase III trials evaluating the treatment of metastatic cancer.
    • The main difference between PFS and DFS is that PFS measures the time until the cancer progresses, whereas DFS measures the time until the cancer recurs or returns after treatment. PFS is generally considered a more sensitive measure of treatment efficacy than DFS because it accounts for any disease progression, not just a recurrence. However, DFS may be more appropriate for patients with early-stage cancer who are at lower risk of disease progression but have a higher risk of disease recurrence.

Time-dependent covariates

  • Using Time Dependent Covariates and Time Dependent Coefficients in the Cox Model
  • Building an Elastic-Net Cox Model with Time-Dependent covariates
  • Survival Analysis in R Emily Zabor
  • Difference of Time-dependent covariate and time-independent covariate: The difference between time-dependent and time-independent covariates in the context of a Cox model is indeed in how the Surv() function is used.
    # Time-independent covariate 
    Surv(time, status)
    
    # Time-dependent covariate 
    Surv(start, stop, status)
    

    Here, start and stop define an interval of time during which the covariates are assumed to be constant. This allows the covariates to change over time, as each subject can have multiple rows in the data corresponding to different time intervals.

  • Example: Let’s say we’re studying the effect of a treatment on survival time in patients with a certain disease. We have a covariate that changes over time: the dosage of the treatment, which can be increased or decreased at different times for each patient. Our data might look something like this:
    Patient ID | Start Time | Stop Time | Status | Dosage
             1 |          0 |         3 |      0 |     10
             1 |          3 |         6 |      1 |     20
             2 |          0 |         2 |      0 |     10
             2 |          2 |         5 |      0 |     15
             2 |          5 |         8 |      1 |     15

    Here, each row represents a time interval for a patient. The Start Time and Stop Time columns represent the beginning and end of the interval. The Status column indicates whether the event of interest (e.g., death) occurred at the end of the interval (1 if the event occurred, 0 otherwise). The Dosage column is our time-dependent covariate. A minimal coding sketch of this layout is given after this list.

  • A time-dependent covariate in a Cox model becomes a time-independent covariate under the special case where the covariate does not change over the duration of the study for any subject. In other words, if the value of the covariate is constant for each individual across all time points, it can be treated as a time-independent covariate. For example, consider a study investigating the effect of gender (a binary variable: male or female) on survival time. Since an individual’s gender does not change over time, it is a time-independent covariate. On the other hand, a variable like blood pressure, which can change at different time points for the same individual, would typically be considered a time-dependent covariate.
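
A minimal coding sketch of the (start, stop] layout (the toy dosage data are the hypothetical table above; the Stanford heart transplant data ship with the survival package):

library(survival)

# the toy table as a data frame
td <- data.frame(id     = c(1, 1, 2, 2, 2),
                 tstart = c(0, 3, 0, 2, 5),
                 tstop  = c(3, 6, 2, 5, 8),
                 status = c(0, 1, 0, 0, 1),
                 dosage = c(10, 20, 10, 15, 15))
Surv(td$tstart, td$tstop, td$status)   # counting-process (start, stop] form

# a real example: 'transplant' is the time-dependent covariate
coxph(Surv(start, stop, event) ~ transplant + age + surgery, data = heart)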

Books

Class notes

Cox proportional hazards model and the partial log-likelihood function

Let Yi denote the observed time (either censoring time or event time) for subject i, and let Ci be the indicator that the time corresponds to an event (i.e. if Ci = 1 the event occurred and if Ci = 0 the time is a censoring time). The hazard function for the Cox proportional hazard model has the form

[math]\displaystyle{ \lambda(t|X) = \lambda_0(t)\exp(\beta_1X_1 + \cdots + \beta_pX_p) = \lambda_0(t)\exp(X \beta^\prime). }[/math]

This expression gives the hazard at time t for an individual with covariate vector (explanatory variables) X. Based on this hazard function, a partial likelihood (defined on hazard function) can be constructed from the datasets as

[math]\displaystyle{ L(\beta) = \prod\limits_{i:C_i=1}\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j}, }[/math]

where θj = exp(Xj β) and X1, ..., Xn are the covariate vectors for the n independently sampled individuals in the dataset (treated here as column vectors). This pdf or this note give a toy example

The corresponding log partial likelihood is

[math]\displaystyle{ \ell(\beta) = \sum_{i:C_i=1} \left(X_i \beta^\prime - \log \sum_{j:Y_j\ge Y_i}\theta_j\right). }[/math]

This function can be maximized over β to produce maximum partial likelihood estimates of the model parameters.

The partial score function is [math]\displaystyle{ \ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right), }[/math]

and the Hessian matrix of the partial log likelihood is

[math]\displaystyle{ \ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j\times \sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime}{[\sum_{j:Y_j\ge Y_i}\theta_j]^2}\right). }[/math]

Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.

If X is age, then the coefficient is likely >0. If X is some treatment, then the coefficient is likely <0.

Get the partial likelihood of a Cox PH Model with new data

The offset argument was used; see https://stackoverflow.com/questions/26721551/is-there-a-way-to-get-the-partial-likelihood-of-a-cox-ph-model-with-new-data-and

How to compute partial log-likelihood function in Cox proportional hazards model?

set.seed(1)
n <- 100
t <- rexp(n)
c <- rbinom(n, 1, .2) ## censoring indicator (independent process)
x <- rbinom(n, 1, exp(-t)) ## some arbitrary relationship btn x and t
betamax <- coxph(Surv(t, c) ~ x)
beta1 <- coxph(Surv(t, c) ~ x, init = c(1), control=coxph.control(iter.max=0))

betamax$loglik[2]  # [1]=initial, [2]=final
# [1] -52.81476
beta1$loglik[2]
# [1] -52.85067
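
A by-hand check of the partial log-likelihood formula from the section above (valid here because the simulated event times contain no ties):

pll <- function(beta, time, status, x) {
  theta <- exp(x * beta)
  sum(sapply(which(status == 1), function(i) {
    x[i] * beta - log(sum(theta[time >= time[i]]))   # risk set {j: Y_j >= Y_i}
  }))
}
pll(1, t, c, x)               # matches beta1$loglik[2]
pll(coef(betamax), t, c, x)   # matches betamax$loglik[2]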

Implementing the Cox model

Implementing the Cox model in R

Optimization

Optimisation of a Cox proportional hazard model using Optimx()

Compare the partial likelihood to the full likelihood

http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=10

z-column (Wald statistic) from R's coxph()

How exactly can the Cox-model ignore exact times?

The Cox model does not depend on the event times themselves; it only needs their ordering.

library(survival)
survfit(Surv(time, status) ~ x, data = aml) 
fit <- coxph(Surv(time, status) ~ x, data = aml)
coef(fit) # 0.9155326
min(diff(sort(unique(aml$time)))) # 1

# Shift the survival times of some observations while keeping the same ordering.
# Choose the cutoff carefully: shifting the first 21 obs keeps all ties intact,
# but shifting only the first 20 breaks the tie at time 45, so the coefficient changes.
rbind(order(aml$time), sort(aml$time), aml$time[order(aml$time)])
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# [1,]   12   13   14   15    1   16    2    3   17     4     5    18    19     6    20     7
# [2,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
# [3,]    5    5    8    8    9   12   13   13   16    18    23    23    27    28    30    31
# [,17] [,18] [,19] [,20] [,21] [,22] [,23]
# [1,]    21     8    22     9    23    10    11
# [2,]    33    34    43    45    45    48   161
# [3,]    33    34    43    45    45    48   161

aml$time2 <- aml$time
aml$time2[order(aml$time)[1:21]] <- aml$time[order(aml$time)[1:21]] - .9
fit2 <- coxph(Surv(time2, status) ~ x, data = aml); fit2
coef(fit2) #      0.9155326
coef(fit) == coef(fit2) # TRUE

aml$time3 <- aml$time 
aml$time3[order(aml$time)[1:20]] <- aml$time[order(aml$time)[1:20]] - .9
fit3 <- coxph(Surv(time3, status) ~ x, data = aml); fit3
coef(fit3) #      0.8891567
coef(fit) == coef(fit3) # FALSE

Partial likelihood when there are ties; hypothesis testing: Likelihood Ratio Test, Wald Test & Score Test

http://math.ucsd.edu/~rxu/math284/slect5.pdf#page=29

In R's coxph(): Nearly all Cox regression programs use the Breslow method by default, but not this one. The Efron approximation is used as the default here; it is more accurate when dealing with tied death times, and is as efficient computationally.

http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/xaghtmlnode28.html (include the case when there is a partition of parameters). The formulas for 3 tests are also available on Appendix B of Klein book.

The following code does not compare two nested models directly, but since there is only one coefficient the tests reported by coxph() answer the same question. If there is more than one variable, we can use anova(model1, model2) to run the LRT.

library(KMsurv)
# No ties. Section 8.2
data(btrial)
str(btrial)
# 'data.frame':	45 obs. of  3 variables:
# $ time : int  19 25 30 34 37 46 47 51 56 57 ...
# $ death: int  1 1 1 1 1 1 1 1 1 1 ...
# $ im   : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(btrial, death == 1)$time)
# death time is unique
coxph(Surv(time, death) ~ im, data = btrial)
#     coef exp(coef) se(coef)    z     p
# im 0.980     2.665    0.435 2.25 0.024
# Likelihood ratio test=4.45  on 1 df, p=0.03
# n= 45, number of events= 24 

# Ties, Section 8.3
data(kidney)
str(kidney)
# 'data.frame':	119 obs. of  3 variables:
# $ time : num  1.5 3.5 4.5 4.5 5.5 8.5 8.5 9.5 10.5 11.5 ...
# $ delta: int  1 1 1 1 1 1 1 1 1 1 ...
# $ type : int  1 1 1 1 1 1 1 1 1 1 ...
table(subset(kidney, delta == 1)$time)
# 0.5  1.5  2.5  3.5  4.5  5.5  6.5  8.5  9.5 10.5 11.5 15.5 16.5 18.5 23.5 26.5 
# 6    1    2    2    2    1    1    2    1    1    1    2    1    1    1    1 

# Default: Efron method
coxph(Surv(time, delta) ~ type, data = kidney)
# coef exp(coef) se(coef)     z    p
# type -0.613     0.542    0.398 -1.54 0.12
# Likelihood ratio test=2.41  on 1 df, p=0.1
# n= 119, number of events= 26 
summary(coxph(Surv(time, delta) ~ type, data = kidney))
# n= 119, number of events= 26 
# coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6126    0.5420   0.3979 -1.539    0.124
#
# exp(coef) exp(-coef) lower .95 upper .95
# type     0.542      1.845    0.2485     1.182
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.41  on 1 df,   p=0.1
# Wald test            = 2.37  on 1 df,   p=0.1
# Score (logrank) test = 2.44  on 1 df,   p=0.1

# Breslow method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "breslow"))
# n= 119, number of events= 26 
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6182    0.5389   0.3981 -1.553     0.12
#
#       exp(coef) exp(-coef) lower .95 upper .95
# type    0.5389      1.856     0.247     1.176
#
# Concordance= 0.497  (se = 0.056 )
# Rsquare= 0.02   (max possible= 0.827 )
# Likelihood ratio test= 2.45  on 1 df,   p=0.1
# Wald test            = 2.41  on 1 df,   p=0.1
# Score (logrank) test = 2.49  on 1 df,   p=0.1

# Discrete/exact method
summary(coxph(Surv(time, delta) ~ type, data = kidney, ties = "exact"))
#         coef exp(coef) se(coef)      z Pr(>|z|)
# type -0.6294    0.5329   0.4019 -1.566    0.117
#
#      exp(coef) exp(-coef) lower .95 upper .95
# type    0.5329      1.877    0.2424     1.171
#
# Rsquare= 0.021   (max possible= 0.795 )
# Likelihood ratio test= 2.49  on 1 df,   p=0.1
# Wald test            = 2.45  on 1 df,   p=0.1
# Score (logrank) test = 2.53  on 1 df,   p=0.1

Hazard (function) and survival function

A hazard is the rate at which events happen, so that the probability of an event happening in a short time interval is the length of time multiplied by the hazard.

[math]\displaystyle{ h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T \lt t+\Delta t|T \geq t)}{\Delta t} = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln[S(t)] }[/math]

Therefore

[math]\displaystyle{ H(x) = \int_0^x h(u) \, du = -\ln[S(x)]. }[/math]

or

[math]\displaystyle{ S(x) = e^{-H(x)} }[/math]

The hazard itself may vary over time; the assumption in proportional hazards models is that the ratio of the hazards between groups stays constant over time.

Examples:

  • If h(t)=c, S(t) is exponential. f(t) = c exp(-ct). The mean is 1/c.
  • If [math]\displaystyle{ \log h(t) = c + \rho t }[/math], S(t) is Gompertz distribution.
  • If [math]\displaystyle{ \log h(t)=c + \rho \log (t) }[/math], S(t) is Weibull distribution.
  • For Cox regression, the survival function can be shown to be [math]\displaystyle{ S(t|X) = S_0(t) ^ {\exp(X\beta)} }[/math].
[math]\displaystyle{ \begin{align} S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\ &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\ &= e^{-\int_0^t h_0(u) du \cdot exp(X \beta)} \\ &= S_0(t)^{exp(X \beta)} \end{align} }[/math]

Alternatively,

[math]\displaystyle{ \begin{align} S(t|X) &= e^{-H(t)} = e^{-\int_0^t h(u|X)du} \\ &= e^{-\int_0^t h_0(u) exp(X\beta) du} \\ &= e^{-H_0(t) \cdot exp(X \beta)} \end{align} }[/math]

where the cumulative baseline hazard at time t, [math]\displaystyle{ H_0(t) }[/math], is commonly estimated through the non-parametric Breslow estimator.

How to assess Cox model fit

Check the proportional hazard (constant HR over time) assumption by cox.zph() - Schoenfeld Residuals
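A minimal example (using the lung data that ships with survival; any Cox fit works the same way):

library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
zp <- cox.zph(fit)
zp        # small p-values flag covariates whose effect appears to change over time
plot(zp)  # smoothed Schoenfeld residuals vs. time; a clear trend suggests non-proportional hazards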

Strata, Stratification

bladder1 <- bladder[bladder$enum < 5, ] 
o <- coxph(Surv(stop, event) ~ rx + size + number + strata(enum) , bladder1)
# the strata variable is not estimated as a covariate; it only defines separate baseline hazards
anova(o)

Sample size calculators

How many events are required to fit the Cox regression reliably?

  • The recommended number of events to fit a Cox regression model for survival data is typically guided by a rule of thumb. This rule suggests having at least 10-20 events per predictor in the model; see Survival analysis with rare events.
  • If we have only 1 covariate and the covariate is continuous, we need at least 2 events (one for the baseline hazard and one for beta).
  • If the covariate is discrete, we need at least one event in each of two groups in order to fit the Cox regression reliably. For example, status=(0,0,0,1,0,1) with x=(0,0,1,1,2,2) works fine.
library(survival)
head(ovarian)
#   futime fustat     age resid.ds rx ecog.ps
# 1     59      1 72.3315        2  1       1
# 2    115      1 74.4932        2  1       1
# 3    156      1 66.4658        2  1       2
# 4    421      0 53.3644        2  2       1
# 5    431      1 50.3397        2  1       1
# 6    448      0 56.4301        1  1       2

ova <- ovarian # n=26
ova$time <- ova$futime
ova$status <- 0
ova$status[1:4] <- 1
coxph(Surv(time, status) ~ rx, data = ova) # OK
summary(survfit(Surv(time, status) ~ rx, data =ova))
#                 rx=1 
#  time n.risk n.event survival std.err lower 95% CI upper 95% CI
#    59     13       1    0.923  0.0739        0.789            1
#   115     12       1    0.846  0.1001        0.671            1
#   156     11       1    0.769  0.1169        0.571            1
#                 rx=2 
#     time  n.risk  n.event  survival  std.err lower 95% CI upper 95% CI 
# 421.0000 10.0000   1.0000    0.9000   0.0949       0.7320       1.0000 

# Suspicious Cox regression result: all events fall in one group (no events in rx=2)
ova$status <- 0
ova$status[1:3] <- 1
coxph(Surv(time, status) ~ rx, data = ova)
#         coef exp(coef)  se(coef) z p
# rx -2.13e+01  5.67e-10  2.32e+04 0 1
#
# Likelihood ratio test=4.41  on 1 df, p=0.04
# n= 26, number of events= 3 
# Warning message:
# In fitter(X, Y, strats, offset, init, control, weights = weights,  :
#   Loglik converged before variable  1 ; beta may be infinite. 

summary(survfit(Surv(time, status) ~ rx, data = ova))
#                rx=1 
# time n.risk n.event survival std.err lower 95% CI upper 95% CI
#   59     13       1    0.923  0.0739        0.789            1
#  115     12       1    0.846  0.1001        0.671            1
#  156     11       1    0.769  0.1169        0.571            1
#                rx=2 
# time n.risk n.event survival std.err lower 95% CI upper 95% CI

Extract p-values

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# method 1:
beta <- coef(fit)
se <- sqrt(diag(vcov(fit)))
1 - pchisq((beta/se)^2, 1)

# method 2: https://www.biostars.org/p/65315/
coef(summary(fit))[, "Pr(>|z|)"]

More statistics including the HR confidence intervals.

Expectation of life & expected future lifetime

  • The average lifetime is the same as the area under the survival curve.
[math]\displaystyle{ \begin{align} \mu &= \int_0^\infty t f(t) dt \\ &= \int_0^\infty S(t) dt \end{align} }[/math]

by integrating by parts, making use of the fact that -f(t) is the derivative of S(t), which has limits S(0)=1 and [math]\displaystyle{ S(\infty)=0 }[/math]. With censored data the mean may not be estimable: if censored observations extend beyond the last recorded death, the tail of S(t) is unknown and the area under the curve is not fully determined.

The expected future lifetime (mean residual life) given survival to time [math]\displaystyle{ t_0 }[/math] is

[math]\displaystyle{ \frac{1}{S(t_0)} \int_0^{\infty} t\,f(t_0+t)\,dt = \frac{1}{S(t_0)} \int_{t_0}^{\infty} S(t)\,dt. }[/math]
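Relatedly, the restricted mean survival time (the area under the Kaplan-Meier curve up to a cutoff) can be printed directly from a survfit object; a minimal sketch with the aml data:

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = aml)
print(fit, print.rmean = TRUE)   # restricted mean, by default up to the largest follow-up time
print(fit, rmean = 24)           # restrict the mean to 24 weeks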

Hazard Ratio (exp(beta)) vs Relative Risk

  1. https://en.wikipedia.org/wiki/Hazard_ratio
  2. Hazard represents the instantaneous event rate, which means the probability that an individual would experience an event (e.g. death/relapse) at a particular given point in time after the intervention, assuming that this individual has survived to that particular point of time without experiencing any event. See an example here.
  3. Hazard ratio is a measure of an effect of an intervention of an outcome of interest over time. The hazard ratio is not computed at any one time point. See an example here.
  4. Since there is only one hazard ratio reported, it can only be interpreted if you assume that the population hazard ratio is consistent over time, and that any differences are due to random sampling. If two survival curves cross, the hazard ratios are certainly not consistent. See Hazard ratio from survival analysis including how the hazard ratio is computed.
  5. Hazard ratio = hazard in the intervention group / Hazard in the control group
  6. A hazard ratio is often reported as a “reduction in risk of death or progression”: this risk reduction is calculated as 1 minus the hazard ratio (exp(beta)); e.g., an HR of 0.84 is equal to a 16% reduction in risk. See this video Interpreting Hazard Ratios and stackexchange.com.
  7. If the hazard ratio for overall survival (OS) from initiation of therapy for patients with BRCAm vs BRCAwt is 0.812, this means that, at any given time point, the hazard of death (or event of interest) for patients with BRCAm is 0.81 times the hazard of death for patients with BRCAwt. In other words, patients with BRCAm have a 19% lower risk of death at any time point compared to patients with BRCAwt. Prevalence and prognosis of BRCAm, homologous recombination repair mutation (HRRm) or HR deficiency positive (HRD+) across tumor types.
  8. Hazard ratio and its confidence can be obtained in R by using the summary() method; e.g. fit <- coxph(Surv(time, status) ~ x); summary(fit)$conf.int; confint(fit)
  9. The coefficient beta represents the expected change in log hazard if X changes by one unit and all other variables are held constant in Cox models. See Variable selection – A review and recommendations for the practicing statistician by Heinze et al 2018.
  10. Understanding the endpoints in oncology: overall survival, progression free survival, hazard ratio, censored value

Another example (John Fox, Cox Proportional-Hazards Regression for Survival Data) is assuming Y ~ age + prio + others.

  • If exp(beta_age) = 0.944. It means an additional year of age reduces the hazard by a factor of .944 on average, or (1-.944)*100 = 5.6 percent.
  • If exp(beta_prio) = 1.096, it means each prior conviction increases the hazard by a factor of 1.096, or 9.6 percent.

Interpretation of Hazard Ratio for Progression-Free Survival

  • Assuming females are the reference group
  • If exp(beta_sex) = 1.5, it suggests that males have a 50% higher risk of disease progression or death (whichever comes first) at any given time compared to females. In other words, males are 1.5 times more likely to experience disease progression or death compared to females, assuming all other variables in the model are held constant.
  • If HR = 0.7, it suggests that males have a 30% lower risk of disease progression or death at any given time compared to females.

Interpretation of Hazard Ratio for Overall Survival

  • Assuming females are the reference group
  • If HR = 1.5, it suggests that males have a 50% higher risk of death at any given time compared to females.
  • If HR = 0.7, it suggests that males have a 30% lower risk of death at any given time compared to females.

How do you explain the difference between hazard ratio and relative risk to a layman? from Quora.

See Using R for Biomedical Statistics for relative risk, odds ratio, et al.

Odds Ratio, Hazard Ratio and Relative Risk by Janez Stare

For two groups that differ only in treatment condition, the ratio of the hazard functions is given by [math]\displaystyle{ e^\beta }[/math], where [math]\displaystyle{ \beta }[/math] is the estimate of treatment effect derived from the regression model. See here.

Compute hazard ratios from coxph() in R (Hint: exp(beta)).

Prognostic index is defined on http://www.math.ucsd.edu/~rxu/math284/slect6.pdf#page=2.

Basics of the Cox proportional hazards model. Good prognostic factor (b<0 or HR<1) and bad prognostic factor (b>0 or HR>1).

Variable selection: variables were retained in the prediction models if they had a hazard ratio of <0.85 or >1.15 (for binary variables) and were statistically significant at the 0.01 level. see Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study.

library(KMsurv)
# No ties. Section 8.2
data(btrial)
coxph(Surv(time, death) ~ im, data = btrial)
summary(coxph(Surv(time, death) ~ im, data = btrial))$conf.int
#     exp(coef) exp(-coef) lower .95 upper .95
# im  2.664988  0.3752362  1.136362  6.249912

So the hazard ratio and its 95% CI can be obtained from the 1st, 3rd and 4th elements of conf.int.

Hazard Ratio, confidence interval, Table 1

  • Google image: survival data cox model hazard ratio table 1
  • To get the 95% CI, use the summary() function
    > mod = coxph(Surv(time,status) ~ x, data = aml)
    > summary(mod)
      n= 23, number of events= 18 
    
                     coef exp(coef) se(coef)     z Pr(>|z|)  
    xNonmaintained 0.9155    2.4981   0.5119 1.788   0.0737 .
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    
                   exp(coef) exp(-coef) lower .95 upper .95
    xNonmaintained     2.498     0.4003    0.9159     6.813
    
    Concordance= 0.619  (se = 0.063 )
    Likelihood ratio test= 3.38  on 1 df,   p=0.07
    Wald test            = 3.2  on 1 df,   p=0.07
    Score (logrank) test = 3.42  on 1 df,   p=0.06
    

    Naive method (wrong) to calculate the hazard ratio

    > with(aml, table(x, status))
                   status
    x                0  1
      Maintained     4  7
      Nonmaintained  1 11
    > (11/12) / (7/11)  # hazard from the 2nd group / hazard from the 1st group
    [1] 1.440476
    
  • To report the HR in table 1 for multiple variables, one must use Univariate Cox regression; for example this one uses lapply().
  • finalfit package. Time-to-event (Survival) vignette.
    library(finalfit) # finalfit()
    library(survival)
    library(dplyr)    # mutate(), %>%
    library(forcats)  # fct_recode()
    
    melanoma = boot::melanoma #F1 here for help page with data dictionary
    
    melanoma = melanoma %>%
      mutate(
        # Overall survival
        status_os = ifelse(status == 2, 0, # "still alive"
                1), # "died of melanoma" or "died of other causes"
        sex = factor(sex) %>% 
            fct_recode("Male" = "1", 
                       "Female" = "0"),
        ulcer = factor(ulcer) %>% 
            fct_recode("No" = "0",
                       "Yes" = "1")
      )
    
    dependent_os = "Surv(time, status_os)"
    explanatory = c("age", "sex", "thickness", "ulcer")
    
    mykable = function(x){
        knitr::kable(x, row.names = FALSE, align = c("l", "l", "r", "r", "r", "r", "r", "r", "r"))
    }
    
    univariate_results <- melanoma %>% 
        finalfit(dependent_os, explanatory) 
    univariate_results2 <- univariate_results[, -5] # exclude multivariate column
    
    # Output to CSV
    write.csv(univariate_results, file = "univariate_results.csv", row.names = FALSE)
    
    # Install and load required packages
    library(flextable)
    library(officer)
    
    # Convert to flextable
    ft <- flextable::flextable(univariate_results2)
    
    # Adjust the table style (optional)
    ft <- ft %>%
      flextable::theme_booktabs() %>%
      flextable::autofit()
    
    # Save as Word document
    doc <- read_docx()
    doc <- body_add_flextable(doc, value = ft)
    print(doc, target = "univariate_results.docx")
    
    # Hazard ratio plot
    melanoma %>% 
        hr_plot(dependent_os, explanatory)
    

Hazard Ratio and death probability

https://en.wikipedia.org/wiki/Hazard_ratio#The_hazard_ratio_and_survival

Suppose S0(t)=.2 (20% survived at time t) and the hazard ratio (hr) is 2 (a group has twice the chance of dying than a comparison group), then (Cox model is assumed)

  1. S1(t) = S0(t)^hr = 0.2^2 = 0.04 (4% survived at t); a quick numeric check follows this list.
  2. The corresponding death probabilities are 0.8 and 0.96.
  3. If a subject is exposed to twice the risk of a reference subject at every age, then the probability that the subject will be alive at any given age is the square of the probability that the reference subject (covariates = 0) would be alive at the same age. See p10 of these lecture notes.
  4. exp(x*beta) is the relative risk associated with covariate value x.
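A quick numeric check of the relationship above (plain arithmetic, no model fitting involved):

S0 <- 0.2; hr <- 2
S1 <- S0^hr                              # 0.04, i.e. 4% survive
c(death0 = 1 - S0, death1 = 1 - S1)      # death probabilities 0.80 and 0.96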

Hazard Ratio Forest Plot

The forest plot quickly summarizes the hazard ratios across multiple variables. If the confidence interval crosses the 1.0 value, the hazard ratio is not significant and there is no clear advantage for either arm.

See also ggplot2 forest plot. survminer::ggforest(), survivalAnalysis::forest_plot() and forestmodel::forest_model().

library(survival)
library(survivalAnalysis)
library(survminer)
data(cancer, package = 'survival') # load colon among others
colon$sex <- factor(colon$sex)

tmp1 <- survival::colon %>%
   analyse_multivariate(vars(time, status),
      vars(rx, sex, age, obstruct, perfor, nodes, differ, extent)) 
tmp1 %>% forest_plot()

tmp2 <- coxph(Surv(time, status) ~ rx + sex + age + obstruct + 
     perfor + nodes + differ + extent, data=colon)
survminer::ggforest(tmp2, data = colon)

# Note that the above is not quite right since it is not based on 
# the univariate model
coxph(Surv(time, status) ~ sex, data  = colon)

# Even if all are continuous, fitting univariate and multivariate models
# returns different results
coxph(Surv(time, status) ~ obstruct, data  = colon)
coxph(Surv(time, status) ~ obstruct + perfor + age, data  = colon)

So the limitation of survminer::ggforest() is that it cannot fit a separate univariate Cox model for each variable. The survivalAnalysis package can do that, but make sure the data are coded correctly first (e.g. recode 'unknown' entries in variables that should be continuous). See the section "Multiple Univariate Analyses" in the Multivariate Survival Analysis vignette.

library(dplyr)   # mutate(), %>%
library(purrr)   # map()
library(survivalAnalysis)

# display labels for the plot (the vignette defines covariate_names; any named vector works)
covariate_names <- c(age = "Age", sex = "Sex",
                     ph.ecog = "ECOG Status", wt.loss = "Weight Loss")

df <- survival::lung %>% 
  mutate(sex = rename_factor(sex, `1` = "male", `2` = "female"))

map(vars(age, sex, ph.ecog, wt.loss), function(by)
{
  analyse_multivariate(df,
                       vars(time, status),
                       covariates = list(by), # covariates expects a list
                       covariate_name_dict = covariate_names)
}) %>%
  forest_plot(factor_labeller = covariate_names,
              endpoint_labeller = c(time="OS"),
              orderer = ~order(HR),
              labels_displayed = c("endpoint", "factor", "n"),
              ggtheme = ggplot2::theme_bw(base_size = 10))

Other examples:

Multivariate model

  • Variables order does not change the hazard ratios or the p-value
    R> data(cancer, package = 'survival') # load colon among others
    R> colon$sex <- factor(colon$sex)
    R> tmp2 <- coxph(Surv(time, status) ~ rx + sex + age + obstruct + 
                     perfor + nodes + differ + extent, data=colon)
    R> tmp2
    Call:
    coxph(formula = Surv(time, status) ~ rx + sex + age + obstruct +
        perfor + nodes + differ + extent, data = colon)
    
                   coef exp(coef)  se(coef)      z        p
    rxLev     -0.072841  0.929749  0.079231 -0.919   0.3579
    rxLev+5FU -0.450133  0.637543  0.085975 -5.236 1.64e-07
    sex1      -0.090141  0.913803  0.068075 -1.324   0.1855
    age        0.002164  1.002166  0.002874  0.753   0.4516
    obstruct   0.202638  1.224629  0.084372  2.402   0.0163
    perfor     0.149875  1.161689  0.182766  0.820   0.4122
    nodes      0.081185  1.084571  0.006698 12.120  < 2e-16
    differ     0.146674  1.157977  0.070095  2.093   0.0364
    extent     0.467536  1.596057  0.081726  5.721 1.06e-08
    
    Likelihood ratio test=212.6  on 9 df, p=< 2.2e-16
    n= 1776, number of events= 876
       (82 observations deleted due to missingness)
    
    # Move 'nodes' to the last term
    R> tmp3 <- coxph(Surv(time, status) ~ rx + sex + age + obstruct +
                     perfor + differ + extent + nodes, data=colon)
    R> tmp3
    Call:
    coxph(formula = Surv(time, status) ~ rx + sex + age + obstruct +
        perfor + differ + extent + nodes, data = colon)
    
                   coef exp(coef)  se(coef)      z        p
    rxLev     -0.072841  0.929749  0.079231 -0.919   0.3579
    rxLev+5FU -0.450133  0.637543  0.085975 -5.236 1.64e-07
    sex1      -0.090141  0.913803  0.068075 -1.324   0.1855
    age        0.002164  1.002166  0.002874  0.753   0.4516
    obstruct   0.202638  1.224629  0.084372  2.402   0.0163
    perfor     0.149875  1.161689  0.182766  0.820   0.4122
    differ     0.146674  1.157977  0.070095  2.093   0.0364
    extent     0.467536  1.596057  0.081726  5.721 1.06e-08
    nodes      0.081185  1.084571  0.006698 12.120  < 2e-16
    
    Likelihood ratio test=212.6  on 9 df, p=< 2.2e-16
    n= 1776, number of events= 876
       (82 observations deleted due to missingness)
    
  • Univariate model and multivariate model result diff
    R> coxph(Surv(time, status) ~ perfor, data = colon)
    Call:
    coxph(formula = Surv(time, status) ~ perfor, data = colon)
    
             coef exp(coef) se(coef)     z     p
    perfor 0.2644    1.3026   0.1800 1.469 0.142
    
    Likelihood ratio test=1.99  on 1 df, p=0.1583
    n= 1858, number of events= 920
    R> coxph(Surv(time, status) ~ age + perfor, data = colon)
    Call:
    coxph(formula = Surv(time, status) ~ age + perfor, data = colon)
    
                coef exp(coef)  se(coef)      z     p
    age    -0.002325  0.997678  0.002797 -0.831 0.406
    perfor  0.259370  1.296113  0.180067  1.440 0.150
    
    Likelihood ratio test=2.68  on 2 df, p=0.2621
    n= 1858, number of events= 920
    

Infinity HR

Monotone likelihood and coxphf package.

Piece-wise constant baseline hazard model, Poisson model and Breslow estimate

Estimate baseline hazard [math]\displaystyle{ h_0(t) }[/math], Breslow cumulative baseline hazard [math]\displaystyle{ H_0(t) }[/math], baseline survival function [math]\displaystyle{ S_0(t) }[/math] and the survival function [math]\displaystyle{ S(t) }[/math]

Google: how to estimate baseline hazard rate

  • Nelson-Aalen estimator in R. The easiest way to get the Nelson-Aalen estimator is
    basehaz(coxph(Surv(time,status)~1,data=aml)) 
    

    because the (Breslow) hazard estimator for a Cox model reduces to the Nelson-Aalen estimator when there are no covariates. You can also compute it from information returned by survfit().

    fit <- survfit(Surv(time, status) ~ 1, data = aml)
    cumsum(fit$n.event/fit$n.risk) # the Nelson-Aalen estimator for the times given by fit$times
    -log(fit$surv)   # cumulative hazard
    

Manually compute

Breslow estimator of the baseline cumulative hazard rate reduces to the Nelson-Aalen estimator [math]\displaystyle{ \sum_{t_i \le t} \frac{d_i}{Y_i} }[/math] ([math]\displaystyle{ Y_i }[/math] is the number at risk at time [math]\displaystyle{ t_i }[/math]) when there are no covariates present; see p283 of Klein 2003.

[math]\displaystyle{ \begin{align} \hat{H}_0(t) &= \sum_{t_i \le t} \frac{d_i}{W(t_i;b)}, \\ W(t_i;b) &= \sum_{j \in R(t_i)} \exp(b' z_j) \end{align} }[/math]

where [math]\displaystyle{ t_1 \lt t_2 \lt \cdots \lt t_D }[/math] denotes the distinct death times and [math]\displaystyle{ d_i }[/math] be the number of deaths at time [math]\displaystyle{ t_i }[/math]. The estimator of the baseline survival function [math]\displaystyle{ S_0(t) = \exp [-H_0(t)] }[/math] is given by [math]\displaystyle{ \hat{S}_0(t) = \exp [-\hat{H}_0(t)] }[/math].

  • Below we use the formula to compute the cumulative hazard (and survival function) and compare them with the result obtained using R's built-in functions. The following code is a modification of the snippet from the post Breslow cumulative hazard and basehaz().
    bhaz <- function(beta, time, status, x) {
      # time can be duplicated
      # x (covariate) must be continuous
      data <- data.frame(time,status,x)
      data <- data[order(data$time), ]
      dt   <- unique(data$time)
      k    <- length(dt)
      risk <- exp(data.matrix(data[,-c(1:2)]) %*% beta)
      h    <- rep(0,k)
      
      for(i in 1:k) {
        h[i] <- sum(data$status[data$time==dt[i]]) / sum(risk[data$time>=dt[i]])          
      }
      
      return(data.frame(h, dt))
    }
    
    # Example 1 'ovarian' which has unique survival time
    all(table(ovarian$futime) == 1) # TRUE
    
    fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
    # 1. compute the cumulative baseline hazard 
    # 1.1 manually using Breslow estimator at the observed time
    h0 <- bhaz(fit$coef, ovarian$futime, ovarian$fustat, ovarian$age)
    H0 <- cumsum(h0$h)
    # 1.2 use basehaz (always compute at the observed time)
    # since we consider the baseline, we need to use centered=FALSE
    H0.bh <- basehaz(fit, centered = FALSE)
    cbind(H0, h0$dt, H0.bh)
    range(abs(H0 - H0.bh$hazard)) # [1] 6.352747e-22 5.421011e-20
    
    # 2. compute the estimation of the survival function
    # 2.1 manually using Breslow estimator at t = observed time (one dim, sorted) 
    #     and observed age (another dim):
    # S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
    S1 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
    dim(S1) # row = times, col = age
    # 2.2 How about considering times not at observed (e.g. h0$dt - 10)?
    # Let's look at the hazard rate
    newtime <- h0$dt - 10
    H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
    S2 <- outer(exp(-H0),  exp(coef(fit) * ovarian$age), "^")
    dim(S2) # row = newtime, col = age
    
    # 2.3 use summary() and survfit() to obtain the estimation of the survival
    S3 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = h0$dt)$surv
    dim(S3)  # row = times, col = age
    range(abs(S1 - S3)) # [1] 2.117244e-17 6.544321e-12
    # 2.4 How about considering times not at observed (e.g. h0$dt - 10)?
    # Note that we cannot put times larger than the observed
    S4 <- summary(survfit(fit, data.frame(age = ovarian$age)), times = newtime)$surv
    range(abs(S2 - S4)) # [1] 0.000000e+00 6.544321e-12
    
    # Example 2 'kidney' which has duplicated time
    fit <- coxph(Surv(time, status) ~ age, data = kidney)
    # manually compute the breslow cumulative baseline hazard
    #   at the observed time
    h0 <- with(kidney, bhaz(fit$coef, time, status, age))
    H0 <- cumsum(h0$h)
    # use basehaz (always compute at the observed time)
    # since we consider the baseline, we need to use centered=FALSE
    H0.bh <- basehaz(fit, centered = FALSE)
    head(cbind(H0, h0$dt, H0.bh))
    range(abs(H0 - H0.bh$hazard)) # [1] 0.000000000 0.005775414
    
    # manually compute the estimation of the survival function
    # at t = observed time (one dim, sorted) and observed age (another dim):
    # S(t) = S0(t) ^ exp(bx) = exp(-H0(t)) ^ exp(bx)
    S1 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
    dim(S1) # row = times, col = age
    # How about considering times not at observed (h0$dt - 1)?
    # Let's look at the hazard rate
    newtime <- h0$dt - 1
    H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
    S2 <- outer(exp(-H0),  exp(coef(fit) * kidney$age), "^")
    dim(S2) # row = newtime, col = age
    
    # use summary() and survfit() to obtain the estimation of the survival
    S3 <- summary(survfit(fit, data.frame(age = kidney$age)), times = h0$dt)$surv
    dim(S3)  # row = times, col = age
    range(abs(S1 - S3)) # [1] 0.000000000 0.002783715
    # How about considering times not at observed (h0$dt - 1)?
    # We cannot put times larger than the observed
    S4 <- summary(survfit(fit, data.frame(age = kidney$age)), times = newtime)$surv
    range(abs(S2 - S4)) # [1] 0.000000000 0.002783715
    
  • basehaz() (essentially a wrapper around survfit()) from stackexchange.com and here. basehaz() has a parameter centered which by default is TRUE. Actually basehaz() gives the cumulative hazard H(t). See here. That is, exp(-basehaz(fit)$hazard) is the same as summary(survfit(fit))$surv, so basehaz() offers little beyond what survfit() already provides.
    fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian) 
    > fit
    Call:
    coxph(formula = Surv(futime, fustat) ~ age, data = ovarian)
    
          coef exp(coef) se(coef)    z      p
    age 0.1616    1.1754   0.0497 3.25 0.0012
    
    Likelihood ratio test=14.3  on 1 df, p=0.000156
    n= 26, number of events= 12 
    
    # Note the default 'centered = TRUE' for basehaz() 
    > exp(-basehaz(fit)$hazard) # exp(-cumulative hazard)
     [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
     [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
    [15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
    [22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
    > dim(ovarian)
    [1] 26  6
    > exp(-basehaz(fit)$hazard)[ovarian$fustat == 1]
     [1] 0.9880206 0.9738738 0.9545899 0.8973620 0.8243117 0.8243117 0.7750981
     [8] 0.7750981 0.5204807 0.5204807 0.5204807 0.5204807
    > summary(survfit(fit))$surv 
     [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
     [8] 0.7750981 0.7244924 0.6734146 0.5962187 0.5204807
    > summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
              time=ovarian$futime[ovarian$fustat == 1])$surv
    # Same result as above
    > summary(survfit(fit, data.frame(age=mean(ovarian$age))), 
                        time=ovarian$futime)$surv
     [1] 0.9880206 0.9738738 0.9545899 0.9334790 0.8973620 0.8624781 0.8243117
     [8] 0.8243117 0.8243117 0.7750981 0.7750981 0.7244924 0.6734146 0.6734146
    [15] 0.5962187 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
    [22] 0.5204807 0.5204807 0.5204807 0.5204807 0.5204807
    
  • Calculating survival probability per person at time (t) from Cox PH

Predicted survival probability in Cox model: survfit.coxph(), plot.survfit() & summary.survfit( , times)

For theory, see section 8.6 Estimation of the survival function in Klein & Moeschberger. See the formula in Prediction in Cox regression.

For R, see Extract survival probabilities in Survfit by groups

plot.survfit(). fun="log" to plot log survival curve, fun="event" plot cumulative events, fun="cumhaz" plots cumulative hazard (f(y) = -log(y)).

The plot function below will draw 4 curves: [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60)} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageII})} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIII})} }[/math], [math]\displaystyle{ S_0(t)^{\exp(\hat{\beta}_{age}*60+\hat{\beta}_{stageIV})} }[/math].

library(KMsurv) # Data package for Klein & Moeschberge
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)

# Figure 8.3 from Section 8.6
plot(survfit(coxobj, newdata = data.frame(age=rep(60, 4), stage=factor(1:4))), lty = 1:4)

# Estimated probability for a 60-year old for different stage patients
out <- summary(survfit(coxobj, data.frame(age = rep(60, 4), stage=factor(1:4))), times = 5)
out$surv
#  time n.risk n.event survival1 survival2 survival3 survival4
#    5     34      40     0.702     0.665      0.51     0.142
sum(larynx$time >=5) # n.risk
# [1] 34
sum(larynx$delta[larynx$time <=5]) # n.event
# [1] 40
sum(larynx$time >5) # Wrong
# [1] 31
sum(larynx$delta[larynx$time <5]) # Wrong
# [1] 39

# 95% confidence interval
out$lower
# 0.5707952 0.4864903 0.3539527 0.03691768
out$upper
# 0.8629482 0.9102532 0.7352413 0.548579

We need to pay attention when the number of covariates is large (and we don't want to spell out each covariate in the formula). The key is to create a data frame and use the dot (.) in the formula. This fixes the warning message 'newdata' had XXX rows but variables found have YYY rows from running survfit(, newdata).

Another way is to use as.formula() if we don't want to create a new object.

xsub <- data.frame(xtrain)
colnames(xsub) <- paste0("x", 1:ncol(xsub))

coxobj <- coxph(Surv(ytrain[, "time"], ytrain[, "status"]) ~ ., data = xsub)

newdata <- data.frame(xtest)
colnames(newdata) <- paste0("x", 1:ncol(newdata))

survprob <- summary(survfit(coxobj, newdata=newdata), 
                    times = 5)$surv[1, ]  
# since there is only 1 time point, we select the first row in surv (surv is a matrix with one row).

The predictSurvProb() function from the pec package can also be used to extract survival probability predictions from various modeling approaches.
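A minimal sketch of predictSurvProb() (the lung data and the time points are arbitrary; the coxph fit keeps its design matrix via x = TRUE, which pec generally expects):

library(survival)
library(pec)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE)
p <- predictSurvProb(fit, newdata = lung[1:5, ], times = c(180, 365))
p   # matrix of predicted survival probabilities (rows = subjects, columns = times)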

Visualizing the estimated distribution of survival times

survminer::ggsurvplot(); see here.

Predicted survival probabilities from glmnet: c060/peperr, biospear packages and manual computation

## S3 method for class 'glmnet'
predictProb(object, response, x, times, complexity, ...)

set.seed(1234)
junk <- biospear::simdata(n=500, p=500, q.main = 10, q.inter = 0, 
                  prob.tt = .5, m0=1, alpha.tt=0, 
                  beta.main= -.5, b.corr = .7, b.corr.by=25, 
                  wei.shape = 1, recr=3, fu=2, timefactor=1)
summary(junk$time)
library(glmnet)
library(c060) # Error: object 'predictProb' not found
library(peperr)

y <- cbind(time=junk$time, status=junk$status)
x <- cbind(1, junk[, "treat", drop = FALSE])
names(x) <- c("inter", "treat")
x <- as.matrix(x)
cvfit <- cv.glmnet(x, y, family = "cox")
obj <- glmnet(x, y, family = "cox")
xnew <- matrix(c(0,0), nr=1)
predictProb(obj, y, xnew, times=1, complexity = cvfit$lambda.min)
# Error in exp(lp[response[, 1] >= t.unique[i]]) : 
#   non-numeric argument to mathematical function
# In addition: Warning message:
# In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
expSurv(res, traindata, method, ci.level = .95, boot = FALSE, nboot, smooth = TRUE,
  pct.group = 4, time, trace = TRUE, ncores = 1)
# S3 method for resexpSurv
predict(object, newdata, ...)
# continue the example
# BMsel() takes a little while
resBM <- biospear::BMsel(
    data = junk, 
    method = "lasso", 
    inter = FALSE, 
    folds = 5)
  
# Note: if we specify time =5 in expsurv(), we will get an error message
# 'time' is out of the range of the observed survival time.
# Note: if we try to specify more than 1 time point, we will get the following msg
# 'time' must be an unique value; no two values are allowed.
esurv <- biospear::expSurv(
    res = resBM,
    traindata = junk,
    boot = FALSE,
    time = median(junk$time),
    trace = TRUE)
# debug(biospear:::plot.resexpSurv)
plot(esurv, method = "lasso")
# This is equivalent to doing the following
xx <- attributes(esurv)$m.score[, "lasso"]
wc <- order(xx); wgr <- 1:nrow(esurv$surv)
p1 <- plot(x = xx[wc], y = esurv$surv[wgr, "lasso"][wc], 
           xlab='prognostic score', ylab='survival prob')
# prognostic score beta*x in this cases.
# ignore treatment effect and interactions
xxmy <- as.vector(as.matrix(junk[, names(resBM$lasso)]) %*% resBM$lasso)
# See page4 of the paper. Scaled scores were used in the plot
range(abs(xx - (xxmy-quantile(xxmy, .025)) / (quantile(xxmy, .975) -  quantile(xxmy, .025))))
# [1] 1.500431e-09 1.465241e-06

h0 <- bhaz(resBM$lasso, junk$time, junk$status, junk[, names(resBM$lasso)])
newtime <- median(junk$time)
H0 <- sapply(newtime, function(tt) sum(h0$h[h0$dt <= tt]))
# newx <- junk[ , names(resBM$lasso)]
# Compute the estimate of the survival probability at training x and time = median(junk$time)
# using Breslow method
S2 <- outer(exp(-H0),  exp(xxmy), "^") # row = newtime, col = newx
range(abs(esurv$surv[wgr, "lasso"] - S2))
# [1] 6.455479e-18 2.459136e-06
# My implementation of the prognostic plot
#   Note that the x-axis on the plot is based on prognostic scores beta*x, 
#   not on treatment modifying scores gamma*x as described in the paper.
#   Maybe it is because inter = FALSE in BMsel() we have used.
p2 <- plot(xxmy[wc], S2[wc], xlab='prognostic score', ylab='survival prob')  # cf p1

> names(esurv)
[1] "surv"  "lower" "upper"
> str(esurv$surv)
 num [1:500, 1:2] 0.772 0.886 0.961 0.731 0.749 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "lasso" "oracle"

esurv2 <- predict(esurv, newdata = junk)
esurv2$surv       # All zeros?

Bug from the sample data (interaction was considered here; inter = TRUE) ?

set.seed(123456)
resBM <-  BMsel(
  data = Breast,
  x = 4:ncol(Breast),
  y = 2:1,
  tt = 3,
  inter = TRUE,
  std.x = TRUE,
  folds = 5,
  method = c("lasso", "lasso-pcvl"))

esurv <- expSurv(
  res = resBM,
  traindata = Breast,
  boot = FALSE,
  smooth = TRUE,
  time = 4,
  trace = TRUE
)
Computation of the expected survival
Computation of analytical confidence intervals
Computation of smoothed B-splines
Error in cobs(x = x, y = y, print.mesg = F, print.warn = F, method = "uniform",  : 
  There is at least one pair of adjacent knots that contains no observation.

Plot predictor vs HR

Loglikelihood

  • fit$loglik is a vector of length 2 (initial model, fitted model). So deviance can be calculated by -2*fit$loglik[2]; see here for an example from BhGLM package.
  • Use the anova() method for coxph objects (survival package) to do a likelihood ratio test. Note this function does not work on a glmnet object.
  • residuals.coxph Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model.
  • No deviance() on coxph object!
  • logLik() returns fit$loglik[2]; see the quick check after this list.
  • Gradient descent for the elastic net Cox-PH model
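A quick check of the loglik components mentioned above (any coxph fit will do; the lung data are used only for illustration):

library(survival)
fit <- coxph(Surv(time, status) ~ age, data = lung)
fit$loglik               # c(loglik of the null model, loglik of the fitted model)
as.numeric(logLik(fit))  # same as fit$loglik[2]
-2 * fit$loglik[2]       # the deviance-style quantity used by some packages
anova(fit)               # likelihood ratio test against the null model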

glmnet

[math]\displaystyle{ \begin{align} \mathrm{AIC} &= 2k - 2\ln(\hat L) \\ \mathrm{AICc} &= \mathrm{AIC} + \frac{2k^2 + 2k}{n - k - 1} \end{align} }[/math]
fit <- glmnet(x, y, family = "multinomial") 

tLL <- fit$nulldev - deviance(fit) # LR statistic = 2*(loglik - loglik of the null model)
k <- fit$df
n <- fit$nobs
AICc <- -tLL + 2*k + 2*k*(k+1)/(n-k-1)  # AICc up to a constant (the null loglik)
AICc

# an alternative AIC-style quantity computed directly from the deviance
f <- glmnet(x = x, y = y, family = family)
f$aic <- deviance(f) + 2 * f$df
set.seed(10101)
N=1000;p=6
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
coxobj <- coxph(Surv(ty, 1-tcens) ~ x)
coxobj_small <- coxph(Surv(ty, 1-tcens) ~1)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(ty, 1 - tcens)
# Model 1: ~ x
# Model 2: ~ 1
# loglik  Chisq Df P(>|Chi|)  
# 1 -4095.2                      
# 2 -4102.7 15.151  6   0.01911 *

fit2 <- glmnet(x,y,family="cox", lambda=0) # ridge regression
deviance(fit2)                             # 2*(loglike_sat - loglike)
# [1] 8190.313
coxnet.deviance(x=x, y=y, beta=coef(fit2)) # 2*(loglike_sat - loglike)
# [1] 8190.313   
# https://github.com/cran/glmnet/blob/master/R/coxnet.deviance.R#L79

assess.glmnet(fit2, x=x, y=y)      # returns deviance and c-index
fit2$df
# [1] 6
fit2$nulldev - deviance(fit2) # Log-Likelihood ratio statistic
# [1] 15.15097
1-pchisq(fit2$nulldev - deviance(fit2), fit2$df)
# [1] 0.01911446

Here is another example including a factor covariate:

library(KMsurv) # Data package for Klein & Moeschberge
data(larynx)
larynx$stage <- factor(larynx$stage)
coxobj <- coxph(Surv(time, delta) ~ age + stage, data = larynx)
coef(coxobj)
# age    stage2    stage3    stage4 
# 0.0190311 0.1400402 0.6423817 1.7059796
coxobj_small <- coxph(Surv(time, delta)~age, data = larynx)
anova(coxobj, coxobj_small)
# Analysis of Deviance Table
# Cox model: response is  Surv(time, delta)
# Model 1: ~ age + stage
# Model 2: ~ age
# loglik  Chisq Df P(>|Chi|)   
# 1 -187.71                       
# 2 -195.55 15.681  3  0.001318 **
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Now let's look at the glmnet() function.
# It seems glmnet does not recognize factor covariates.
coxobj2 <- with(larynx, glmnet(cbind(age, stage), Surv(time, delta), family = "cox", lambda=0))
coxobj2$nulldev - deviance(coxobj2)  # Log-Likelihood ratio statistic
# [1] 15.72596
coxobj1 <- with(larynx, glmnet(cbind(1, age), Surv(time, delta), family = "cox", lambda=0))
deviance(coxobj1) - deviance(coxobj2) 
# [1] 13.08457
1-pchisq(deviance(coxobj1) - deviance(coxobj2) , coxobj2$df-coxobj1$df)
# [1] 0.0002977376

High dimensional data

glmnet + Cox models

Error in glmnet: x should be a matrix with 2 or more columns

https://stackoverflow.com/questions/29231123/why-cant-pass-only-1-coulmn-to-glmnet-when-it-is-possible-in-glm-function-in-r

Error in coxnet: (list) object cannot be coerced to type 'double'

Fix: do not use data.frame in X. Use cbind() instead.
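A minimal sketch of the fix (the choice of variables from the lung data is arbitrary):

library(glmnet)
library(survival)
ok    <- complete.cases(lung[, c("age", "ph.ecog")])
x_df  <- lung[ok, c("age", "ph.ecog")]                            # data.frame: triggers the error above
x_mat <- cbind(age = lung$age, ph.ecog = lung$ph.ecog)[ok, ]      # numeric matrix: works
y     <- cbind(time = lung$time, status = lung$status - 1)[ok, ]  # columns must be named time/status
# glmnet(x_df, y, family = "cox")   # (list) object cannot be coerced to type 'double'
fit <- glmnet(x_mat, y, family = "cox")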

Prediction

Prognostic factor, prognosis

  • "Prognostic" refers to the ability to predict the likely outcome or course of a disease. In the context of medicine, prognosis is the prediction of the future course of a disease and the chances of recovery or survival. A prognosis can be based on a variety of factors, including the stage and grade of the disease, the patient's overall health, and the response to treatment.
  • Prognostic factors are the characteristics of a patient or a disease that can be used to predict the outcome or course of the disease. These factors can include demographic information (such as age and gender), clinical information (such as the stage and grade of the disease), and laboratory test results.
  • Prognostic factors are used to stratify patients into different prognostic groups, which can help guide treatment decisions and identify patients who may be at high risk for poor outcomes. For example, in cancer treatment, the stage of the cancer, the location of the cancer, and the patient's overall health are important prognostic factors that are used to determine the best course of treatment.
  • It's worth noting that prognosis is not always certain, and unexpected events can happen that can change the course of the disease. Additionally, the effectiveness of treatment can change the prognosis for a patient. Prognosis is an estimation and it can change over time.
  • Prognosis. Grade I carcinomas tend to be less aggressive and have a better prognosis than higher grade carcinomas. They are also more often ER positive, which is another feature associated with a more favorable prognosis. STAGING & GRADE breast cancer.

Prognostic index/risk scores

  • International Prognostic Index
  • In R,
    • coxph() defines risk score as exp(linear predictor).
    • survC1 package defines the risk score as coxph's linear predictor; see Uno et al., Stat in Med 2011. Some medical papers (such as this one) also define it in this way.
  • Low scores correspond to the lowest predicted risk and high scores correspond to the greatest predicted risk.
  • The test data were first segregated into high-risk and low-risk groups by the median of training risk scores. Assessment of performance of survival prediction models for cancer prognosis
  • On the paper "The C-index is not proper for the evaluation of t-year predicted risk" Blanche et al 2018 defined the true t-year predicted risk by [math]\displaystyle{ P(T \le t | Z) = 1 - Survival }[/math]

linear.predictors component in coxph object

The $linear.predictors component is not [math]\displaystyle{ \beta' x }[/math]. It is [math]\displaystyle{ \beta' (x-\mu_x) }[/math]. See this post.

predict.coxph, prognostic index & risk score

  • predict.coxph() Compute fitted values and regression terms for a model fitted by coxph. The Cox model is a relative risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The primary underlying reason is statistical: a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. Returned value: a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE.
predict(object, newdata,
    type=c("lp", "risk", "expected", "terms", "survival"),
    se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse,
    reference=c("strata", "sample"),  ...)

type:

library(survival)
fit <- coxph(Surv(time, status) ~ age , lung)
fit
#  Call:
#  coxph(formula = Surv(time, status) ~ age, data = lung)
#       coef exp(coef) se(coef)    z     p
# age 0.0187      1.02   0.0092 2.03 0.042
#
# Likelihood ratio test=4.24  on 1 df, p=0.0395  n= 228, number of events= 165 
fit$means
#      age 
# 62.44737 

# type = "lr" (Linear predictor)
as.numeric(predict(fit,type="lp"))[1:10]   
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518  0.21626733
# [7]  0.10394626  0.16010680 -0.17685643 -0.02709500
0.0187 * (lung$age[1:10] - fit$means)
# [1]  0.21603421  0.10383421 -0.12056579 -0.10186579 -0.04576579  0.21603421
# [7]  0.10383421  0.15993421 -0.17666579 -0.02706579
fit$linear.predictors[1:10]
# [1]  0.21626733  0.10394626 -0.12069589 -0.10197571 -0.04581518
# [6]  0.21626733  0.10394626  0.16010680 -0.17685643 -0.02709500

# type = "risk" (Risk score)
> as.numeric(predict(fit,type="risk"))[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342 1.1095408
 [8] 1.1736362 0.8379001 0.9732688
> exp((lung$age-mean(lung$age)) * 0.0187)[1:10]
 [1] 1.2411448 1.1094165 0.8864188 0.9031508 0.9552657 1.2411448
 [7] 1.1094165 1.1734337 0.8380598 0.9732972
> exp(fit$linear.predictors)[1:10]
 [1] 1.2414342 1.1095408 0.8863035 0.9030515 0.9552185 1.2414342
 [7] 1.1095408 1.1736362 0.8379001 0.9732688

threshold/cutoff

  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5882539/ An optimal threshold on the score to separate patients into low- and high-risk groups was determined using the MaxStat package to select the cutoff value producing the maximal log-rank score in the training cohort.
  • maxstat: Maximally Selected Rank Statistics (cf the matrixStats: Functions that Apply to Rows and Columns of Matrices (and to Vectors) package); a minimal usage sketch follows this list.
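A minimal sketch (using age in the lung data as the continuous score; in practice the score would be the model's risk score):

library(survival)
library(maxstat)
mt <- maxstat.test(Surv(time, status) ~ age, data = lung,
                   smethod = "LogRank", pmethod = "none")
mt$estimate                              # estimated cutpoint maximizing the log-rank statistic
lung$grp <- ifelse(lung$age > mt$estimate, "high", "low")
survdiff(Surv(time, status) ~ grp, data = lung)
# survminer::surv_cutpoint() is an alternative with a similar purpose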

Survival risk prediction

  • Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data Simon 2011. The authors noted that CV has been used for optimization of tuning parameters, but the data available are often too limited for an effective split into training & test sets.
    • The CV Kaplan-Meier curves are essentially unbiased and the separation between the curves gives a fair representation of the value of the expression profiles for predicting survival risk.
    • The log-rank statistic does not have the usual chi-squared distribution under the null hypothesis. This is because the data was used to create the risk groups.
    • Survival ROC curve can be used as a measure of predictive accuracy for the survival risk group model at a certain landmark time.
    • The ROC curve can be misleading. For example if re-substitution is used, the AUC can be very large.
    • The p-value for the significance of the test that AUC=.5 for the cross-validated survival ROC curve can be computed by permutations.
    • Cross-validated estimates of survival risk discrimination can be pessimistically biased if the number of folds K is too small for the number of events, and the variance of the cross-validated risk group survival curves or time-dependent ROC curves will be large, particularly when K is large and the number of events is small. For example, for the null simulations of Figure 3, there are several cases in which the cross-validated Kaplan–Meier curve for the low-risk group is below that for the high-risk group.
    • (class data) For small sample sizes of fewer than 50 cases, they recommended use of leave-one-out cross-validation to minimize mean squared error of the estimate of prediction error.
    • (survival data) Subramanian and Simon (Stat Med) recommended use of 5- or 10-fold cross-validation for a wide range of conditions.
    • Fig 1: KM substitution. 10 null data.
    • Fig 2: KM test data. 10 null data.
    • Fig 3: KM 10-fold CV. One null data.
    • Fig 4A: KM Shedden data resubstitution.
    • Fig 4B: KM Shedden data. CV
    • Fig 5A: Resubstitution time-dep ROC. Shedden.
    • Fig 5B: CV time-dep ROC. Shedden.
    • Fig 6A: KM clinical covariates only
    • Fig 6B: KM combined
    • Fig 7. Time-dep ROC from covariates only and combined.
  • Some cites: Automated identification of stratifying signatures incellular subpopulations Tibshirani 2014.
  • Measure of assessment for prognostic prediction
Measure       0/1 outcome                                        Survival
Sensitivity   [math]\displaystyle{ P(Pred=1|True=1) }[/math]     [math]\displaystyle{ P(\beta' X \gt c | T \lt t) }[/math]
Specificity   [math]\displaystyle{ P(Pred=0|True=0) }[/math]     [math]\displaystyle{ P(\beta' X \le c | T \ge t) }[/math]

Survival time prediction

Assessing the performance of prediction models

Hazard ratio

hazard.ratio() (from the survcomp package)

hazard.ratio(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
             method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

Odds ratio 優勢比/比值比/發生比

D index

D.index() (from the survcomp package)

D.index(x, surv.time, surv.event, weights, strat, alpha = 0.05, 
        method.test = c("logrank", "likelihood.ratio", "wald"), na.rm = FALSE, ...)

AUC

See ROC curve.

Comparison:

  • Two class. Definition: [math]\displaystyle{ P(Z_{case} \gt Z_{control}) }[/math]. Interpretation: the probability that a randomly selected case will have a higher test result (marker value) than a randomly selected control. It represents a measure of concordance between the marker and the disease status. ROC curves are particularly useful for comparing the discriminatory capacity of different potential biomarkers. (Heagerty & Zheng 2005)
  • Survival data. Definition: [math]\displaystyle{ P(\beta' Z_1 \gt \beta' Z_2|T_1 \lt T_2) }[/math]. Interpretation: (roughly speaking) the probability of concordance between predicted and observed responses, i.e. the probability that the predictions for a random pair of subjects are concordant with their outcomes (Heagerty & Zheng 2005); (precisely) the fraction of pairs in the data where the observation with the longer survival time has the higher predicted probability of survival.

p95 of Heagerty and Zheng (2005) gave a relationship of C-statistic:

[math]\displaystyle{ C = P(M_j \gt M_k | T_j \lt T_k) = \int_t \mbox{AUC(t) w(t)} \; dt }[/math]

where M is the marker value and [math]\displaystyle{ w(t) = 2 \cdot f(t) \cdot S(t) }[/math]. When the interest is in the accuracy of a regression model we will use [math]\displaystyle{ M_i = Z_i^T \beta }[/math].

The time-dependent AUC is also related to time-dependent C-index. [math]\displaystyle{ C_\tau = P(M_j \gt M_k | T_j \lt T_k, T_j \lt \tau) = \int_t \mbox{AUC(t)} \mbox{w}_{\tau}(t) \; dt }[/math] where [math]\displaystyle{ w_\tau(t) = 2 \cdot f(t) \cdot S(t)/(1-S^2(\tau)) }[/math].

Integrated brier score (≈ "mean squared error" of prediction for survival data)

Assessment and comparison of prognostic classification schemes for survival data Graf et al Stat. Med. 1999 2529-45, Consistent Estimation of the Expected Brier Score in General Survival Models with Right‐Censored Event Times Gerds et al 2006.

  • Because point predictions of event-free times will almost inevitably give inaccurate and unsatisfactory results, the mean squared error of prediction [math]\displaystyle{ \frac{1}{n}\sum_1^n (T_i - \hat{T}(X_i))^2 }[/math] will not be considered. See Parkes 1972 or Henderson 2001.
  • Another approach is to predict the survival or event status [math]\displaystyle{ Y=I(T \gt \tau) }[/math] at a fixed time point [math]\displaystyle{ \tau }[/math] for a patient with X=x. This leads to the expected Brier score [math]\displaystyle{ E[(Y - \hat{S}(\tau|X))^2] }[/math] where [math]\displaystyle{ \hat{S}(\tau|X) }[/math] is the estimated event-free probabilities (survival probability) at time [math]\displaystyle{ \tau }[/math] for subject with predictor variable [math]\displaystyle{ X }[/math].
  • The time-dependent Brier score (without censoring)
[math]\displaystyle{ \begin{align} \mbox{Brier}(\tau) &= \frac{1}{n}\sum_1^n (I(T_i\gt \tau) - \hat{S}(\tau|X_i))^2 \end{align} }[/math]
  • The time-dependent Brier score (with censoring, C is the censoring variable)
[math]\displaystyle{ \begin{align} \mbox{Brier}(\tau) = \frac{1}{n}\sum_{i=1}^n\bigg[\frac{(0 - \hat{S}(\tau|X_i))^2 \, I(t_i \leq \tau, \delta_i=1)}{\hat{S}_C(t_i)} + \frac{(1 - \hat{S}(\tau|X_i))^2 \, I(t_i \gt \tau)}{\hat{S}_C(\tau)}\bigg] \end{align} }[/math]

where [math]\displaystyle{ \hat{S}_C(t_i) = P(C \gt t_i) }[/math] is the Kaplan-Meier estimate of the censoring distribution, with [math]\displaystyle{ t_i }[/math] the observed time of patient i. The Brier score can be integrated over time [math]\displaystyle{ t \in [0, \tau] }[/math] with respect to some weight function W(t), for which a natural choice is [math]\displaystyle{ (1 - \hat{S}(t))/(1-\hat{S}(\tau)) }[/math]. The lower the integrated Brier score, the higher the prediction accuracy.

  • Useful benchmark values for the Brier score are 33%, which corresponds to predicting the risk by a random number drawn from U[0, 1], and 25% which corresponds to predicting 50% risk for everyone. See Evaluating Random Forests for Survival Analysis using Prediction Error Curves by Mogensen et al J. Stat Software 2012 (pec package). The paper has a good summary of different R package implementing Brier scores.
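A minimal sketch with the pec package (the covariates and time grid are arbitrary; the formula argument of pec() specifies the censoring model used for the IPCW weights):

library(survival)
library(pec)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung, x = TRUE, y = TRUE)
pe <- pec(list("Cox" = fit), formula = Surv(time, status) ~ 1, data = lung,
          times = seq(0, 900, 50))
pe        # Brier score over time for the Cox model and the Kaplan-Meier reference
crps(pe)  # integrated Brier score (reported as the cumulative prediction error)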

R function

Papers on high dimensional covariates

  • Assessment of survival prediction models based on microarray data, Bioinformatics , 2007, vol. 23 (pg. 1768-74)
  • Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models, BMC Bioinformatics , 2008, vol. 9 pg. 14

Kendall's tau, Goodman-Kruskal's gamma, Somers' d

Concordance index/C-index/C-statistic interpretation and R packages

  • Pitfalls of the concordance index for survival outcomes Hartman 2023
  • The area under the ROC curve (plot of sensitivity vs 1-specificity) is also called the C-statistic. It is a measure of discrimination generalized for survival data (Harrell 1982 & 2001). The ROC curve is formed from the sensitivity and specificity computed at each value of the measure or model score. (Nancy Cook, 2007)
    • The sensitivity of a test is the probability of a positive test result, or of a value above a threshold, among those with disease (cases).
    • The specificity of a test is the probability of a negative test result, or of a value below a threshold, among those without disease (noncases).
    • Perfect discrimination corresponds to a c-statistic of 1 & is achieved if the scores for all the cases are higher than those for all the non-cases.
    • The c-statistic is the probability that the measure or predicted risk/risk score is higher for a case than for a noncase.
    • The c-statistic is not the probability that individuals are classified correctly or that a person with a high test score will eventually become a case.
    • The C-statistic is a rank-based measure. It describes how well models can rank-order cases and noncases, but it is not a function of the actual predicted probabilities.
  • How to interpret the output for calculating concordance index (c-index)? [math]\displaystyle{ P(\beta' Z_1 \gt \beta' Z_2|T_1 \lt T_2) }[/math] where T is the survival time and Z is the covariates.
    • It is the fraction of pairs in your data, where the observation with the higher survival time has the higher probability of survival predicted by your model.
    • High values mean that your model predicts higher probabilities of survival for higher observed survival times.
    • The c index estimates the probability of concordance between predicted and observed responses. A value of 0.5 indicates no predictive discrimination and a value of 1.0 indicates perfect separation of patients with different outcomes. (p371 Harrell 1996)
  • Drawback of C-statistics:
    • Even though rank indexes such as c are widely applicable and easily interpretable, they are not sensitive for detecting small differences in discrimination ability between two models. This is due to the fact that a rank method considers the (prediction, outcome) pairs (0.01,0), (0.9, 1) as no more concordant than the pairs (0.05,0), (0.8, 1). A more sensitive likelihood-ratio Chi-square-based statistic that reduces to R2 in the linear regression case may be substituted. (p371 Harrell 1996)
    • If the model is correct, the likelihood based measures may be more sensitive in detecting differences in prediction ability, compared to rank-based measures such as C-indexes. (Uno 2011 p 1113)
  • What is Harrell’s C-index? C = #concordant pairs / (# concordant pairs + # discordant pairs)
  • http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
  • Concordance vignette from the survival package. It has a good summary of different ways (such as Kendall's tau and Somers' d) to calculate the concordance statistic. The concordance function in the survival package can be used with various types of models including logistic and linear regression.
  • Assessment of Discrimination in Survival Analysis (C-statistics, etc) webpage
  • 5 Ways to Estimate Concordance Index for Cox Models in R, Why Results Aren't Identical? (a comparison of five different calculation methods). The 5 functions are rcorrcens() from Hmisc, summary()$concordance from survival, survConcordance() from survival, concordance.index() from survcomp and cph() from rms.
    • The timewt option in the survival::concordance() function is only applicable to censored data. In this case the default corresponds to Harrell's C statistic, which is closely related to the Gehan-Wilcoxon test; timewt="S" corresponds to the Peto-Wilcoxon, timewt="S/G" is suggested by Schemper, and timewt="n/G2" corresponds to Uno's C.
    • Uno’s C-statistic, implemented in the UnoC() function of the survAUC package, is a censoring-adjusted concordance statistic based on inverse-probability-of-censoring weights (IPCW). These weights adjust for the fact that censored observations contribute less information to the concordance statistic than uncensored observations, which reduces the bias due to censoring. How the weights are applied: 1. For each observation, estimate the probability of remaining uncensored up to its observed time, using the Kaplan-Meier estimate of the censoring distribution (treat censoring as the event). 2. Take the inverse of these probabilities (squared, in Uno's formula below) as the weights. 3. Apply the weights when counting concordant and discordant pairs. A short sketch of these steps appears after the package summary table below.
  • Summary of R packages to compute C-statistic
Package      Function                                            New data?   Comparison
survival     summary(coxph(formula, data))$concordance["C"],     no, yes     no
             Cindex()
survC1       Est.Cval()                                          no          Inf.Cval.Delta(, , , tau)
survAUC      UnoC()                                              yes         no
survivalROC  survivalROC()                                       no          no
timeROC      ?                                                   ?           compare()
compareC     ?                                                   ?           compareC()
survcomp     concordance.index()                                 ?           cindex.comp()
Hmisc        rcorr.cens()                                        no          no
pec          cindex() (with a splitMethod parameter)             yes         see ?cindex doc

Notes: several of these functions require an evaluation time t; see also the warning below that the C-statistic evaluated at a fixed time t is not a proper scoring rule.
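
A hedged sketch of the IPCW steps described above for Uno's C (illustrative only): build the censoring Kaplan-Meier G(t) = P(C > t), form weights 1/G(t)^2 at observed event times, and compare with survival::concordance(timewt = "n/G2"), which is documented to correspond to Uno's C:

library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)

# Step 1: Kaplan-Meier estimate of the censoring distribution (flip the event indicator)
Gfit <- survfit(Surv(futime, 1 - fustat) ~ 1, data = ovarian)
G    <- stepfun(Gfit$time, c(1, Gfit$surv))   # step-function approximation of G(t)

# Step 2: inverse-probability-of-censoring weights, needed at observed event times
w <- ifelse(ovarian$fustat == 1, 1 / G(ovarian$futime)^2, NA)

# Step 3: Uno-type concordance via the survival package
concordance(fit, timewt = "n/G2")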

C-statistics

  • For two groups data (one with event, one without), C-statistic has an intuitive interpretation: if two individuals are selected at random, one with the event and one without, then the C-statistic is the probability that the model predicts a higher risk for the individual with the event. Analysis of Biomarker Data: logs, odds ratios and ROC curves by Grund 2010
  • The C-statistic is the probability of concordance between predicted and observed survival.
  • Comparing two correlated C indices with right‐censored survival outcome: a one‐shot nonparametric approach, Kang et al, Stat in Med, 2014; the compareC package compares two correlated C-indices with right-censored outcomes. Harrell’s concordance: the s.e. of Harrell's C-statistic can be estimated by the delta method. [math]\displaystyle{ \begin{align} C_H = \frac{\sum_{i,j}I(t_i \lt t_{j}) I(\hat{\beta} Z_i \gt \hat{\beta} Z_j) \delta_i}{\sum_{i,j} I(t_i \lt t_j) \delta_i} \end{align} }[/math] converges to a censoring-dependent quantity [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \text{min}(D_1,D_2)). }[/math] Here D is the censoring variable.
  • On the C-statistics for Evaluating Overall Adequacy of Risk Prediction Procedures with Censored Survival Data by Uno et al 2011. Let [math]\displaystyle{ \tau }[/math] be a specified time point within the support of the censoring variable. [math]\displaystyle{ \begin{align} C(\tau) = \text{UnoC}(\hat{\pi}, \tau) = \frac{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i \gt \hat{\beta}'Z_{i'}) \delta_i}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) \delta_i} \end{align} }[/math], a measure of the concordance between [math]\displaystyle{ \hat{\beta} Z_i }[/math] (the linear predictor) and the survival time. [math]\displaystyle{ \hat{S}_C(t) }[/math] is the Kaplan-Meier estimator for the censoring distribution/variable/time (cf event time); flipping the definition of [math]\displaystyle{ \delta_i }[/math]/considering failure events as "censored" observations and censored observations as "failures" and computing the KM as usual; see p207 of Satten 2001 and the source code from the kmcens() in survC1. Note that [math]\displaystyle{ C_\tau }[/math] converges to [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \tau). }[/math]
    • Uno's estimator does not require the fitted model to be correct. See also table V in the simulation study where the true model is log-normal regression.
    • Uno's estimator is consistent for a population concordance measure that is free of censoring. See the coverage result in table IV and V from his simulation study. Other forms of C-statistic estimate population parameters that may depend on the current study-specific censoring distribution.
    • To accommodate ties in discrete risk scores, survC1::Est.Cval() uses the formula [math]\displaystyle{ \begin{align} \frac{\sum_{i,i'}[ (\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i \gt \hat{\beta}'Z_{i'}) \delta_i + 0.5 * (\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) I(\hat{\beta}'Z_i = \hat{\beta}'Z_{i'}) \delta_i ]}{\sum_{i,i'}(\hat{S}_C(t_i))^{-2}I(t_i \lt t_{i'}, t_i \lt \tau) \delta_i} \end{align} }[/math]. Note that pec::cindex() uses the same formula but survAUC::UnoC() does not.
    • If the specified [math]\displaystyle{ \tau }[/math] (tau) is 'too' large such that very few events were observed or very few subjects were followed beyond this time point, the standard error estimate for [math]\displaystyle{ \hat{C}_\tau }[/math] can be quite large.
    • Uno mentioned from (page 95) Heagerty and Zheng 2005 that when T is right censoring, one would typically consider [math]\displaystyle{ C_\tau }[/math] with a fixed, prespecified follow-up period [math]\displaystyle{ (0, \tau) }[/math].
    • Uno also mentioned that when the data are right censored, the support of the censoring variable D is usually shorter than that of the failure time T, so the tail part of the estimated survival function of T is rather unstable; one therefore considers a truncated version of C.
    • Heagerty and Zheng (2005) p95 said [math]\displaystyle{ C_\tau }[/math] is the probability that the predictions for a random pair of subjects are concordant with their outcomes, given that the smaller event time occurs in [math]\displaystyle{ (0, \tau) }[/math].
    • real data 1: fit a Cox model. Get risk scores [math]\displaystyle{ \hat{\beta}'Z }[/math]. Compute the point and confidence interval estimates (M=500 indep. random samples with the same sample size as the observation data) of [math]\displaystyle{ C_\tau }[/math] for different [math]\displaystyle{ \tau }[/math]. Compare them with the conventional C-index procedure (Korn).
    • real data 1: compute [math]\displaystyle{ C_\tau }[/math] for a full model and a reduced model. Compute their difference ([math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} = .01 }[/math]) and the 95% confidence interval (-0.00, .02) of the difference for testing the importance of some variable (HDL in this case). Although HDL is highly significant (p=0) with respect to the risk of CV disease, its incremental value evaluated via the C-statistic is quite modest.
    • real data 2: goal - evaluate the prognostic value of a new gene signature in predicting the time to death or metastasis for breast cancer patients. Two models were fitted; one with age+ER and the other is gene+age+ER. For each model we can calculate the point and interval estimates of [math]\displaystyle{ C_\tau }[/math] for different [math]\displaystyle{ \tau }[/math]s.
    • simulation: T is from Weibull regression for case 1 and log-normal regression for case 2. Covariates = (age, ER, gene). 3 kinds of censoring were considered. Sample size is 100, 150, 200 and 300. 1000 iterations. Compute coverage probabilities and average length of 95% confidence intervals, bias and root mean square error for [math]\displaystyle{ \tau }[/math] equals to 10 and 15. Compared with the conventional approach, the new method has higher coverage probabilities and less bias in 6 scenarios.
  • Statistical methods for the assessment of prognostic biomarkers (Part I): Discrimination by Tripepi et al 2010
  • Gonen and Heller 2005 concordance index for Cox models
    • [math]\displaystyle{ P(T_2\gt T_1|g(Z_1)\gt g(Z_2)) }[/math]. Gonen and Heller's c statistic which is independent of censoring.
    • GHCI() from survAUC package. Strangely only one parameter is needed. survAUC allows for testing data but CPE package does not have an option for testing data.
    library(survival)   # needed for coxph(), Surv() and the ovarian data
    TR <- ovarian[1:16,]
    TE <- ovarian[17:26,]
    train.fit  <- coxph(Surv(futime, fustat) ~ age,
                        x=TRUE, y=TRUE, method="breslow", data=TR)
    lpnew <- predict(train.fit, newdata=TE)      
    survAUC::GHCI(lpnew) # .8515
    
    lpnew2 <- predict(train.fit, newdata = TR)
    survAUC::GHCI(lpnew2) # 0.8079495
    
    CPE::phcpe(train.fit, CPE.SE = TRUE) 
    # $CPE
    # [1] 0.8079495
    # $CPE.SE
    # [1] 0.0670646
    
    Hmisc::rcorr.cens(-TR$age, Surv(TR$futime, TR$fustat))["C Index"]
    # 0.7654321 
    Hmisc::rcorr.cens(TR$age, Surv(TR$futime, TR$fustat))["C Index"]
    # 0.2345679 
    
  • Uno's C-statistics (2011) and some examples using different packages
    • The truncated C-statistic [math]\displaystyle{ C_\tau }[/math] may or may not be a decreasing function of tau; likewise AUC(t) need not be decreasing in t; see Fig 1 of Blanche et al 2018.
      library(survAUC); library(pec)
      set.seed(1234)
      dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001) # simulWeib was defined above
      #     coef exp(coef) se(coef)     z      p
      # x -0.744     0.475    0.269 -2.76 0.0057
      TR <- dat[1:80,]
      TE <- dat[81:100,]
      train.fit  <- coxph(Surv(time, status) ~ x, data=TR)
      plot(survfit(Surv(time, status) ~ 1, data =TR))
      
      lpnew <- predict(train.fit, newdata=TE)
      Surv.rsp <- Surv(TR$time, TR$status)
      Surv.rsp.new <- Surv(TE$time, TE$status)              
      sapply(c(.25, .5, .75),
             function(qtl) UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=quantile(TR$time, qtl)))
      # [1] 0.2580193 0.2735142 0.2658271
      sapply(c(.25, .5, .75), 
             function(qtl) cindex( list(matrix( -lpnew, nrow = nrow(TE))), 
              formula = Surv(time, status) ~ x,
              data = TE, 
              eval.times = quantile(TR$time, qtl))$AppC$matrix)
      # [1] 0.5041490 0.5186850 0.5106746
    • Four elements are needed for computing truncated C-statistic using survAUC::UnoC. But it seems pec::cindex does not need the training data.
      • training data including covariates,
      • testing data including covariates,
      • predictor from new data,
      • truncation time/evaluation time/prediction horizon.
    • (From ?UnoC) Uno's estimator is based on inverse-probability-of-censoring weights and does not assume a specific working model for deriving the predictor lpnew. It is assumed, however, that there is a one-to-one relationship between the predictor and the expected survival times conditional on the predictor. Note that the estimator implemented in UnoC is restricted to situations where the random censoring assumption holds.
    • survAUC::UnoC(). The tau parameter: Truncation time. The resulting C tells how well the given prediction model works in predicting events that occur in the time range from 0 to tau. [math]\displaystyle{ P(\beta'Z_1 \gt \beta' Z_2|T_1 \lt T_2, T_1 \lt \tau). }[/math] Con: no confidence interval estimate for [math]\displaystyle{ C_\tau }[/math] nor [math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} }[/math]
    • pec::cindex(). At each timepoint of eval.times the c-index is computed using only those pairs where one of the event times is known to be earlier than this timepoint. If eval.times is missing or Inf then the largest uncensored event time is used. See a more general example from here
    • Est.Cval() from the survC1 package (the only package gives confidence intervals of C-statistic or deltaC, authored by H. Uno). It doesn't take new data nor the vector of predictors obtained from the test data. Pro: Inf.Cval() can compute the confidence interval (perturbation-resampling based) of [math]\displaystyle{ C_\tau }[/math] & Inf.Cval.Delta() for the difference [math]\displaystyle{ C_\tau^{(A)} - C_\tau^{(B)} }[/math].
      library(survAUC)
      # require training and predict sets
      TR <- ovarian[1:16,]
      TE <- ovarian[17:26,]
      train.fit  <- coxph(Surv(futime, fustat) ~ age, data=TR)
      
      lpnew <- predict(train.fit, newdata=TE)
      Surv.rsp <- Surv(TR$futime, TR$fustat)
      Surv.rsp.new <- Surv(TE$futime, TE$fustat)              
      
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*1) 
      # [1] 0.9761905
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*2) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*3) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*4) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors, time=365.25*5) 
      # [1] 0.7308979
      UnoC(Surv.rsp, Surv.rsp, train.fit$linear.predictors)
      # [1] 0.7308979
      # So the function UnoC() gives the same result as Est.Cval().
      # Now try on a new data set. Question: why do we need Surv.rsp?
      UnoC(Surv.rsp, Surv.rsp.new, lpnew)
      # [1] 0.7333333
      UnoC(Surv.rsp, Surv.rsp.new, lpnew, time=365.25*2)
      # [1] 0.7333333
      
      library(pec)
      cindex( list(matrix( -lpnew, nrow = nrow(TE))), 
              formula = Surv(futime, fustat) ~ age,
              data = TE, eval.times = 365.25*2)$AppC
      # $matrix
      # [1] 0.7333333
      
      library(survC1)
      # with nofit=TRUE, the 3rd column of the data is treated as the risk score
      Est.Cval(cbind(TE[, 1:2], lpnew), tau = 365.25*2, nofit = TRUE)$Dhat
      # [1] 0.7333333
      
      # tau is mandatory (>0), no need to have training and predict sets
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*1)$Dhat
      # [1] 0.9761905
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*2)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*3)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*4)$Dhat
      # [1] 0.7308979
      Est.Cval(ovarian[1:16, c(1,2, 3)], tau=365.25*5)$Dhat
      # [1] 0.7308979
      
      svg("~/Downloads/c_stat_scatter.svg", width=8, height=5)
      par(mfrow=c(1,2))
      plot(TR$futime, train.fit$linear.predictors, main="training data", 
           xlab="time", ylab="predictor")
      mtext("C=.731 at t=2", 3)
      plot(TE$futime, lpnew, main="testing data", xlab="time", ylab="predictor")
      mtext("C=.733 at t=2", 3)
      dev.off()
      File:C stat scatter.svg
  • Assessing the prediction accuracy of a cure model for censored survival data with long-term survivors: Application to breast cancer data
  • The use of ROC for defining the validity of the prognostic index in censored data
  • Use and Misuse of the Receiver Operating Characteristic Curve in Risk Prediction Cook 2007
  • Evaluating Discrimination of Risk Prediction Models: The C Statistic by Pencina et al, JAMA 2015
  • Blanche et al(2018) The c-index is not proper for the evaluation of t-year predicted risks
    • There is a bug on script line 154.
    • With a fixed prediction horizon, the concordance index can be higher for a misspecified model than for a correctly specified model. The time-dependent AUC does not have this problem.
    • (page 8) We now show that when a misspecified prediction model satisfies the ranking condition but the true distribution does not, then it is possible that the misspecified model achieves a misleadingly high c-index.
    • The traditional C‐statistic used for the survival models is not guaranteed to identify the “best” model for estimating the risk of t-year survival. In contrast, measures of predicted error do not suffer from these limitations. See this paper The relationship between the C‐statistic and the accuracy of program‐specific evaluations by Wey et al 2018
    • Unfortunately, a drawback of Harrell’s c-index for the time to event and competing risk settings is that the measure does not provide a value specific to the time horizon of prediction (e.g., a 3-year risk). See this paper The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models by Kattan and Gerds 2018.
    • In Fig 1 the Y-axis is concordance (AUC/C) and the X-axis is time; the caption reads "The ability of (some variable) to discriminate patients who will either die or be transplanted within the next t years from those who will be event-free at time t."
    • The [math]\displaystyle{ \tau }[/math] considered here is the maximal end of follow-up time
    • AUC (riskRegression::Score()), Uno-C (pec::cindex()), Harrell's C (Hmisc::rcorr.cens() for censored and summary(fit)$concordance for uncensored) are considered; a minimal Score() sketch follows this list.
    • The C_IPCW(t) or C_Harrell(t) is obtained by artificially censoring the outcome at time t. So C_IPCW(t) is different from Uno's version.
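
A minimal Score() sketch, assuming the riskRegression package, computing time-dependent AUC and Brier score for a Cox model (illustrative only, not the code from the paper):

library(survival)
library(riskRegression)

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian, x = TRUE)
sc  <- Score(list("Cox age" = fit),
             formula = Surv(futime, fustat) ~ 1,   # censoring model (Kaplan-Meier)
             data    = ovarian,
             times   = c(365, 730),
             metrics = c("auc", "brier"))
sc$AUC$score
sc$Brier$score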

C-statistic limitations

See the discussion section of The relationship between the C‐statistic and the accuracy of program‐specific evaluations by Wey 2018

  • Correctly specified models can have low or high C‐statistics. Thus, the C‐statistic cannot identify a correctly specified model.
  • The traditional C-statistic used for survival models is not guaranteed to identify the "best" model for estimating the risk of, for example, 1-year survival.

Importantly, there exists no measure of risk discrimination or predicted error that can identify a correctly specified model, because they all depend on unknown characteristics of the data. For example, the C‐statistic depends on the variability in recipient‐level risk, while measures of squared error such as the Brier Score depend on residual variability.

Analysis of Biomarker Data: logs, odds ratios and ROC curves. This paper does not consider survival time data, but it has a useful summary of the C-statistic (interpretation, warnings).

  • The C-statistic is relatively insensitive to the added contribution of a new marker when the two models, with and without biomarker, estimate risk on a continuous scale. In fact, many new biomarkers provide only minimal increase in the C-statistic when added to the Framingham model for CHD risk.
  • The classical C-statistic assumes that high sensitivity and high specificity are equally desirable. This is not always the case – for example, when screening the general population for a low-prevalence outcome requiring invasive follow-up, high specificity is important, while cancer screening in a high-risk group would emphasize high sensitivity.
  • To achieve a noticeable increase in the C-statistic, a biomarker must have a very strong independent association with the event risk (say ORs of 10 or higher per 1 SD increase).

C-statistic applications

  • Semiparametric Regression Analysis of Multiple Right- and Interval-Censored Events by Gao et al, JASA 2018
  • A C-statistic of 0.7–0.8 is considered good, while >0.8 is considered excellent. See this 2018 paper.
  • The C statistic, also termed concordance statistic or c-index, is analogous to the area under the curve and is a global measure of model discrimination. Discrimination refers to the ability of a risk prediction model to separate patients who develop a health outcome from patients who do not develop a health outcome. Effectively, the C statistic is the probability that a model will result in a higher-risk score for a patient who develops the outcomes of interest compared with a patient who does not develop the outcomes of interest. See the paper JAMA 2018

C-statistic vs LRT comparing nested models

1. Binary data

# https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
set.seed(666)
x1 = rnorm(1000)           # some continuous variables 
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2        # linear combination with a bias
pr = 1/(1+exp(-z))         # pass through an inv-logit function
y = rbinom(1000,1,pr)      # bernoulli response variable
df = data.frame(y=y,x1=x1,x2=x2)
fit <- glm( y~x1+x2,data=df,family="binomial")
summary(fit) 
# Estimate Std. Error z value Pr(>|z|)    
# (Intercept)   0.9915     0.1185   8.367   <2e-16 ***
#   x1            2.2731     0.1789  12.709   <2e-16 ***
#   x2            3.1853     0.2157  14.768   <2e-16 ***
#   ---
#   Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for binomial family taken to be 1)
# 
# Null deviance: 1355.16  on 999  degrees of freedom
# Residual deviance:  582.93  on 997  degrees of freedom
# AIC: 588.93
confint.default(fit)
#                 2.5 %   97.5 %
# (Intercept) 0.7592637 1.223790
# x1          1.9225261 2.623659
# x2          2.7625861 3.608069

# LRT - likelihood ratio test
fit2 <- glm( y~x1,data=df,family="binomial")
anova.res <- anova(fit2, fit)
# Analysis of Deviance Table
# 
# Model 1: y ~ x1
# Model 2: y ~ x1 + x2
#   Resid. Df Resid. Dev Df Deviance
# 1       998    1186.16            
# 2       997     582.93  1   603.23
1-pchisq( abs(anova.res$Deviance[2]), abs(anova.res$Df[2]))
# [1] 0

# Method 1: use ROC package to compute AUC
library(ROC)
set.seed(123)
markers <- predict(fit, newdata = data.frame(x1, x2), type = "response")
roc1 <- rocdemo.sca( truth=y, data=markers, rule=dxrule.sca )
auc <- AUC(roc1); print(auc) # [1] 0.9459085

markers2 <- predict(fit2, newdata = data.frame(x1), type = "response")
roc2 <- rocdemo.sca( truth=y, data=markers2, rule=dxrule.sca )
auc2 <- AUC(roc2); print(auc2) # [1] 0.7259098
auc - auc2 # [1] 0.2199987

# Method 2: use pROC package to compute AUC
roc_obj <- pROC::roc(y, markers)
pROC::auc(roc_obj) # Area under the curve: 0.9459

# Method 3: Compute AUC by hand
# https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/
auc_probability <- function(labels, scores, N=1e7){
  pos <- sample(scores[labels], N, replace=TRUE)
  neg <- sample(scores[!labels], N, replace=TRUE)
  # sum( (1 + sign(pos - neg))/2)/N # does the same thing
  (sum(pos > neg) + sum(pos == neg)/2) / N # give partial credit for ties
}
auc_probability(as.logical(y), markers) # [1] 0.945964

2. Survival data

library(survival)
data(ovarian)
head(ovarian)
range(ovarian$futime) # [1]   59 1227
plot(survfit(Surv(futime, fustat) ~ 1, data = ovarian))

coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
#        coef exp(coef) se(coef)     z      p
# rx  -0.8040    0.4475   0.6320 -1.27 0.2034
# age  0.1473    1.1587   0.0461  3.19 0.0014
#
# Likelihood ratio test=15.9  on 2 df, p=0.000355
# n= 26, number of events= 12 

require(survC1)
covs0 <- as.matrix(ovarian[, c("rx")])
covs1 <- as.matrix(ovarian[, c("rx", "age")])
tau=365.25*1
Delta=Inf.Cval.Delta(ovarian[, 1:2], covs0, covs1, tau, itr=200)
round(Delta, digits=3)
#          Est    SE Lower95 Upper95
# Model1 0.844 0.119   0.611   1.077
# Model0 0.659 0.148   0.369   0.949
# Delta  0.185 0.197  -0.201   0.572

Time dependent ROC curves

tdrocc()
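
A minimal sketch, assuming the survivalROC package (listed in the package summary table above), of a time-dependent ROC curve at a fixed prediction horizon:

library(survival)
library(survivalROC)

fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
lp  <- predict(fit, type = "lp")                  # risk score

roc1 <- survivalROC(Stime = ovarian$futime, status = ovarian$fustat,
                    marker = lp,
                    predict.time = 365.25,        # 1-year horizon
                    method = "KM")                # Kaplan-Meier estimator of S(t)
roc1$AUC
plot(roc1$FP, roc1$TP, type = "l",
     xlab = "1 - Specificity", ylab = "Sensitivity")
abline(0, 1, lty = 2)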

Calibration

Graphical calibration curves and the integrated calibration index (ICI) for survival models
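
A hedged sketch of a graphical calibration curve for a Cox model via rms::calibrate (a generic illustration; it is not the ICI computation from the cited paper):

library(rms)       # also attaches survival
d <- na.omit(survival::lung[, c("time", "status", "age", "sex", "ph.ecog")])

f <- cph(Surv(time, status == 2) ~ age + sex + ph.ecog, data = d,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)  # options required by calibrate()

set.seed(1)
cal <- calibrate(f, cmethod = "KM", u = 365, m = 50, B = 100)  # 1-year calibration, grouped KM
plot(cal, xlab = "Predicted 1-year survival", ylab = "Observed 1-year survival")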

Prognostic markers vs predictive markers (and other biomarkers)

Prognostic biomarkers

Detecting prognostic biomarkers of breast cancer by regularized Cox proportional hazards models Li 2021. prognostic risk score (PRS), training, discovery dataset, independent, validation, enrichment analysis, C-index, overlap, GEO

biospear package

Applications citing the biospear package paper (found via Google Scholar):

Treatment Effect

  • Tian 2014: [math]\displaystyle{ P(T^1 \geq t_0|z) - P(T^{-1} \geq t_0|z) }[/math]
  • Bonetti 2000: Hazard ratio
  • Janes 2014: [math]\displaystyle{ \Delta(Y) = \rho_0(Y) - \rho_1(Y) = P(D=1|T=0, Y) - P(D=1|T=1, Y) }[/math]
    • Subjects with [math]\displaystyle{ \Delta(Y)\lt 0 }[/math] are called marker-negative; the standard/control treatment is favored.
    • Subjects with [math]\displaystyle{ \Delta(Y)\gt 0 }[/math] are called marker-positive; the new treatment is favored. The rule is to apply treatment only to marker-positive patients, and for this subgroup the average benefit of treatment is [math]\displaystyle{ B_{pos} = E(\Delta(Y) | \Delta(Y) \gt 0) }[/math]. See p103 of the paper. A simulated sketch of these quantities follows this list.
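
A hedged sketch of the Janes 2014 quantities above on simulated randomized-trial data (binary adverse outcome D, randomized treatment trt, continuous marker Y; all names and numbers are illustrative):

set.seed(1)
n   <- 2000
dat <- data.frame(trt = rbinom(n, 1, 0.5), Y = rnorm(n))
p   <- with(dat, plogis(-1 + 0.8 * Y - 1.5 * trt * Y))   # treatment benefit depends on Y
dat$D <- rbinom(n, 1, p)

fit0 <- glm(D ~ Y, family = binomial, data = subset(dat, trt == 0))  # rho_0(Y)
fit1 <- glm(D ~ Y, family = binomial, data = subset(dat, trt == 1))  # rho_1(Y)

Delta <- predict(fit0, newdata = dat, type = "response") -
         predict(fit1, newdata = dat, type = "response")             # Delta(Y)

mean(Delta > 0)          # proportion of marker-positive subjects (new treatment favored)
mean(Delta[Delta > 0])   # B_pos: average benefit among the marker-positives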

Subgroup identification

Some packages

personalized package

personalized: Estimation and Validation Methods for Subgroup Identification and Personalized Medicine. Subgroup Identification and Precision Medicine with the {personalized} R Package (youtube)

SurvMetrics

SurvMetrics: Predictive Evaluation Metrics in Survival Analysis

SurvBenchmark

SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data

Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect

Lasso estimation of hierarchical interactions for analyzing heterogeneity of treatment effect 2021

Quantifying treatment differences in confirmatory trials under non-proportional hazards

Quantifying treatment differences in confirmatory trials under non-proportional hazards

The source code in Github.

Computation for gene expression (microarray) data

n <- 500
g <- 10000
y <- rexp(n)
status <- ifelse(runif(n) < .7, 1, 0)
x <- matrix(rnorm(n*g), nr=g)
treat <- rbinom(n, 1, .5)
# Method 1
system.time(for(i in 1:g) coxph(Surv(y, status) ~ x[i, ] + treat + treat:x[i, ]))
# 28 seconds 

# Method 2
system.time(apply(x, 1, function(z) coxph(Surv(y, status) ~ z + treat + treat:z)))
# 29 seconds

# Method 3 (Windows)
dyn.load("C:/Program Files (x86)/ArrayTools/Fortran/surv64.dll")			  
tme <- y
sorted <- order(tme)
stime <- as.double(tme[sorted])
sstat <- as.integer(status[sorted])
x1 <- x[,sorted]
imodel <- 1  # imodel=1, fit univariate gene expression. Return p-values vector.
nvar <- 1
system.time(outx1 <- .Fortran("coxfitc", as.integer(n), as.integer(g), as.integer(0),
                 stime, sstat, t(x1), as.double(0), as.integer(imodel), 
                 double(2*n+2*nvar*nvar+3*nvar), logdiff = double(g)))
# 1.69 seconds on R i386
# 0.79 seconds on R x64

# method 4: GSA
genenames=paste("g", 1:g, sep="")
#create some random gene sets
genesets=vector("list", 50)
for(i in 1:50){
  genesets[[i]] <- paste("g", sample(1:g, size=30), sep="")
}
geneset.names=paste("set",as.character(1:50),sep="")
debug(GSA.func)
GSA.obj<-GSA(x,y, genenames=genenames, genesets=genesets,  
             censoring.status=status,
             resp.type="Survival", nperms=1)
Browse[3]> str(catalog.unique)
 int [1:1401] 7943 227 4069 3011 8402 1586 2443 2777 673 9021 ...
Browse[3]> system.time(cox.func(x[catalog.unique,], y, censoring.status, s0=0))
# 1.3 seconds
Browse[2]> system.time(cox.func(x, y, censoring.status, s0=0))
# 7.259 seconds

Single-gene vs multi-gene survival models

A comparative study of survival models for breast cancer prognostication revisited: the benefits of multi-gene models by Grzadkowski et al 2018. To assess the concordance of biomarker performance, the authors use the Concordance Correlation Coefficient (CCC) as introduced by Lin (1989) and further amended in Lin (2000).
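
A hedged sketch of Lin's concordance correlation coefficient computed directly from its definition (the DescTools call at the end is only a cross-check and assumes that package is installed):

ccc <- function(x, y) {
  mx <- mean(x); my <- mean(y)
  vx <- mean((x - mx)^2); vy <- mean((y - my)^2)   # population-style variances
  sxy <- mean((x - mx) * (y - my))
  2 * sxy / (vx + vy + (mx - my)^2)
}
set.seed(1)
a <- rnorm(50)
b <- a + rnorm(50, sd = 0.3)
ccc(a, b)
# cross-check: DescTools::CCC(a, b)$rho.c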

Random papers using C-index, AUC or Brier scores

survex package

survex: model-agnostic explainability for survival analysis

More, Web tools

Others

Landmark analysis

  • A landmark analysis for survival data is a statistical method used in survival analysis. It involves designating a specific time point during the follow-up period, known as the landmark time, and analyzing only those subjects who have survived until the landmark time. Landmark analysis: A primer.
    • This method is often used to estimate survival probabilities in an unbiased way, conditional on the group membership of patients at the landmark time. A small number of index time points are chosen and survival analysis is done on only those subjects who remain event-free at the specified index times and for follow-up beyond the index times. Landmark Analysis at the 25-Year Landmark Point 2011 & A comparison of landmark methods and time-dependent ROC methods to evaluate the time-varying performance of prognostic markers for survival outcomes 2019.
    • Landmark analysis can help avoid certain types of bias, such as the guarantee-time bias or the immortal time bias. It's particularly useful when patient predictions are needed at select times, and it facilitates evaluating trends in performance over time.
    • In the context of survival data, which consist of a distinct start time and end time, landmark analysis provides a valuable tool for understanding and predicting future disease events. It's often used in clinical practice to guide medical decision-making.
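
A minimal sketch of a landmark analysis on the ovarian data (illustrative only, not from the cited primer): keep only subjects still event-free at the landmark time, restart the clock there, and compare groups defined at or before the landmark:

library(survival)
lm.time <- 182.5                                   # landmark at ~6 months
ov <- subset(ovarian, futime > lm.time)            # subjects event-free at the landmark
ov$futime2 <- ov$futime - lm.time                  # time measured from the landmark

fit <- survfit(Surv(futime2, fustat) ~ rx, data = ov)
plot(fit, col = 1:2, xlab = "Days since landmark", ylab = "Survival probability")
survdiff(Surv(futime2, fustat) ~ rx, data = ov)    # log-rank test conditional on reaching the landmark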

TCGA data

Machine learning

Constrained randomization

Constrained randomization to evaluate the vaccine rollout in nursing homes

Principles and Practice of Clinical Research

Clinical trials

Statistical Thinking in Clinical Trials

Fundamental Statistical Concepts in Clinical Trials and Diagnostic Testing

JNM 2021

Statistical Monitoring of Clinical Trials: A Unified Approach

ebook on archive.org.

Analysis of clinical prediction models registered with clinicaltrials.gov

Principles of Clinical Pharmacology

Principles of Clinical Pharmacology

Progressive disease, stable disease

  • RECIST/Response evaluation criteria in solid tumors:
    • CR Complete response: This is the best response. It means that all signs of the cancer have disappeared in the tests. There’s no evidence of disease present.
    • PR Partial response: This means the cancer has significantly reduced in size but is still detectable.
    • SD Stable disease: This means the cancer has neither grown nor shrunk. The disease is stable. SD may or may not be counted as a response. In some cases, maintaining stable disease might be seen as a good response, especially for cancers that are typically very aggressive or hard to treat.
    • PD Progressive disease: This is the worst response. It means the cancer has grown or spread to other parts of the body.
  • Stable Disease in Cancer Treatment. Stable disease is defined as being a little better than progressive disease (in which a tumor has increased in size by at least 20%) and a little worse than a partial response (wherein a tumor has shrunk by at least 50%).
  • Ideally a drug trial will return results like CR or PR. Responses of SD or PD may indicate that a drug is not an effective treatment for cancer. https://callaix.com/recist

STANDARD OF CARE

The treatment everyone would receive (which may be no therapy) if no biomarker were used. Cf. an experimental therapy whose effect might be related to the value of a continuous biomarker.

Breast cancer

HER2-positive breast cancer

TNBC (Triple-negative breast cancer)

https://en.wikipedia.org/wiki/TNBC