Older studies reported a worse outcome for second transplant recipients (STR) than for first transplant recipients (FTR), mainly due to non-comparable populations with numerous confounding factors. More recent analyses, based on improved methodology using multivariate regression, have challenged this generally accepted idea: the poor prognosis of STR is still under debate.
To assess the long-term patient-and-graft survival of STR compared to FTR, we performed an observational study based on the French DIVAT prospective cohort between 1996 and 2010 (N = 3103, including 641 STR). All patients were treated with a CNI, an mTOR inhibitor or belatacept, in addition to steroids and mycophenolate mofetil, as maintenance therapy. Patient-and-graft survival and acute rejection episodes (ARE) were analyzed using Cox models adjusted for all potential confounding factors, such as pre-transplant anti-HLA immunization.
We showed that STR have a higher risk of graft failure than FTR (HR = 2.18, p = 0.0013), but that this excess risk appears only after a few years post-transplantation. There was no significant difference between STR and FTR in the occurrence of either overall ARE (HR = 1.01, p = 0.9675) or steroid-resistant ARE (HR = 1.27, p = 0.4087).
The risk of graft failure following second transplantation remained consistently higher than after first transplantation after adjusting for confounding factors. Time-dependent statistical modeling, rarely performed in this field, may explain the heterogeneous conclusions in the literature concerning second-transplantation outcomes. In clinical practice, physicians should not consider STR and FTR equally.
Citation: Trébern-Launay K, Foucher Y, Giral M, Legendre C, Kreis H, Kessler M, et al. (2012) Poor Long-Term Outcome in Second Kidney Transplantation: A Delayed Event. PLoS ONE 7(10): e47915. doi:10.1371/journal.pone.0047915
Editor: Holger K. Eltzschig, University of Colorado Denver, United States of America
Received: May 18, 2012; Accepted: September 18, 2012; Published: October 23, 2012
Copyright: © Trébern-Launay et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was partly supported by the RTRS, the ‘Fondation de Co-opération Scientifique – CENTAURE’ and Roche Laboratory. K. Trébern-Launay is the recipient of a grant for epidemiology and biostatistics research from the RTRS ‘CENTAURE’ and Novartis Pharma. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have the following interests. This work was partly supported by Roche Laboratory, and K. Trébern-Launay is the recipient of a grant for epidemiology and biostatistics research from Novartis Pharma. There are no patents, products in development or marketed products to declare. This does not alter the authors' adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.
Nowadays, repeat transplantation provides the best chance for long-term survival and quality of life in patients facing allograft loss, compared to maintenance dialysis therapy. This concept was recently supported by Ojo et al., who showed that repeat transplantation is associated with reduced mortality compared to remaining on dialysis after a prior graft loss. This benefit holds despite the fact that re-transplant recipients present a higher risk of death during the first month after transplant surgery. Considering both short- and long-term outcomes, graft survival rates following retransplantation have continuously improved in recent years. There is evidence that patients undergoing a third or subsequent transplantation have a worse prognosis. However, the poor prognosis of second transplant recipients (STR) remains a matter of debate.
Some previous studies demonstrated that STR have lower graft survival than first transplant recipients (FTR), leading STR to be considered a higher-risk group for graft failure, mainly in relation to increased levels of preformed HLA antibodies. However, Coupel et al. showed that the difference in long-term graft survival between STR and FTR was not significant when an HLA-DR mismatch was avoided. Recent improvements in immunosuppressive therapy may also have contributed to narrowing the difference in outcomes between STR and FTR. When several confounding factors such as pre-transplant immunization are taken into account, evidence of an excess risk of graft failure for STR is not clear, as shown by the most recent studies. For Magee et al., however, after adjustment for donor and recipient factors, the risk of graft failure remained significantly higher for STR than for FTR.
Whereas the factors influencing second graft survival have been well studied, those related to a possible excess risk of graft failure for STR compared with FTR are not well established. The objective of our study was not to recommend whether patients should receive a second transplant; addressing that important question would require a completely different study design. Rather, the overall aim of our epidemiological observational cohort study was to provide data from a large multicenter population of kidney transplant recipients in order to clarify the relationship between graft rank and long-term graft outcomes. For the first time, we adjusted for a large number of covariates at baseline and modeled the time-dependent relationship between graft rank and graft survival. With these methodological improvements, we demonstrated that STR have poorer patient-and-graft survival (PGS) than FTR, significant from four years post-transplantation onward.
Materials and Methods
Data were prospectively collected from the DIVAT (Données Informatisées et VAlidées en Transplantation) French multicenter database. Codes were used to ensure donor and recipient anonymity and blinded assessment. The “Comité National Informatique et Liberté” approved the study (N° CNIL 891735) and written informed consent was obtained from the participants. The data are computerized in real time as well as at each transplant anniversary and are submitted to an annual audit. The cohort consisted of 2462 FTR and 641 STR meeting the following inclusion criteria: (a) adult recipients; (b) transplantation performed between January 1996 and November 2010; and (c) maintenance therapy with calcineurin inhibitors, mammalian target of rapamycin inhibitors or belatacept, in addition to mycophenolic acid (Myfortic®, Novartis, France, or Cellcept®, Roche, France) and steroids. Simultaneous transplantations were excluded. Among the 2462 FTR meeting the inclusion criteria, 52 patients were also part of the STR group, as they received two transplants during the observation period. These 52 patients, included in both cohorts, represented 2% and 8% of the FTR and STR groups respectively. Given the large number of covariates, it is reasonable to assume conditional independence for these patients. We did not exclude these 52 patients, as this would have reduced the comparability of the two groups by under-representing FTR patients with a rapid return to dialysis, leading to an over-estimation of FTR graft survival.
Clinical variables of interest
To guarantee comparability between FTR and STR, adjustments were made for all of the following possible pre- or peri-transplant immunological and non-immunological confounding factors: transplantation period (before or after 2005, which corresponds to the routine use of high-sensitivity techniques for panel-reactive antibody, PRA), recipient gender and age, primary diagnosis of end-stage renal disease (ESRD), comorbidities, highest historical levels of pre-transplant PRA against class I and II antigens, deceased or living donor status, donor age, cold ischemia time (CIT), HLA-A-B-DR mismatches and induction therapy. The high-sensitivity techniques correspond to pre-transplant anti-HLA identification by multiplex screening test (LAT-M; One Lambda, Canoga Park, CA).
French law does not authorize the storage of race information (specific authorization may be obtained in particular circumstances, such as for population genetics studies). Induction therapy was differentiated according to its effect on lymphocytes: horse or rabbit antithymocyte globulin antibodies or anti-CD3 antibody were considered lymphocyte-depleting agents, whereas anti-interleukin-2 receptor antibodies (basiliximab) were considered non-lymphocyte-depleting agents. Since not all biopsies were analyzed with the recent Banff classification, but therapeutic strategies were nevertheless mostly based on a histological diagnosis regardless of the time period, and did not differ according to graft rank, we opted to grade acute rejection episodes (ARE) according to their response to steroid bolus therapy: steroid-sensitive ARE were considered non-severe, whereas steroid-resistant ARE requiring rescue with additional therapy were considered severe.
Comparisons of baseline characteristics between FTR and STR were based on the chi-square test. Several time-to-event distributions were described, measuring the time between transplantation and: (a) graft failure, i.e. the first event between return to dialysis and patient death with a functioning graft (patient-and-graft survival); (b) return to dialysis, with patient deaths censored (graft survival); (c) patient death with a functioning graft, with returns to dialysis censored (patient survival); (d) the first ARE; and (e) the first severe ARE, with non-severe ARE censored. Survival curves were estimated using the Kaplan-Meier estimator. Only the main outcomes, i.e. PGS and ARE/severe ARE occurrence, were analyzed in multivariate models. A first selection of covariates using the log-rank test (p<0.20) was performed before fitting the Cox model (Wald test with p<0.05, step-by-step descending procedure). Cox models were stratified by center. Baseline parameters differentially distributed between FTR and STR were also introduced in the models. Because the Cox model was fitted on all recipients regardless of graft rank, we could not take into account covariates specific to STR, such as the survival time of the first transplant or the time on dialysis before retransplantation. Because the definition of the duration of ESRD differs between FTR and STR, special attention was paid to ESRD-related comorbidities.
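The survival curves above are non-parametric product-limit estimates. As an illustration only (the authors worked in R; this is a hypothetical pure-Python sketch, not their code), the Kaplan-Meier estimator multiplies, at each event time, the conditional probability of surviving past that time given the current risk set:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator (product-limit).
    times: follow-up times; events: 1 = event (e.g. graft failure), 0 = censored.
    Returns a list of (event_time, survival_probability) steps."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for x, e in zip(times, events) if x == t and e == 1)  # events at t
        c = sum(1 for x, e in zip(times, events) if x == t and e == 0)  # censored at t
        if d:
            surv *= (at_risk - d) / at_risk  # conditional survival past t
            curve.append((t, surv))
        at_risk -= d + c  # subjects leave the risk set after time t
    return curve
```

Censored subjects contribute to the risk set up to their censoring time but do not trigger a step in the curve, which is what makes the estimator valid under right-censoring.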
The proportional hazards assumption was checked by plotting log-minus-log survival curves and by testing the scaled Schoenfeld residuals. Interactions between the graft rank and all covariates were tested. Possible collinearity between donor type and CIT was also checked. An extended Cox model with time-dependent coefficients was used for non-proportional covariates. The change point of the hazard ratio was estimated by minimizing the Bayesian Information Criterion. In order to evaluate graft survival in two comparable populations of FTR and STR, we also performed a sub-analysis: using the independent risk factors for graft failure highlighted by the previous methodology, we identified 486 pairs of FTR and STR. The Kaplan-Meier estimator and the Cox model were again used to evaluate the association between graft rank and graft survival in this sub-analysis.
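A common way to fit a time-dependent coefficient is episode splitting: each subject's follow-up is cut at a candidate change point τ, producing counting-process rows in which the graft-rank effect can differ between the early and late periods; the τ minimizing the BIC of the resulting model is retained. The sketch below shows only the data-splitting step, in illustrative Python (the authors worked in R, where this is typically done with the survival package, e.g. `survSplit`); the tuple layout is a hypothetical convention:

```python
def split_at(subjects, tau):
    """Split each subject's follow-up at cutoff tau so a covariate's
    effect can differ before and after tau (time-dependent coefficient).
    subjects: list of (id, time, event, second_graft) tuples.
    Returns counting-process rows: (id, start, stop, event, second_graft, period)."""
    rows = []
    for sid, time, event, second_graft in subjects:
        if time <= tau:
            # all follow-up occurs in the early period
            rows.append((sid, 0.0, time, event, second_graft, "early"))
        else:
            # early row is administratively censored at tau,
            # the terminal event (if any) belongs to the late row
            rows.append((sid, 0.0, tau, 0, second_graft, "early"))
            rows.append((sid, tau, time, event, second_graft, "late"))
    return rows
```

A Cox model with a `second_graft` × `period` interaction fitted on these rows yields one hazard ratio before τ and one after, as in the 4-year cutoff reported in the Results.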
Statistical analyses were performed using version 2.12.0 of the R software.
Description of the cohort
The demographic and baseline characteristics at the time of transplantation are presented in Table 1. Among the 3103 kidney transplantations, 641 (20.7%) were STR. In both groups, the majority of patients received a transplant from a deceased donor, after a period of dialysis, and the distributions of recipient and donor gender were comparable. STR were younger (p<0.0001) and their transplants came from younger donors (p<0.0001). Recurrent nephropathies (p<0.0001), cardiac disease (p = 0.0007), hepatitis (p<0.0001) and malignancy (p<0.0001) were more frequent among STR. Compared to FTR, STR received better HLA-matched transplants (p<0.0001), but their CIT were longer (p<0.0001) and they were more sensitized, with higher positivity of anti-class I and anti-class II PRA than FTR (p<0.0001). They were also more frequently exposed to induction therapy with a lymphocyte-depleting agent (p<0.0001).
Patient-and-graft survival at 1, 5 and 10 years was 92%, 79% and 56% for STR and 94%, 83% and 66% for FTR, respectively (Figure 1-A). Without adjustment for confounding factors, STR had a significantly higher risk of graft failure than FTR (p = 0.0127). Beyond approximately 4 years post-transplantation, the difference between the survival curves appeared to increase over time. STR also had significantly lower graft survival than FTR (Figure 1-B, p = 0.0206). However, we could not demonstrate a significant difference in patient survival (Figure 1-C, p = 0.2890).
(A) Patient-and-graft survival (= overall graft survival): patient deaths with a functioning graft are considered as graft failure (log-rank test: p = 0.0127); (B) death-censored graft survival: patient deaths with a functioning graft are censored (log-rank test: p = 0.0206); and (C) patient survival: returns to dialysis are censored (log-rank test: p = 0.2890), for first and second grafts performed in the DIVAT network between January 1996 and November 2010 (Kaplan-Meier estimates). (D) Patient-and-graft survival sub-analysis for a sample of first grafts (N = 486) and second grafts (N = 486) matched on the following risk factors for graft failure: transplantation period, recipient age, history of cardiac disease, anti-class I PRA, recipient/donor relationship, BMI and EBV serology.
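The unadjusted p-values above come from the log-rank test, which compares observed versus expected events in one group across the pooled event times. A minimal two-group sketch in pure Python (illustrative only; the paper's analyses were run in R):

```python
import math

def logrank(times1, events1, times2, events2):
    """Two-group log-rank test; returns (chi2, p) with 1 degree of freedom.
    times*: follow-up times; events*: 1 = event, 0 = censored."""
    data = [(t, e, 1) for t, e in zip(times1, events1)] + \
           [(t, e, 2) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    O1 = E1 = V = 0.0  # observed/expected events in group 1, variance
    for t in event_times:
        n1 = sum(1 for x, _, g in data if x >= t and g == 1)  # at risk, group 1
        n2 = sum(1 for x, _, g in data if x >= t and g == 2)  # at risk, group 2
        d1 = sum(1 for x, e, g in data if x == t and e == 1 and g == 1)
        d2 = sum(1 for x, e, g in data if x == t and e == 1 and g == 2)
        n, d = n1 + n2, d1 + d2
        O1 += d1
        E1 += d * n1 / n  # expected events in group 1 under H0
        if n > 1:
            V += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    chi2 = (O1 - E1) ** 2 / V
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square tail probability, 1 df
    return chi2, p
```

The `erfc`-based tail probability is exact for one degree of freedom, since a chi-square(1) variable is a squared standard normal.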
The univariate analysis revealed that the relationship between the graft rank and PGS changed with post-transplantation time (p = 0.0125). Assuming that the hazard ratio (HR) associated with the graft rank can be considered constant within each of two periods, we found that the optimal cutoff point minimizing the Bayesian Information Criterion was 4 years. This model was validated by analysis of the Schoenfeld residuals and was also coherent with Figure 1-A. Of note, graft failure was not significantly associated with the HLA-A-B-DR mismatch level (HR = 1.14, p = 0.274) or with the HLA-DR mismatch level (HR = 1.03, p = 0.739).
The multivariate analysis was based on 2772 patients, as 257 FTR and 74 STR had missing data (Table 2). The risk of graft failure was 2.18 times higher for STR beyond 4 years post-transplantation (p = 0.0013); there was no significant difference before 4 years (HR = 1.05, p = 0.7830). The risk of graft failure was also higher for transplantation before 2005 (HR = 1.32, p = 0.0427), recipient age ≥55 years (HR = 1.49, p = 0.0012), deceased donor (HR = 2.19, p = 0.0015), cardiac disease (HR = 1.34, p = 0.0057), positive anti-class I PRA (HR = 1.43, p = 0.0055), obesity (HR = 1.54, p = 0.0050) and positive donor EBV serology (HR = 1.80, p = 0.0076). Of note, no interaction with the graft rank reached statistical significance.
The sub-analysis compared 486 pairs of FTR and STR sharing the same risk factors for graft failure (transplantation period, recipient age, history of cardiac disease, anti-class I PRA, recipient/donor relationship, BMI and EBV serology). The corresponding survival curves are presented in Figure 1D. This confirmed the time-dependent effect of the graft rank: the risk of graft failure was 2.15 times higher for STR beyond 4 years post-transplantation (95% CI = 1.14–4.08, p = 0.0184), with no significant difference before 4 years (HR = 1.11, 95% CI = 0.77–1.58, p = 0.5842).
Acute rejection episode analysis
In order to explain this delayed excess risk in the STR group a few years post-transplantation, we first hypothesized a higher frequency of ARE or severe ARE in this group, with delayed consequences on patient-and-graft survival.
The cumulative probability of ARE at 1, 3 and 12 months was 10%, 13% and 19% for STR and 8%, 14% and 20% for FTR, respectively (Figure 2-A). The univariate analysis showed no trend toward higher ARE occurrence in STR than in FTR (p = 0.4420). The multivariate Cox model confirmed this result (Table 3, HR = 1.01, p = 0.9675). ARE occurrence was related to HLA-A-B-DR mismatches (HR = 1.46, p = 0.0004) and anti-class II PRA (HR = 1.29, p = 0.0180). Recipients ≥55 years of age (HR = 0.79, p = 0.0173) and recipients receiving lymphocyte-depleting therapy (HR = 0.65, p<0.0001) had a lower risk of ARE occurrence.
(A) Cumulative probability of acute rejection episodes for FTR and STR (log-rank test: p = 0.4420) and (B) Cumulative probability of severe acute rejection episodes for FTR and STR (log-rank test: p = 0.0040), for first and second grafts (Kaplan-Meier estimates).
The cumulative probability of severe ARE at 1 and 12 months was 2% and 5% for STR, and 1% and 2% for FTR, respectively (Figure 2-B). The univariate analysis showed that STR had a higher risk of severe ARE occurrence than FTR (p = 0.0040), but this result was not confirmed by the multivariate Cox model (Table 4, HR = 1.27, p = 0.4087). Severe ARE occurrence was also related to anti-class II PRA (HR = 2.26, p = 0.0027). Of note, recipients transplanted before 2005 had a significantly lower risk of severe ARE occurrence (HR = 0.52, p = 0.0329), whereas the lower risk observed for recipients of a graft from an older donor did not reach significance (HR = 0.59, p = 0.3470).
Based on an overview of the literature, the prognosis of STR compared to FTR remains unclear. As the demand for kidney transplants largely exceeds the supply, it is important to evaluate the excess risk associated with second transplantation and to identify the patients with the poorest chances of a good long-term outcome.
In 2003, Coupel et al. compared 233 STR to 1174 FTR and observed no difference in 10-year survival, probably because STR were younger and had a higher level of HLA matching than FTR. In 2008, Arnol et al. reported similar 15-year survival between 81 STR and 427 FTR; they also found no difference in the occurrence of ARE between the two groups. From a series of 26 deceased-donor STR versus 140 FTR analyzed in 2009, Gruber et al. likewise reported no difference in 8-year survival or in the occurrence or severity of ARE. In the same year, Wang et al. compared the 5-year PGS of 65 deceased-donor STR versus 613 FTR and, again, reported no difference. Thus, from these earlier studies, it appears that STR have a long-term outcome similar to FTR. However, the interpretation of these studies is limited by several factors: the small number of STR (low statistical power), the single-center design, the limited number of adjustment covariates, or the short follow-up period. Conversely, in 2007, Magee et al. compared the 5-year graft survival of a large cohort of kidney recipients (more than 2000 STR versus more than 20000 FTR) from the Organ Procurement and Transplantation Network registry and reported that, even with adjustment for donor and recipient factors, the 5-year risk of graft failure remained significantly higher for repeat kidney transplant recipients (including second and subsequent transplantations) than for FTR. Nevertheless, the adjustment factors were limited and the follow-up was short. Moreover, none of these studies evaluated a possible time-dependent effect of the graft rank, i.e. tested the proportional hazards assumption that is central to the Cox model.
In this paper, we used a specific methodology for an accurate comparison between FTR and STR, taking into account all available confounding factors and modeling the time-dependent effect of the graft rank. To our knowledge, such an analysis has never been performed before. Our results, based on recipients from a large multicenter cohort under similar recent immunosuppressive maintenance therapy, show that STR have a poorer long-term prognosis than FTR. We show for the first time that this excess risk is delayed, becoming significant beyond four years of follow-up. This cutoff certainly does not correspond to a sudden modification of the graft failure risk; rather, the excess risk for STR appears after a few years of transplantation. This time-dependent association may be a major point, as it was only after its introduction that we showed a significant excess risk of graft failure for STR: it may explain why the majority of the other papers did not demonstrate a significant difference in survival between FTR and STR.
The difference in PGS could have been due to a higher frequency of ARE or severe ARE in STR than in FTR during follow-up. However, we did not demonstrate such a difference in ARE or in severe ARE occurrence. For this last endpoint, STR tended to have a higher risk of severe ARE than FTR; the lack of statistical power (only 96 severe ARE were observed in the whole cohort) may explain why this trend did not reach statistical significance.
As always in observational studies, there are several limitations to this study. (i) The use of different techniques for PRA identification may introduce a bias, limited by the fact that STR and FTR were compared over the same period/center and by adjusting for the year of transplantation. (ii) It was not possible to include the causes of graft loss in our analyses (immunologic versus non-immunologic) since the collection of this information has only recently been initiated. (iii) It was unfortunately not possible to adjust for the pre-transplant duration of dialysis or the duration of first transplant survival, as only covariates common to FTR and STR can be taken into account in a Cox model; to overcome this difficulty, we adjusted for the comorbidities at transplantation. (iv) Adjustment for long-term immunosuppression regimens was not done, as these are more a reflection of therapeutic adaptation to a clinical situation and depend on the center, the clinician and therapeutic advances. (v) A possible effect of transplantation policy might introduce some bias, which we sought to overcome by the adjustment in the multivariate model and by the matched case-control design in the sub-analysis. (vi) Our study also could not eliminate the effects of some confounding factors such as medication compliance; as in every large cohort, this information cannot realistically be collected. (vii) Delayed graft function (DGF) was not included as a covariate, as only pre- and peri-transplant covariates were taken into account; however, an additional analysis including DGF did not provide new possible explanations for the different first and second transplant outcomes (data not shown). (viii) Although all ARE were biopsy-proven, only a small number were classified using the most recent Banff criteria; it will take a few years before we are able to explore the possible link between biopsy-proven antibody-mediated ARE and a worse outcome. (ix) Finally, due to the long follow-up period, information about preformed DSA was available for only a very small part of our cohort, although this covariate is suspected to be related to the risk of graft failure.
In conclusion, this observational study of a large multicenter cohort confirmed other findings showing that STR have lower patient-and-graft survival than FTR, while eliminating several confounding factors present in the current literature. The excess risk of graft failure for STR was delayed by several years post-transplantation, and did not seem to be related to a higher frequency of ARE or severe ARE for second grafts. Despite the limitations of such an observational cohort, the current study supports the hypothesis of a higher propensity for STR to develop donor-specific antibodies post-transplantation. These findings justify the expense of further systematic and prospective monitoring of antibodies in both populations. Further investigations are still needed to understand the biological/immunological mechanisms underlying graft failure, to identify patients specifically at risk of graft failure, and to provide strategies for improving outcomes in STR. In practice, however, physicians should not consider second and first kidney transplant recipients equally.
We wish to thank members of the clinical research assistant team (S. Le Floch, J. Posson, C. Scellier, V. Eschbach, K. Zurbonsen, C. Dagot, F. M'Raiagh, V. Godel, X. Longy).
Conceived and designed the experiments: MG JD JPS. Performed the experiments: KTL. Analyzed the data: KTL YF MG JD JPS. Contributed reagents/materials/analysis tools: CL HK MK ML NK LR VG GM EM. Wrote the paper: KTL YF MG JD JPS.
- 1. Rao PS, Schaubel DE, Wei G, Fenton SSA (2006) Evaluating the survival benefit of kidney retransplantation. Transplantation 82: 669–674. doi: 10.1097/01.tp.0000235434.13327.11
- 2. Ojo A, Wolfe RA, Agodoa LY, Held PJ, Port FK, et al. (1998) Prognosis after primary renal transplant failure and the beneficial effects of repeat transplantation: multivariate analyses from the United States Renal Data System. Transplantation 66: 1651–1659. doi: 10.1097/00007890-199812270-00014
- 3. Rao PS, Schaubel DE, Jia X, Li S, Port FK, et al. (2007) Survival on dialysis post-kidney transplant failure: results from the Scientific Registry of Transplant Recipients. Am J Kidney Dis 49: 294–300. doi: 10.1053/j.ajkd.2006.11.022
- 4. Sola E, Gonzalez-Molina M, Cabello M, Burgos D, Ramos J, et al. (2010) Long-term improvement of deceased donor renal allograft survival since 1996: a single transplant center study. Transplantation 89: 714–720. doi: 10.1097/tp.0b013e3181c892dd
- 5. Gruber SA, Brown KL, El-Amm JM, Singh A, Mehta K, et al. (2009) Equivalent outcomes with primary and retransplantation in African-American deceased-donor renal allograft recipients. Surgery 146: 646–652. doi: 10.1016/j.surg.2009.05.020
- 6. Hagan C, Hickey DP, Little DM (2003) A single-center study of the technical aspects and outcome of third and subsequent renal transplants. Transplantation 75: 1687–1691. doi: 10.1097/01.tp.0000062536.34333.bb
- 7. UNOS Registry. Available: www.unos.org. Accessed 2012 Sep 28.
- 8. Gjertson DW (2002) A multi-factor analysis of kidney regraft outcomes. Clin Transpl: 335–349.
- 9. Stratta RJ, Oh CS, Sollinger HW, Pirsch JD, Kalayoglu M, et al. (1988) Kidney retransplantation in the cyclosporine era. Transplantation 45: 40–45. doi: 10.1097/00007890-198801000-00010
- 10. Almond PS, Matas AJ, Gillingham K, Troppmann C, Payne W, et al. (1991) Risk factors for second renal allografts immunosuppressed with cyclosporine. Transplantation 52: 253–258. doi: 10.1097/00007890-199108000-00013
- 11. Kerman RH, Kimball PM, Buren CTV, Lewis RM, DeVera V, et al. (1991) AHG and DTE/AHG procedure identification of crossmatch-appropriate donor-recipient pairings that result in improved graft survival. Transplantation 51: 316–320. doi: 10.1097/00007890-199102000-00008
- 12. Howard RJ, Reed AI, Werf WJVD, Hemming AW, Patton PR, et al. (2001) What happens to renal transplant recipients who lose their grafts? Am J Kidney Dis 38: 31–35. doi: 10.1053/ajkd.2001.25178
- 13. Scornik JC (1995) Detection of alloantibodies by flow cytometry: relevance to clinical transplantation. Cytometry 22: 259–263. doi: 10.1002/cyto.990220402
- 14. Coupel S, Giral-Classe M, Karam G, Morcet JF, Dantal J, et al. (2003) Ten-year survival of second kidney transplants: impact of immunologic factors and renal function at 12 months. Kidney Int 64: 674–680. doi: 10.1046/j.1523-1755.2003.00104.x
- 15. Arnol M, Prather JC, Mittalhenkle A, Barry JM, Norman DJ (2008) Long-term kidney regraft survival from deceased donors: risk factors and outcomes in a single center. Transplantation 86: 1084–1089. doi: 10.1097/tp.0b013e318187ba5c
- 16. Wang D, Xu TZ, Chen JH, Wu WZ, Yang SL, et al. (2009) Factors influencing second renal allograft survival: a single center experience in China. Transpl Immunol 20: 150–154. doi: 10.1016/j.trim.2008.09.010
- 17. Magee JC, Barr ML, Basadonna GP, Johnson MR, Mahadevan S, et al. (2007) Repeat organ transplantation in the United States, 1996–2005. Am J Transplant 7: 1424–1433. doi: 10.1111/j.1600-6143.2007.01786.x
- 18. Rigden S, Mehls O, Gellert R (1999) Factors influencing second renal allograft survival. Scientific Advisory Board of the ERA-EDTA Registry. European Renal Association-European Dialysis and Transplant Association. Nephrol Dial Transplant 14: 566–569. doi: 10.1093/ndt/14.3.566
- 19. Abouljoud MS, Deierhoi MH, Hudson SL, Diethelm AG (1995) Risk factors affecting second renal transplant outcome, with special reference to primary allograft nephrectomy. Transplantation 60: 138–144. doi: 10.1097/00007890-199507000-00005
- 20. Miles CD, Schaubel DE, Jia X, Ojo AO, Port FK, et al. (2007) Mortality experience in recipients undergoing repeat transplantation with expanded criteria donor and non-ECD deceased-donor kidneys. Am J Transplant 7: 1140–1147. doi: 10.1111/j.1600-6143.2007.01742.x
- 21. Ladrière M, Foucher Y, Legendre C, Kamar N, Garrigue V, et al. (2010) The Western Europe cohort of kidney transplanted recipients - the DIVAT network. Clinical Transplants: 460–461.
- 22. Grambsch P, Therneau T (1994) Proportional hazards tests and diagnostics based on weighted residuals. Biometrika 81: 515–526. doi: 10.1093/biomet/81.3.515
- 23. Klein JP, Moeschberger ML (1997) Survival Analysis: Techniques for Censored and Truncated Data. New York: Springer-Verlag.
- 24. Therneau TM, Grambsch PM (2000) Modeling Survival Data: Extending the Cox Model. New York: Springer-Verlag.
- 25. Volinsky CT, Raftery AE (2000) Bayesian information criterion for censored survival models. Biometrics 56: 256–262. doi: 10.1111/j.0006-341x.2000.00256.x
- 26. R Development Core Team (2010) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.