Poor Long-Term Outcome in Second Kidney Transplantation: A Delayed Event

Background Older studies reported a worse outcome for second transplant recipients (STR) than for first transplant recipients (FTR), mainly because they compared non-comparable populations with numerous confounding factors. More recent analyses, based on improved methodology using multivariate regression, have challenged this generally accepted idea: the poor prognosis of STR is still under debate. Methodology To assess the long-term patient-and-graft survival of STR compared to FTR, we performed an observational study based on the French DIVAT prospective cohort between 1996 and 2010 (N = 3103, including 641 STR). All patients were treated with a CNI, an mTOR inhibitor or belatacept, in addition to steroids and mycophenolate mofetil, for maintenance therapy. Patient-and-graft survival and acute rejection episodes (ARE) were analyzed using Cox models adjusted for all potential confounding factors, such as pre-transplant anti-HLA immunization. Results We showed that STR have a higher risk of graft failure than FTR (HR = 2.18, p = 0.0013), but that this excess risk only emerged after a few years of transplantation. There was no significant difference between STR and FTR in the occurrence of either overall ARE (HR = 1.01, p = 0.9675) or steroid-resistant ARE (HR = 1.27, p = 0.4087). Conclusions The risk of graft failure following second transplantation remained consistently higher than that observed in first transplantation after adjusting for confounding factors. Time-dependent statistical modeling, which is rarely performed, may explain the heterogeneous conclusions in the literature concerning second transplantation outcomes. In clinical practice, physicians should not consider STR and FTR equally.


Introduction
Nowadays, repeat transplantation provides the best chance for long-term survival and quality of life in patients facing allograft loss, as compared to maintenance dialysis therapy [1,2,3]. This concept was recently supported by Ojo et al. [2], who showed that repeat transplantation is associated with reduced mortality compared to remaining on dialysis after a prior graft loss. This benefit holds despite the fact that re-transplant recipients present a higher risk of death during the first month after transplant surgery [1]. Considering both short- and long-term outcomes, graft survival rates following retransplantation have continuously improved in recent years [4]. There is evidence that patients undergoing a third or subsequent transplantation have a worse prognosis [5,6,7]. However, the poor prognosis of second transplant recipients (STR) remains a matter of debate.
Some previous studies have demonstrated that STR have a lower graft survival than first transplant recipients (FTR) [2,8,9,10,11,12], leading STR to be considered a higher-risk group for graft failure, mainly related to increased levels of preformed HLA antibodies [13]. However, Coupel et al. showed that the difference in long-term graft survival was not significant between STR and FTR when an HLA-DR mismatch was avoided [14]. Recent improvements in immunosuppressive therapy may have contributed to narrowing the difference in outcomes between STR and FTR [8]. When several confounding factors such as pre-transplant immunization are taken into account, the evidence of an excess risk of graft failure for STR is not clear, as demonstrated by the most recent studies [1,5,15,16]. In contrast, Magee et al. reported that, after adjustment for donor and recipient factors, the risk of graft failure remained significantly higher for STR than for FTR [17].
Whereas the factors influencing second graft survival have been well studied [8,9,14,16,18,19,20], those related to a possible excess risk of graft failure for STR compared with FTR are not well established [15,17]. The objective of our study was not to recommend whether patients should receive a second transplant or not; addressing this important question would require a completely different study design. Rather, the overall aim of our epidemiological observational cohort study was to provide data from a large multicenter population of kidney transplant recipients in order to clarify the relationship between the graft rank and long-term graft outcomes. For the first time, we adjusted for a large number of covariates at baseline and modeled the time-dependent relationship between graft rank and graft survival. Based on these methodological improvements, we demonstrated that STR have a poorer patient-and-graft survival (PGS) than FTR, with the difference becoming significant from four years post-transplantation onward.

Study population
Data were prospectively collected from the DIVAT (Données Informatisées et VAlidées en Transplantation) French multicenter database [21]. Codes were used to ensure donor and recipient anonymity and blinded assessment. The "Comité National Informatique et Liberté" approved the study (CNIL no. 891735) and written informed consent was obtained from the participants. The data are computerized in real time as well as at each transplant anniversary, and are submitted to an annual audit. The cohort consisted of 2462 FTR and 641 STR meeting the following inclusion criteria: (a) adult recipients; (b) transplantation performed between January 1996 and November 2010; and (c) maintenance therapy with calcineurin inhibitors, mammalian target of rapamycin inhibitors or belatacept, in addition to mycophenolic acid (Myfortic®, Novartis, France, or Cellcept®, Roche, France) and steroids. Simultaneous transplantations were excluded. Among the 2462 FTR meeting the inclusion criteria, 52 patients were also part of the STR group, as they received two transplants during the observation period. These 52 patients, included in both cohorts, represented 2% and 8% of the FTR and STR groups, respectively. Given the large number of covariates, it is reasonable to assume conditional independence for these patients. We did not exclude these 52 patients, as doing so would have reduced the comparability of the two groups by under-representing FTR patients with a rapid return to dialysis, which would have led to an over-estimation of FTR graft survival.

Clinical variables of interest
To guarantee comparability between FTR and STR, adjustments were made for all of the following possible pre- or per-transplant immunological and non-immunological confounding factors: transplantation period (before or after 2005, which corresponds to the routine utilization of high-sensitivity techniques for panel-reactive antibody, PRA), recipient gender and age, primary diagnosis of end-stage renal disease (ESRD), comorbidities, highest historical levels of pre-transplant PRA against class I and II antigens, deceased or living donor status, donor age, cold ischemia time (CIT), HLA-A-B-DR mismatches and induction therapy. The high-sensitivity techniques correspond to pre-transplant anti-HLA identification obtained by a multiplex screening test (LAT-M; One Lambda, Canoga Park, CA).
French law does not authorize the storage of race information (specific authorization may be obtained in particular circumstances, such as for population genetics studies). The induction therapy was differentiated according to its effect on lymphocytes: horse or rabbit antithymocyte globulin antibodies and anti-CD3 antibody were considered lymphocyte-depleting agents, whereas anti-interleukin-2 receptor antibodies (basiliximab) were considered a non-lymphocyte-depleting agent. Not all of the biopsies were analyzed with the recent Banff classification; however, therapeutic strategies were mostly based on a histological diagnosis regardless of the time period, and these strategies did not differ according to the graft rank in any period. We therefore opted to grade acute rejection episodes (ARE) according to their response to steroid bolus therapy: steroid-sensitive ARE were considered non-severe, whereas steroid-resistant ARE, requiring rescue with additional therapy, were considered severe.

Statistical analysis
Comparisons of baseline characteristics between FTR and STR were based on the chi-square test. Several time-to-event distributions were described, each measured from the date of transplantation to: (a) graft failure, i.e. the first event between return to dialysis and patient death with a functioning graft (patient-and-graft survival); (b) return to dialysis, with patient deaths censored (graft survival); (c) patient death with a functioning graft, with returns to dialysis censored (patient survival); (d) the first ARE; and (e) the first severe ARE, with non-severe ARE censored. Survival curves were estimated using the Kaplan-Meier estimator. Only the main outcomes, i.e. PGS and ARE/severe ARE occurrences, were analyzed in multivariate models. A first selection of covariates using the log-rank test (p < 0.20) was performed before fitting the Cox model (Wald test with p < 0.05, step-by-step descending procedure). Cox models were stratified by center. Baseline parameters distributed differently between FTR and STR were also introduced into the models. Because the Cox model was fitted on all recipients regardless of graft rank, we could not include STR-specific covariates, such as the survival time of the first transplant [15,18] or the time on dialysis before retransplantation [10,15,19]. Because the definition of the duration in ESRD differs between FTR and STR, special attention was paid to ESRD-related comorbidities.
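The survival curves above rest on the Kaplan-Meier product-limit estimator. The study's analyses were performed in R; purely as an illustration, the estimator can be sketched in plain Python (function and variable names here are our own, not from the study):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.

    times  : follow-up durations (e.g. years from transplantation)
    events : 1 if the endpoint occurred (e.g. graft failure), 0 if censored
    Returns a list of (event_time, estimated survival just after that time).
    """
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)           # still under follow-up at t
        deaths = sum(1 for ti, ei in zip(times, events)
                     if ti == t and ei == 1)                  # events occurring exactly at t
        surv *= 1.0 - deaths / at_risk                        # product-limit step
        curve.append((t, surv))
    return curve
```

Censored subjects contribute to the risk sets without triggering a downward step, which is what allows patient deaths or returns to dialysis to be treated as censoring, as in endpoints (b) and (c) above.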
Hazards proportionality was checked by plotting log-minus-log survival curves and by testing the scaled Schoenfeld residuals [22]. Interactions between the graft rank and all the covariates were tested. The possible collinearity between donor type and CIT was also checked. An extended Cox model with time-dependent coefficients was used for non-proportional covariates [23,24]. The change time-point of the hazard ratio was estimated by minimizing the Bayesian Information Criterion [25]. In order to evaluate graft survival in two comparable populations of FTR and STR, we also performed a sub-analysis: based on the independent risk factors for graft failure highlighted by the previous methodology, we identified 486 pairs of FTR and STR. The Kaplan-Meier estimator and the Cox model were again used to evaluate the association between graft rank and graft survival in this sub-analysis.
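Fitting an extended Cox model with a coefficient for graft rank that differs before and after a change time-point is commonly done by splitting each subject's follow-up at that point, so that each period contributes its own rows and hence its own hazard ratio. A minimal sketch of this episode-splitting step in Python (the 4-year cut matches the cutoff retained in the Results, but the data layout and function name are illustrative assumptions; the study itself used R):

```python
CUT = 4.0  # candidate change time-point, in years post-transplantation

def split_at_cutpoint(subjects, cut=CUT):
    """Expand (id, follow_up, event, second_graft) records into counting-process
    rows (id, start, stop, event, second_graft, period), so that a Cox model can
    estimate one hazard ratio for graft rank per period."""
    rows = []
    for sid, follow_up, event, second_graft in subjects:
        if follow_up <= cut:
            # all observed time falls in the early period
            rows.append((sid, 0.0, follow_up, event, second_graft, "early"))
        else:
            # early-period row is censored at the cut; the late-period row
            # carries the subject's actual event status
            rows.append((sid, 0.0, cut, 0, second_graft, "early"))
            rows.append((sid, cut, follow_up, event, second_graft, "late"))
    return rows
```

Refitting the model over a grid of candidate cut values and keeping the one that minimizes the BIC reproduces, in outline, the change-point search described above.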
Statistical analyses were performed using version 2.12.0 of the R software [26].

Description of the cohort
The demographic and baseline characteristics at the time of transplantation are presented in Table 1. Among the 3103 kidney transplantations, 641 (20.7%) were STR. In both groups, the majority of patients received a transplant from a deceased donor after a period of dialysis, and the distributions of recipient and donor gender were comparable. STR were younger (p < 0.0001) and their transplants came from younger donors (p < 0.0001). Recurrent nephropathies (p < 0.0001), cardiac disease (p = 0.0007), hepatitis (p < 0.0001) and malignancy (p < 0.0001) were more frequent among STR. Compared to FTR, STR received better HLA-matched transplants (p < 0.0001), but their CIT was longer (p < 0.0001) and they were more sensitized, with higher positivity of anti-class I and anti-class II PRA (p < 0.0001). They were also more frequently exposed to induction therapy with a lymphocyte-depleting agent (p < 0.0001).

Survival analysis
Patient-and-graft survival at 1, 5 and 10 years was 92%, 79% and 56% for STR and 94%, 83% and 66% for FTR, respectively (Figure 1-A). Without any adjustment for confounding factors, STR had a significantly higher risk of graft failure than FTR (p = 0.0127). Beyond approximately 4 years post-transplantation, the difference between the survival curves appeared to increase over time. STR also had a significantly lower graft survival than FTR (Figure 1-B, p = 0.0206). However, we could not demonstrate a significant difference in patient survival (Figure 1-C, p = 0.2890).
The univariate analysis revealed that the relationship between the graft rank and the PGS changed with post-transplantation time (p = 0.0125). Assuming that the hazard ratio (HR) associated with the graft rank can be considered constant within each of two periods, we found that the optimal cutoff point minimizing the Bayesian Information Criterion was 4 years. This model was validated by the analysis of the Schoenfeld residuals and was also consistent with the unadjusted survival curves (Figure 1-A), which diverge beyond approximately 4 years.

Table 1. Demographic and baseline characteristics of primary and second transplants performed in the DIVAT network between January 1996 and November 2010 (all grafts, N = 3103; first graft, N = 2462; second graft, N = 641).

The multivariate analysis was based on 2772 patients, as 257 FTR and 74 STR presented missing data (Table 2). The risk of graft failure was 2.18 times higher for STR after 4 years of transplantation (p = 0.0013). There was no significant difference before 4 years (HR = 1.05, p = 0.7830). The risk of graft failure was also higher for transplantation before 2005 (HR = 1.32, p = 0.0427), recipient age ≥ 55 years (HR = 1.49, p = 0.0012), deceased donor (HR = 2.19, p = 0.0015) and a history of cardiac disease (Table 2).

The sub-analysis consisted of analyzing 486 pairs of FTR and STR sharing the same risk factors of graft failure (transplantation period, recipient age, history of cardiac disease, anti-class I PRA, recipient/donor relationship, BMI and EBV serology). The corresponding graft survival curves are presented in Figure 1-D. This confirmed the time-dependent effect of the graft rank: the risk of graft failure was 2.15 times higher for STR after 4 years of transplantation (95% CI = 1.14-4.08, p = 0.0184), whereas there was no significant difference before 4 years (HR = 1.11, 95% CI = 0.77-1.58, p = 0.5842).

Acute rejection episode analysis
In order to explain this delayed excess risk in the STR group a few years post-transplantation, we first hypothesized a higher frequency of ARE or severe ARE in this group, with delayed consequences on patient-and-graft survival.
The cumulative probabilities of severe ARE at 1 and 12 months were 2% and 5% for STR, and 1% and 2% for FTR, respectively (Figure 2-B). The univariate analysis showed that STR had a higher risk of severe ARE occurrence than FTR (p = 0.0040), but this result was not confirmed by the multivariate Cox model (Table 4, HR = 1.27, p = 0.4087). Severe ARE occurrence was also related to anti-class II PRA (HR = 2.26, p = 0.0027). Of note, recipients transplanted before 2005 had a significantly lower risk of severe ARE occurrence (HR = 0.52, p = 0.0329), and recipients of a graft from an older donor also showed a lower risk (HR = 0.59, p = 0.3470), although the latter did not reach significance.

Discussion
Based on an overview of the literature, the prognosis of STR compared to FTR is still unclear. As the demand for kidney transplants largely exceeds the supply, it is important to evaluate the excess risk associated with second transplantation and to identify the patients with the poorest expected long-term outcome.
In 2003, Coupel et al. compared 233 STR to 1174 FTR and observed no difference in 10-year survival [14], probably because STR were younger and had a higher level of HLA matching than FTR. In 2008, Arnol et al. reported similar 15-year survival between 81 STR and 427 FTR; they also found no difference in the occurrence of ARE between the two groups [15]. Magee et al. [17] analyzed data from the Organ Procurement and Transplantation Network registry and reported that, even with adjustment for donor and recipient factors, the 5-year risk of graft failure remained significantly higher for repeat kidney transplant recipients (including second and subsequent transplantations) than for FTR. Nevertheless, the adjustment factors were limited and the follow-up was short. Moreover, none of these studies evaluated the possible time-dependent effect of the graft rank, even though hazard proportionality is the central assumption of the Cox model. In this paper, we used a specific methodology for an accurate comparison between FTR and STR, taking into account all the possible confounding factors and modeling the time-dependent effect of the graft rank. To our knowledge, such an analysis has never been performed. Our results, based on recipients from a large multicenter cohort under similar recent immunosuppressive maintenance therapy, show that STR have a poorer long-term prognosis than FTR. We show for the first time that this risk is delayed and becomes significant beyond four years of follow-up. This cut-off does not correspond to a sudden modification of the graft failure risk; rather, the excess risk for STR appears after a few years of transplantation. This time-dependent association may be a major point, as it was only after its introduction that we showed the significant excess risk of graft failure for STR; it may explain why the majority of other papers did not demonstrate a significant difference in survival between FTR and STR.
The difference in PGS could have been due to a higher frequency of ARE or severe ARE for STR than for FTR during the follow-up. However, we did not demonstrate such a difference in ARE, nor in severe ARE occurrences. For this last endpoint, we showed that STR tended to have a higher risk of severe ARE than FTR. The lack of statistical power (only 96 severe ARE were observed in the whole cohort) may explain why this finding did not reach statistical significance.
As always with observational studies, there are several limitations. (i) The use of different techniques for PRA identification may introduce a bias, limited by the fact that STR and FTR were compared over the same period/center and by adjusting for the year of transplantation. (ii) It was not possible to include the causes of graft loss in our analyses (immunologic versus non-immunologic), since the collection of this information has only recently been initiated. (iii) It was unfortunately not possible to adjust for the pre-transplant duration of dialysis or the duration of first transplant survival, as only covariates common to FTR and STR can be taken into account in a Cox model. To overcome this difficulty, we adjusted for the comorbidities at transplantation. (iv) Adjustment for long-term immunosuppression regimens was not done, as such regimens mostly reflect a therapeutic adaptation to the clinical situation and depend on the center, the clinician and therapeutic advances. (v) A possible effect of the transplantation policy might introduce some bias, which was addressed by the adjustment in the multivariate model and by the matched design of the sub-analysis. (vi) Our study also failed to eliminate the effects of some confounding factors such as medication compliance; as in every large cohort, this information cannot realistically be collected. (vii) Delayed graft function (DGF) was not included as a covariate, as only pre- and per-transplant covariates were taken into account. However, an additional analysis including DGF did not provide new possible explanations for the different first and second transplant outcomes (data not shown). (viii) Although all ARE were biopsy-proven, only a small number were classified using the most recent Banff criteria. It will take a few years before we are able to explore the possible link between biopsy-proven antibody-mediated ARE occurrence and a worse outcome. (ix) Finally, due to the long follow-up period, information about preformed DSA was available for only a very small part of our cohort, although this covariate is suspected to be related to the risk of graft failure.
In conclusion, this observational study of a large multicenter cohort confirmed other findings showing that STR have a lower patient-and-graft survival than FTR, while eliminating several confounding factors present in the current literature. The excess risk of graft failure for STR was delayed, appearing several years post-transplantation, and did not seem to be related to a higher frequency of ARE or severe ARE for second grafts. Within the limitations of such an observational cohort, the current study supports the hypothesis of a higher propensity for STR to develop donor-specific antibodies post-transplantation. These findings justify the systematic and prospective, albeit expensive, monitoring of antibodies in both populations. Further investigations are still needed to understand the biological/immunological mechanisms underlying graft failure, to identify patients specifically at risk of graft failure and to provide a strategy for improving outcome in STR. In clinical practice, however, physicians should not consider second and first kidney transplant recipients equally.