Pre-Transplant Donor-Specific T-Cell Alloreactivity Is Strongly Associated with Early Acute Cellular Rejection in Kidney Transplant Recipients Not Receiving T-Cell Depleting Induction Therapy

Preformed T-cell immune sensitization should most likely impact allograft outcome during the initial period after kidney transplantation, since donor-specific memory T cells can rapidly recognize alloantigens and activate the effector immune response that leads to allograft rejection. However, the precise time frame in which acute rejection is fundamentally triggered by preformed donor-specific memory T cells rather than by de novo activated naïve T cells remains to be established. Here, preformed donor-specific alloreactive T-cell responses were evaluated using the IFN-γ ELISPOT assay in a large consecutive cohort of kidney transplant patients (n = 90) to assess the main clinical variables associated with cellular sensitization and the predominant time frame of its impact on allograft outcome; the findings were further validated in an independent set of kidney transplant recipients (n = 67). We found that most highly T-cell sensitized patients were elderly patients with particularly poor HLA class I matching, without any clinically recognizable sensitizing events. While the one-year incidence of all types of biopsy-proven acute rejection did not differ between T-cell alloreactive and non-alloreactive patients, receiver operating characteristic (ROC) curve analysis identified the first two months after transplantation as the highest-risk period for acute cellular rejection associated with baseline T-cell sensitization. This effect was particularly evident in young and highly alloreactive individuals who did not receive T-cell depleting induction. Multivariate analysis confirmed preformed T-cell sensitization as an independent predictor of early acute cellular rejection. In summary, monitoring anti-donor T-cell sensitization before transplantation may help to identify patients at increased risk of acute cellular rejection, particularly in the early phases after kidney transplantation, and thus guide decision-making regarding the use of induction therapy.


INTRODUCTION
Outstanding progress has been made in recent decades in assessing humoral alloimmune sensitization against donor HLA antigens in kidney transplant patients, and this has led to a major reduction in acute antibody-mediated rejection (ABMR) rates immediately after transplantation. However, no comparable success has been achieved in monitoring the anti-donor T-cell immune response. As a consequence, acute T-cell mediated rejection (TCMR) is still an unpredictable event, and this uncertainty negatively affects decision-making in daily clinical practice.
In fact, there is considerable inconsistency between what we know from basic immune biology and what we have learnt from clinical transplantation. It is well accepted that T cells are key initiators, mediators and effectors of the alloimmune response, and thus play a central role in allograft rejection [1][2][3]. Indeed, alloreactive memory/effector T cells are considered the hallmark of adaptive immunity since, compared with their naïve counterparts, they are long lived, can be fully reactivated with less co-stimulation, are less susceptible to novel immunosuppressants and are directly influenced by heterologous immunity [4][5][6][7][8][9][10][11]. Bearing this in mind, the impact of pre-transplant T-cell sensitization is more likely to occur during the initial period after transplantation, since preformed memory T cells are ready to cross-react with donor alloantigens, ultimately leading to allograft rejection.
Importantly, monitoring T-cell sensitization against donor antigens, or even against a panel of reactive antigens, has been shown to be feasible and reliable using the highly sensitive IFN-γ ELISPOT assay, and has also been shown to correlate with worse allograft function after kidney transplantation [12][13][14][15][16][17]. In this regard, our group recently reported the results of a non-randomized prospective clinical trial monitoring anti-donor cellular alloreactivity in 60 kidney transplant recipients both before and six months after transplantation, with the aim of guiding allocation to a calcineurin-inhibitor (CNI)-based or a CNI-free immunosuppressive regimen [18]. Interestingly, while very low rates of biopsy-proven acute rejection (BPAR) were obtained in both groups, a strong association was observed, even among T-cell sensitized individuals receiving CNI drugs, between six-month persistent or de novo donor-specific T-cell alloreactivity and subclinical TCMR in protocol biopsies, suggesting a specific time-frame relationship between preformed donor-specific memory T cells or de novo alloreactive naïve T cells and their impact on kidney allograft outcome.
Here, we analyzed the presence of pre-transplant donor-specific T-cell sensitization in a large consecutive cohort of 90 kidney transplant recipients in whom immunosuppression was assigned without knowledge of their baseline anti-donor T-cell sensitization status; the data obtained were further validated in an independent group of kidney transplant recipients (n = 67). We aimed to investigate the main clinical variables associated with cellular sensitization and the specific post-transplant time frame in which preformed donor-specific (d-s) memory T cells may negatively affect allograft outcome.

MATERIALS AND METHODS

Patients
Ninety adult kidney transplant recipients seen at our Renal Transplant Unit at Bellvitge University Hospital between 2011 and 2012 were retrospectively analyzed on the basis of the availability of donor stimulator cells, consisting of donor splenocytes in deceased-donor transplants or donor peripheral blood mononuclear cells (PBMCs) in living-donor transplants. The Bellvitge University Hospital ethics committee specifically approved the study, and all patients gave written informed consent. None of the transplant recipients had received immunosuppression before providing PBMCs for the ELISPOT assay. The pre-transplant ELISPOT result was not available to the clinicians before or after transplantation, and therefore had no influence on the choice of immunosuppression or on clinical management after transplantation. The mean follow-up of this cohort was 21 months (range 6-48 months).
The main baseline demographic variables were collected for all patients at the time of enrollment and included donor source and age, time on dialysis, number of human leukocyte antigen (HLA) mismatches, number of previous transplants, presence of donor-specific circulating antibodies by solid-phase assays (Luminex®), cause of end-stage renal disease (ESRD) and recipient ethnicity (Caucasian or otherwise). Furthermore, the most relevant clinical variables associated with transplant outcome, such as type of maintenance IS (CNI-based or not), steroid withdrawal, use and type of induction therapy (T-cell depleting or not), delayed graft function (DGF), six- and twelve-month allograft function (both the estimated glomerular filtration rate, eGFR, ml/min, and serum creatinine, μmol/L), CMV infection, biopsy-proven acute rejection (BPAR), type of rejection (antibody-mediated, ABMR; or T-cell mediated, TCMR) and time of rejection (early, <2 months, or late, >2 months), were also pooled for the analysis. DGF was defined as the need for dialysis during the first week after transplantation. The use of either T-cell depleting or monoclonal antibody induction therapy was based on the presumed immunological risk derived from the panel-reactive antibody (PRA) percentage and HLA mismatch data. CNIs were used as maintenance IS in all but two patients, who received an mTOR inhibitor (everolimus) as main immunosuppression.
Estimated glomerular filtration rate (eGFR) was calculated using the simplified Modification of Diet in Renal Disease equation (MDRD). All patients in the study showed a negative pre-transplant Complement-dependent cytotoxic (CDC) cross-match test.
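As an illustration, the simplified 4-variable MDRD equation can be sketched as below. This is a minimal sketch, not the study's code: the function name and the unit conversion from μmol/L are ours, and the coefficient 175 assumes IDMS-traceable creatinine assays (some laboratories use 186 with non-standardized assays).

```python
def egfr_mdrd(scr_umol_l, age_years, female=False, black=False):
    """Estimated GFR (ml/min/1.73 m^2) by the simplified 4-variable
    MDRD equation.

    scr_umol_l -- serum creatinine in umol/L (converted to mg/dL below).
    Assumes the IDMS-traceable coefficient 175.
    """
    scr_mg_dl = scr_umol_l / 88.4  # umol/L -> mg/dL
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

For example, a 50-year-old non-Black man with a serum creatinine of 88.4 μmol/L (1.0 mg/dL) yields an eGFR of roughly 79 ml/min/1.73 m².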
A second, independent validation cohort of kidney transplant patients (n = 67) was used to evaluate the predictive value of pre-transplant donor-specific T-cell alloreactivity for early T-cell mediated allograft rejection. This set of patients was consecutively transplanted from November 2011 to October 2013. Their clinical management was independent of the pre-transplant donor-specific T-cell ELISPOT result, as the assay had not been performed at the time of transplantation.

HLA typing
Automated nucleotide sequencing was performed from genomic DNA by allele-specific PCR amplification of target exons from each locus. Loci sequenced included HLA class I (A, B, and C) and class II (DRB1/3/4/5 and DQA1/DQB1). Nucleotide sequencing was done as previously described [19].

Alloantibody detection
Screening for circulating anti-HLA class I and II alloantibodies in peripheral blood was performed with FlowPRA screening beads (One Lambda Inc.). Antibody specificities of positive samples were determined as previously described [20] using the LabScreen Single Antigen assay (One Lambda Inc.).

IFN-γ ELISPOT assays and donor and recipient cell source
Donor-specific IFN-γ ELISPOT assays were performed following recently described techniques [18,21]. Peripheral blood samples were obtained in heparinized tubes from renal transplant recipients before kidney transplantation. Donor cells were obtained from donor spleens or peripheral blood mononuclear cells (PBMCs) in deceased and living donors, respectively. PBMCs and splenocytes were isolated by standard Ficoll density-gradient centrifugation, frozen in liquid nitrogen and subsequently used for the IFN-γ ELISPOT assay. Deceased-donor splenocytes were CD2-depleted (EasySep® Human CD2 Selection kit, StemCell, France) and living-donor PBMCs were CD3-depleted (human CD3+ Cell Depletion Cocktail, RosetteSep® kit, StemCell, France) and tested in triplicate wells with the respective recipient PBMCs. A positive donor-specific ELISPOT test was defined as ≥25 IFN-γ spots/3×10⁵ PBMCs.
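The positivity rule can be expressed as a short classifier. This is an illustrative sketch only: the function name is ours, and averaging the triplicate wells is our assumption, since the paper states the triplicate design and the 25-spot cutoff but not the summarization rule.

```python
def donor_specific_elispot_positive(spot_counts, threshold=25):
    """Classify a donor-specific IFN-gamma ELISPOT result.

    spot_counts -- IFN-gamma spots per 3e5 recipient PBMCs in each of
                   the triplicate wells.
    Returns True when the mean spot count reaches the positivity
    threshold (25 spots/3e5 PBMCs, as in the study).
    """
    mean_spots = sum(spot_counts) / len(spot_counts)
    return mean_spots >= threshold
```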

Renal allograft histology
Renal allograft biopsies were performed in patients with acute clinical graft dysfunction. All biopsies were scored according to the Banff '09 classification and evaluated blindly.

Statistical analysis
All data are presented as mean ± SD. Groups were compared using the χ²-test for categorical variables; for quantitative variables, one-way analysis of variance or Student's t-test was used for normally distributed data, and the nonparametric Kruskal-Wallis or Mann-Whitney U-test for non-normally distributed data.
A sensitivity/specificity ROC curve analysis was performed to determine the value of the ELISPOT test for predicting early T-cell mediated acute rejection. Stepwise linear regression and binary logistic regression analyses were performed to determine the independent correlation of several variables with the presence of early acute cellular rejection. The statistical significance level was set at p<0.05.
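The ROC analysis can be illustrated with a minimal, dependency-free AUC computation based on the rank (Mann-Whitney) formulation; variable names and example scores are hypothetical, not the study data.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as
    P(score_pos > score_neg) + 0.5 * P(tie), i.e. the Mann-Whitney U
    statistic normalized by the number of positive/negative pairs."""
    wins = 0.0
    for p in scores_pos:       # e.g. ELISPOT spot counts of rejectors
        for n in scores_neg:   # spot counts of non-rejectors
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 indicates no discrimination, while perfectly separated score distributions give 1.0; a value such as the 0.701 reported above lies between these extremes.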

RESULTS

Main clinical and demographic variables associated with anti-donor T-cell sensitization
The main baseline clinical and demographic variables were assessed in all patients for their association with pre-transplant T-cell sensitization. As shown in Table 1, 37/90 (41.1%) patients had detectable anti-donor alloreactivity before transplantation, whereas 53/90 (58.9%) did not. No differences were found regarding the number of previous transplants, cause of ESRD, ethnicity or time on dialysis. No association was observed with the degree of pre-transplant humoral sensitization, for either donor-specific or non-donor-specific anti-class I and II HLA antibodies. Interestingly, anti-donor T-cell alloreactive patients tended to be older individuals receiving allografts from older deceased donors with poor HLA class I matching (Fig. 1). In the validation cohort, there were more T-cell sensitized than non-sensitized patients, but the two groups were comparable regarding the main clinical and immunological characteristics.
Two patients died during the first year; neither presented preformed donor-specific T-cell sensitization. One died of cardiac arrest nine months after transplantation and the other of an opportunistic Pneumocystis jirovecii infection. Three allografts were lost within the first year: one due to vein thrombosis, another due to unresolved urine leakage (both in patients without cellular sensitization), and a third, in a highly T- and B-cell sensitized patient, due to a severe BPAR (mixed ABMR and TCMR) that did not respond to intensive rescue immunosuppressive treatment.

Preformed anti-donor T-cell alloreactivity is associated with early acute TCMR and worse allograft function after kidney transplantation
T-cell alloreactive and non-alloreactive patients were comparable in terms of the main baseline clinical variables, such as the use and type of induction therapy, the choice of maintenance immunosuppression (principally based on CNI drugs, either cyclosporine A or tacrolimus) and steroid withdrawal (Table 2). Highly pre-transplant anti-donor alloreactive patients showed a higher incidence of DGF but a similar incidence of CMV infection. When the global incidence of BPAR was analyzed, no differences were found between T-cell alloreactive and non-alloreactive recipients (Fig. 2); neither ABMR nor TCMR was associated with preformed donor-specific T-cell alloreactivity. Similarly, no association was observed between the pre-transplant T-cell ELISPOT and either graft loss or patient death. To establish whether any specific time frame could best distinguish the TCMR events significantly associated with pre-transplant T-cell sensitization, a receiver operating characteristic (ROC) curve analysis was carried out taking into account all time points of BPAR episodes and the pre-transplant ELISPOT data. Interestingly, the first eight weeks after transplantation proved to be the most accurate time frame for identifying TCMR episodes related to the pre-transplant donor-specific T-cell immune response, with notably high sensitivity and specificity (sensitivity = 75%, specificity = 80%; AUC = 0.701, p = 0.065) (Fig. 3). Using this time period as a binary variable, a significant association was obtained between pre-transplant anti-donor T-cell sensitization and the occurrence of early TCMR (p = 0.022), but not with other rejection events (Fig. 2).
Even though pre-transplant anti-donor alloreactive patients showed worse 6- and 12-month allograft function, these values were probably influenced by the significantly higher donor age in alloreactive compared with non-alloreactive recipients (data not shown). Nonetheless, numerically worse allograft function was consistently observed among pre-transplant alloreactive patients when stratified both by type of kidney transplant (living or deceased donor) and by donor age (older or younger than 50 years) (data not shown).

T-cell depletion provides protection against early TCMR in highly donor-specific T-cell alloreactive patients
Whether the use and type of induction therapy could modulate the impact of preformed highly alloreactive anti-donor T-cell frequencies was further evaluated. While the use of any type of induction therapy did not discriminate patients developing any kind of BPAR (data not shown, p>0.05), patients receiving T-cell depletion with rATG showed a significantly lower incidence of early TCMR than those who did not (1/26, 3.8% vs 14/64, 22%, respectively; p = 0.038) (Fig. 5a). Furthermore, when only highly donor-specific alloreactive patients were analyzed according to their type of induction therapy, eight of 22 (36.4%) patients receiving anti-IL2R (basiliximab) developed early TCMR, compared with only one of 12 (8.3%) patients receiving rATG (p = 0.07) (Fig. 5b). Of note, no protective effect of rATG was observed among non T-cell alloreactive transplant patients (4/38, 10.5%, of non-sensitized patients not receiving rATG experienced early TCMR vs 1/15, 6%, of those receiving rATG; p = NS).

Absence of pre-transplant donor-specific T-cell alloreactivity as a significant protective variable against early TCMR
The most relevant clinical variables influencing the outcome of early TCMR (ethnicity, cold ischemia time, use of T-cell depletion, number of HLA mismatches, occurrence of DGF, type of donor and pre-transplant T-cell sensitization) were assessed with univariate and multivariate Cox regression analysis. Although non-Caucasian ethnicity, longer cold ischemia time, non-use of T-cell depletion and a positive pre-transplant ELISPOT were associated with an increased risk of early TCMR, only the presence of baseline anti-donor T-cell sensitization and the non-use of T-cell depletion as induction therapy were significant independent correlates of the risk of early TCMR (Table 3).

Verification of pre-transplant anti-donor T-cell sensitization as a risk factor for TCMR in a new cohort of kidney transplant patients
The impact of preformed donor-specific T-cell alloreactivity was validated in a second, independent cohort of kidney transplant patients with similar baseline clinical and immunological characteristics. As shown in Table 4, although a rather low specificity and positive predictive value were obtained, a high sensitivity and negative predictive value (88.9% and 95.2%, respectively) were observed for the prediction of early TCMR. As in the training set, highly alloreactive patients receiving T-cell depletion as induction therapy showed a lower incidence of TCMR than those receiving either basiliximab or no induction therapy (0/4 [0%] vs 12/38 [31.6%] vs 1/4 [25%], respectively; p = 0.06).
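These test-performance figures follow directly from a 2×2 contingency table. The sketch below uses hypothetical counts chosen only to be consistent with the reported sensitivity (8/9 ≈ 88.9%) and negative predictive value (20/21 ≈ 95.2%); they are not the study's raw data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # rejectors correctly flagged
        "specificity": tn / (tn + fp),  # non-rejectors correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts (not the study's raw data): 8 true positives,
# 38 false positives, 1 false negative, 20 true negatives.
metrics = diagnostic_metrics(tp=8, fp=38, fn=1, tn=20)
```

With these counts, sensitivity is 8/9 ≈ 88.9% and NPV is 20/21 ≈ 95.2%, while specificity and PPV are low, matching the pattern described above: a negative ELISPOT is useful mainly for ruling out early TCMR.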

DISCUSSION
In this study, we show that high frequencies of donor-specific alloreactive memory/effector T-cell responses are common among patients awaiting a kidney allograft. Furthermore, we report for the first time that this preformed anti-donor T-cell sensitization seems to have a direct negative impact on kidney allograft outcome by favoring TCMR during the early period after transplantation, especially among individuals highly T-cell sensitized against donor antigens and patients not receiving T-cell depleting induction therapy. Of note, this observation was further validated in a subsequent independent cohort of kidney transplant patients with high sensitivity and negative predictive value. Unlike acute antibody-mediated rejection (ABMR), TCMR remains an unpredictable process in clinical practice, since no accurate immune monitoring is currently performed. However, in recent years, a strong deleterious association has been observed between the presence of alloreactive memory T cells and clinical as well as subclinical immune-mediated allograft injury or dysfunction in humans [12,13,22,23]. In this study, using the highly sensitive IFN-γ ELISPOT assay, we emphasize that the immune-mediated events driven by preformed donor-specific memory T-cell clones may not be predictable on the basis of epidemiological background and may occur during the first weeks after transplantation. This is significant because it suggests that delayed clinical or subclinical TCMR might be driven either by persistent alloreactivity or, rather, by de novo naïve T-cell activation, potentially reflecting insufficient immunosuppressive exposure. In fact, in the multivariate analysis performed, the presence of high frequencies of circulating donor-specific memory/effector T cells prior to transplant surgery was shown to be an independent predictor of early TCMR, but not of immune-mediated clinical events occurring later on.
A plausible explanation of the relatively frequent detection of anti-HLA cellular alloreactivity before kidney transplantation may be heterologous immunity, in which T cells, initially primed against infectious agents and environmental antigens, crossreact with allogeneic MHC molecules [24,25]. Nevertheless, whether this or other mechanisms are responsible for the presence of alloreactive memory T-cell responses in the absence of clear allogeneic sensitization deserves further investigation.
A conclusion that can be drawn from our study is the importance of optimal HLA matching, especially for class I molecules, between donor/recipient pairs in order to reduce the chance that preformed memory T cells recognize donor alloantigens. Likewise, we recently showed that the higher the number of HLA mismatches, the greater the frequency of donor-specific memory T-cell responses, particularly among T-cell subsets primed by the direct pathway of alloantigen presentation [26]. Even though not tested in our study, this finding and those reported by others [12][13][14][15][16][17] suggest a role for both CD8+ and CD4+ memory T cells in the anti-donor allogeneic immune response. Furthermore, as previously shown [18,26,27], pre-transplant anti-donor humoral allosensitization did not mirror the allospecific cellular immune response, emphasizing the importance of monitoring both effector arms of adaptive immunity before transplantation.
Even though older individuals showed increased frequencies of alloantigen-specific memory T cells compared with younger patients, younger T-cell alloreactive individuals seem to mount a more effective anti-donor effector immune response, as shown by the significantly higher incidence of early TCMR among younger alloreactive T-cell sensitized patients. A plausible explanation is that although aged recipients exhibit higher numbers of memory T cells with a broader antigen repertoire, these cells have a significantly poorer capacity to mount effective recall effector responses than young memory/effector T cells [28][29][30]. Interestingly, Hricik and colleagues [31] recently reported an increased risk of TCMR and poorer graft function in patients receiving kidney allografts from older deceased donors, suggesting that the more inflammatory milieu triggered in these grafts could facilitate a more effective effector T-cell response among T-cell sensitized individuals owing to higher immunogenicity. In our study, although similarly higher T-cell sensitization was found among individuals receiving older allografts, the most relevant variable influencing the outcome was the recipient's age. This observation stresses that transplant rejection is much more likely in highly T-cell sensitized younger individuals, whereas in older sensitized transplant patients it would be facilitated by the receipt of highly immunogenic tissues from older donors.
An important issue still under debate in clinical kidney transplantation is the choice and type of induction therapy. In our study, we found that T-cell depletion with rATG as induction therapy, in combination with CNI drugs, seemed to provide significantly better protection against TCMR in T-cell sensitized patients than anti-IL2 receptor monoclonal antibodies or no induction therapy. Indeed, we found a significant reduction in early TCMR among highly T-cell sensitized patients receiving rATG compared with transplant recipients given other types of induction treatment or none, whereas rATG provided no benefit over other induction treatments in non T-cell alloreactive individuals, thus suggesting the usefulness of this assay for individualizing the use of such aggressive induction therapy. This finding is in line with previous reports from others and from our group [18,32,33] advocating the use of induction therapy, particularly T-cell depletion, to prevent post-transplant T-cell mediated rejection among highly T-cell sensitized individuals.
The new observations reported in this work underline the interest of implementing this assay in clinical practice, as it provides transplant clinicians with crucial information not currently available: on the one hand, the likelihood of T-cell mediated rejection in the very early phases after transplantation, which cannot be inferred from patients' baseline clinical background; on the other, further refinement of decision-making regarding the use of T-cell depletion as induction therapy, particularly among the most fragile transplant population, namely older patients, in whom, in the absence of humoral sensitization, this potent immunosuppression could be safely avoided.
A main limitation of this study is its retrospective nature. However, the multivariate analysis performed, together with the results obtained, which corroborate those of some previous reports, strengthen our observations and should alert the transplant community to the urgent need to perform prospective, observational studies as well as interventional trials to test our results.
The high sensitivity and particularly high negative predictive value of the ELISPOT test for early TCMR is of great relevance owing to its capacity to rule out the disease. Therefore, patients with no evidence of T-cell sensitization would be suitable candidates for enrollment in such clinical trials, which should ideally be conducted under the auspices of international collaborative networks.

CONCLUSIONS
High levels of donor-specific alloreactive memory T cells may be relatively frequent prior to transplantation, despite the absence of any clinically recognizable sensitizing events. They may directly facilitate the advent of TCMR early after transplantation, particularly in patients not receiving T-cell depleting agents as induction therapy. Therefore, screening for anti-donor T-cell sensitization should be seriously considered in kidney transplant patients. At this point, prospective randomized trials are clearly warranted.