Relative Perioperative Bradycardia Does Not Lead to Adverse Outcomes After Cardiac Transplantation
Abstract
Because the effects of bradycardia after cardiac transplantation are not known, we tested the hypothesis that perioperative bradycardia leads to an increase in adverse outcomes after cardiac transplantation. We conducted a retrospective case–control study with the inclusion criterion of a heart rate (HR) less than 80 bpm during the 1st week after transplantation. Control patients were matched for gender, age and time since transplantation. Of the 174 patients who underwent cardiac transplantation between 1994 and 1997, 34 were identified as having perioperative bradycardia. There were no significant differences in donor ischemic time (180 vs. 183 min, p = 0.88), operating surgeon (p = 0.62) or pretransplant cardiac disease (p = 0.81) between groups. Bradycardic patients were more likely to have received pretransplant amiodarone (OR = 20.4, p < 0.001). Perioperative bradycardia did not lead to increases in cellular rejection (p = 0.72) or vasculopathy (p = 0.79). Patients prescribed pretransplant amiodarone (n = 14) showed a trend toward a delayed time to first rejection episode (31.0 vs. 15.5 days, median, p = 0.07). In conclusion, perioperative bradycardia does not increase adverse outcomes after cardiac transplantation and is associated with pretransplant use of amiodarone. Amiodarone may modify the recipient's immune response by delaying the occurrence of rejection.
Introduction
The incidence of bradyarrhythmias after cardiac transplantation has been reported to be between 15% and 44% (1–4). Reported causes of bradyarrhythmias include surgical trauma to the sinus node artery, increased donor ischemic time and lower temperature of the cardioplegic solution (1,2,4–7). Most episodes resolve spontaneously, although they may require treatment with theophylline, and 6–23% of patients require implantation of a permanent pacemaker (1–3,8). Overall, the prognosis of these patients does not appear to be adversely affected (9). Early bradycardia after cardiac transplantation may be a clinical marker of poorer donor preservation, since several studies have correlated increasing donor ischemic time with the development of bradycardia after transplantation (1,4–6). As poorer donor procurement may increase allograft ischemia and reperfusion injury, we investigated whether early bradycardia after cardiac transplantation is associated with increased adverse outcomes. Currently, there are no reports linking early post-transplantation bradycardia to changes in the incidence of acute cellular rejection or chronic vasculopathy for a period of up to 5 years after transplantation. Thus, the current study was undertaken to identify causal factors for the development of bradycardia (operationally defined in this study as a heart rate less than 80 bpm) during the 1st week after transplantation (perioperative bradycardia), and to investigate whether perioperative bradycardia leads to an increased incidence of cellular rejection, rejection with hemodynamic compromise and allograft vasculopathy.
Materials and Methods
Study population and design
The study was a retrospective, case–control chart review of patients undergoing cardiac transplantation at the University of Alabama at Birmingham (UAB) between 1994 and 1997. All charts and 12-lead electrocardiograms were reviewed for each patient. Inclusion criteria were: (i) a nonpaced heart rate (HR) less than 80 beats per minute (bpm) during the 1st week after transplantation; or (ii) documentation of bradycardia within the 1st week after transplantation in the patient's discharge note. Exclusion criteria were: (i) age less than 18 years and (ii) previous cardiac transplantation. The patients included in the study (bradycardic group) were matched with an equal number of nonbradycardic patients (control group) by age, gender and time in days since transplantation, who were screened against the same inclusion and exclusion criteria as the bradycardic patients. Furthermore, to minimize selection bias, controls were chosen by individuals with no knowledge of clinical outcome.
Risk factors for perioperative bradycardia
To investigate the potential causes for perioperative bradycardia the following were collected: (i) etiology of pretransplant cardiac disease – patients were classified as ischemic, nonischemic or other; (ii) donor ischemic time; (iii) pretransplant bradycardic medications; and (iv) operative surgeon.
Outcome factors
The following primary outcome factors were obtained: (i) number and severity (mean International Society for Heart and Lung Transplantation (ISHLT) score) of acute cellular rejection episodes; (ii) evidence of vasculopathy noted on coronary angiography and the number of stenoses > 70%; (iii) number of rejection episodes complicated by hemodynamic compromise; and (iv) time in days until the first treated and first 3A (by ISHLT biopsy grading system) rejection occurred. Other, nonprimary end-points included mortality, number of patients retransplanted and number of patients undergoing pacemaker implantation.
Statistical analysis
Baseline characteristics for the two groups were compared using either the t-test for continuous variables or chi-square test for all categorical variables except pretransplant beta blockers. Because there were few patients in the study on pretransplant beta blockers, Fisher's exact test was used to compare the two groups. We examined the relationship between each of the predisposing factors and risk of perioperative bradycardia by fitting a logistic regression model.
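To make the categorical comparison concrete, the following minimal sketch (in Python, which is not necessarily the software used in the original analysis) applies a two-sided Fisher's exact test, built directly from the hypergeometric distribution, to the pretransplant amiodarone counts reported in the Results (13/34 exposed in the bradycardic group vs. 1/34 in the control group). It reproduces the ~20.4 effect estimate and a p-value well below 0.001:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(x exposed in row 1 | fixed margins)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Pretransplant amiodarone counts from the Results section:
# bradycardic 13/34 exposed, control 1/34 exposed.
a, b, c, d = 13, 21, 1, 33
odds_ratio = (a * d) / (b * c)       # (13 * 33) / (21 * 1), about 20.4
p_value = fisher_exact_2x2(a, b, c, d)
```

Note that 20.4 is the sample odds ratio of the 2×2 table, which is also the effect measure produced by the logistic regression model described above.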
Univariate analyses were performed by using the t-test to evaluate the implications of bradycardia on number and severity of rejection episodes, after applying a log transformation to the number of rejection episodes. A chi-square test was used to evaluate the implications of bradycardia on vasculopathy and number of patients with at least one episode of hemodynamic compromise. Fisher's exact test was used to compare the number of patients with vascular stenosis > 70% between the two groups. Finally, we used the log-rank test for univariate comparisons of the time to first treated and first 3A rejection among the groups. Univariate analyses comparing the outcomes on the basis of amiodarone usage were performed in a similar manner.
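For readers unfamiliar with the log-rank test used for the time-to-rejection comparisons, a self-contained sketch follows (pure Python; the follow-up times shown are hypothetical illustrative data, not the study data). At each distinct event time it tallies observed vs. expected events in group 1 under the null hypothesis of identical hazards, then refers the squared, variance-scaled difference to a chi-square distribution with 1 degree of freedom:

```python
from math import erfc, sqrt

def logrank_test(times1, events1, times2, events2):
    """Two-sample log-rank test.

    times*:  follow-up time per patient (e.g. days to first rejection)
    events*: 1 if the event was observed, 0 if censored
    Returns (chi-square statistic, p-value) on 1 degree of freedom.
    """
    data = ([(t, e, 1) for t, e in zip(times1, events1)] +
            [(t, e, 2) for t, e in zip(times2, events2)])
    obs1 = exp1 = var = 0.0
    for tj in sorted({t for t, e, _ in data if e == 1}):
        n1 = sum(1 for t, _, g in data if g == 1 and t >= tj)  # at risk, group 1
        n2 = sum(1 for t, _, g in data if g == 2 and t >= tj)  # at risk, group 2
        d1 = sum(1 for t, e, g in data if g == 1 and t == tj and e == 1)
        d2 = sum(1 for t, e, g in data if g == 2 and t == tj and e == 1)
        n, d = n1 + n2, d1 + d2
        obs1 += d1
        exp1 += d * n1 / n                     # expected events in group 1
        if n > 1:                              # hypergeometric variance term
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    chi2 = (obs1 - exp1) ** 2 / var
    return chi2, erfc(sqrt(chi2 / 2))          # chi-square(1 df) tail area

# Hypothetical illustration only (NOT the study data): days to first
# treated rejection in two groups, all events observed.
g1 = [5, 9, 14, 21, 30]
g2 = [40, 55, 61, 70, 90]
chi2, p = logrank_test(g1, [1] * 5, g2, [1] * 5)
```

With the clearly separated illustrative groups above, the test returns a chi-square statistic above the 3.84 critical value, i.e. p < 0.05.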
To assess for potential confounding effects of donor ischemic time and OKT3 usage on the relationship between bradycardia and/or amiodarone usage and time to rejection, Cox regression models were used. Such models allow the effect of each variable to be ascertained after controlling for the other variables in the model. For each model, an interaction between time to event and donor ischemic time was included in the model to account for the fact that the proportional hazards assumption is questionable for this variable.
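In symbols, the Cox model just described can be sketched as follows (covariate names are ours for illustration: amio = pretransplant amiodarone, OKT3 = post-transplant OKT3 use, isch = donor ischemic time; the use of log t for the time interaction is our assumption, as the exact time function is not specified):

```latex
h(t \mid x) = h_0(t)\,\exp\!\big(
    \beta_1\,\mathrm{amio}
  + \beta_2\,\mathrm{OKT3}
  + \beta_3\,\mathrm{isch}
  + \beta_4\,(\mathrm{isch} \times \log t)
\big)
```

The isch × log t interaction lets the hazard ratio associated with donor ischemic time change over follow-up, relaxing the proportional hazards assumption for that covariate while leaving it intact for the amiodarone and OKT3 terms.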
Results
Baseline characteristics
Between 1994 and 1997, 174 patients underwent cardiac transplantation at UAB. From the beginning of 1996, all patients underwent transplantation via the bicaval approach; prior to this, patients were transplanted with atrial anastomosis by the Lower and Shumway technique (10). Twenty-two patients in each group underwent transplantation using the bicaval technique; the others received atrial anastomosis. All patients were maintained on a triple immunosuppressive regimen including cyclosporine, prednisone and azathioprine. In some cases mycophenolate mofetil was used instead of azathioprine. Muromonab (OKT3) was used for refractory rejection or as cyclosporine-sparing therapy in patients with renal dysfunction early after transplantation. There were no differences between the groups with respect to OKT3 use (p = 0.65). The majority of patients (> 90%) in each group (and subgroups) were taking either azathioprine or mycophenolate mofetil, cyclosporine, and prednisone. Thirty-four patients were identified as having perioperative bradycardia, an incidence of 19.5% in the total study population. All rhythms were sinus rhythm or sinus bradycardia, except in four patients who had junctional rhythms. An equal number of controls were identified, as noted above. Table 1 summarizes the baseline characteristics for bradycardic vs. control patients. As expected by design, the perioperative heart rate was significantly different between the two groups at 1 week (71 bpm bradycardic vs. 108 bpm control, p < 0.001). However, there were no significant differences in the heart rates at 3 weeks post transplantation (92 vs. 96 bpm, p = 0.16). Furthermore, as expected from the matching of the groups, no differences were observed between the groups with respect to age, gender, and time since transplantation to the end of the study (12/31/00).
|  | Bradycardic (n = 34) | Control (n = 34) | p-value |
| --- | --- | --- | --- |
| Heart rate < 1 week (bpm) | 71 ± 12.1 | 108 ± 16.6 | < 0.001 |
| Heart rate at 3 weeks (bpm) | 92 ± 14.0 | 96 ± 9.9 | 0.16 |
| Age (years) | 53 ± 10.5 | 53 ± 11.7 | 0.90 |
| Male:female | 28:6 | 28:6 | 1.00 |
| Time since transplantation (days) | 1562 ± 534.6 | 1639 ± 437.4 | 0.52 |
Pre- and peri-transplant variables
Table 2 illustrates the relationship between the pre- and peri-transplant variables and the risk of bradycardia. There was a significant difference in the use of pretransplant amiodarone, with 13 of 34 patients in the bradycardic group using amiodarone before transplantation, as opposed to only one patient in the control group (OR = 20.4, p < 0.01). No differences between the two groups were observed with respect to the use of pretransplant digoxin (p = 0.15) or beta blockers (p = 0.48). There were no differences between the two groups with respect to mean donor ischemic time (180 vs. 183 min, p = 0.88) or operating surgeon (p = 0.62). Two patients in the bradycardic group were on procainamide and one patient was on propafenone; no patients in the control group were on class I antiarrhythmic agents. Twenty patients in the bradycardic group had an ischemic cardiomyopathy vs. 21 in the control group (p = 0.81). Each group had one patient with congenital heart disease, and there was one patient in the bradycardic group with hypertrophic cardiomyopathy.
|  | Bradycardic (n = 34) | Control (n = 34) | p-value |
| --- | --- | --- | --- |
| Pretransplant ischemic heart disease | 59% | 62% | 0.81 |
| Pretransplant beta blockers | 9% | 18% | 0.48 |
| Pretransplant digoxin | 44% | 62% | 0.15 |
| Pretransplant amiodarone | 38% | 3% | < 0.01 |
| Donor ischemic time (min) | 181 ± 60.0 | 183 ± 71.3 | 0.88 |
| Post-transplant OKT3 | 26% | 32% | 0.65 |
Effect of bradycardia on post-transplant outcomes
Table 3 compares post-transplant outcomes between the two groups. No significant differences were observed between bradycardic and control patients with regard to number of cellular rejection episodes (p = 0.36), mean severity of cellular rejection episodes (p = 0.72), number of patients with allograft vasculopathy (p = 0.79), number of patients with severe stenosis (p = 1.00), or number of rejection episodes complicated by hemodynamic compromise (p = 0.23). Lastly, there was no difference in the left and right ventricular ejection fractions within 1 week after transplantation (p = 0.43 and p = 0.62, respectively).
|  | Bradycardic (n = 34) | Control (n = 34) | p-value |
| --- | --- | --- | --- |
| Number of rejection episodes | 5.8 ± 4.5 | 6.2 ± 3.5 | 0.36 |
| Severity of rejection episodes | 2.3 ± 0.56 | 2.2 ± 0.63 | 0.72 |
| Number of patients affected by allograft vasculopathy | 10 | 9 | 0.79 |
| Patients with vascular stenosis > 70% | 3 | 4 | 1.00 |
| Number of episodes of hemodynamically compromising rejection | 5 | 9 | 0.23 |
Figures 1 and 2 demonstrate Kaplan–Meier plots of time to 1st treated rejection (Figure 1) and time to 1st 3A rejection (Figure 2). There was no difference in the median time to 1st treated rejection episode (23.0 days in the bradycardic group vs. 17.5 days in the control group, p = 0.53) or in the time to 1st 3A rejection (42.5 days bradycardic group vs. 38.5 days control, p = 0.96). Therefore, these data suggest that perioperative bradycardia is not a marker for a different outcome after cardiac transplantation.
Figure 1: Kaplan–Meier plot of time in days to first treated rejection episode in bradycardic and control patients. The plot shows no difference between the bradycardic and control groups [median time 23.0 days in the bradycardic group vs. 17.5 days in the control group, p = 0.53].
Figure 2: Kaplan–Meier plot of time to first 3A rejection in the bradycardic and control groups. There was no statistical difference between the groups [median time 42.5 days in the bradycardic group vs. 38.5 days in the control group, p = 0.96].
Effect of amiodarone usage on post-transplant outcomes
As amiodarone possesses immune-altering effects (11–13), we performed a subanalysis comparing outcomes for patients receiving pretransplant amiodarone vs. patients not receiving amiodarone. No significant differences were observed between the two groups with regard to number of cellular rejection episodes (p = 0.19), mean severity of cellular rejection score (p = 0.25), number of patients with allograft vasculopathy (p = 1.00), or number of patients with severe stenosis (stenosis > 70%, p = 0.33). Also, there was no difference in the left and right ventricular ejection fractions, measured by echocardiography, within the first week after transplantation between the amiodarone and non-amiodarone groups (p = 0.75 and 0.69, respectively). As stated previously, > 90% of patients in both groups were taking azathioprine or mycophenolate mofetil, cyclosporine and steroids. Amiodarone usage was associated with a delay in both time to first treated rejection (31.0 vs. 15.5 days, p = 0.07; Figure 3) and time to first 3A rejection (74.5 vs. 35.5 days, p = 0.14; Figure 4), although neither comparison reached a traditional level of statistical significance. Lastly, fewer patients receiving pretransplant amiodarone (1/14) developed hemodynamically compromising rejection, compared to 13/54 in the group without amiodarone (p = 0.27). Although nonsignificant, likely owing to the small sample size, this suggests a protective role of amiodarone that would be consistent with its effect on cellular rejection.
Figure 3: Kaplan–Meier plot of time to 1st treated rejection in patients taking pretransplant amiodarone vs. those not taking amiodarone (median time 31.0 vs. 15.5 days, p = 0.07).
Figure 4: Kaplan–Meier plot of time to 1st 3A rejection in patients taking pretransplant amiodarone vs. those not taking amiodarone (median time 74.5 vs. 35.5 days, p = 0.14).
As stated above, 13 of the 14 patients taking amiodarone were in the bradycardic group. Hence, it is impossible to determine whether there is an interaction between bradycardia and amiodarone usage. In other words, we cannot determine whether there is a potential protective effect of amiodarone usage among nonbradycardics as well or, rather, if the protective effect is only apparent in bradycardics. When the analysis was restricted to bradycardic patients, patients who received pretransplant amiodarone had a delay in time to 1st treated rejection episode compared to bradycardic patients not taking amiodarone (33 vs. 15 days, median values, p = 0.02) and time to 1st treated 3A rejection (88 days vs. 35 days, median values, p = 0.04).
Multivariable comparisons of amiodarone usage on time to rejection
During our analysis of baseline variables, we identified two that were related to outcome and to pretransplant amiodarone use: OKT3 usage and donor ischemic time. Since these variables could confound the relationship between amiodarone and time to rejection, we fitted Cox regression models including terms for OKT3 usage, donor ischemic time, and amiodarone usage. After controlling for donor ischemic time and OKT3 usage, a protective effect of pretransplant amiodarone use on the time to 1st treated rejection remained for all patients (HR = 0.43, p = 0.03), although the effect was weaker than that suggested by the univariate analysis. However, after controlling for the other variables, there was no effect associated with amiodarone usage with regard to time to 1st 3A rejection (p = 0.29).
Other outcomes
There were three deaths in the bradycardic group, secondary to varicella pneumonia, disseminated aspergillosis, and post-transplant lymphoproliferative disorder. There were two deaths in the control group, from pulmonary embolism and refractory rejection. In the bradycardic group, one patient had a positive T-cell cross-match and underwent retransplantation for subsequent severe allograft dysfunction. One patient in the control group underwent retransplantation for allograft vasculopathy. One patient in the bradycardic group had a left ventricular assist device prior to transplantation vs. two patients in the control group. Pacemakers were implanted in two patients in the bradycardic group (both during the initial hospitalization) vs. none in the control group. Thirteen patients in the bradycardic group were treated with theophylline compared to two patients in the control group (p = 0.001). Diltiazem, which is used both to control post-transplant hypertension and to increase cyclosporine levels, was used in nine bradycardic patients and 18 controls during the initial hospitalization (p = 0.03). As stated above, there were no differences between the groups with respect to the use of OKT3 (p = 0.65, Table 2).
Discussion
Several findings in our study are in general agreement with previous reports. Earlier studies have shown an incidence of bradycardia between 15% and 44% (1–4); this is similar to the 19.5% incidence in our study population. In previous reports the most common rhythm disorder found was sinus node dysfunction or junctional bradycardia (1–3,14). This also agrees with our data, as the majority of the bradycardic group (30/34 patients) manifested sinus rhythm or sinus bradycardia. We demonstrate, in agreement with previously published reports, that perioperative bradycardia in most cases resolves by the third week after transplantation (1–6). Our rate of pacemaker implantation of 5.9% (2/34 patients) is also similar to that of other studies (1,3,5,6).
The hypothesis in our study was that early bradycardia might be a marker for a poorer outcome after cardiac transplantation. The rationale was that several studies have shown that increased donor ischemic time correlates with the development of bradycardia (1,5,6). The mechanism for this effect may be the release of adenosine in the allograft in response to injury or reperfusion (15). The transplanted, denervated heart has been shown to be supersensitive to the effects of adenosine (16), which can suppress sinus node automaticity and AV node conduction. The role of adenosine in perioperative bradycardia is supported by studies showing improvement in heart rates with adenosine antagonists such as theophylline (14,17). Since increased allograft ischemia leads to increased reperfusion injury, likely due in part to the up-regulation of allograft adhesion molecules (18), we investigated whether early bradycardia, a possible surrogate of poorer donor procurement, leads to an increase in adverse outcomes. Contrary to our hypothesis, the study demonstrates that perioperative bradycardia does not lead to any increase in acute cellular rejection, rejection with hemodynamic compromise or allograft vasculopathy for a period of 3–5 years after transplantation. However, since intravascular ultrasound, a more sensitive indicator of vasculopathy than angiography, was not used to document evidence of chronic rejection, we cannot exclude the possibility that early bradycardia may have an impact on the development of allograft vasculopathy. Our results extend the findings of other studies that have shown that perioperative bradycardia is not associated with rejection within the first months after transplantation (1,7,19). The data also agree with the study by Heinz et al., which demonstrated that bradycardia was not associated with decreased survival several years after transplantation (9).
Note that bradycardia occurring late after transplantation (months or years) can be the result of severe rejection and has a poor prognosis (20). Our data demonstrate that donor ischemic time, operative technique (bicaval and biatrial anastomosis transplantation techniques were equally represented in both bradycardic and control groups), or surgeon are not associated with the development of perioperative bradycardia.
It is possible that the patients in our study who were taking pretransplant amiodarone were in fact an ‘innocent’ group of patients that may have masked a sicker group with more ominous causes of perioperative bradycardia. To evaluate this possibility, we compared the control group vs. the bradycardic non-amiodarone patients. However, we did not find a significant difference in time to 1st rejection episode (17.5 vs. 15 days, respectively) or to 1st 3A rejection (38.5 vs. 35 days, respectively) between the groups. In addition, there was no difference in the total number of rejection episodes (p = 0.72) or the severity of rejection episodes between these two groups (p = 0.18). Thus, our original assumption that perioperative bradycardia is a surrogate for poorer donor procurement and implantation is not supported by our findings.
Our data demonstrate that pretransplant use of amiodarone is a strong risk factor for the development of perioperative bradycardia. Amiodarone, an antiarrhythmic agent with multiple actions, has a half-life of approximately 30 days. In recent years several studies have demonstrated that it can be used safely in patients with left ventricular dysfunction (21,22), and it has been increasingly used in patients with heart failure for control of atrial and ventricular arrhythmias. Its main bradycardic effects are mediated through its beta-receptor-blocking activity. Given that the effects of amiodarone last several weeks after transplantation (23), our finding that pretransplant amiodarone is associated with bradycardia is plausible, and agrees with several studies which demonstrated that pretransplant amiodarone leads to lower post-transplant heart rates (23–26). The study by Montero et al. (2) looked at the effect of pretransplant amiodarone on pacing requirements after transplantation. That study found that patients requiring pacing for more than 24 h after transplantation had an 11% increase in the use of pretransplant amiodarone vs. cardiac transplant recipients not requiring pacing, although the difference was not statistically significant. Our study did not group patients according to pacing requirements, but addressed a broader population of patients with a ‘relative bradycardia’ of less than 80 bpm within the first week after transplantation. Although this is not a classical definition of bradycardia, we felt that it was physiologically relevant in transplanted hearts, which normally manifest heart rates above 90 bpm (27). However, it is possible that severe perioperative bradycardia (heart rate < 60 bpm) after transplantation may be due to an operative problem or severe rejection and may be associated with a poorer outcome.
An interesting and unexpected finding in our study emerged from the subanalysis of patients on pretransplant amiodarone. The data suggest that patients on pretransplant amiodarone had a possible delay in onset of the first treated rejection episode and of rejection with hemodynamic compromise, although these results were only a trend and did not reach statistical significance. This implies a possible immunomodulatory role for amiodarone. Several studies have demonstrated that amiodarone can affect cellular and innate immunity, responses to infections and the development of autoimmunity (11–13,28). Thus, amiodarone may have immune-altering properties that interfere with a recipient's response to alloantigens. Another potential explanation for the amiodarone effect is an interaction between amiodarone and cyclosporine. Several studies have suggested that amiodarone reduces clearance of cyclosporine via interference with the cytochrome P-450 enzyme system (29,30). Thus, it is possible that amiodarone could delay the onset of acute cellular rejection by increasing cyclosporine levels. However, we compared cyclosporine levels 1 month after transplantation in the bradycardic patients taking pretransplant amiodarone vs. the bradycardic patients not on amiodarone and found no significant difference between the groups (p = 0.69). Thus, our study does not provide evidence of a significant amiodarone–cyclosporine interaction at 1 month post transplantation.
It should also be noted that extensive earlier studies from the Cardiac Transplant Research Database (CTRD) addressing pretransplant risk factors for rejection did not identify amiodarone as a risk factor for time to first rejection, cumulative rejection, late rejection, recurrent rejection, the incidence of rejection associated with hemodynamic compromise, or rejection death (31–34). Our results on the effect of amiodarone on rejection phenomena should be considered an interesting observation and should prompt further studies in a broader population to determine definitively whether amiodarone alters the post-transplantation immune response in the current era, especially given the introduction of new immunosuppressive regimens (35).
In summary, our study demonstrates that perioperative bradycardia is not a marker for increases in acute cellular or chronic rejection after cardiac transplantation. Amiodarone appears to be a significant risk factor for the development of perioperative bradycardia. Our results suggest a possible immunomodulatory role for amiodarone. However, a broader study in an unselected patient population transplanted in the current era is needed to further assess this effect.
Limitations of this study
As in any retrospective study, one is limited by the information available, and it is possible that eligible patients escaped the entry criteria. Specifically, since we only considered patients with nonpaced electrocardiograms, a patient who was bradycardic and atrially paced throughout the whole of the 1st week post transplant could have escaped the entry criteria if the treating physician did not document an indication of ‘post-operative bradycardia’ or ‘chronotropic incompetence’ in the medical record. As it is the policy of our transplant group to use such terminology routinely, we do not believe this to be a problem with the analysis, but it cannot be totally excluded. Furthermore, patients were identified by an ECG taken at one point in time, not by continuous telemetry data, as telemetry strips were often not available. Hence, it is possible that patients in the control group manifested heart rates less than 80 bpm at some point during the first week. In addition, several donor factors that may have had an important impact on the development of perioperative bradycardia (donor age, highest dose of inotropes, lowest blood pressure, and mode of brain death) were not available from the data records used in this study. However, there is no reason to believe that these data were missing differentially between the groups, so their absence should not bias the comparisons reported.
Acknowledgments
The authors would like to thank the members of the Heart Transplantation Team at the University of Alabama at Birmingham for their ideas and review of the concept of this study, and their help with access to data contained in the Heart Transplant Database at UAB. Specifically, we thank James K. Kirklin, David C. McGiffin, Barry K. Rayburn, Brian A. Foley, and Mark Aaron. In addition, we thank David C. Naftel, PhD, and the staff of the UAB Transplant Analysis Statistical Center for their help, most importantly in the identification of the appropriate control population.