A fresh take on whole blood
Fresh whole blood (FWB) transfusion disappeared from civilian medicine decades ago. Storing blood as components was found to allow optimal biologic preservation ex vivo, to facilitate exacting quality control and targeted replacement therapy, and to help preserve resources. FWB has continued to be used by the US military, however, particularly in remote locations where apheresis platelet (aPLT) collections are impractical. The “walking blood bank,” in which prescreened donors can be called upon at a moment's notice to provide blood for their fellow soldiers, is a well-established model within the military. It is a truism that blood bank physicians prefer quality-controlled, nucleic acid–tested, stored blood components over FWB. As Israeli Defense Forces consultant Dr Uri Martinowitz once opined, “The best way to cause a heart attack in a blood banker is to tell them, use fresh blood!”1 Some surgeons, though, both in the military and in civilian communities, contend that FWB remains the best therapy for severely bleeding patients. “Whole blood is what they are losing,” begins the argument. With stored red blood cells (RBCs) coming under fire for potentially causing harm to patients, the drumbeats for FWB have grown louder in some quarters. During the Iraq and Afghanistan wars, FWB was often the only product available at forward combat support hospitals. Some investigators are now asking whether FWB is clinically superior to the traditional combination of RBCs, aPLTs, and fresh-frozen plasma (FFP) and whether it should be provided even when these components are available. A related question is whether FWB causes less harm to patients than components that undergo prolonged storage in the blood bank.
In this issue of TRANSFUSION, Perkins and colleagues2 report a retrospective analysis of trauma patients who had been massively transfused at a single US combat support hospital in Baghdad, Iraq, from 2004 through 2006. The investigators wanted to learn whether survival was higher among massively transfused patients who had received FWB rather than aPLTs. The study population was composed almost entirely of young adult males with penetrating injuries. During resuscitation, 85 patients had been transfused with FWB but not aPLTs, while 284 had received aPLTs but not FWB. Patients in both arms had also been transfused with RBCs, plasma, and cryoprecipitate under a standardized massive transfusion protocol.1 On admission, patients in the FWB arm had significantly higher injury severity scores, more pronounced acidosis, and lower PLT counts. Glasgow Coma Scores, international normalized ratio, and other variables were similar between the two arms. Of note, almost all patients receiving FWB were treated during the first 10 to 11 months of the 36-month study period. In November 2004, aPLTs became available at the study hospital, and FWB use there plummeted. The critical question was, had switching from FWB to aPLTs reduced survival? The investigators compared 24- and 30-hour survival between the FWB and aPLT study arms and found it to be equivalent. The authors concluded that it is premature to recommend FWB for the routine management of civilian trauma. However, where blood components are unavailable, FWB is a feasible alternative.2
FWB and component therapy have been compared directly in only a handful of randomized controlled trials. The first was reported in 1988 by Mohr and colleagues,3 who randomly assigned 27 adult cardiac surgical patients to receive either 1 unit of FWB or 10 units of PLT concentrates at the conclusion of cardiopulmonary bypass (CPB). Platelet aggregation in vitro improved significantly in the FWB group but not in the PLT group. However, 24-hour postoperative blood loss was not different between the two study arms. Manno and coworkers4 randomly assigned 161 pediatric cardiac surgical patients to be transfused after CPB with one of three products: 1) “very fresh” WB (<6 hr old); 2) WB 24 to 48 hours old; or 3) reconstituted WB (a control product composed of 1 unit each of RBCs, FFP, and PLT concentrate). Mean 24-hour postoperative blood loss was lower in both WB arms than in the control arm (50.9 mL/kg vs. 44.8 mL/kg vs. 74.2 mL/kg, p = 0.03), and PLT aggregation in vitro was inferior in the control arm. Friesen and colleagues5 reported a study of 32 infants undergoing cardiac surgery who were randomized to receive autologous WB or standard components for post-CPB transfusions; no difference in 24-hour blood loss was observed. Triulzi and colleagues6 published a similar study in adult cardiac surgical patients in 1995. Seventy patients were randomly assigned to receive autologous WB, autologous PLT-rich plasma (PRP), or standard components after CPB (control). Patients who received autologous WB or PRP had significantly less postoperative mediastinal drainage than controls; however, there was no difference in the proportion of patients requiring RBC transfusion. More recently, Friesen and colleagues5 randomly assigned 64 infant cardiac surgical patients to receive either FWB or components for CPB circuit priming. Twenty-four-hour chest tube drainage was significantly lower in the FWB group than in the control group (7.7 mL/kg vs. 11.8 mL/kg, p = 0.03). These results, as well as those of Manno and coworkers,4 conflict with those of a 2004 study published in the New England Journal of Medicine by Mou and colleagues.7 Two hundred children less than 1 year old undergoing cardiac surgery were randomized to either FWB or reconstituted WB for CPB circuit priming. The primary end point was a composite of survival and length of stay in the intensive care unit (ICU). Survival was equivalent, but ICU length of stay was significantly longer in the FWB arm. Postoperative chest tube output and transfusion requirements were not different. In summary, some studies in cardiac surgery suggest that FWB might provide better hemostatic activity than component therapy, but the data are limited and conflicting. No randomized controlled trials comparing FWB with components have been performed in the setting of trauma.
The study by Perkins and coworkers2 is a sequel of sorts to a 2009 article published by the same group. Spinella and colleagues8 reported a retrospective analysis of a separate set of 354 trauma patients who had received at least 1 unit of RBCs in Iraq or Afghanistan between 2004 and 2007. Two study arms were defined: patients who had received FWB, RBCs, and plasma but not aPLTs and patients who had received RBCs, plasma, and aPLTs but not FWB. In striking contrast to the current report, the 2009 study found that FWB conferred a 13% survival benefit relative to component therapy. The two studies probably arrived at such different answers at least in part because a survivorship bias complicated the 2009 study. Survivorship bias is considered to have confounded many prior studies of trauma resuscitation, such as those comparing patients receiving high versus low ratios of FFP to RBCs.9
The concept of survivorship bias is perhaps best illustrated by an historical anecdote concerning bombers during World War II. Aerial combat missions were risky, and planes flying out of British air bases were often shot down. Many planes made it back to base only after being hit with bullets fired by German anti-aircraft guns. The Allies decided to reinforce the planes with armor plating in an attempt to boost survival. To minimize weight, they decided to add armor only where it would provide the greatest benefit. So workers rigorously tabulated the locations of bullet holes on a large number of planes. They found that most bullet holes were distributed on the wings and tail surfaces, while only a few bullet holes were located on the cockpit or engines. The engineers recommended adding armor plating to the wings and tail surfaces. Mathematician Abraham Wald pointed out that this was exactly the wrong approach. The armor plating needed to go where the fewest bullets had hit. Planes hit in the cockpit or engines tended to be the ones that had failed to return home, so they had been excluded from analysis.10
In the study by Perkins and colleagues,2 the investigators recognized that it always takes longer to issue FWB (a minimum of 30 min) than aPLTs (approx. 5 min). Many trauma patients die within the first hour after admission.11 Thus, severely injured patients might live long enough to receive aPLTs, but not long enough to receive FWB, artificially inflating survival in the FWB arm. To adjust for this, the investigators excluded any patient who died within 30 minutes of admission and also excluded any patient who failed to receive at least 10 units. These two criteria had the effect of removing almost all early deaths from both study arms, thereby attenuating a possible survivorship bias. In contrast, the study by Spinella and coworkers8 included patients transfused with only 1 unit of RBCs, potentially creating a survivorship bias favoring the FWB arm.
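To make the mechanics of this adjustment concrete, the following is a minimal illustrative simulation; every number in it (overall mortality, the distribution of time to death, the issue times) is an assumption chosen for illustration, not data from either study. It shows how a product that takes roughly 30 minutes to issue can appear to confer a survival advantage over one issued in about 5 minutes simply because the sickest patients die before the slower product can reach them, and how excluding deaths within the first 30 minutes, analogous to the exclusion applied by Perkins and colleagues,2 largely removes that artifact.

```python
# Illustrative simulation of survivorship bias (hypothetical numbers only).
# Patients who die before a product can be issued never appear in that
# product's study arm, so the slower-to-issue product "inherits" a
# healthier-looking population.
import random

random.seed(1)

def simulate(issue_minutes, n=20_000):
    """Return 24-hour survival among patients who lived long enough to
    receive the product, before and after excluding deaths within 30 minutes."""
    received = survived = 0
    received_adj = survived_adj = 0
    for _ in range(n):
        dies = random.random() < 0.30                            # assumed overall mortality
        t_death = random.expovariate(1 / 60) if dies else None   # minutes; early deaths common
        if t_death is not None and t_death < issue_minutes:
            continue                                             # never received the product
        received += 1
        alive_24h = t_death is None or t_death > 24 * 60
        survived += alive_24h
        if t_death is None or t_death >= 30:                     # exclude <30-min deaths
            received_adj += 1
            survived_adj += alive_24h
    return survived / received, survived_adj / received_adj

for label, minutes in [("aPLT (issued in ~5 min)", 5), ("FWB (issued in ~30 min)", 30)]:
    raw, adjusted = simulate(minutes)
    print(f"{label}: raw 24-hr survival {raw:.1%}; "
          f"after excluding <30-min deaths {adjusted:.1%}")
```

In a run of this sketch, raw survival in the FWB arm exceeds that in the aPLT arm by several percentage points even though the two products are identical in effect; after the 30-minute exclusion, the arms converge. That is the artifact the exclusion criteria were designed to attenuate.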
In their discussion, Spinella and coworkers8 speculated that the lower survival they observed among patients receiving components might be attributable, at least in part, to adverse effects of the anticoagulant and additive solutions used to manufacture and preserve blood products. Alternatively, they hypothesized, the lower survival might have been due to the biochemical and biomechanical derangements known to occur in blood components during storage. Their discussion touched on a wider debate that has been going on for many years and has recently been reignited: Is old blood bad for patients? The basic argument is as follows:
1. RBCs and other components were licensed for use without the sort of safety and clinical outcomes studies that would be required of new therapeutic agents coming onto the market today. The maximum 42-day storage limit for RBCs, for instance, was based on radiolabeling studies in volunteers, where success was defined simply as 75% or more recovery of transfused cells at 24 hours. The primary objective of RBC transfusion, improved oxygen delivery to the tissues, was not examined when RBCs were licensed.
2. Myriad in vitro changes accumulate during RBC storage, collectively termed the “RBC storage lesion.” Intracellular 2,3-diphosphoglycerate (2,3-DPG), essential for off-loading oxygen from hemoglobin (Hb), drops to undetectable levels after a couple of weeks of refrigerated storage. ATP levels fall as well, the extracellular pH drops, and RBC membranes become more fragile and less deformable. Extracellular potassium and free Hb levels rise, as do levels of histamine, soluble lipids, interleukin-1, tumor necrosis factor, and so forth. Like bananas that turn from green to yellow to black, it is supposed, RBCs grow progressively less healthy and more toxic the longer they are stored.
But many of the variables that make up the RBC storage lesion (e.g., 2,3-DPG exhaustion) are reversed after the RBCs are transfused into a physiologic environment. So is transfusing stored blood components directly harmful to patients? Many studies over the years have purported to show that it is. The most prominent recent article to address this topic was published in 2008 by Koch and colleagues.12 They reported a single-center, retrospective analysis of 6002 cardiac surgery patients who had received RBC units stored for 14 days or less or for more than 14 days. Patients receiving older units were reported to have significantly higher in-hospital mortality (2.8% vs. 1.7%, p = 0.004). Although this study is provocative, it is important to be cautious about attributing cause and effect in any retrospective study because of the inherent difficulty in eliminating biases and confounding factors. In the study by Koch and colleagues,12 for example, more patients in the “old blood” group than in the “fresh blood” group received 10 or more RBC units. Receiving a high number of transfusions is an important potential confounder, as recipients of more blood tend to have more severe illness and, consequently, lower survival.
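A toy simulation can make this kind of confounding concrete; all of the parameters below are invented for illustration and bear no relation to the actual data of Koch and colleagues12 (the grouping rule used here is also a simplification, not the one they applied). In this sketch, storage age has no causal effect on mortality whatsoever, yet a crude comparison still shows higher mortality in the “old blood” group, because sicker patients receive more units and are therefore more likely to be exposed to at least one older unit.

```python
# Toy simulation of confounding: storage age has NO causal effect here,
# yet the "old blood" group shows higher crude mortality because sicker
# patients receive more units and are more likely to get an older unit.
# All parameters are assumptions chosen for illustration.
import random

random.seed(2)

groups = {"fresh": [0, 0], "old": [0, 0]}             # group -> [patients, deaths]
for _ in range(50_000):
    severity = random.random()                        # latent illness severity, 0 to 1
    n_units = 1 + int(severity * 12)                  # sicker patients get more RBC units
    died = random.random() < 0.02 + 0.25 * severity   # mortality depends on severity only
    # Assume each unit independently has a 40% chance of being stored >14 days;
    # a patient counts as "old blood" if any transfused unit was older.
    any_old = any(random.random() < 0.40 for _ in range(n_units))
    group = "old" if any_old else "fresh"
    groups[group][0] += 1
    groups[group][1] += died

for group, (n, deaths) in groups.items():
    print(f"{group} blood group: n = {n}, crude mortality = {deaths / n:.1%}")
```

The mortality gap that emerges in this sketch is entirely an artifact of confounding by illness severity and transfusion burden, which is why crude comparisons of “old” versus “fresh” blood recipients require careful adjustment and why randomized trials are ultimately needed to settle the question.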
In spite of its limitations, the study by Koch and coworkers12 generated a great deal of publicity and discussion. Conflicting studies showing no relationship between the age of blood and survival have also been published recently,13,14 to considerably less fanfare. The data generated by Perkins and coworkers2 deserve to be acknowledged and included in the ongoing conversation about stored blood. Here is a study of a hospital that abruptly switched from using blood that was as fresh as it gets (FWB was typically transfused within 8 hr of collection) to using stored RBC units that were 33 days old on average . . . and there was no discernible impact on survival among hundreds of massively transfused trauma patients. Itself retrospective, the study by Perkins and coworkers2 is far from definitive, but it does reinforce the idea that a state of real equipoise currently exists regarding the question of fresh versus old blood.15 It is hoped that ongoing randomized controlled trials (RECESS16 in the United States and ABLE17 and ARIPI18 in Canada) will provide answers about the safety of stored RBCs. Data from well-designed prospective trials will also be needed for us to understand what role, if any, FWB should play in the care of trauma patients. For now, it is reasonable to continue regarding FWB as a niche product for extreme environments, and one that appears to be no greater than the sum of its parts.
CONFLICT OF INTEREST
None.