Saturday, January 28, 2012

What to do, now that AEDs don't seem to work?

Now that in-hospital use of AEDs has been discredited, my fear is that recognition of the very real problem of delayed in-hospital defibrillation will fade, due to thinking along these lines:
  1. If AEDs don't help, there's really nothing to be done, or
  2. If AEDs don't help, maybe there wasn't a real problem in the first place.
Regarding the first statement: Simultaneously with its recognition of the problem, the AHA began to promote AEDs for in-hospital use. This stance was comfortable for the community of resuscitation specialists, since it avoided unpleasant thoughts about lives that might have been saved had the problem been recognized years earlier. The implicit assumption was that nothing could have been done anyway until the advent of this new technology. If that assumption is accepted, we are left in a state of impotence.

Adding to this obstacle to progress is the likely stance of the AHA over the next few (or many) years. The fraternity of emergency cardiac care specialists who write the guidelines is historically very slow and hesitant to change. Once a recommendation is in the guidelines, a mountain of opposing evidence is often required to remove it, even when the evidence supporting the recommendation is flimsy or nonexistent (see Lilly Fowler's FairWarning article under Links in the right column and the AHA's response here). Given the power of the AHA to set resuscitation standards, I'm afraid that researchers won't get serious about exploring other approaches until the AED recommendation is dropped, which may take a long time.

Regarding the second statement: Ironically, the journal article that has caused the biggest stir in the past several years by highlighting the problem of delayed in-hospital defibrillation (Chan PS, et al., Delayed time to defibrillation after in-hospital cardiac arrest) can also be read as minimizing the problem, as can a number of other articles reporting data on time intervals to first defibrillation. The reason is that the time-interval data are grossly inaccurate, that is, too short (see “Getting good time-interval data” below). The Chan article raises the alarm that 30% of shocks take longer than two minutes (!). Readers with real experience of the difficulties of code response might reasonably conclude that such response times are pretty darn good, if they accept the reported data at face value (though in my view anyone with such experience who thinks much about it should question the validity of those data). A good example of someone who should know better concluding from the Chan article that there is little room for improvement can be found here; the author goes on to restate the old co-morbidity excuse: in-hospital survival from shockable arrhythmias is low because the victims were so sick to begin with.
