Reader Comments

Post a new comment on this article

Possible notoriety bias and effect on reporting rates

Posted by arcox on 10 Nov 2011 at 15:50 GMT

In addition, while reporting rates for these side effects are not known, only a small fraction of adverse events that occur are reported in a voluntary system.

Under-reporting of suspected adverse effects within spontaneous reporting systems, such as MedWatch, is common. Hazell and Shakir's systematic review of the under-reporting of ADRs found a median reporting rate of 6%, with an interquartile range of 2%–18%, indicating considerable variability.1

It is also worth noting that varenicline reporting may have been subject to a notoriety bias, either because of increased vigilance following safety alerts issued by the FDA, or because of media coverage. In September 2007, a musician's aggressive behaviour led to his being accidentally shot; the incident was linked to his use of varenicline and widely reported.2 Media attention could therefore be responsible for some of the disproportionality in reports.

1. Hazell L, Shakir SAW. Under-reporting of adverse drug reactions: a systematic review. Drug Safety. 2006;29(5):385-396.

2. Girlfriend Believes Chantix Contributed to Texas Musician's Death. ABC News. [accessed 10 Nov 2011]

No competing interests declared.

Reporting rates may indeed vary

DrugSafetyResearch replied to arcox on 16 Nov 2011 at 15:38 GMT

The author’s well-reasoned comment raises an interesting question about voluntary adverse event reporting. The reporting rate may indeed vary between drugs, differ across side effects, and change over time. We sought to adjust for this by using the entire time period the agents were available (dealing with fluctuations over time), and by comparing like events (our endpoint was the same for all drugs). Finally, we adjusted for the possibility that the reporting rates for the drugs might differ by using the disproportionality measure, in which the primary comparison is with other reports for the same drug.
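To illustrate the kind of disproportionality measure described above, the following is a minimal sketch of a reporting odds ratio (ROR) computed from a 2×2 table of spontaneous reports. The counts here are hypothetical and for illustration only; they are not taken from the paper, and the function name is mine.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio (ROR) from a 2x2 table of spontaneous reports.

    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for comparator drugs
    d: reports of all other events for comparator drugs

    The key point is that each drug is compared against its own
    other reports, so a uniformly higher (or lower) overall
    reporting rate for one drug cancels out of the ratio.
    """
    ror = (a / b) / (c / d)
    # Approximate 95% confidence interval on the log scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci_low = math.exp(math.log(ror) - 1.96 * se)
    ci_high = math.exp(math.log(ror) + 1.96 * se)
    return ror, ci_low, ci_high

# Hypothetical counts (not from the paper):
ror, lo, hi = reporting_odds_ratio(a=100, b=900, c=50, d=3800)
```

Because both the numerator and denominator odds are within-drug comparisons, doubling every report count for one drug leaves the ROR unchanged, which is why this measure is partially robust to between-drug differences in reporting rates.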

“Notoriety bias” (or in FDA jargon, “stimulated reporting”) is a concept that is too imprecise to measure objectively. How big does the “stimulus” have to be? How long does it last? Is it specific (the publicity in 2007 was over an aggressive act, but many reports were about depression or suicidal behavior)? Once a side effect becomes well known through warnings or publicity, shouldn’t the “wave” of reporting reverse to lower reporting rates later, because people become less likely to report an already well-known side effect? Should any trend be adjusted for prescription volume?

The 8.4 odds ratio for varenicline compared to nicotine products is likely an underestimate. If the study period were limited to the 4 years that varenicline was marketed, or based on a denominator of prescription/consumption data, the results almost surely would have been more unfavorable to varenicline. Thus, a generous allowance for a possible, but still unproven, higher reporting rate for varenicline is already built into the estimate. But the author is correct in suggesting that the reporting rate for suicidal behavior and depression might be higher for varenicline than for nicotine replacement products.

Competing interests declared: As noted in the paper I have been an expert in both civil and criminal legal actions involving varenicline and psychiatric side effects.