What Guidelines? Never Saw Them!
A. D. Gowdy

Fretheim and colleagues' study [1] confirms yet again how difficult it is to change clinical practice. As a manager with a “big organization” National Health Service background, more recently working in practices with general practitioners (GPs), I was always impressed by GPs' shameless ability to ignore incoming paperwork, often to the extent of trashing the envelope unopened, especially the big brown ones from the local health authority.
 
I now work for a pharmaceutical company (not in my mind a competing interest, but I have declared it as such in the interests of transparency) where the ability to change practice is a key skill. When I compare the pharmaceutical company approach to that of the health authority, there are some key differences. It is easy to deride industry’s glossy advert approach, but this is simply the tip of the communications iceberg. Beyond the glossy adverts, drug companies pay immense attention to detail. The key evidence-based messages are tested, honed, and polished, and are then delivered repeatedly via several channels, including face-to-face delivery by the “detailers”. Progress is tracked meticulously.
 
This expensive but effective industry effort contrasts with the typical approach taken by health authorities and other non-industry groups. The process of creating guidelines has usually been so slow and difficult in gestation that their credibility among GPs is low even before the rather dull photocopied paper is issued (by post, in a big brown envelope) and meets its predictable fate.
 
Fretheim and colleagues say: “Key components were an educational outreach visit with audit and feedback, and computerized reminders linked to the medical record system. Pharmacists conducted the visits”. This feels a bit like a policing approach (another profession watching the GP) allied to computer direction, neither of which feels particularly user-friendly. A non-audit nurse (or ex-drug-rep) calling briefly but frequently, armed with the usual pens/mugs/Post-it reminder freebies, might get a better response.
 
One study, by Eve et al. [2], did look at applying pharmaceutical company techniques to clinical change (“selling” Triple A therapy). It seemed to work. It may be worth bringing it back into focus.

Authors' Reply

Because increased prescriptions of antidepressants are correlated with increased medical visits, it is tempting to conclude, as Hockey did [1], that decreased suicides are a function of greater recognition of depression. It should be noted, however, that the biggest cause of suicide is clinical major depression, and increased visits do not treat that; antidepressants do.
In a comprehensive review of the literature on the role of long-term antidepressant use to prevent relapse of major depression, Geddes et al. [2] reported that "data were pooled from 31 randomised trials (4410 participants). Continuing treatment with antidepressants reduced the odds of relapse by 70% (95% CI 62-78; 2p<0.00001) compared with treatment discontinuation. The average rate of relapse on placebo was 41% compared with 18% on active treatment". We therefore conclude that, in the long term, just seeing a doctor is not protective against major depression and its consequences, such as suicide. The weight of existing data supports a positive effect of antidepressants. It is plausible that effective long-term treatment of depression by other methods might also be beneficial.
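As a crude arithmetic check (ours, and only approximate, since the pooled odds ratio is computed trial by trial rather than from the average rates), the quoted relapse rates are consistent with the stated odds reduction:

\[
\mathrm{odds}_{\mathrm{placebo}} = \frac{0.41}{0.59} \approx 0.69,
\qquad
\mathrm{odds}_{\mathrm{active}} = \frac{0.18}{0.82} \approx 0.22,
\qquad
\mathrm{OR} \approx \frac{0.22}{0.69} \approx 0.32,
\]

that is, roughly a 68% reduction in the odds of relapse, in line with the pooled 70% (95% CI 62-78) reported by Geddes et al.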
In response to the query from Reidak regarding Eli Lilly, I must say that I completely disclose all my activities, and that is why Aasa Reidak was able to write her letter [3]. I had published before on this topic in Nature Reviews Drug Discovery and had data (which were widely known to all in the field, including Eli Lilly) showing that since fluoxetine was introduced, prescriptions had gone up and suicide rates had gone down. There is nothing really conceptually new there. That was what was presented at one of Eli Lilly's regular weekly scientific sessions, which exist at most research institutions, including Lilly Research Laboratories. The paper published here is on the modelling of suicide rates using pre-1988 data to estimate what suicide rates would be now and therefore to predict a potential putative effect of fluoxetine and other selective serotonin reuptake inhibitors [4]. These mathematical modelling data are new to this paper, and that entire analysis and manuscript content took place without the knowledge, support, or input of Eli Lilly.
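A minimal sketch, with invented numbers, of the kind of counterfactual extrapolation described above (the actual model, data, and results are those of reference [4]; nothing below is taken from that paper):

import numpy as np

# Hypothetical pre-1988 (pre-fluoxetine) annual suicide rates per 100,000.
rng = np.random.default_rng(0)
years = np.arange(1970, 1988)
rates = 12.5 + 0.03 * (years - 1970) + rng.normal(0, 0.1, years.size)

# Fit a simple linear trend to the pre-1988 data...
slope, intercept = np.polyfit(years, rates, 1)

# ...and extrapolate it forward: the projected values estimate what rates
# "would have been" had the pre-1988 trend simply continued.
later_years = np.arange(1988, 2003)
projected = slope * later_years + intercept

# The published analysis compares projections like these against observed
# post-1988 rates; an observed shortfall relative to the projection is the
# putative treatment-era effect.
for year, rate in zip(later_years[:3], projected[:3]):
    print(year, round(rate, 2))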
The work reported in the article was done in the absence of any conflict of interest or pharmaceutical industry support. After the paper was submitted for publication in PLoS Medicine, I agreed to provide consultations for Eli Lilly, the manufacturer of fluoxetine. This has been a minor, occasional role, with very limited compensation. Such a relationship did not exist and was not planned when the work was done or the article written and submitted, and it is being disclosed here in the interests of transparency.

Crossing the Language Limitations
Zhenglun Pan, Jin Gao
We read with great interest your editorial "The Impact Factor Game" [1]. We noticed that many of the journals indexed by the Science Citation Index (SCI) pay considerable attention to impact factors and declare their figures on their journals' Web sites. We believe the game has become a most influential one in today's scientific evaluation system. For example, some of China's universities have adopted it as a core factor in the evaluation of the quality of research articles and recommend that students who are pursuing a doctorate publish at least one so-called "SCI-indexed paper".
In total, 6,090 journals are indexed by SCI, most of which are published in English. However, there are many more scientific journals in the world. Over 6,300 local scientific journals are published here in China, but Chinese journals are rare in the SCI database and most of them have no impact factors.
Some may argue that the SCI database only includes high-quality journals, but this is not necessarily the case. As a paper published in PLoS Medicine [2] has shown: "PubMed-indexed Chinese studies did worse than Chinese studies not indexed in PubMed in defining disease with specific criteria (17/20 [85%] versus 137/141 [97%], respectively; exact p = 0.042), and in ascertaining the eligibility of controls (13/20 [65%] versus 129/141 [92%], respectively)". The quality of an article is not determined by its language of publication.
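For readers who want to check the first of those comparisons, the quoted figures (17/20 versus 137/141) and the exact p-value can be reproduced with a standard two-sided Fisher's exact test; a minimal sketch in Python (our illustration, not part of the original letter):

from scipy.stats import fisher_exact

# Studies defining disease with specific criteria, from the quote above:
# PubMed-indexed: 17 of 20 did, 3 did not;
# not PubMed-indexed: 137 of 141 did, 4 did not.
table = [[17, 3],
         [137, 4]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided exact p = {p_value:.3f}")  # ~0.042, matching the quoted value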
Language accounts for much in today's databases, especially when we search them for evidence. Language bias should not be neglected. A language revolution could contribute to scientific progress.

Authors' Reply

We thank A. D. Gowdy for his comments [1] on our article [2]. He suggests that the National Health Service and other health-care providers have a lot to learn from the pharmaceutical industry. Where is the evidence?
We are not aware of data that convincingly demonstrate the impact of outreach visits by pharmaceutical representatives. Gowdy indicates that such information exists ("Progress is tracked meticulously"). We would very much like to see it!
We have had informal discussions with executives from companies in Norway, and we have been struck by how they themselves question the effectiveness of their marketing strategies. At a recent conference in Denmark, the medical director of a major pharmaceutical company gave a talk on the impact of industry marketing on prescribing habits [3]. He had no data to show other than a handful of anecdotes, and when questioned about this he insisted that neither he nor his marketing department was aware of more rigorous evaluations.
The degree of interaction between the pharmaceutical industry and the medical profession is associated with differences in prescribing patterns [4]. Thus, what the pharmaceutical industry is doing in terms of marketing does seem to work, at least to some extent. However, the marketing effort made by industry is massive and includes a wide range of interventions. It is difficult to know what the relative merit of each component is.
Even more difficult to estimate is the cost-effectiveness of various marketing strategies. Considering that the pharmaceutical industry spends a five-digit amount (US$) per doctor per year on marketing alone [4], the industry should achieve substantial effects to compare favourably with, for instance, our results: we spent US$500 per doctor and achieved a doubling of thiazide prescriptions [5].
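To make the implied arithmetic explicit (our illustration; "five-digit" means at least US$10,000):

\[
\frac{\text{industry spend per doctor per year}}{\text{our spend per doctor}}
\geq \frac{\$10{,}000}{\$500} = 20,
\]

so, dollar for dollar, industry marketing would need at least twenty times the effect we achieved to be equally cost-effective.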
The only study cited by Gowdy did indeed show promising results. However, changes in prescribing were compared between practices that chose to participate in the programme and practices that chose not to [6], and whether this is a fair comparison is uncertain. Moreover, he does not put this study into the context of a systematic review of the relevant research.
Gowdy thinks our intervention sounds like "a policing approach". This does not fit with our perception. The doctors were satisfied with the chance to meet an industry-independent source of information and appreciated the opportunity to reflect on their own practice in light of the information and feedback that we provided them.
Gowdy's use of the term "evidence-based" when describing the messages conveyed by pharmaceutical companies calls for a brief comment. Several investigators have assessed the quality of advertisements and promotional material distributed by the pharmaceutical industry. They consistently conclude with a word of caution against basing clinical practice on claims made by pharmaceutical companies [7-10].
Atle Fretheim (atle.fretheim@nokc.no)
Andrew D. Oxman
Norwegian Knowledge Centre for Health Services, Oslo, Norway

In response to the statements made by Haagmans and Osterhaus, I am compelled to provide an alternative view. I completely agree with the stance that an animal model which mimics the severe disease observed in human cases is needed. However, the continued use of nonhuman primates in these studies is simply not warranted. Indeed, multiple groups have tried unsuccessfully to reproduce this model, including Lawler and colleagues [2]. For example, at the WHO meeting on SARS in Rotterdam in February 2004, which I attended, Steven Jones, of the National Microbiology Laboratory, Winnipeg, Canada, said: "If I were one of those monkeys, maybe I'd just take a Tylenol" [3].
With my colleagues, I conducted a study in which both rhesus and cynomolgus macaques were infected with SARS-CoV. We did not see any clinical signs of disease or marked lung pathology [4]. A study by Subbarao and colleagues had similar findings: "SARS coronavirus (SARS-CoV) administered intranasally and intratracheally to rhesus, cynomolgus and African Green monkeys (AGM) replicated in the respiratory tract but did not induce illness" [5].
Perhaps the most interesting issue is that Lawler and colleagues clearly state that "SARS-CoV infection of cynomolgus macaques did not reproduce the severe illness seen in the majority of adult human cases of SARS" [2]. To my knowledge, only Osterhaus's laboratory and laboratories from China have reported severe disease in SARS-CoV-infected macaques. Osterhaus mentions that the variability in results may be due to factors such as the strain of virus used, and this is certainly true. However, he has not released the virus isolate used in these studies to me or my colleagues, in spite of requests. Given that so many groups with excellent scientific skills and credentials (e.g., the Centers for Disease Control and Prevention, the United States Army Medical Research Institute of Infectious Diseases, and the National Institute of Allergy and Infectious Diseases) have reported contradictory results with at least two strains of SARS-CoV, it is troublesome that the use of nonhuman primates in SARS pathogenesis, vaccine, and therapeutic testing continues.