Journal practices – HealthNewsReview.org

NEJM reignites conflict-of-interest debate with reader poll


The following is a guest blog post from Kathlyn Stone, who is an Associate Editor for HealthNewsReview.org.


After reading the NEJM’s recent reader poll on conflict-of-interest rules, one gets the sense that the NEJM is still trying to frame the discussion. The poll appears as a follow-up to Lisa Rosenbaum’s widely panned 3-part series on physician-industry ties and a related editorial by the NEJM’s editor-in-chief, Jeffrey Drazen, both of which suggest that criticisms of physician-industry financial ties are overblown.

The poll presents three hypothetical potential contributors to the journal and asks readers to put themselves in the “role of editor and help us decide” about these potential contributors’ suitability as NEJM review authors. Each of the three hypothetical experts has some type of financial arrangement with the pharmaceutical industry – either royalty payments, speaking fees, or commercially supported research at a university that covers everything except the researcher’s salary.

Noticeably absent was a “Case #4” describing a potential author with no conflict of interest. This is striking considering that during the 1990s and up until 2002, the NEJM would not publish editorials or review articles by authors with any conflict of interest (COI).

More than 30 readers – most of whom identified themselves as physicians – commented on the poll, often using the space to weigh in on the Rosenbaum series as well. It seems many still aren’t buying what NEJM is selling.

David Newman, an emergency medicine physician in New York, finds the NEJM’s efforts to recalibrate thinking regarding COI “disappointing and tone deaf.”

“Pharmaceutical company money, and the purchase of influence, has been the single most powerful distorting force in healthcare in a generation—this is undisputed. This overwhelming and uncontested monetary force has come between guideline panels and recommendations, government agencies and quality markers, healthcare policies and subjects, and doctors and patients—this too is undisputed (there is endless literature, ignored by Dr. Rosenbaum, to demonstrate these influences). There is a reason more than two thirds of Americans are taking a prescription drug, an embarrassing statistic.

Those who accept pharmaceutical money in any form are, typically, good people with high morals and benevolent intentions. Please do not let us conflate their intentions with their impact. And let us not turn back the clock on attitudes toward financial COIs.”

Newman also asks why NEJM’s poll didn’t offer a fourth alternative.

“The only reason to choose any of the individuals in these cases would be if there were no available alternatives.”

Rohan Perera, a cardiovascular disease physician in Port Jefferson, NY, suggests the NEJM has failed in its efforts to draw a parallel between blockbuster-hungry Big Pharma and Jonas Salk, the inventor of the polio vaccine who refused to patent the vaccine and thus enrich himself.

“Dr. Rosebaum’s greenwashing of a very powerful stakeholder in health care deserves some outing. Edward Bernays, I believe pioneered the use of psychology to garner sympathy for the devil. One does not have the luxury of space to quote case by case of thousands of documented industry malfeasance suffice to say that Big PhRMA and allied giants of capital are no Jonas Salks.”

Piero Baglioni, an endocrinologist in the UK, voiced the concern of other commenters that the journal was not giving equal voice to those with an opposing view on COI issues.

“I belive [SIC] that Dr Drazen would show aequanimitas if (and only if) he allowed an authority of the same level of Dr Rosenbaum to write a 3-article series in the Journal on the reasons (and there are many) why physicians should be very careful in their «dance with the porcupine». I am afraid this will not happen.”

Others have written more extensive responses to NEJM’s push to embrace the inevitability and positivity of physician-industry ties. These include Susan Molchan’s two HealthNewsReview.org articles, “Criticism of NEJM’s defense of industry-physician relations” and “Responding to parts 2-3 of New England Journal of Medicine’s series on pharma-MD relations,” as well as Larry Husten writing in Forbes, “No, Pharmascolds Are Not Worse Than The Pervasive Conflicts Of Interest They Criticize.” Contributors to the Lown Institute website including Shannon Brownlee, Vikas Saini, and Vinay Prasad have also weighed in on the debate, with Saini describing the series as NEJM’s “audition for the role of the Fox News of healthcare.”

More recently, on his PulmCrit blog, Josh Farkas at the University of Vermont equates NEJM’s recent focus on physician-industry ties to a concerted “media campaign” designed to change attitudes. He writes,

“Perhaps the most interesting component of the media campaign is the reader poll about the adequacy of various hypothetical authors for a review article. Three potential authors are described, all of whom have significant COIs. The design of this poll itself is biased, by presenting no authors without COIs.  A more transparent approach might be to simply ask readers “do you think review article authors should be allowed to have COIs?”

If anything, the NEJM’s efforts to “greenwash” the potential harms of industry influence on scientific medical publishing have pushed more individuals to articulate their views on COI. They have also drawn attention to the fact that NEJM itself is not without conflict when reporting on such matters – all the more reason for its editors to be vigilant in protecting against commercial bias. As the always-quotable Richard Lehman pointed out in his journal review at The BMJ, the only conflicts declared by Drazen and Rosenbaum are their affiliations with NEJM.

“But that is a pretty massive conflict, isn’t it?” Lehman wrote. “How much of the NEJM‘s income comes from reprint sales to the pharmaceutical industry? Sorry, I didn’t catch that… commercial confidentiality?—ah, I see.”

 

Addendum on June 3:  See this update – “Former NEJM editors slam ‘backtrack’ on conflict of interest” from The BMJ.



Former NEJM editors slam “backtrack” on conflict of interest


Yesterday, HealthNewsReview.org Associate Editor Kathlyn Stone summarized ongoing reaction to the New England Journal of Medicine’s long-winded series justifying closer ties between physicians and the pharmaceutical industry. She noted that the NEJM itself had pioneered today’s conflict of interest disclosure policies, and that throughout the 1990s, its editors would not publish editorials or review articles by authors with any financial interests related to the content of the article.

Today, three former NEJM editors who upheld those policies are wading into the debate with a very pointed commentary, “Justifying conflicts of interest in medical journals: a very bad idea.”

Writing in The BMJ, Robert Steinbrook, Jerome Kassirer, and Marcia Angell said it was “sad that the medical journal that first called attention to the problem of financial conflicts of interest among physicians would now backtrack so dramatically and indulge in personal attacks on those who disagree.”

They note that there is an extensive body of literature — largely overlooked by NEJM correspondent Lisa Rosenbaum and editor Jeffrey Drazen — attesting to the negative impact of physician conflicts of interest on medicine and medical journals. And they are concerned that Rosenbaum and Drazen seem to willfully ignore such compelling evidence.

“Judges are expected to recuse themselves from hearing a case in which there are concerns that they could benefit financially from the outcome. Journalists are expected not to write stories on topics in which they have a financial conflict of interest. The problem, obviously, is that their objectivity might be compromised, either consciously or unconsciously, and there would be no easy way to know whether it had been. Yet Rosenbaum and Drazen seem to think it is insulting to physicians and medical researchers to suggest that their judgment can be affected in the same way. Doctors might wish it were otherwise, but none of us is immune to human nature.”

They add that Rosenbaum uses shoddy logic and invents non-existent reasons to justify her opposition to conflict of interest policies and regulations.

“No one is proposing that ‘we prevent the dissemination of expertise, thwart productive collaborations, or dissuade patients from taking effective drugs,’ or allow ‘true experts to be replaced on advisory panels, as authors of reviews and commentaries, in other capacities of authority by people whose key asset is being conflict-free.’ Where is the evidence of ‘a loud chorus of shaming,’ or ‘a stifling of honest discourse,’ or that ‘the license to trample the credibility of physicians with industry ties has silenced debate?’ Silliness and fear mongering about straw men are masquerading as scholarly analysis.”

As editors, they say it was “sometimes difficult, but nearly always possible, to find outstanding authors with the needed expertise and without a conflict of interest to write editorials and review articles.” And they laud The BMJ for implementing a “zero tolerance policy” on educational articles by authors with industry ties. They predict that the NEJM’s new approach could herald a decline in journal quality or, perhaps, help galvanize strong opposition.

“In 1990, it was a bad idea for authors of editorials, review articles, and other opinion articles in medical journals to have financial conflicts of interest. A quarter of a century later, it is a very bad idea. The articles by Rosenbaum and the supportive editorial by Drazen could presage a further weakening of the conflict of interest policy at the NEJM, or they could serve as a wake-up call for all medical journals and the profession. It is time to move forward, not backward.”

The piece is worth reading in its entirety, as is a related editorial by current BMJ editors Elizabeth Loder, Catherine Brizzell, and Fiona Godlee, who say they are “deeply troubled by a possible retreat from policies that prevent experts with relevant commercial ties from authoring commentary or review articles.”

While Dr. Susan Molchan – one of our editorial contributors – was one of the first to register opposition to the NEJM’s soft-pedaling on conflict of interest, it is important for leaders like these editors to weigh in forcefully as they did with their conclusion: “It is a mistake by NEJM to suggest that rigorous standards should be revisited. To do so would undermine the trustworthiness of medical journals and be a disservice to clinical practice and patient safety.”

(Publisher’s note for journalists:  We offer a list of industry-independent experts to help you do your work. But we wonder why we’ve seen almost no mainstream news media coverage of this continuing NEJM controversy.)


Weak reporting of limitations of observational research


A research letter in this week’s JAMA Internal Medicine addresses an issue that has become a pet peeve of ours: the failure of medical journal articles, journal news releases, and subsequent news stories to address the limitations of observational studies. Observational studies, although important, cannot prove cause-and-effect; they can show statistical association, but that does not necessarily equal causation.
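Why can’t a statistical association from an observational study be read as causation? Here is a minimal simulation sketch (purely hypothetical numbers and a made-up confounder, not taken from the JAMA Internal Medicine letter) illustrating the core problem: a third variable that drives both the exposure and the outcome can manufacture a sizable apparent “risk” from an exposure that has no effect at all.

```python
import random

random.seed(0)

# Hypothetical illustration: a confounder ("age") drives both an exposure
# ("supplement use") and an outcome ("heart disease"). The exposure has no
# causal effect on the outcome, yet the crude comparison shows a strong link.
n = 100_000
exposed = unexposed = exposed_cases = unexposed_cases = 0

for _ in range(n):
    age = random.uniform(20, 80)
    p_exposure = 0.2 + 0.6 * (age - 20) / 60   # older people use the supplement more
    p_outcome = 0.02 + 0.28 * (age - 20) / 60  # older people get the disease more
    exposure = random.random() < p_exposure     # note: exposure never enters p_outcome
    outcome = random.random() < p_outcome
    if exposure:
        exposed += 1
        exposed_cases += outcome
    else:
        unexposed += 1
        unexposed_cases += outcome

risk_exposed = exposed_cases / exposed
risk_unexposed = unexposed_cases / unexposed
print(f"Risk among users:      {risk_exposed:.3f}")
print(f"Risk among non-users:  {risk_unexposed:.3f}")
print(f"Crude relative risk:   {risk_exposed / risk_unexposed:.2f}")
# The relative risk lands well above 1.0 despite zero causal effect --
# which is why an observational association, by itself, cannot establish causation.
```

That, in miniature, is why the wording of abstracts, news releases, and news stories matters so much.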

The authors, from the University of Auckland in New Zealand, analyzed a combined 538 documents including major medical journal articles, accompanying editorials in those journals, news releases by those journals, and news stories written about all of the preceding.

Why?

They wrote:

“Observational research is abundant and influences clinical practice, in part via publication in high-impact journals and dissemination by news media. However, it frequently generates unreliable findings. Inherent methodologic limitations that generate bias and confounding mean that causal inferences cannot reliably be drawn. Study limitations may be inadequately acknowledged and accompanied by disclaimers that diminish their importance.”

Here’s what they found:

“Any study limitation was mentioned in 70 of 81 (86%) source article Discussion sections, 26 of 48 (54%) accompanying editorials, 13 of 54 (24%) journal press releases, 16 of 81 (20%) source article abstracts (of which 9 were published in the Annals of Internal Medicine), and 61 of 319 (19%) associated news stories. An explicit statement that causality could not be inferred was infrequently present: 8 of 81 (10%) source article Discussion sections, 7 of 48 (15%) editorials, 2 of 54 (4%) press releases, 3 of 81 (4%) source article abstracts, and 31 of 319 (10%) news stories contained such statements.”

A figure in the published JAMA Internal Medicine research letter displays these percentages graphically.

That is an awful report card.

Why does it matter?  The authors summarize nicely:

“A possible consequence of inadequate reporting of limitations of observational research is that readers consider the reported associations to be causal, promoting health practices based on evidence of modest quality. Up to 50% of such practices prove ineffective when tested in randomized clinical trials. Giving greater prominence to the limitations of observational research, particularly in the publication abstract and journal press releases, might temper this enthusiasm and reduce the need for subsequent reversals of practice.”

We’ve written about dozens and dozens of examples of news stories and other media messages that have failed to address the limitations of observational studies, thereby misleading the public.

We’ve criticized major medical journal news releases for doing so – The BMJ and The Lancet, for example.

For years, we’ve posted a primer on this site for journalists, news release writers and the general public, to help them understand the limitations.  The primer is entitled, “Does the Language Fit the Evidence? Association Versus Causation.”

The exaggeration should stop.  Observational studies play an important role.  But communicators should not try to make them more than what they are.

 


Skinny jeans & nerve damage case studies have haunted me throughout my career


31 years ago, as a young medical news reporter for CNN, I was upset because a story I’d been working on was bumped from a newscast in favor of a story about a JAMA article on a single case study, “Tight-jeans meralgia: hot or cold,” about a woman experiencing nerve problems attributed to wearing jeans that were too tight.  A single case study dominated the news. I remember it like it was yesterday. (Actually, there had been a prior report that year in JAMA, “Meralgia Paresthetica and Tight Trousers,” in which the author wrote, “I have seen this problem in a number of truck drivers, most of whom were obese and wore tight-fitting denim jeans.”)

George Santayana famously wrote: “Those who cannot remember the past are condemned to repeat it.”

I do remember the past, and I’m still doomed to repeat it.

Today, news organizations across the globe are reporting on a single case study, “Fashion victim: rhabdomyolysis and bilateral peroneal and tibial neuropathies as a result of squatting in ‘skinny jeans’,” in a journal published by BMJ, The Journal of Neurology Neurosurgery & Psychiatry. Undoubtedly, a news release from BMJ raised journalists’ interest in this pressing international health issue.  The news release read:

Squatting in ‘skinny’ jeans for a protracted period of time can damage muscle and nerve fibres in the legs, making it difficult to walk, reveals a case study published online in the Journal of Neurology Neurosurgery & Psychiatry.

Doctors describe a case of a 35 year old woman who arrived at hospital with severe weakness in both her ankles. The previous day she had been helping a relative move house, and had spent many hours squatting while emptying cupboards.

She had been wearing tight ‘skinny’ jeans and recalled that these had felt increasingly tight and uncomfortable as the day wore on.

Later that evening, she experienced numbness in her feet and found it difficult to walk, which caused her to trip and fall. Unable to get up, she spent several hours lying on the ground before she was found.

Her calves were so swollen that her jeans had to be cut off her. She couldn’t move her ankles or toes properly and had lost feeling in her lower legs and feet.

Investigations revealed that she had damaged muscle and nerve fibres in her lower legs as a result of prolonged compression while squatting, which her tight jeans had made worse, the doctors suggest.

The jeans had prompted the development of compartment syndrome—reduced blood supply to the leg muscles, causing swelling of the muscles and compression of the adjacent nerves.

She was put on an intravenous drip and after 4 days she could walk unaided again, and was discharged from hospital.

One woman.

Better in a few days.

International news.


TIME – “Here’s How Skinny Jeans Are Hurting Your Health” (Publisher’s note:  not my jeans.  Maybe my genes, but not my jeans.)

CBS News reported, “Jeans that feel vacuum-sealed can suck the life out of the body.” (Publisher’s note:  suck the life?  Really? Get a life.  Get some real news.)

The Los Angeles Times had this headline: “Fashionistas, beware: Hazards of skinny jeans revealed in new study.”

The Washington Post reported, “Blue jeans have been linked with health hazards for decades.”  (Publisher’s note:  Yes, just about the span of my failing-to-forget career. Note:  I did not say my unforgettable career. I wish I could forget some parts of it.)

Nancy Shute at NPR added this perspective:

In some cases, the only damage tight clothes will do is to your wallet.

Last fall, the Federal Trade Commission ordered two companies to stop selling caffeine-infused shapewear, saying that the amped-up skivvies would not, as one firm claimed, “reduce the size of your hips by up to 2.1 inches and your thighs by up to one inch, and would eliminate or reduce cellulite and that scientific tests proved those results.”

The FTC disputed that claim, and ordered the companies to fork over $1.5 million to customers who had been lured in by the promise of effortless shrinkage.

I could go on with many more examples, but I’ve been squatting in jeans that suddenly feel too tight the whole time I’ve been writing this.

Time to stretch my legs – and my mind – in search of truly important health care news.  Somewhere, right?

Addendum on June 25:

Craig Silverman, founding editor of BuzzFeed Canada, interviewed me for his piece, “Why That Warning About The Dangers Of Skinny Jeans May Be A Big Fat Nothing.”


Medical journal news releases CAN make a difference


This week The BMJ sent journalists a news release, “Regular consumption of spicy foods linked to lower risk of death.” The second paragraph – the third sentence overall – of the news release read: “This is an observational study so no definitive conclusions can be drawn about cause and effect, but the authors call for more research that may ‘lead to updated dietary recommendations and development of functional foods.’”

If you go to the journal article on which the news release is based, you see that the seeds of appropriate explanation were planted further upstream.

In the conclusion paragraph of the published study manuscript, the researchers wrote:

“given the observational nature of this study, it is not possible to make a causal inference.”

Did that clarity – that emphasis on the fact that association ≠ causation – make a difference in subsequent news stories based on the study or on the news release?  It appears that may be the case in this instance.

  • TIME.com included this:  “More research is needed to make any causal case for the protective effects of chili—this does not prove that the spicy foods were the reason for the health outcomes.”
  • The New York Times Well blog had a line: “The authors drew no conclusions about cause and effect.”
  • With even greater emphasis, the Los Angeles Times reported: “Although the study included nearly half a million volunteers who were tracked for a total of 3.5 million person-years, the researchers emphasized that they couldn’t show a causal relationship between eating spicy foods and living longer.”
  • In The Washington Post: “The researchers said that while it isn’t possible to draw any conclusions about whether eating spicy foods causes you live longer from their work that more studies are needed to look at this link in more depth.”
  • From HealthDay: “However, the study authors cautioned that their investigation was not able to draw a direct cause-and-effect link between the consumption of spicy foods and lower mortality. They could only find an association between these factors.”
  • CBSNews.com stated: “The authors emphasize that this is an observational study so no definitive cause and effect relationship can be drawn.”

We’ve issued a long-running challenge to the writers of news releases for The BMJ – and for the other journals among the ~50 that BMJ publishes – to consistently state the limitations of the observational studies they write about.  And we’ve brought this up with other journals as well.

In this latest chapter, kudos to The BMJ. The words matter.  What the researcher-authors submit matters. The journal’s editorial scrutiny matters.  The accuracy of the news releases matters as well.

 

 

Addendum on August 6:  On the other hand, The CBS Evening News last evening aired a piece that was not a good example of how to report on studies. It used this lead-in: “The secret to youth may not lie in a fountain, but in a frying pan, loaded with spices,” and never recovered.

-0-

But after praising The BMJ‘s news release above, see what happened next, on a BMJ news release about a study concerning young fathers and risk of early death.  Almost a polar opposite!

 


Troubled BMJ news release on young fathers & early death risk


Shortly after we praised a news release from The BMJ earlier today for emphasizing the limitations of an observational study, another news release – for another journal published by BMJ – landed at the other end of the spectrum.

“Fatherhood at young age linked to greater likelihood of mid-life death” is the headline of a news release about a study in the Journal of Epidemiology & Community Health.

It is another big observational study, for which the published conclusion is: “The findings suggest a causal effect of young fatherhood on mortality and highlight the need to support young fathers in their family life to improve health behaviours and health.”

ScienceMediaCentre.org published this reaction from a professor of Applied Statistics:

“One problem is that it is an observational study and it’s always hard to work out what is causing what from observational studies. The authors are careful in their wording, and don’t go further than saying that the findings suggest the association between young fatherhood and midlife mortality is likely to be causal. The basic problem is that we can’t be sure whether it is the early fatherhood causing the increased mortality, in that if the young fathers had started their families later in life but nothing before that changed, then fewer of them would have died in middle age. Maybe there is something else that caused them to have children when they were young, and independently caused more of them to die in middle age.”

I don’t think the researchers were so careful in their wording – “suggest a causal effect”?  What does that mean?

And the wording of the news release was also not so careful, allowing a researcher quote to be published, unchallenged:

“The findings of our study suggest that the association between young fatherhood and mid life mortality is likely to be causal.”

Again, what does “likely to be causal” mean?

And whereas in the earlier example – where the study authors were clear about the limits of observational data and the journal news release re-emphasized that point – quality news stories picked up on the clues, in this case just the opposite happened.

The published manuscript used this confusing language suggesting a causal effect, which was echoed by the news release, which was echoed or at least inadequately addressed in much news coverage.  Examples:

  • The Los Angeles Times converted the findings into “expert advice for would-be fathers:  If possible, wait to have kids until you hit your mid-twenties.”  No mention of the limits of observational studies.  The headline actually read, “To boost odds of a long life, men should delay fatherhood until age 25.”  Should delay.  Really?
  • The Washington Post actually asked, “Could fatherhood literally kill you?”
  • Thank goodness that Randy Dotinga, for HealthDay, on his own, inserted this: “the researchers only found an association, not a cause-and-effect link, between age of fatherhood and age at death.”

So – two different studies and two different news releases treated very differently by the same organization on the same day – with predictably quite different results in ensuing news coverage.

Addendum:  On Twitter, Dr. C. Michael Gibson, Harvard prof and founder of Wikidoc.org, referred to all of this with the hashtag #LOSER, standing for Limited Observational Study Exercise Restraint.


“Disingenuous denial” of medical research conflicts of interest


A Viewpoint article in the Journal of the American Medical Association (JAMA), “Confluence, Not Conflict of Interest: Name Change Necessary,” caught the eye of Dr. Richard Lehman, who writes the wonderful journal review blog for The BMJ.

First, an excerpt from the JAMA piece to give you a sense of what it’s about:

“The term conflict of interest is pejorative. It is confrontational and presumptive of inappropriate behavior. Rather, the focus should be on the objective, which is to align secondary interests with the primary objective of the endeavor—to benefit patients and society—in a way that minimizes the risk of bias. A better term—indicative of the objective—would be confluence of interest, implying an alignment of primary and secondary interests. In this regard, the individuals and entities liable to bias extend far beyond the investigator and the sponsor; they include departments, research institutes, and universities. The potential for bias also extends to nonprofit funders, such as the National Institutes of Health and foundations, as well as to journals that might, for example, generate advertising revenue from sponsors.”

Lehman called this article “disingenuous denial.” He wrote:

“I think it marks a low point for JAMA. It aligns the journal with the disingenuous deniers who pretend that conflicts of interest don’t arise when authors and investigators write about work that they have a vested interest in promoting. It joins together JAMA with the New England Journal of Medicine which took a similar stance in a series of opinion papers earlier this year. This is a sort of Republican Tea Party of the soul, where you know you are saying something false and daring people to contradict you, knowing that their very engagement is a form of legitimation.”

We wrote about the New England Journal of Medicine series earlier this year.

 


Top journal editors resist transparency


It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public.

Annals — outcomes bait and switch

A basic tenet of clinical trials – Clinical Trials 101, so to speak – is to state ahead of time, in writing, and publicly, in print or on a website, what it is you are testing. What is your endpoint or outcome measure? Is it number of deaths, heart attacks, blood glucose level, depression score? At 3 months, 6 months, one year? Stating your outcome measure – that is, fixing your goalpost ahead of time – keeps people from cheating (being biased, whether consciously or not) by changing outcomes as the trial goes along to match what the data might be telling them. Professionals run these trials; they know this, and there are trial registries where they can document their planned outcomes. Hence the concern with the following pronouncement from the helm of the Annals of Internal Medicine:

 “On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong.”

Of course not even journal editors’ “long experience” can trump decades of statistical methods. Unfortunately, this attitude has contributed to the contamination and mistrust of the medical literature, as detailed by Dr. John Ioannidis and others.

The editors’ statement was part of a response to the Centre for Evidence-Based Medicine’s Outcome Monitoring Project (COMPARE) critique of two trials published in the Annals that did not report outcomes consistent with what had been promised/pre-specified. The Centre, based at Oxford University, has taken on the challenging and laudable project of monitoring the number of correctly reported outcomes in clinical trials published in the top five medical journals. When a problem is identified, the Centre team submits a letter to the journal for publication and hopes for a response and remediation of the problem. Outcome switching appears to be rampant: of the 67 articles published in the top five journals in October and November 2015, all but nine had changed their outcomes without telling their readers.

It’s not that outcomes can never be switched after a protocol begins, most clinical trialists agree, including those at COMPARE (as they make clear on their site); one simply has to be clear about why and when the outcome was changed and document the change.
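To see why a fixed goalpost matters, consider a toy simulation (invented numbers, endpoints treated as independent for simplicity – a sketch, not a model of any real trial). Even when a treatment does nothing, quietly reporting whichever of several candidate outcomes happens to look best will clear the conventional 5% significance bar far more often than a single prespecified outcome would.

```python
import random

random.seed(1)

# Toy illustration of undisclosed outcome switching. Assume a treatment with no
# real effect and ten candidate endpoints; each endpoint's test statistic is
# simulated directly as a standard-normal z-score under the null hypothesis.
n_trials = 20_000
n_outcomes = 10
crit = 1.96  # two-sided 5% significance threshold

false_pos_prespecified = 0  # always report endpoint #1, fixed in advance
false_pos_switched = 0      # report whichever endpoint happened to look best

for _ in range(n_trials):
    z_scores = [random.gauss(0, 1) for _ in range(n_outcomes)]
    if abs(z_scores[0]) > crit:
        false_pos_prespecified += 1
    if max(abs(z) for z in z_scores) > crit:
        false_pos_switched += 1

print(f"False-positive rate, prespecified endpoint: {false_pos_prespecified / n_trials:.1%}")
print(f"False-positive rate, best-looking endpoint: {false_pos_switched / n_trials:.1%}")
# Roughly 5% for the prespecified endpoint versus roughly 40% when the
# "winning" endpoint is picked after peeking at the data.
```

Documented, justified changes are one thing; silent switching is quite another.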

Many inconsistencies appear in the letter from the Annals editors. For example, COMPARE uses standard, accepted criteria in determining whether endpoints have been pre-specified in a protocol; it’s misleading to call these criteria a “rigid evaluation,” as the editors state. Nor did I see any evidence on the COMPARE website of “labeling any discrepancies as possible evidence of research misconduct.” These editors seem to miss the point of the COMPARE project and, by extension, of the CONSORT guidelines, established almost 20 years ago as standards for reporting trials – standards I thought journal editors had agreed to adhere to. Dr. Ben Goldacre of COMPARE wrote a detailed response addressing additional inconsistencies and misunderstandings.

NEJM – parasites and blindness

Separately but in a seeming complement to the Annals editors’ position on outcomes switching, the editors of the New England Journal of Medicine came out against data sharing, just as efforts to make data more available and transparent to more people have been making significant progress and the dire need for replication of studies is being increasingly recognized. The NEJM’s retrograde stance on sharing calls to mind their puzzlingly enthusiastic endorsement of tight physician-pharma relations.

What’s wrong with data sharing? The NEJM editors fret that “someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.” It seems the wise journal editors want to shelter the scientific community and the public from the idiots who “may not understand” a dataset (or perhaps shelter their pharma advertisers?). But part of the point of science is to foster differing points of view. The editors give no further reason for their concern at this point but go on to consider questions of data combination relevant to meta-analyses–irrelevant when it comes to looking at single datasets.

Medical research has indeed become something of a swamp, and the NEJM editors warn of the emergence of “research parasites” who “use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited.” Stealing research productivity? I’d like an example or two where this has happened, as many datasets have been made public at this point.

The editors also seem blind to some larger points, such as that trying to disprove or replicate what others have done is a big part of what science is supposed to be about! No one is proposing that an investigator collect data and post it for all the world to see immediately. I would think most people would agree that time would be allowed for a primary team to publish its results. In the case of clinical trials, most immediately for drugs or tests that are marketed, I think it only fair that data be made available so that prescribing physicians have full information about what they are prescribing, and patients about what they are being prescribed. To his credit, NEJM editor Dr. Jeffrey Drazen just today published a clarification on that journal’s policy of data sharing specifically for clinical trials. He said that the journal would require authors to commit to making data available within 6 months of publication. With more eyes on data, tragedies such as those of the over-marketed arthritis drug Vioxx can very likely be truncated before the loss of 55,000 lives and tens of thousands of heart attacks and strokes.

Journal editors certainly have conflicted interests of their own, an obvious one being the huge revenues generated by pharmaceutical company ads and reprints. One is hard pressed to find evidence showing transparency and sharing in science to be a bad thing for anyone without marketing interests. But bias is hard to sort out. Hence statistics and transparency. Cave dwellers I suppose don’t appreciate the shining of light, but many of these have adapted to the point of becoming blind.

Addendum:

John Mandrola, MD, on Twitter, flagged a comment by Dr. David Karpf about problematic aspects of data-sharing proposals. Mandrola’s tweet generated a long stream of responses – for and against – that is worth taking a look at.


Dr. Molchan is a psychiatrist and nuclear medicine physician with extensive experience in clinical research at the National Institutes of Health.



Podcast: Rare disease foundation says medical journal misled patients


This is the second in an unplanned, occasional series about real people who are harmed by inaccurate, imbalanced, incomplete, misleading media messages.  The first was about a man with glioblastoma brain cancer.

People with rare diseases may hang on any crumb of possible good news more than anyone else.  Many have learned how to find and scour medical journal articles for signs of hope.

So it is with people who have primary ciliary dyskinesia or PCD, which, as the National Heart, Lung and Blood Institute explains, “is a rare disease that affects tiny, hair-like structures that line the airways. These structures are called cilia. If the cilia don’t work well, bacteria stay in your airways. This can cause breathing problems, infections, and other disorders. PCD mainly affects the sinuses, ears, and lungs. Some people who have PCD have breathing problems from the moment of birth.”

Early this year, the Journal of Medical Genetics published a paper with this headline:  “Gene editing of DNAH11 restores normal cilia motility in primary ciliary dyskinesia.”


Many people with PCD who saw that journal article headline were excited. The news spread through the PCD community on social media like wildfire.  And then today’s podcast guest had to temper that enthusiasm because of what the journal headline did not reveal.  Our guest is Michele Manion, the executive director of the Primary Ciliary Dyskinesia (PCD) Foundation. She delivers another important message about how media messages – including, or especially, those from medical journals – can harm people.

It’s important to put a human face on such stories. The picture at left is of Michele Manion’s daughter, now in her 30s. She has some significant lung damage.  But Manion says that her daughter is not slowed down too much in everyday life.

Manion also sent me group photos of people with PCD and their families – from New York, North Carolina, Minnesota, and Saint Louis.

Real people, hanging on every bit of news offering hope about progress in research.  Media messengers, including medical journal editors, should keep these faces in mind before they publish.

Key quotes from Michele Manion in the podcast:

  • “We’re in this awkward position where we want patients to be excited about future of genetic therapies; to me this delegitimizes what can be done with gene editing.”
  • “We don’t want to discourage patients about research but we were in a position to have to do that and have to explain the limitations of what had been demonstrated vs. what had appeared to have been demonstrated. That was challenging.”
  • “I don’t think the intent is to harm patients. I think that’s part of the problem. The patient as the ultimate end user isn’t even part of the equation. That’s not who they’re trying to get to. They’re trying to get to funders, more press for their institution and somewhere in that thread the patient is completely lost.”

Thanks to The National Institute for Health Care Management Foundation for providing us with a grant to produce these podcasts.

Credit:  podcast editor Cristeta Boarini

Musical bridge in this episode: “Fünf Stücke: Lebhaft” by Paul Hindemith, as played by Academy of St. Martin in the Fields Orchestra

Please note: if you have listened to any of our podcasts and like what you’ve heard, we’d appreciate it if you’d leave a Review and a Rating on the iTunes webpage where our podcasts can be found: https://itun.es/i6S86Qw.  (You need to click on the “View in iTunes” button on the left of that page, then find the Ratings and Reviews tab.)

You can now subscribe to our podcasts on that iTunes page or via this RSS feed:  http://feeds.soundcloud.com/users/soundcloud:users:167780656/sounds.rss

All episodes of our podcasts are archived on this page on HealthNewsReview.org.


Xarelto controversy highlights need for more transparency at NEJM


The following is a guest blog post from one of our contributors, Susan Molchan, MD, a psychiatrist in the Washington, DC, area. She’s been closely following, and criticizing, NEJM’s stance on data-sharing and conflicts of interest.


Last week, many media outlets, including the New York Times, reported on concerns about the validity of a huge, and hugely important, clinical trial published in the New England Journal of Medicine (NEJM) involving a drug taken by millions to decrease their risk of stroke.

What’s at issue? Whether the object of the trial–the now best-selling blood thinner Xarelto (rivaroxaban)–really does have the advantage of being safer than the traditionally used blood-thinner Coumadin (warfarin). That’s now open to question, as the BMJ reported, because of a malfunctioning blood test device used in the trial. Those in the Coumadin group may have received higher doses of Coumadin than would have been optimal, leading to an increased risk of serious bleeding. Problems with the device were reported as far back as 2002, and it was recalled by the FDA in 2014.

How the problem could have been caught sooner

The Times’s coverage is tough, but there’s one point that they and other media outlets didn’t touch on here that’s important: Had the protocol and data from the trial been made available upon or shortly after publication of the trial, there’s a much better chance the problem would have been caught much sooner.  

The editors of the NEJM and other journals that publish clinical trials now require that authors make their data publicly available within 6 months of publication. However, the data from this trial apparently aren’t going to see the light of day because study sponsor Bayer wants to keep them under wraps. While they’ve agreed to share data moving forward, Bayer told the BMJ that this policy applies only to “study reports for new medicines approved in the US and the EU after January 1, 2014.”

The Xarelto trial authors published a letter in the NEJM on February 25 to try to allay concerns about the blood test device used in the study. But surprisingly, they didn’t include blood test data that some scientists say may help clarify the comparison between the two drugs. The NEJM editors told the Times that they did not know such data existed at the time they published the letter, and — just as surprisingly — they didn’t seem put out by the fact that the Xarelto authors hadn’t shared the data. Dr. Jeffrey Drazen, the NEJM’s editor in chief, “disputed that the editors had been misled about the data,” according to the Times, “and said it was not relevant to the letter that was published.”

Not the first time for NEJM

This isn’t the first time the NEJM has been embroiled in controversy over missing data. The Times referenced a past situation where NEJM editors knew of missing heart attack data from a clinical trial and allowed the study to be published anyway. The drug that was studied, the painkiller Vioxx (rofecoxib), was eventually withdrawn from the market, but not before leading to an estimated 55,000 deaths, as well as thousands of heart attacks and strokes.

As I pointed out in January, the NEJM editors have recently made a case against transparency and data sharing, in opposition to the push for universal acceptance of these principles by many in the research community. After publicly raising concerns that data-sharing would benefit so-called “research parasites” who might have the temerity to use the data “to try to disprove what the original investigators had posited,” Dr. Drazen quickly clarified that his stance against data-sharing didn’t apply to clinical trials such as the Xarelto study. However, I think it’s fair to say that NEJM’s embrace of transparency is — as the Xarelto and Vioxx situations illustrate — both grudging and limited.  It’s also fair to ask whether NEJM’s foot-dragging on this issue is connected to the journal’s cozy relations with industry, which the editors recently defended in a tone-deaf set of editorials.  

Journals make money from big trials like these

The fact is, big successful industry-sponsored trials bring in hundreds of thousands if not millions of dollars in revenue for journals. In the case of Vioxx, the Wall Street Journal noted, the journal sold 900,000 reprints producing revenue of $697,000, most of them bought by the company selling the drug (Merck). In all likelihood the company selling Xarelto also bought plenty of reprints, although the amount is not made public by the journal. Scientists concerned with this obvious conflict of interest, and the publication bias that it can produce (i.e. the tendency of journals to publish studies with positive results while rejecting negative studies), have suggested that the number of reprints sold or revenue garnered from those sales be published with the articles, just as authors must disclose their conflicts of interest.

Instead of trying to guess who knew what and when in terms of device malfunctions and side effects in clinical trials, or waiting for people to be hurt or die with subsequent lawsuits being the only way for the information to come to light, wouldn’t it be better to just put the data out there from the beginning? All of it?

Update 3/10/16: Journalist Larry Husten, who has been following this story all along, pointed us to some of the early reporting on the faulty device — including from Deb Cohen at BMJ as well as David Hilzenrath and Charles Babcock at POGO — that sparked this story and gave it life.


Weak reporting of limitations of observational research

$
0
0

OBSERVATIONAL-STUDIES-298x300A research letter in this week’s JAMA Internal Medicine addresses an issue that has become a pet peeve of ours: the failure of medical journal articles, journal news releases, and subsequent news releases, to address the limitations of observational studies. Observational studies, although important, cannot prove cause-and-effect; they can show statistical association but that does not necessarily equal causation.

The authors, from the University of Auckland in New Zealand, analyzed a combined 538 documents including major medical journal articles, accompanying editorials in those journals, news releases by those journals, and news stories written about all of the preceding.

Why?

They wrote:

“Observational research is abundant and influences clinical practice, in part via publication in high-impact journals and dissemination by news media. However, it frequently generates unreliable findings. Inherent methodologic limitations that generate bias and confounding mean that causal inferences cannot reliably be drawn. Study limitations may be inadequately acknowledged and accompanied by disclaimers that diminish their importance.”

Here’s what they found:

“Any study limitation was mentioned in 70 of 81 (86%) source article Discussion sections, 26 of 48 (54%) accompanying editorials, 13 of 54 (24%) journal press releases, 16 of 81 (20%) source article abstracts (of which 9 were published in the Annals of Internal Medicine), and 61 of 319 (19%) associated news stories. An explicit statement that causality could not be inferred was infrequently present: 8 of 81 (10%) source article Discussion sections, 7 of 48 (15%) editorials, 2 of 54 (4%) press releases, 3 of 81 (4%) source article abstracts, and 31 of319 (10%) news stories contained such statements.”

Graphically, it looked like this in the published JAMA Internal Medicine research letter:

Screen Shot 2015-06-04 at 6.45.53 PM

That is an awful report card.

Why does it matter?  The authors summarize nicely:

“A possible consequence of inadequate reporting of limitations of observational research is that readers consider the reported associations to be causal, promoting health practices based on evidence of modest quality. Up to 50% of such practices prove ineffective when tested in randomized clinical trials. Giving greater prominence to the limitations of observational research, particularly in the publication abstract and journal press releases,might temper this enthusiasm and reduce the need for subsequent reversals of practice.”

We’ve written about dozens and dozens of examples of news stories and other media messages that have failed to address the limitations of observational studies, thereby misleading the public.

We’ve criticized major medical journal news releases for doing so – The BMJ and The Lancet, for example.

For years, we’ve posted a primer on this site for journalists, news release writers and the general public, to help them understand the limitations.  The primer is entitled, “Does the Language Fit the Evidence? Association Versus Causation.”

The exaggeration should stop.  Observational studies play an important role.  But communicators should not try to make them more than what they are.

 

Skinny jeans & nerve damage case studies have haunted me throughout my career

$
0
0

31 years ago, as a young medical news reporter for CNN, I was upset because a story I’d been working on was bumped from a newscast in favor of a story about a JAMA journal article of a single case study, “Tight-jeans meralgia: hot or cold,” about a woman experiencing nerve problems attributed to wearing jeans that were too tight.  A single case study dominated the news. I remember it like it was yesterday. (Actually, there had been a prior report that year in JAMA, “Meralgia Paresthetica and Tight Trousers,” in which the author wrote, “I have seen this problem in a number of truck drivers, most of whom were obese and wore tight-fitting denim jeans.”)

George Santayana famously wrote: “Those who cannot remember the past are condemned to repeat it.”

I do remember the past, and I’m still doomed to repeat it.

skinny jeans news release w:borderToday, news organizations across the globe are reporting on a single case study, “Fashion victim: rhabdomyolysis and bilateral peroneal and tibial neuropathies as a result of squatting in ‘skinny jeans’ ,” in a journal published by BMJ, The Journal of Neurology Neurosurgery & Psychiatry. Undoubtedly, a news release from BMJ raised journalists’ interest in this pressing international health issue.  The news release read:

Squatting in ‘skinny’ jeans for a protracted period of time can damage muscle and nerve fibres in the legs, making it difficult to walk, reveals a case study published online in the Journal of Neurology Neurosurgery & Psychiatry.

Doctors describe a case of a 35 year old woman who arrived at hospital with severe weakness in both her ankles. The previous day she had been helping a relative move house, and had spent many hours squatting while emptying cupboards.

She had been wearing tight ‘skinny’ jeans and recalled that these had felt increasingly tight and uncomfortable as the day wore on.

Later that evening, she experienced numbness in her feet and found it difficult to walk, which caused her to trip and fall. Unable to get up, she spent several hours lying on the ground before she was found.

Her calves were so swollen that her jeans had to be cut off her. She couldn’t move her ankles or toes properly and had lost feeling in her lower legs and feet.

Investigations revealed that she had damaged muscle and nerve fibres in her lower legs as a result of prolonged compression while squatting, which her tight jeans had made worse, the doctors suggest.

The jeans had prompted the development of compartment syndrome—reduced blood supply to the leg muscles, causing swelling of the muscles and compression of the adjacent nerves.

She was put on an intravenous drip and after 4 days she could walk unaided again, and was discharged from hospital.

One woman.

Better in a few days.

International news.

skinny jeans news

TIME – “Here’s How Skinny Jeans Are Hurting Your Health” (Publisher’s note:  not my jeans.  Maybe my genes, but not my jeans.)

CBS News reported, “Jeans that feel vacuum-sealed can suck the life out of the body.” (Publisher’s note:  suck the life?  Really? Get a life.  Get some real news.)

The Los Angeles Times had this headline: “Fashionistas, beware: Hazards of skinny jeans revealed in new study.”

The Washington Post reported, “Blue jeans have been linked with health hazards for decades.”  (Publisher’s note:  Yes, just about the span of my failing-to-forget career. Note:  I did not say my unforgettable career. I wish I could forget some parts of it.)

Nancy Shute at NPR added this perspective:

In some cases, the only damage tight clothes will do is to your wallet.

Last fall, the Federal Trade Commission ordered to two companies to stop selling caffeine-infused shapewear, saying that the amped-up skivvies would not, as one firm claimed, “reduce the size of your hips by up to 2.1 inches and your thighs by up to one inch, and would eliminate or reduce cellulite and that scientific tests proved those results.”

The FTC disputed that claim, and ordered the companies to fork over $1.5 million to customers who had been lured in by the promise of effortless shrinkage.

I could go on with many more examples, but I’ve been squatting in jeans that suddenly feel too tight the whole time I’ve been writing this.

Time to stretch my legs – and my mind – in search of truly important health care news.  Somewhere, right?

Addendum on June 25:

Craig Silverman, founding editor of BuzzFeed Canada, interviewed me for his piece, “Why That Warning About The Dangers Of Skinny Jeans May Be A Big Fat Nothing.”

Medical journal news releases CAN make a difference

$
0
0

Hot red peppers 410x273This week The BMJ sent journalists a news release, “Regular consumption of spicy foods linked to lower risk of death.” The second paragraph – the third sentence overall – of the news release read: “This is an observational study so no definitive conclusions can be drawn about cause and effect, but the authors call for more research that may “lead to updated dietary recommendations and development of functional foods.”

If you go to the journal article on which the news release is based, you see that the seeds of appropriate explanation were planted further upstream.

In the conclusion paragraph of the published study manuscript, the researchers wrote:

“given the observational nature of this study, it is not possible to make a causal inference.”

Did that clarity – that emphasis on the fact that association ≠ causation – make a difference in subsequent news stories based on the study or on the news release?  It appears that may be the case in this instance.

  • TIME.com included this:  “More research is needed to make any causal case for the protective effects of chili—this does not prove that the spicy foods were the reason for the health outcomes.”
  • The New York Times Well blog had a line: “The authors drew no conclusions about cause and effect.”
  • With even greater emphasis, the Los Angeles Times reported: “Although the study included nearly half a million volunteers who were tracked for a total of 3.5 million person-years, the researchers emphasized that they couldn’t show a causal relationship between eating spicy foods and living longer.”
  • In The Washington Post: “The researchers said that while it isn’t possible to draw any conclusions about whether eating spicy foods causes you live longer from their work that more studies are needed to look at this link in more depth.”
  • From HealthDay: “However, the study authors cautioned that their investigation was not able to draw a direct cause-and-effect link between the consumption of spicy foods and lower mortality. They could only find an association between these factors.”
  • CBSNews.com stated: “The authors emphasize that this is an observational study so no definitive cause and effect relationship can be drawn.”

The BMJ logoWe’ve had a long-running challenge to news release writers for The BMJ and for news releases for others of the ~50 journals that BMJ publishes, to consistently state the limitations of observational studies that they write about.  And we’ve brought this up with other journals as well.

In this latest chapter, kudos to The BMJ. The words matter.  What the researcher-authors submit matters. The journal’s editorial scrutiny matters.  The accuracy of the news releases matters as well.

 

 

Addendum on August 6:  On the other hand, The CBS Evening News last evening aired a piece that was not a good example of how to report on studies. It used this lead-in: “The secret to youth may not lie in a fountain, but in a frying pan, loaded with spices,” and never recovered.

-0-

But after praising The BMJ‘s news release above, see what happened next, on a BMJ news release about a study concerning young fathers and risk of early death.  Almost a polar opposite!

 

Troubled BMJ news release on young fathers & early death risk


Shortly after we praised a news release by The BMJ earlier today for emphasizing the limitations of an observational study, another news release – for another journal published by BMJ – sits at the other end of the spectrum.

“Fatherhood at young age linked to greater likelihood of mid-life death” is the headline of a news release about a study in the Journal of Epidemiology & Community Health.

It is another big observational study, for which the published conclusion is: “The findings suggest a causal effect of young fatherhood on mortality and highlight the need to support young fathers in their family life to improve health behaviours and health.”

ScienceMediaCentre.org published this reaction from a professor of Applied Statistics:

“One problem is that it is an observational study and it’s always hard to work out what is causing what from observational studies. The authors are careful in their wording, and don’t go further than saying that the findings suggest the association between young fatherhood and midlife mortality is likely to be causal. The basic problem is that we can’t be sure whether it is the early fatherhood causing the increased mortality, in that if the young fathers had started their families later in life but nothing before that changed, then fewer of them would have died in middle age. Maybe there is something else that caused them to have children when they were young, and independently caused more of them to die in middle age.”
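
The professor’s point about confounding can be made concrete with a toy simulation – my own illustration, with made-up numbers, not anything drawn from the study or the news release. Suppose a hidden factor (say, socioeconomic disadvantage) makes both young fatherhood and midlife death more likely; even if fatherhood timing has no causal effect whatsoever, the raw data will still show young fathers dying more often.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: a hidden confounder creates an association between
    young fatherhood and midlife death with no causal link between them."""
    deaths = {"young": 0, "older": 0}
    counts = {"young": 0, "older": 0}
    for _ in range(n):
        disadvantaged = random.random() < 0.30  # hidden confounder (made-up rate)
        # Disadvantage raises the chance of young fatherhood...
        group = "young" if random.random() < (0.50 if disadvantaged else 0.15) else "older"
        # ...and independently raises midlife mortality.
        # Note: fatherhood timing itself never enters the mortality risk.
        died = random.random() < (0.12 if disadvantaged else 0.05)
        counts[group] += 1
        deaths[group] += died
    for group in ("young", "older"):
        print(f"Midlife mortality, {group} fathers: {deaths[group] / counts[group]:.1%}")

simulate()
# Typical output: roughly 9% for young fathers vs. 6% for older fathers --
# an apparent 'effect' produced entirely by the confounder.
```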

I don’t think the researchers were so careful in their wording – “suggest a causal effect”?  What does that mean?

And the wording of the news release was also not so careful, allowing a researcher quote to be published, unchallenged:

“The findings of our study suggest that the association between young fatherhood and mid life mortality is likely to be causal.”

Again, what does “likely to be causal” mean?

And, whereas in the earlier example of study authors being clear about the limits of observational data – and then the journal news release re-emphasizing that – quality news stories picked up on the clues, in this case just the opposite happened.

The published manuscript used this confusing language suggesting a causal effect; the news release echoed it; and much of the news coverage then echoed it in turn, or at least addressed it inadequately. Examples:

  • The Los Angeles Times converted the findings into “expert advice for would-be fathers: If possible, wait to have kids until you hit your mid-twenties.” No mention of the limits of observational studies. The headline actually read, “To boost odds of a long life, men should delay fatherhood until age 25.” Should delay. Really?
  • The Washington Post actually asked, “Could fatherhood literally kill you?”
  • Thank goodness that Randy Dotinga, for HealthDay, on his own, inserted this: “the researchers only found an association, not a cause-and-effect link, between age of fatherhood and age at death.”

So – two different studies and two different news releases treated very differently by the same organization on the same day – with predictably quite different results in ensuing news coverage.

Addendum:  On Twitter, Dr. C. Michael Gibson, Harvard prof and founder of Wikidoc.org, referred to all of this with the hashtag #LOSER, standing for Limited Observational Study Exercise Restraint.

“Disingenuous denial” of medical research conflicts of interest


A Viewpoint article in the Journal of the American Medical Association (JAMA), “Confluence, Not Conflict of Interest: Name Change Necessary,” caught the eye of Dr. Richard Lehman, who writes the wonderful journal review blog for The BMJ.

First, an excerpt from the JAMA piece to give you a sense of what it’s about:

“The term conflict of interest is pejorative. It is confrontational and presumptive of inappropriate behavior. Rather, the focus should be on the objective, which is to align secondary interests with the primary objective of the endeavor—to benefit patients and society—in a way that minimizes the risk of bias. A better term—indicative of the objective—would be confluence of interest, implying an alignment of primary and secondary interests. In this regard, the individuals and entities liable to bias extend far beyond the investigator and the sponsor; they include departments, research institutes, and universities. The potential for bias also extends to nonprofit funders, such as the National Institutes of Health and foundations, as well as to journals that might, for example, generate advertising revenue from sponsors.”

Lehman called this article “disingenuous denial.” He wrote:

“I think it marks a low point for JAMA. It aligns the journal with the disingenuous deniers who pretend that conflicts of interest don’t arise when authors and investigators write about work that they have a vested interest in promoting. It joins together JAMA with the New England Journal of Medicine which took a similar stance in a series of opinion papers earlier this year. This is a sort of Republican Tea Party of the soul, where you know you are saying something false and daring people to contradict you, knowing that their very engagement is a form of legitimation.”

We wrote about the New England Journal of Medicine series earlier this year.

 


Top journal editors resist transparency


It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public.

Annals — outcomes bait and switch

A basic tenet of clinical trials – Clinical Trials 101, so to speak – is to state ahead of time, in writing, and publicly, in print or on a website, what it is you are testing. What is your endpoint or outcome measure? Is it number of deaths, heart attacks, blood glucose level, depression score? At 3 months, 6 months, one year? Stating your outcome measure – that is, fixing your goalposts ahead of time – keeps people from cheating (being biased, whether consciously or not), from changing outcomes as the trial goes along to match what the data might be telling them. Professionals run these trials; they know this, and there are trial registries where they can document their planned outcomes. Hence the concern with the following pronouncement from the helm of the Annals of Internal Medicine:

 “On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong.”

Of course, not even journal editors’ “long experience” can trump decades of statistical methodology. Unfortunately, this attitude has contributed to the contamination of, and mistrust in, the medical literature, as detailed by Dr. John Ioannidis and others.

The editors’ statement was part of a response to the Centre for Evidence-Based Medicine’s Outcome Monitoring Project (COMPARE) critique of two trials published in the Annals that did not report outcomes consistent with what had been pre-specified. The Centre, based at Oxford University, has taken on the challenging and laudable project of monitoring the number of correctly reported outcomes in clinical trials published in the top five medical journals. When a problem is identified, the Centre team submits a letter to the journal for publication and hopes for a response and remediation of the problem. Outcome switching appears to be rampant: of the 67 trials published in the top five journals in October and November 2015, all but nine had changed their outcomes without telling their readers.

Most clinical trialists – including those at COMPARE, as they make clear on their site – agree that outcomes can sometimes legitimately be switched after a protocol begins; one simply has to be clear about why and when the outcome was changed and document the change.

Many inconsistencies appear in the letter from the Annals editors. For example, COMPARE uses standard, accepted criteria in determining whether endpoints have been pre-specified in a protocol; it’s misleading to call these criteria a “rigid evaluation,” as the editors do. Nor did I see any evidence on the COMPARE website of “labeling any discrepancies as possible evidence of research misconduct.” These editors seem to miss the point of the COMPARE project and, by extension, of the CONSORT guidelines, established almost 20 years ago as standards for reporting trials – standards I thought journal editors had agreed to adhere to. Dr. Ben Goldacre of COMPARE wrote a detailed response addressing additional inconsistencies and misunderstandings.

NEJM – parasites and blindness

Separately, but in seeming complement to the Annals editors’ position on outcome switching, the editors of the New England Journal of Medicine came out against data sharing – just as efforts to make data more available and transparent to more people are making significant progress, and just as the dire need for replication of studies is being increasingly recognized. The NEJM’s retrograde stance on sharing calls to mind their puzzlingly enthusiastic endorsement of tight physician-pharma relations.

What’s wrong with data sharing? The NEJM editors fret that “someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.” It seems the wise journal editors want to shelter the scientific community and the public from the idiots who “may not understand” a dataset (or perhaps shelter their pharma advertisers?). But part of the point of science is to foster differing points of view. The editors give no further reason for their concern at this point but go on to consider questions of data combination relevant to meta-analyses – irrelevant when it comes to looking at single datasets.

Medical research has indeed become something of a swamp, and the NEJM editors warn of the emergence of “research parasites” who “use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited.” Stealing research productivity? I’d like an example or two where this has happened, as many datasets have been made public at this point.

The editors also seem blind to some larger points, such as that trying to disprove or replicate what others have done is a big part of what science is supposed to be about! No one is proposing that an investigator collect data and post it for all the world to see immediately. I would think most people would agree that time would be allowed for a primary team to publish its results. In the case of clinical trials, most immediately for drugs or tests that are marketed, I think it only fair that data be made available so that prescribing physicians have full information about what they are prescribing, and patients about what they are being prescribed. To his credit, NEJM editor Dr. Jeffrey Drazen just today published a clarification on that journal’s policy of data sharing specifically for clinical trials. He said that the journal would require authors to commit to making data available within 6 months of publication. With more eyes on data, tragedies such as those of the over-marketed arthritis drug Vioxx can very likely be truncated before the loss of 55,000 lives and tens of thousands of heart attacks and strokes.

Journal editors certainly have conflicted interests of their own, an obvious one being the huge revenues generated by pharmaceutical company ads and reprints. One is hard pressed to find evidence showing transparency and sharing in science to be a bad thing for anyone without marketing interests. But bias is hard to sort out. Hence statistics and transparency. Cave dwellers I suppose don’t appreciate the shining of light, but many of these have adapted to the point of becoming blind.

Addendum:

John Mandrola, MD, on Twitter, flagged a comment by Dr. David Karpf about problematic aspects of data-sharing proposals. Mandrola’s tweet generated a long stream of responses – for and against – that is worth taking a look at.


Dr. Molchan is a psychiatrist and nuclear medicine physician with extensive experience in clinical research at the National Institutes of Health.

Podcast: Rare disease foundation says medical journal misled patients


This is the second in an unplanned, occasional series about real people who are harmed by inaccurate, imbalanced, incomplete, misleading media messages.  The first was about a man with glioblastoma brain cancer.

People with rare diseases may hang on any crumb of possible good news more than anyone else.  Many have learned how to find and scour medical journal articles for signs of hope.

So it is with people who have primary ciliary dyskinesia or PCD, which, as the National Heart, Lung and Blood Institute explains, “is a rare disease that affects tiny, hair-like structures that line the airways. These structures are called cilia. If the cilia don’t work well, bacteria stay in your airways. This can cause breathing problems, infections, and other disorders. PCD mainly affects the sinuses, ears, and lungs. Some people who have PCD have breathing problems from the moment of birth.”

Early this year, the Journal of Medical Genetics published a paper with this headline:  “Gene editing of DNAH11 restores normal cilia motility in primary ciliary dyskinesia.”



Many people with PCD who saw that journal article headline were excited. The news spread through the PCD community on social media like wildfire. And then today’s podcast guest had to temper that enthusiasm because of what the journal headline did not reveal. Our guest is Michele Manion, the executive director of the Primary Ciliary Dyskinesia (PCD) Foundation. She delivers another important message about how media messages – including, or especially, those from medical journals – can harm people.

It’s important to put a human face on such stories. The picture at left is of Michele Manion’s daughter, now in her 30s. She has some significant lung damage. But Manion says that her daughter is not slowed down too much in everyday life.

Manion also sent me group photos of people with PCD and their families – from New York, North Carolina, Minnesota, and this photo of a group in Saint Louis.

St. Louis PCD group

Real people, hanging on every bit of news offering hope about progress in research.  Media messengers, including medical journal editors, should keep these faces in mind before they publish.

Key quotes from Michele Manion in the podcast:

  • “We’re in this awkward position where we want patients to be excited about future of genetic therapies; to me this delegitimizes what can be done with gene editing.”
  • “We don’t want to discourage patients about research but we were in a position to have to do that and have to explain the limitations of what had been demonstrated vs. what had appeared to have been demonstrated. That was challenging.”
  • “I don’t think the intent is to harm patients. I think that’s part of the problem. The patient as the ultimate end user isn’t even part of the equation. That’s not who they’re trying to get to. They’re trying to get to funders, more press for their institution and somewhere in that thread the patient is completely lost.”

Thanks to The National Institute for Health Care Management Foundation for providing us with a grant to produce these podcasts.

Credit:  podcast editor Cristeta Boarini

Musical bridge in this episode: “Fünf Stücke: Lebhaft” by Paul Hindemith, as played by Academy of St. Martin in the Fields Orchestra

Please note: if you have listened to any of our podcasts and like what you’ve heard, we’d appreciate it if you’d leave a Review and a Rating on the iTunes webpage where our podcasts can be found: https://itun.es/i6S86Qw. (You need to click on the “View in iTunes” button on the left of that page, then find the Ratings and Reviews tab.)

You can now subscribe to our podcasts on that iTunes page or via this RSS feed:  http://feeds.soundcloud.com/users/soundcloud:users:167780656/sounds.rss

All episodes of our podcasts are archived on this page on HealthNewsReview.org.

Xarelto controversy highlights need for more transparency at NEJM


The following is a guest blog post from one of our contributors, Susan Molchan, MD, a psychiatrist in the Washington, DC, area. She’s been closely following, and criticizing, NEJM’s stance on data-sharing and conflicts of interest.


Last week, many media outlets, including the New York Times, reported on concerns about the validity of a huge, and hugely important, clinical trial published in the New England Journal of Medicine (NEJM) involving a drug taken by millions to decrease their risk of stroke.

What’s at issue? Whether the drug at the center of the trial – the now best-selling blood thinner Xarelto (rivaroxaban) – really does have the advantage of being safer than the traditionally used blood thinner Coumadin (warfarin). That’s now open to question, as the BMJ reported, because of a malfunctioning blood test device used in the trial. Those in the Coumadin group may have received higher doses of Coumadin than would have been optimal, leading to an increased risk of serious bleeding. Problems with the device were reported as far back as 2002, and it was recalled by the FDA in 2014.

How the problem could have been caught sooner

The Times’s coverage is tough, but there’s one important point that they and other media outlets didn’t touch on: Had the protocol and data from the trial been made available upon publication or shortly thereafter, there’s a much better chance the problem would have been caught much sooner.

The editors of the NEJM and other journals that publish clinical trials now require that authors make their data publicly available within 6 months of publication. However, the data from this trial apparently aren’t going to see the light of day because study sponsor Bayer wants to keep them under wraps. While they’ve agreed to share data moving forward, Bayer told the BMJ that this policy applies only to “study reports for new medicines approved in the US and the EU after January 1, 2014.”

The Xarelto trial authors published a letter in the NEJM on February 25 to try to allay concerns about the blood test device used in the study. But surprisingly, they didn’t include blood test data that some scientists say may help clarify the comparison between the two drugs. The NEJM editors told the Times that they did not know such data existed at the time they published the letter, and — just as surprisingly — they didn’t seem put out by the fact that the Xarelto authors hadn’t shared the data. Dr. Jeffrey Drazen, the NEJM’s editor in chief, “disputed that the editors had been misled about the data,” according to the Times, “and said it was not relevant to the letter that was published.”

Not the first time for NEJM

This isn’t the first time the NEJM has been embroiled in controversy over missing data. The Times referenced a past situation where NEJM editors knew of missing heart attack data from a clinical trial and allowed the study to be published anyway. The drug that was studied, the painkiller Vioxx (rofecoxib), was eventually withdrawn from the market, but not before leading to an estimated 55,000 deaths, as well as thousands of heart attacks and strokes.

As I pointed out in January, the NEJM editors have recently made a case against transparency and data sharing, in opposition to the push for universal acceptance of these principles by many in the research community. After publicly raising concerns that data-sharing would benefit so-called “research parasites” who might have the temerity to use the data “to try to disprove what the original investigators had posited,” Dr. Drazen quickly clarified that his stance against data-sharing didn’t apply to clinical trials such as the Xarelto study. However, I think it’s fair to say that NEJM’s embrace of transparency is — as the Xarelto and Vioxx situations illustrate — both grudging and limited.  It’s also fair to ask whether NEJM’s foot-dragging on this issue is connected to the journal’s cozy relations with industry, which the editors recently defended in a tone-deaf set of editorials.  

Journals make money from big trials like these

The fact is, big successful industry-sponsored trials bring in hundreds of thousands if not millions of dollars in revenue for journals. In the case of Vioxx, the Wall Street Journal noted, the journal sold 900,000 reprints producing revenue of $697,000, most of them bought by the company selling the drug (Merck). In all likelihood the company selling Xarelto also bought plenty of reprints, although the amount is not made public by the journal. Scientists concerned with this obvious conflict of interest, and the publication bias that it can produce (i.e. the tendency of journals to publish studies with positive results while rejecting negative studies), have suggested that the number of reprints sold or revenue garnered from those sales be published with the articles, just as authors must disclose their conflicts of interest.

Instead of trying to guess who knew what and when in terms of device malfunctions and side effects in clinical trials, or waiting for people to be hurt or die with subsequent lawsuits being the only way for the information to come to light, wouldn’t it be better to just put the data out there from the beginning? All of it?

Update 3/10/16: Journalist Larry Husten, who has been following this story all along, pointed us to some of the early reporting on the faulty device — including from Deb Cohen at BMJ as well as David Hilzenrath and Charles Babcock at POGO — that sparked this story and gave it life.

A single paragraph published nearly 40 years ago contributed to the opioid epidemic. What can we learn from this?


“Falsehood flies, and the Truth comes limping after it; so that when Men come to be undeceiv’d, it is too late; the Jest is over, and the Tale has had its Effect…”       Jonathan Swift, The Examiner, 1710

A very short letter published on January 10th, 1980, in The New England Journal of Medicine may be one of the main tributaries to what is now a flood: North America’s current opioid epidemic. Titled “Addiction Rare in Patients Treated with Narcotics,” the five-sentence letter by Dr. Hershel Jick was “heavily and uncritically cited as evidence that addiction was rare with long-term opioid therapy,” according to a recent bibliographic analysis by Toronto pharmacologist Dr. David Juurlink.


As Juurlink wrote in NEJM last week, the 1980 letter was cited in other medical journal articles over 600 times, contributing to a narrative that “allayed prescribers’ concerns about the risk of addiction associated with long-term opioid therapy.”

This new finding of the letter being repeatedly – and misleadingly – cited as evidence was covered by several news outlets, including Stat News, Voice of America, Associated Press, and HealthDay, among others. But the very fact that this one small letter spun so far out of control made me wonder: Where were the journalists when this misinformation was being spread, over and over, by medical researchers?

Juurlink, who is with the division of clinical pharmacology and toxicology at Sunnybrook Health Sciences Centre in Toronto, knows a thing or two about opioids. The last time we discussed opioids, Juurlink laid out a few essential points for journalists to keep in mind – in particular, the commonly addictive nature of opioids and the danger of using high-dose opioids, but also how the crisis is finally prompting the medical community to respond in positive ways to treat it.

‘The need for diligence’

The letter’s use of the word “rare” to describe the rate of addiction referred to a hospital setting of 11,000+ patients in which only four cases of opioid addiction were seen. The problem is that while the letter was cited over 600 times as proof of the safety of opioids, few writers and researchers citing it went back to see what it said in its proper context: that the letter’s authors were referring to patients treated in a hospital under the supervision of physicians – and given short-term therapy – for acute-care situations, like surgery.

This is a far cry from community-based settings (where the majority of opioids are prescribed) where patients with lower back pain or sprained joints may be given a long-term opioid prescription that could result in a potential addiction.

Still, the letter was frequently cited as proof that opioids weren’t addictive. It was even used in a promotional video produced by a pharmaceutical manufacturer to convey how safe the opioids were.

Fear of undertreating pain may have fueled misinformation

“Our findings highlight the potential consequences of inaccurate citation and underscore the need for diligence when citing previously published studies,” Juurlink wrote.

For some context to this rethink of the dangers of opioid addiction, I spoke to Dr. Michael Bierer, an addictions specialist from Boston who’s also a regular HealthNewsReview.org contributor. He told me that when he was in medical school in the early 1980s there was an undercurrent that physicians needed to do a better job managing patients’ pain.

“There was this sense of the undertreatment of pain and a fear of using opioids,” Bierer said. The NEJM letter “satisfied a need in people justifying the lowering of the threshold” for treating pain. Having said that, what surprises him most is the impact of the letter and how broadly it was cited.

“People seemed to accept this reference without looking at what it said. As well, letters are not peer-reviewed and aren’t given the same scrutiny as a research article,” he noted.  

Lessons to be learned 

Perhaps we can use this as a cautionary tale for how any of us should check any citations or claims “cited” from prestigious journals. Here are some things to consider:

What kind of citation is it? A letter published in a medical journal is so far from a double-blind randomized controlled trial that it might as well be in a different galaxy – and thus it should never exert the same gravitational pull.

What is the context? As many as 80% of the citations didn’t note that the “rare” addiction seen in Jick’s patient set may have been largely due to the fact that those patients were in hospitals – a different sort of patient altogether from those seen in the community.

Does the pedigree really matter? When you cite a “study in the New England Journal of Medicine,” this could be misleading on a number of levels. We humans rely on shorthand heuristics to help guide us through complicated matters. An article in the NEJM or The Lancet may carry a high-level imprimatur of reliability and validity, and so we may not look any further. But even prestigious journals can be fallible, and their studies can be misquoted or inappropriately cited. (Remember vaccines and autism?) Be sure to carefully vet what you’re citing.

Even the work of independent ‘experts’ can be misconstrued. Jick and his graduate student co-author were not on the payroll of the drug companies. “I’m essentially mortified that that letter to the editor was used as an excuse to do what these drug companies did,” said the author of the letter, Dr. Jick, a drug specialist at Boston University Medical Center, to the Associated Press.

The note to file here: Even if the “experts” being cited are independent, it doesn’t mean their words can’t be twisted out of context and used to support a marketing message. As Jick said, the drug companies “used this letter to spread the word that these drugs were not very addictive.”


A tale of two (BMJ) studies: one gets attention, the other gets neglected. Why?


Within the past couple of months the BMJ published two separate observational studies looking at how two very different lifestyle factors might impact memory and dementia.

Both studies draw from the same group of research subjects: the Whitehall II cohort, which has followed roughly 10,000 British civil service workers for the past 30 years or so. Here’s what the studies investigated:

Is moderate alcohol consumption a risk factor for, or protective against, cognitive decline?

Does physical activity protect against dementia?

This captured our attention for two reasons.

Do positive findings trump negative ones?

First, the alcohol study – which found that moderate alcohol consumption was linked to some tissue wasting in one of the key memory centers of the brain (the hippocampus) – was widely covered by the media. But the exercise study – which didn’t find any association between physical activity level and cognitive decline – received no mainstream coverage we could find.

Could it be that the alcohol study – with its “positive finding” (that some people might erroneously equate with showing cause and effect) –  is somehow more attractive, dramatic, or reportable than the physical activity study, with its “negative finding” (that some people might erroneously equate with “finding nothing”)?

In other words, is it possible that a headline which suggests moderate drinking is harmful to the brain is more click-worthy than one which can “only” claim physical activity level has no effect on memory? There were dozens of headlines featuring the alcohol study that certainly reinforce that notion. Here’s a sampling:

Newsweek: Alcohol & Brain Damage: Moderate Drinking Linked to Cognitive Decline

CBS News: Even moderate drinking could harm the brain

The Guardian: Even moderate drinking can damage the brain, claim researchers

Japan Times: Moderate drinking linked to brain damage: study

This is as good a point as any to raise the significant limitations of both these studies. Because both studies are observational, they cannot make statements regarding cause and effect. Likewise, both studies rely on self-reporting (of drinking and exercise patterns), which is notoriously unreliable.

Furthermore, the cohort group from Whitehall II isn’t exactly representative of the public at large because it’s biased toward well-educated, middle-class white men.

Can news releases dictate conventional wisdom?

What I haven’t told you yet – and it could certainly help explain the disproportionate media coverage mentioned above – is that the BMJ issued a news release for the alcohol study but not the activity study.

Why is that?

A Twitter reaction to the physical activity & cognitive decline study in BMJ

If it’s for the same reasons noted above (i.e. that “positive” results are more attractive than “negative”) then it raises compelling questions regarding how media coverage — and therefore public opinion, conventional wisdom, and even public policy — can be dictated, to some extent, right from story inception.

Not just which studies get published, but which studies get released for general public consumption.

Dr. José Merino is the US research editor for The BMJ. In my email exchanges with him he quoted an email from his PR manager who wrote that the journal sends out news releases based on the following:

“Press releases are designed to generate news, and content is selected on the basis of its news potential. We make decisions based on what we think will be of interest to journalists – who will hopefully want to cover the story for print, broadcast, or online.”

I asked Merino for his take on why some studies generate news releases, others don’t, and how this impacts media coverage.

“Perhaps [in this case] it’s related to the topic of the paper. Is alcohol more interesting than exercise?  Studies more likely to be picked up by the media are those that find an association between exposure and outcome (‘positive’ studies), as well as those that have a press release. These are probably linked because it is possible that journals may be more likely to issue a press release when a study finds an association.

In this case it’s hard to know whether a press release would have meant that the exercise paper would have been picked up by more outlets, but we expect it would have. This may represent another aspect of positive publication bias: we publish a ‘negative’ study but don’t press release it, and even if we had done so, it’s possible that the press would not have picked it up, or would have picked it up less than the ‘positive’ one.”

Merino acknowledges the possibility that both publishers and readers may have an unconscious — and even conscious — bias against “negative” studies, and thinks it would make for a compelling study.

Why This Matters

  • News releases are powerful. Maybe increasingly so, as time-pressured journalists count on them for story ideas and can use their content – sometimes exclusively – for background information and even quotes. Therefore, news releases often function as gateways and gatekeepers for the stories the public has access to.
  • So-called “negative” results are just as important as positive ones, yet we often don’t hear about them. People make decisions about their health based on what they read in the news; if that news is slanted toward positive findings, the public doesn’t have a solid foundation for making good choices.
  • Using clickability as a determinant of newsworthiness may sell ads, but it shortchanges informed public discourse. Even worse, it can mislead and cause real harm.

I’m certainly not accusing the BMJ of intentionally choosing to highlight just positive findings. Nor am I saying the news coverage was universally superficial.

What I am pointing out is a flow of information from scientific studies to public consumption that can be modified or disrupted at several key points along the way – from the medical journals that publish studies to the headlines most of us read. 

And I think it should give us all pause to realize that the person who controls the faucet is often a PR manager. This is not a scientist or health care professional. This is someone motivated primarily by the “news potential” of the study, and not necessarily the overall health of the public.

And I think it is imperative to bear in mind that these editorial choices ultimately do influence the health care choices made by real people.
