But, for obvious reasons, reader polls of this sort occupy the very bottom of the evidence food chain. The one thing going for my findings is they actually tell us something. Too often, it's the other way around - I encounter studies at the very top of the food chain that tell us nothing. A clinical trial investigating Johnson & Johnson's Invega for treating schizoaffective disorder offers an excellent case study ...
Introducing Randomized, Double-Blind, Placebo-Controlled Clinical Trials
In 2009, in my capacity as a journalist, I attended the 12th International Congress on Schizophrenia Research in San Diego. The top brain scientists in the world were presenting and in attendance, including 2000 Nobel Laureate Arvid Carlsson. I recall going to a table with my morning coffee and greeting the researchers there with, "Hi, I'm the only C student at this table."
Part of the Congress involved poster sessions, where scientists stand in front of blow-ups of their latest findings. This affords an invaluable opportunity for the researchers at the conference to engage their fellow researchers in one-on-one discussion. The first three or four days of posters involved cutting-edge brain research. The last day of the conference featured posters on treatment. Instantly, I felt I had been sucked back in time a half century.
Some very enthusiastic Johnson & Johnson people informed me of a new study of theirs that shed some very important new light on schizoaffective disorder.
Really? I thought. Schizoaffective has to be the most confusing diagnostic classification in the entire DSM. Naturally, I was interested. I looked at the poster. "A Randomized, Double Blind, Placebo-Controlled Study of Flexible Dose Paliperidone ER in the Treatment of Patients with Schizoaffective Disorder," I read.
I smiled politely, engaged in very short chit-chat, then made my way to other posters. In one glance, I knew, the study told me nothing. Here's the deal:
Randomized, double-blind, placebo-controlled studies are the gold standard of treatment research. Perhaps the most celebrated example of a clinical trial is James Lind's 1747 experiment involving 12 scurvy-stricken sailors. He divided his patients into six pairs. Five of the pairs showed no improvement. Of the pair receiving oranges and lemons, one had fully recovered after five days, the other had almost recovered. (Then they ran out of fruit.)
A more refined study would have pitted a much larger group of orange-and-lemon people against a proportionally large group taking a look-alike, taste-alike placebo (say a citrus pill vs a sugar pill). To keep the study honest, one would have made sure the clinicians handing out the pills had no way of knowing which pill was which (the double-blind).
I knew at a glance the J&J people had all their "i"s dotted and "t"s crossed. Studies of this nature are immensely expensive - my best guess is in the $20 million range - designed and overseen by some of the smartest people in the world, an immense and highly complex undertaking where an infinity of things can go wrong (and often do).
The catch? Even though the study was conducted scientifically, to rigorous standards, it was not a scientific study. It was a marketing exercise.
Science vs Marketing
Paliperidone is the generic name for J&J's FDA-approved Invega, which is Son of Risperdal. Risperdal represents the first of the new-generation atypical antipsychotics (discounting Clozaril) introduced in the early 1990s and hyped as superior to old-generation antipsychotics.
We already know that as problematic as antipsychotics may be, they are very effective in knocking out psychosis. The first studies involving Thorazine proved that seven decades ago. Since then, antipsychotics have been the treatment of choice for schizophrenia and any condition involving psychosis, including bipolar and schizoaffective, accounting for some $20 billion annually in sales.
So what's new? Nothing much, really, except that no drug company had ever undertaken a trial on a schizoaffective population. What's the point? Is a clinician, reading this study, really going to change her practice by putting all her schizoaffective patients on an antipsychotic? She already has them on an antipsychotic.
But could the study possibly be persuasive in getting a clinician to change her practice to diagnosing more of her patients with schizoaffective? And then convince her to make Invega her go-to med rather than say Abilify or Zyprexa? Ah, a marketing question.
Primary vs Secondary Outcomes
The study, which made its debut at this particular poster session, found Invega worked better than a sugar pill in reducing PANSS scores in patients with schizoaffective disorder.
The PANSS (Positive and Negative Syndrome Scale) is a 30-item rating scale used to assess patients for schizophrenia. But there is a twist with patients with schizoaffective, as they also experience mood symptoms. Therefore, the study also measured for depression (using the HAM-D) and mania (using the Young Mania Rating Scale). But clinical trials are only allowed to stake their claim on one measure, so reduction in PANSS scores became the "primary outcome," also known as "primary endpoint."
The way I heard this explained to me: Suppose a study measured for five different results, each with only a 20 percent chance of coming up positive. On average, one of the five would hit paydirt by chance alone (the odds of at least one positive result work out to about two in three). Forget the four negative findings - in an unregulated world, the study sponsor would simply tout the one positive outcome as evidence of a successful trial. Even in our regulated world, it is common practice for drug companies to use these positive secondary endpoints to "spin" the results of a failed trial.
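The multiple-comparisons arithmetic is easy to check for yourself. A minimal sketch (the 20-percent-per-outcome figure is the hypothetical from the example above, and the outcomes are assumed to be independent - strictly speaking, the chance of at least one hit is about two in three, not a sure thing):

```python
def chance_of_at_least_one_hit(p_single, n_outcomes):
    """Probability that at least one of several independent outcomes
    comes up 'positive' by chance alone: take the complement of the
    chance that ALL outcomes come up negative."""
    return 1 - (1 - p_single) ** n_outcomes

# Five outcomes, each with a 20 percent chance of a positive result:
print(chance_of_at_least_one_hit(0.20, 5))  # ~0.672, roughly two in three
```

This is exactly why regulators make sponsors pre-register a single primary endpoint: the more outcomes you measure, the more likely something comes up "positive" by luck.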
There is value in data from "secondary endpoints," but not when you use secondary endpoint logic to argue that your losing football team really won the game. Sure, your team ended up with fewer points than your rivals, but - hey - look at the stat sheet for total yardage. Our team killed it in that category.
This is why you set your criteria for success in advance. Then you play by the rules. No claiming you won based on cherry-picking random results.
Special Study Problems
Okay, now that we've agreed that the primary outcome is the one that matters, let's round up our schizoaffective patients and get crackin'. Um, define schizoaffective. From the DSM-IV:
An uninterrupted period of illness during which, at some time, there is either a Major Depressive Episode, a Manic Episode, or a Mixed Episode concurrent with symptoms that meet Criterion A for Schizophrenia.
Clear as day, right? The DSM-5 work group responsible for coming up with something better actually said, "the current DSM-IV-TR diagnosis schizoaffective disorder is unreliable," then did not come up with something better.
Are you following me? The world's leading psychiatric authorities flat out admitted that they don't know what the hell schizoaffective is. Or, to be more precise, that it is impossible to get two doctors to agree on what they are looking at.
This is not an isolated acknowledgement. The psychiatric literature fully details the confusion, and I have been privy to heated debates at conference seminars. Everyone more or less agrees that the diagnosis has something to do with that undelineated middle ground where bipolar and schizophrenia bleed into one another, but what is it exactly? Schizophrenia lite? Bipolar heavy? Some kind of schizophrenia-bipolar mix? Or a stand-alone illness in its own right?
But it's in the Bible - the diagnostic bible - so it must be true, right?
So here we have the absurdity of J&J attempting to take precise measurements concerning a clinical condition that no one can pinpoint, much less define. In numerous articles on this site, I make a case for the absurdity of antidepressant trials in trying to treat "depression." Like schizoaffective, we sort of know what depression is, but it is impossible to pin down. The difference here is that whereas the psychiatric establishment makes a credible show of claiming some expertise in depression, with schizoaffective it has publicly thrown in the towel.
Let's go back to our football analogy. Imagine observers from a distant star system at the Super Bowl. We earthlings clearly know what we are observing. But the alien mind is having trouble distinguishing football from rugby from soccer. They only see men-chasing-a-ball behavior.
Not only that, they have trouble distinguishing between the on-field action and what goes on in the stands.
But they need to report back to their home planet and they need to come up with convincing numbers. So they do a "football" study that includes in its sample soccer and rugby games, and as their primary outcome they decide to report on ice cream sales.
Since they hail from a far more advanced civilization than ours, their study is technically flawless. I trust I've made my point.
For this particular study, J&J succeeded in rounding up 311 patients who met DSM-IV criteria from more than 40 centers worldwide - in India, Russia, Ukraine, and all across the US. An impressive logistical effort, no doubt. But maybe you see some problems. Maybe doctors in Ukraine have a different way of looking at schizoaffective than doctors in India. Maybe the doctors in Russia play fast and loose with the rules about who gets into the study. On and on it goes.
Keep in mind, in any meds trial both the drug group and placebo group need to be as close to an exact match as possible. The "randomization" in "randomized double-blind placebo-controlled" trials means patients are assigned to different groups by chance rather than choice (thereby preventing abuses such as clinicians selecting good prognosis patients for the drug group). But across 40 different centers worldwide, with an infinity of variables, the randomization process is going to be problematic.
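What randomization means in practice can be sketched in a few lines. This is an illustration only - real multicenter trials typically use blocked or stratified randomization per site, and the patient IDs here are made up:

```python
import random

def randomize(patient_ids, seed=None):
    """Assign each patient to 'drug' or 'placebo' purely by chance,
    so clinicians cannot steer good-prognosis patients into the drug arm."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)           # the coin flip, applied to the whole roster
    half = len(ids) // 2
    return {"drug": ids[:half], "placebo": ids[half:]}

# 311 patients, as in the J&J trial (IDs are hypothetical):
groups = randomize(range(1, 312), seed=42)
print(len(groups["drug"]), len(groups["placebo"]))  # 155 156
```

With one center and a single roster, this works fine. Spread the same process across 40-plus centers with different doctors, languages, and diagnostic habits, and "random" starts doing a lot of heavy lifting.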
There was an additional complication to this trial: The patients were also on mood stabilizers and/or antidepressants. Still, antipsychotics have a very good trial track record in the PANSS challenge. Even accounting for all the special challenges of this particular trial, how hard can a glorified counting exercise be, right? Did someone say "discontinuation"?
High Drop-Out Rates
It turned out about 40 percent of the patients dropped out of the study, split about equally between the Invega and placebo groups. This may be a secondary endpoint, but in this case the secondary endpoint trumps the primary one. Let's use our football analogy. Make it college football. Your team lost, no excuses. But it later comes to light that there had been recruitment violations at the winning college. The "win" no longer counts.
So here is what this particular drop-out rate tells us: That when a doctor prescribes this med for this illness there is a four in ten chance of failure, even before the patient walks out the door. Let's be generous and assume that three of the six remaining patients have an improved outcome. Thus, only three of the ten we started with experienced success.
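The back-of-envelope arithmetic, spelled out (the "half of completers improve" figure is my generous assumption from above, not a trial result):

```python
started = 10
dropped = 4                     # the roughly 40 percent who quit the trial
completers = started - dropped  # 6 patients left
improved = completers // 2      # generous assumption: half of completers improve
print(improved / started)       # 0.3 - only three in ten of those who started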
This may be justifiable, but let's at least be candid about what we are dealing with. I would argue that with drop-out rates this high, no sponsor can make a credible claim for the drug's efficacy.
But never underestimate the power of statistics. To compensate for those who quit the study, the J&J people employed a standard statistical fiction known as "intent-to-treat analysis," which includes the drop-outs in the final tally, as if these patients had completed the trial. One way of implementing this is with "last observation carried forward" (LOCF). Thus, if a patient drops out at week two of a six-week trial, his or her two-week result is "carried forward" to the final week.
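LOCF is simple enough to sketch. Assuming weekly scores are recorded as a list, with None marking visits after the patient dropped out:

```python
def locf(weekly_scores):
    """Last observation carried forward: fill each missing visit
    with the most recent recorded score."""
    filled, last = [], None
    for score in weekly_scores:
        if score is not None:
            last = score       # remember the latest real observation
        filled.append(last)    # real score, or the carried-forward one
    return filled

# A patient who dropped out after week 2 of a six-week trial
# (scores are hypothetical PANSS-style totals):
print(locf([92, 88, None, None, None, None]))  # [92, 88, 88, 88, 88, 88]
```

Notice what the carry-forward quietly assumes: that the patient's week-two score is a fair stand-in for how they would have fared at week six.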
There is a legitimate reason for doing this. Patients who drop out of drug trials are more likely to be bad responders. Thus, if only good responders stay in the study, the results are likely to overstate the benefits of the test drug.
But a 40 percent drop-out rate raises the obvious question of whether intent-to-treat can save the day. This was a short-term study. Had this been a long-term study (say over one year), we would expect more of an 80 percent drop-out rate (which I happened upon in a Zyprexa study). On paper, Invega may be the best drug in the world for treating schizoaffective, but what use is it if four in 10 who take it immediately give it the thumbs-down?
But who listens to me? Anyway, now that we've jumped through all the various statistical hoops, we're finally ready to get down to some serious counting. Oops! - did someone say placebo?
Placebos, Damned Placebos
Placebos are the bane of all clinical trials, especially for psychiatric meds. In an earlier trial conducted by J&J, the placebo group fared almost as well as the Invega group. The second time out, those on the Invega did no better than before, but those on the placebo did a lot worse (perhaps due to a bad batch of placebos?). Thus, the J&J investigators had "clear separation" from the placebo group.
Accordingly, instead of tearing their hair out, J&J had reason to celebrate. All their hard work had paid off. Between the two trials, they now had an airtight case to take to the FDA. Indeed, the trial results came through in February 2009, J&J debuted the study as a poster in April, and in July the company proudly announced in a press release:
"The U.S. Food and Drug Administration (FDA) today approved the first and only [my emphasis] antipsychotic for the acute treatment of schizoaffective disorder."
The following year, a feature titled "Differential Diagnosis and Therapeutic Management of Schizoaffective Disorder" appeared as a CME supplement in the October 2010 Current Psychiatry, which goes out free to psychiatrists. The piece was "supported" by Janssen (owned by J&J). The article didn't tell us anything we didn't already know about schizoaffective, but it did let us know that:
Only 2 randomized, double-blind, placebo-controlled studies of an atypical antipsychotic (paliperidone extended-release, now FDA-approved for the treatment of SAD), have been conducted in a well-defined SAD patient population.
Essentially, J&J was marketing the illness rather than the product. The subtle suggestion was that maybe clinicians should be changing some of their diagnostic calls from bipolar to schizoaffective. Now that it had FDA approval, J&J was able to mobilize its drug reps to visit clinicians and reinforce that point, leaving behind article reprints and free Invega samples.
Bogus study, great marketing. Mission accomplished.
Based on a series of blogs, Feb, 2011, republished as two articles, April 13, 2011, reworked into one article, Jan 12, 2017.