At the time this article was written, the website c19ivermectin.com listed 73 clinical trials of ivermectin and COVID-19, involving 56,774 patients. Thirty‐one of the studies (6,828 patients) were randomized, controlled trials. Fifty‐two (18,768 patients) were peer‐reviewed.
A few of the studies have been challenged and even retracted for shoddy work (perhaps putting it kindly), but most have not; we will look more carefully at these studies below. Still, the aggregate results are noteworthy. The treatment group had 59% lower mortality than the placebo or standard therapy control group (examined in 34 studies involving 44,061 patients), 48% lower use of mechanical ventilation (12 studies; 2,316 patients), 57% fewer intensive‐care‐unit admissions (seven studies; 21,857 patients), 45% fewer hospitalizations (19 studies; 11,190 patients), 71% fewer cases (13 studies; 11,523 subjects), 52% faster recovery (23 studies; 3,664 patients), and 57% improved viral clearance (22 studies; 2,614 patients).
The FDA has approved many drugs based on less clinical research. When one of us (Hooper) worked at Merck three decades ago, the ACE inhibitor Vasotec (enalapril), one of the company’s biggest drugs, was tested in 2,987 patients before receiving FDA approval. The statin drug Mevacor (lovastatin), another of Merck’s big drugs at the time, was tested in 6,582 patients. Back then, that was considered to be a massive trial.
This is from Charles L. Hooper and David R. Henderson, “Ivermectin and Statistical Significance,” Regulation, Spring 2022.
On Scott Alexander
Last November, Scott Alexander, a psychiatrist and author of the science‐heavy blog Astral Codex Ten (and, before that, Slate Star Codex), published an extensive literature review of 11 ivermectin–COVID studies that he deemed to be of high quality. He tentatively concluded that, when ivermectin is given early in an infection, the studies indicate the drug reduces mortality by 40 percent, a result that is just barely statistically significant (p = 0.04). Yet he refrains from endorsing the use of the drug. Why?
To explain why, he presents a hypothesis and a prejudice (more on the prejudice below). The hypothesis is what we noted earlier: ivermectin’s benefit may come indirectly, by ridding the body of parasites. The mechanism runs through corticosteroids, a common treatment for COVID. When patients don’t have parasites, giving them corticosteroids generally helps. But when patients do have parasites, corticosteroids can cause a medical condition called hyperinfection syndrome. Hence, by clearing Strongyloides stercoralis worm infections, ivermectin may prevent problems with corticosteroid therapy, making it appear that ivermectin helps with COVID.
However, when the larger pool of studies is examined, they show a benefit to ivermectin of 72% in areas of low parasitic prevalence, while in areas with high prevalence the benefit is 55%. This is the exact opposite of what Alexander conjectured. Further, there is some evidence that the difference in the two areas can be partly explained by considering treatment delays — it’s better to give ivermectin early in the infection — and dosage size. In the geographic areas where the drug did better, it tended to be given earlier and at higher doses.
On Statistical Significance
Consider one COVID patient outcome: the need for invasive ventilation. In a randomized, double‐blind, placebo‐controlled clinical trial by Ranjini Ravikirti et al., of 55 patients in the ivermectin arm, only one patient needed invasive ventilation while five in the placebo group of 57 did. In other words, it appears that ivermectin reduced the need for ventilators by 80%. Yet, the study’s authors concluded, “This study did not find any benefit with the use of ivermectin in … the use of invasive ventilation in mild and moderate COVID-19.”
But one can reasonably conclude that the authors did find a benefit. A close look at their data shows 91.2% confidence that there was a difference. Because the authors used the 95% threshold, they stated that they had found no benefit.
Similarly, an observational controlled trial of 288 patients found that treatment with ivermectin allowed twice as many patients to improve and get off mechanical ventilators (36.1% vs 15.4%). But authors Juliana Cepelowicz Rajter et al. report no benefit to ivermectin because they were “only” 93% confident of the difference.
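As a sketch of the arithmetic behind such claims, the ventilation counts from the first example above (1 of 55 vs. 5 of 57) can be run through a standard contingency-table test. The exact figure depends on which test is used; Fisher’s exact test, shown here, is more conservative than the normal approximations behind the confidence figures quoted above, so the p-value will not match the 91.2% figure exactly:

```python
from scipy.stats import fisher_exact

# Ravikirti et al.: 1 of 55 ivermectin patients and 5 of 57 placebo
# patients needed invasive ventilation.
table = [[1, 55 - 1],   # ivermectin arm: events, non-events
         [5, 57 - 5]]   # placebo arm:    events, non-events

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.2f}, two-sided p ~ {p_value:.3f}")
# The p-value exceeds 0.05, so under the conventional threshold the
# authors report "no benefit" despite the fivefold difference in events.
```

The odds ratio comes out well below 1 (fewer events on ivermectin), yet the small trial size keeps the result above the 0.05 cutoff, which is exactly the tension the article is describing.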
Scott Alexander Succumbs to Social Desirability Bias
He [Scott Alexander] further acknowledges that “if you say anything in favor of ivermectin, you will be cast out of civilization and thrown into the circle of social hell reserved for Klan members and 1/6 insurrectionists.” Not wanting to be relegated to this group of undesirables, he withholds his recommendation of ivermectin. In short, the scientific evidence led him to a tentative conclusion that he does not want to embrace because of social desirability bias. What happened to “follow the science”?
Read the whole thing.
READER COMMENTS
steve
Mar 24 2022 at 11:08am
“Consider one COVID patient outcome: the need for invasive ventilation. In a randomized, double‐blind, placebo‐controlled clinical trial by Ranjini Ravikirti et al., of 55 patients in the ivermectin arm, only one patient needed invasive ventilation while five in the placebo group of 57 did. In other words, it appears that ivermectin reduced the need for ventilators by 80%. Yet, the study’s authors concluded, “This study did not find any benefit with the use of ivermectin in … the use of invasive ventilation in mild and moderate COVID-19.”
This is a pretty good example of so many studies in the meta-analysis. We have millions of cases of COVID, so there is no shortage of cases to study, yet we are asked to accept studies too small to give us significant results and, even worse, to accept them when they are shown not to be significant. This reminds me so much of a study long ago wanting us to change our whole approach to C-sections based upon a study with 80 patients. It reached “significance” at p=0.05, but given that C-sections are one of the most common procedures in the country, it made no sense to change based upon such a small study, so we didn’t; later studies, larger and better done, showed that study was wrong.
Which gets you to the meta-analysis issue. Throwing a bunch of bad studies together in order to get you large numbers does not get you a good study. The person doing the meta analysis gets to choose which studies to put into it which introduces bias and/or errors. You risk too much heterogeneity in study design. If you are putting in a bunch of studies without controls you lose a lot of value. Throw in the known bias towards publishing only studies with positive outcomes and I hope people are leery of meta-analysis.
Also, Hooper misrepresents what Alexander said. Here is the quote.
“The only meta-analysis that doesn’t make these mistakes is Popp (a Cochrane review), which is from before Elgazzar was found to be fraudulent, but coincidentally excludes it for other reasons. It also excludes a lot of good studies like Mahmud and Ravakirti because they give patients other things like HCQ and azithromycin – I chose to include them, because I don’t think they either work or have especially bad side effects, so they’re basically placebo – but Cochrane is always harsh like this. They end up with a point estimate where ivermectin cuts mortality by 40% – but say the confidence intervals are too wide to draw any conclusion.”
Finally, the kind of high-quality studies we would hope to see are coming out. They are not showing any positive effect for ivermectin. Which leads us back to where you should start, i.e., the basic science. In the in vitro studies that showed ivermectin had a good effect, it was at levels much higher than people can tolerate without harm. We already knew that. We also know that a drug that works 59% of the time is basically a miracle drug. You wouldn’t really need to do studies as it would be so obvious. What we are left with is that either it does not work or it has a pretty small effect.
Steve
steve
Mar 24 2022 at 11:09am
I really did use paragraphs.
Dylan
Mar 24 2022 at 3:51pm
Well said.
Roger
Mar 24 2022 at 12:38pm
The WSJ ran an article this week reviewing a study of ivermectin that claims to show it’s no better than a placebo. Have you seen that article?
Charles Hooper
Mar 24 2022 at 8:21pm
The paper referred to in the WSJ has not been published. A couple of points.
It’s one study among many. The 1,358 patients in this trial account for slightly over 1% of the patients who have been involved in ivermectin studies for COVID-19. So even if this trial concluded that ivermectin didn’t work, it’s not the only information we have. We have 81 studies by 782 scientists involving 128,840 patients.
Edward Mills was involved in the Together Trial of ivermectin for COVID-19. There are aspects of the Together Trial that brought criticism from other researchers. For example, the Together Trial removed mortality and adverse event outcomes from the study mid-trial. Further, the trial’s randomization didn’t match the protocol. This turned out to be important because the control patients were studied when earlier SARS-CoV-2 variants were common while ivermectin was studied primarily during the period when the Gamma variant was predominant. In other words, it wasn’t apples to apples.
The Together Trial has a number of other problems that received a lot of attention from critics: very late treatment, trial was run in a location with high community use of ivermectin, administration on an empty stomach (equals low dose), treatment limited to three days, etc. The study investigators made previous public and private comments that suggested an anti-ivermectin bias.
The Together Trial still found that ivermectin reduced the rate of mortality by 18%.
gwern
Mar 24 2022 at 1:30pm
No. That is not what confidence means. It means that ‘5% of the time, conditional on the hypothesis of the null being true, you would observe data as or more extreme’. It does not mean anything at all which can be described as ‘the probability that the difference is the product of chance alone’. This is inverting the conditional (error #1 in WP’s list). You are trying to use confidence as the Bayesian posterior estimate that it was invented to not be. See (a little ironically) Ioannidis’s pretty lucid explanation way back when of why a ‘5% false positive’ rate leads to most studies being wrong when you start flinging random hypotheses at the wall to see what sticks, as indeed people flung a lot of stuff at the wall for COVID. Note that the base rate of effective COVID drugs is somewhere around 1 in 1000, and the base rate of effective COVID drugs as effective as ivermectin is claimed to be is lower still. Now plug that into the formulas for power and false positive rates…
Then, adjusting even further for everything we know about small-study biases, the decline effect (we observe historically tons of trials which turn in much higher levels of ‘confidence’ where the effect size turned out to be far smaller or zero), the considerable level of systematic bias in trials that inflates effects (particularly non-pre-registered studies), the intense ideological polarization around the desire to get ivermectin results, and the remarkable number of already proven fraudulent ivermectin studies implying more undiscovered ones and very low standards overall, we would look at these effect sizes and be far, far less than ‘91% confident’ or ‘93% confident’ of a non-zero causal effect. What we observe is what we would predict given the extremely high prior probability that ivermectin, like almost all of the ~20,000 other FDA-approved drugs, does nothing against COVID; on Bayesian grounds, this is little evidence for ivermectin having any efficacy, and certainly not enough to boost it from a one-in-thousands probability to ‘93%’ or ‘95%’.
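The base-rate argument above can be made concrete with a back-of-the-envelope Bayes calculation. The numbers below (base rate, test power) are illustrative assumptions, not measured quantities:

```python
# Probability that a drug showing a "significant" result actually works,
# given an assumed base rate of effective COVID drug candidates.
base_rate = 0.001   # assumed: ~1 in 1,000 candidate drugs is effective
alpha = 0.05        # false-positive rate at the conventional threshold
power = 0.80        # assumed chance a real effect reaches significance

# P(significant) = P(works)*power + P(doesn't work)*alpha
p_significant = base_rate * power + (1 - base_rate) * alpha
# Bayes: P(works | significant)
posterior = (base_rate * power) / p_significant
print(f"posterior probability the drug works: {posterior:.1%}")
# With these assumptions, even a significant result leaves the posterior
# under 2%: most "significant" findings are false positives.
```

This is the sense in which a 91% or 93% “confidence” figure cannot be read as a 91% or 93% probability that the drug works.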
Charles Hooper
Mar 25 2022 at 1:47pm
Thanks for your comment. I’m aware of the work of Ioannidis and Colquhoun. All I can say in my defense is that it’s hard to properly explain a concept such as statistical significance and p-values simply, briefly, and clearly in a popular article. Just look at Wikipedia:
Yeah, right. You and I might understand that, but the regular reader?
Try explaining that to your mother. “Here, Mom, let’s say this salt shaker is the null hypothesis and the pepper shaker is the data…”
I think this proves how weak a concept statistical significance is. It’s really hard to explain without resorting to technical language.
Jon Murphy
Mar 24 2022 at 1:52pm
Lots to like in this article. In particular, I think your discussion of statistical significance is excellent. Statistical significance, specifically p<0.05, has gained an almost religious reverence. But it’s important to note two things:
First, the choice of statistical significance level is arbitrary. p<0.05 is just a rule of thumb.
Second: p-values are just one of many, many, ways of hypothesis testing.
steve
Mar 24 2022 at 4:21pm
I think the p-value has been misunderstood and overrated for a long time. The link goes to a very nice, earlier attempt to address the issue. I think he might be wrong with his claim of at least 30%, but it is nonetheless a good article.
https://royalsocietypublishing.org/doi/10.1098/rsos.140216
Steve
Knut P. Heen
Mar 25 2022 at 7:00am
A 5 percent significance level implies that 1 study in 20 will come up as a false positive when the null is true.
Suppose you find an outcome variable, for example oil price. Then you run 20 regressions trying to explain the variation in the oil price by the number of cars crossing 20 different bridges every day. One of those bridges will most likely produce a coefficient that is statistically significant at the 5 percent level.
What did you find out? You found that 1 study in 20 is a false positive.
Now, let 20 000 researchers run one regression each. 19 000 of them will throw the study away. 1000 will publish a false positive.
Start reading journals, and almost 100 percent will be false positives.
Suppose we change the threshold to 1 percent. This will force the researchers to find 100 bridges to generate a false positive at the 99 percent level. The journals will still be filled up with almost 100 percent false positives, now at the 99 percent level.
Ironically, it is difficult to be interested in this stuff if you actually understand statistics. We have to get rid of the publication bias to get somewhere. Journals must be willing to publish “we did this study, and we found nothing”.
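The bridge thought experiment above is easy to simulate. This sketch (with illustrative counts) regresses a pure-noise “oil price” on many independent pure-noise “traffic counts” and tallies how often p < 0.05 appears by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_days = 250        # daily observations
n_bridges = 2000    # independent noise predictors ("bridges")

oil_price = rng.normal(size=n_days)   # pure noise: no real relationship

# Regress oil price on each bridge's (noise) traffic and count
# nominally significant slopes.
false_positives = sum(
    stats.linregress(rng.normal(size=n_days), oil_price).pvalue < 0.05
    for _ in range(n_bridges)
)
print(f"{false_positives} of {n_bridges} regressions 'significant' at 5%")
# The fraction hovers near 0.05: about one bridge in twenty "explains" oil.
```

If only the significant regressions are written up and published, the published record consists almost entirely of false positives, which is the publication-bias point above.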
Jon Murphy
Mar 25 2022 at 1:56pm
Thanks for the link, Steve.
If you haven’t, check out Deidre McCloskey and Stephen Ziliak’s 2008 book The Cult of Statistical Significance. It’s another take on the question you may find interesting (although it does have some flaws, as Aris Spanos discusses).
steve
Mar 25 2022 at 3:32pm
Did you read the ASA take on p-values from 2016? There are lots more follow-up articles too. Anyway, on any given day I am in the do-away-with-p-values camp, and on other days I think it still has value.
Steve
Jon Murphy
Mar 26 2022 at 7:41am
I didn’t read the ASA piece, but my grad school roommate has, and he summed it up for me. He’s the stats genius and I take his summary as accurate.
Alan Goldhammer
Mar 24 2022 at 1:56pm
This will be my only comment. Henderson and Hooper want ivermectin to work and they are suffering from confirmation bias. If one goes to the website noted in the article, one notices that it is just a compilation of a lot of studies, some published and peer reviewed, some only preprints, and some just news articles. I am familiar with two-thirds of them, having read them when they came out (in some cases they were preprints that were later published in a journal). Using aggregation sites such as this one to draw conclusions is fraught with peril, as one is relying on someone else to pass judgment on whether a study is good, bad, or indifferent. Competing meta-analyses may contain overlapping data, and each one needs to be carefully assessed to ensure that things are not just being double counted.
Some of the studies presented are merely observational and lack an appropriate control (common in many of those that I looked at during the 2020–21 period). One needs to carefully examine meta-analyses to see what assumptions went into the analysis and what data sets were used. Meta-analyses and observational studies can be done very well. The Cochrane Collaboration is well respected by academics and those in the pharma industry. They carefully examine study methodology before they incorporate studies into a review. It was not at all surprising to me that they concluded, based on the evidence they examined, that ivermectin does not work either as a prophylactic or a cure.
Another very well respected group is the Observational Health Data Sciences and Informatics (OHDSI) program, a multi-stakeholder, interdisciplinary collaborative that brings out the value of health data through large-scale analytics. I will offer the disclosure that I was involved with the founding of this organization, which began as an initiative within the Pharmaceutical Research and Manufacturers of America (PhRMA), and I was a principal project manager during its inception and initial funding. While they have not studied ivermectin, they have studied some of the other drugs noted in a previous blog post by the same authors (in particular, they did a very nice study of famotidine showing that it did not work). The bottom line is that these kinds of studies are hard to do properly.
Writing these types of posts represents innumeracy and a lack of understanding of how such studies are properly done. The two authors continue to try to spin gold from dross, which is always a sign of failure.
Finally, I do not know who Scott Alexander is other than he is a psychiatrist as noted above.
Alan Goldhammer
Mar 24 2022 at 2:04pm
It appears that the links and formatting of my comment were stripped out. This is a shame as there are some critical references. If the moderators are interested, they know how to contact me.
David Henderson
Mar 24 2022 at 9:10pm
You write:
It’s true that I want ivermectin to work. What kind of person wouldn’t? When it comes to saving lives, I want everything to work. But not everything does. That’s why people do studies.
But no, I don’t suffer from confirmation bias.
Ryan M
Mar 31 2022 at 12:52am
In response to the quoted portion above, I think it is more realistic to say, not simply that “Henderson and Hooper want Ivermectin to work,” but that “Henderson and Hooper want to know why there is such an intense (negative) social reaction to a drug for which the evidence of effectiveness is no worse than other widely accepted drugs.”
Charles Hooper
Mar 24 2022 at 10:27pm
Alan,
Two can play that game. From my perspective, it appears as though you are suffering from confirmation bias. You don’t want ivermectin to be effective.
TGGP
Mar 24 2022 at 2:13pm
Henderson:
Alexander:
Is your “larger pool” mostly studies he regards as low quality?
Finally, the FDA was formed in the first place to keep food and drugs pure rather than tainted, not to ensure they were beneficial. Approving medications with weak evidence of benefits (but no obvious harm) is just the agency doing what it was originally intended to do.
Charles Hooper
Mar 24 2022 at 8:33pm
No.
Scott Alexander
Mar 27 2022 at 10:51pm
Can you give a source for your claim that the medication works better in areas of low parasite prevalence compared to high? My analysis said the opposite, and https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2790173 seems to confirm.
Charles Hooper
Mar 28 2022 at 2:33pm
Here’s the link:
https://ivmmeta.com/#strongyloides
The three graphs in Figure 26 are pretty telling.
The highlights:
What Bitterman commented on may have been a result of treatment delay, dose, and other factors
The subset of studies Bitterman used in his analysis (12 of 81) had a very small number of mortality events
Bitterman is using the same summary of trials that I’m using (see the link above)
Supposedly, Bitterman knew that an analysis of the larger set of trials showed a result opposite of what he published and yet he persisted in publishing, suggesting bias
Bitterman indicated no conflicts of interest. However, he was an investigator on a Pfizer trial for an acne drug. Note that Pfizer has vaccines and drugs for COVID-19.
In conclusion, we have some reasons to be skeptical of Bitterman’s results.
Scott Sumner
Mar 24 2022 at 3:18pm
I read Scott Alexander’s entire piece, and thought it was excellent. I don’t see any evidence for this charge:
“Not wanting to be relegated to this group of undesirables, he withholds his recommendation of ivermectin.”
Given that subsequent research has confirmed his skepticism:
https://www.wsj.com/articles/ivermectin-didnt-reduce-covid-19-hospitalizations-in-largest-trial-to-date-11647601200?mod=Searchresults_pos3&page=1
I’m puzzled as to why you would question his motives. I don’t think anyone who has followed his career would claim that Alexander is afraid to take unpopular positions.
BTW, even if Ivermectin has some benefit, I would not take it. There are more effective drugs out there, even among cheap generics. (Fluvoxamine, for example.)
steve
Mar 24 2022 at 3:44pm
As I note above, Hooper misrepresents Alexander’s claim. In the only place where Alexander says that it works 40% of the time he notes that the confidence intervals are too wide to accept the results.
Steve
Charles Hooper
Mar 24 2022 at 10:20pm
Alexander considered the studies he found compelling, chose the most reasonable outcome for each, and found that his combined results showed a benefit of ivermectin that was statistically significant at p = 0.04.
He also reports on the Cochrane study that found a 40% benefit of ivermectin treatment but it was the Cochrane folks who thought the confidence intervals were too wide. He was just quoting the Cochrane result.
steve
Mar 25 2022 at 3:34pm
And you somehow forget to mention his comment on confidence intervals.
Steve
Michael
Mar 24 2022 at 4:50pm
I don’t understand why we have such vociferous ivermectin advocates, yet nary a word about fluvoxamine. I can see why people might think, with some logical consistency, that both drugs are good (for COVID), or that both are bad, or that fluvoxamine is good and ivermectin bad. I cannot understand for the life of me why, other than irrationality, people who think ivermectin works aren’t extremely interested in fluvoxamine.
David Henderson
Mar 24 2022 at 9:16pm
You wrote:
I bet fluvoxamine is good. Who said I’m not interested? We were writing about ivermectin.
Charles Hooper
Mar 24 2022 at 8:37pm
That’s one study and we haven’t even seen the paper. How about the other studies that fall into the “subsequent research” category?
David Henderson
Mar 24 2022 at 9:14pm
You wrote:
Charley and I both read his entire piece twice because we couldn’t quite believe that he would give evidence of effectiveness and then say that he was against it.
You wrote:
We didn’t so much question his motives as report his motives. You might want to reread what he wrote. See if you can find another meaning to what seems to us to be clearcut.
Mark Z
Mar 24 2022 at 10:52pm
Acknowledging that it is in one’s interest to believe something isn’t a confession that that is one’s motive for believing it. In fact if someone were going to publicly take a position for reasons of self-interest or social convenience, it would be pretty irrational to also publicly mention such ulterior motives. The reason he declined to recommend ivermectin was succinctly stated:
“The good ivermectin trials in areas with low Strongyloides prevalence, like Vallejos in Argentina, are mostly negative. The good ivermectin trials in areas with high Strongyloides prevalence, like Mahmud in Bangladesh, are mostly positive.”
You and Hooper say that a “larger pool of studies” contradicted the parasite explanation. I’m not sure if this pool consists of studies he is unaware of or ones he considered and dismissed as poor quality, but if the latter, then his point of disagreement with you would appear to be that your rebuttal of this hypothesis depends on what he sees as low-quality studies.
He also links to a funnel plot by Avi Bitterman suggesting that the viral positivity results were due to publication bias.
Charles Hooper
Mar 25 2022 at 12:30pm
Alexander didn’t do the Strongyloides analysis himself. He was discussing the results of Bitterman. A subsequent analysis by a different researcher, using more and newer studies, found a conclusion that was the opposite of Bitterman’s.
Scott Sumner
Mar 25 2022 at 1:26pm
“We didn’t so much question his motives as report his motives.”
Please provide the quote where he reports that his motives are biased. I read the entire post, and I don’t see anything like that. After reading his post, I reached the same conclusion he did (albeit for somewhat different reasons).
To be clear, I don’t have strong views on whether Ivermectin has any beneficial effects (nor does Alexander.)
David Henderson
Mar 25 2022 at 4:21pm
Actually we quoted it in the article. Here’s the whole paragraph in Scott Alexander’s LONG post:
At first I thought he was joking. But on a reread of the whole piece, I think he wasn’t.
Ross Levatter
Mar 25 2022 at 11:21pm
David, I think he is CLEARLY joking, not about the fact that there are risks in any profession when taking a heterodox position, but about the extreme and funny way he describes it (“cast out of civilization and thrown into the circle of social hell reserved for Klan members and 1/6 insurrectionists. All the health officials in the world will shout “horse dewormer!” at you and compare you to Josef Mengele”) He routinely makes his point through humorous exaggeration. But I think you’d be wrong to conclude from this he is saying “I’ve shown that ivermectin is, in fact, effective, but to avoid destroying my professional standing I’m now going to take it back.”
Chris
Mar 26 2022 at 9:39pm
I read that several times and can’t understand how that would be taken as anything other than a joke?
Scott Alexander
Mar 27 2022 at 10:53pm
I feel misrepresented by this.
My claim is that a superficial analysis of the results finds them to be positive. A more complete analysis, accounting for the poor quality of some studies and confounding by parasitic worms, finds that the evidence for ivermectin working is pretty weak, and I would say outright disproves claims that it works especially well.
Separately from that, I mentioned that there is strong social desirability bias to find this result, which also happens to be true (is this convenient? yes, but there’s also strong social desirability bias to say that the Holocaust happened, which also happens to be true).
Charles Hooper
Mar 28 2022 at 1:03pm
Thanks for the clarification.
TGGP
Mar 27 2022 at 8:08pm
I’m a big fan of Scott Alexander, but if you actually followed his career closely enough you’d see him admitting to cowardice:
He may be braver than most, but even he admits to reticence at times (although he could have just not published an ivermectin analysis at all). Whereas Henderson here somewhat implausibly claims not to suffer from confirmation bias, unlike the typical human being.
artifex
Mar 24 2022 at 6:22pm
I think the first paragraph of the “Statistical Significance” section in the Regulation article misinterprets how statistical significance works? p-values can’t be translated (readily or at all) into things like “one chance in 20 that the difference is the product of random chance alone” or into probabilities that we’re interested in. Confidence intervals are more useful than p-values as you can measure effect size.
Charles Hooper
Mar 24 2022 at 10:23pm
The group that is trying to stem the problems with statistical significance recommends the use of point estimates and confidence intervals.
john hare
Mar 25 2022 at 5:38am
Reading through the article and comments, I get the feeling that there must be a better way to handle verification of older treatments. There seems to be a tangled path between the profits of the new, acceptance of the existing, and government involvement. Adding to it is the digging in of people who wish to believe in government control vs. those who don’t. To me, it seems that for a pandemic, large challenge studies of everything that might work could be warranted, though I can see the liability and credibility problems with that as well.
My priors are based on my interactions with various government agencies that prevent solutions, mostly in the construction industry. I have noticed it in others too, including education, medicine, and insurance, for starters. I would like to see solutions emerge that create both the information and the freedom to act on it.
Jack
Mar 25 2022 at 1:13pm
The section: Scott Alexander Succumbs to Social Desirability Bias is a ridiculous representation of the content and thrust of the ACX post.
It is so bad it makes me doubt the rest of the article.
SZ86
Mar 26 2022 at 10:50am
While “trust the experts” has gotten a bad rap lately, in some cases for good reason, there is something to be said for relying on an institutional authority like Cochrane for the best summary of the high-quality evidence so far. Otherwise we get these non-productive blog battles: cherry-picking findings, misinterpreting what statistical significance means, etc.
Here is what Cochrane meta-analysis concluded in May 2021 (so perhaps needs updating):
“Based on the current very low- to low-certainty evidence, we are uncertain about the efficacy and safety of ivermectin used to treat or prevent COVID-19. The completed studies are small and few are considered high quality. Several studies are underway that may produce clearer answers in review updates. Overall, the reliable evidence available does not support the use of ivermectin for treatment or prevention of COVID-19 outside of well-designed randomized trials.”
As someone who has published several meta-analyses: Cochrane is reliable because they have strict inclusion criteria. As others have posted, a meta-analysis is only as good as the studies included within it. This is called the GIGO objection. Unfortunately, most research is bad. A good meta-analysis makes transparent choices, a priori, to include only high-quality studies (most importantly, those with high internal validity). Science is conservative in this way; there is a high bar to saying that X causes Y. At the same time, absence of evidence isn’t evidence of absence, so usually we are left saying “we don’t really know, but results suggest…” In my experience this is deeply unsatisfying to most non-scientists!
Charles Hooper
Mar 28 2022 at 1:12pm
There are still two problems with the Cochrane study.
Statistical significance: If there are five studies, all of which are too small to generate statistically significant results, Cochrane would say that the drug doesn’t work. If the results of those five studies were combined, the results would be statistically significant and Cochrane would say the drug does work. The efficacy of the drug can’t depend on how we combine the results of the clinical trials.
Population of studies: There have been many studies completed since the Cochrane meta-analysis was published.
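The five-studies point can be illustrated with hypothetical numbers: five small trials of 40 vs. 40 patients with an assumed true effect, each individually non-significant, become clearly significant when pooled. The counts below are invented for illustration, and the pooling is a crude single-table combination rather than a proper weighted meta-analysis:

```python
from scipy.stats import fisher_exact

n = 40  # patients per arm in each hypothetical trial
treat_events   = [4, 4, 5, 4, 4]       # events in the treatment arms
control_events = [10, 10, 11, 10, 10]  # events in the control arms

# Each trial on its own: Fisher's exact test, none reaches p < 0.05.
individual = [
    fisher_exact([[t, n - t], [c, n - c]])[1]
    for t, c in zip(treat_events, control_events)
]
print("individual p-values:", [round(p, 2) for p in individual])

# Naively pooled into one 2x2 table: clearly significant.
pooled_p = fisher_exact(
    [[sum(treat_events), 5 * n - sum(treat_events)],
     [sum(control_events), 5 * n - sum(control_events)]])[1]
print("pooled p-value:", round(pooled_p, 4))
```

The drug’s efficacy obviously does not depend on how the trials are grouped; only the statistical verdict does, which is the point being made about study size and combination.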
gwern
Mar 28 2022 at 2:51pm
This is a strange criticism: combining the studies is what Cochrane does, and what they did in that meta-analysis to get their RR of 0.14 to 2.51. And you should expect them to have done that without looking, because ‘combining the 5 studies’ through meta-analysis is the reason Cochrane exists; it is even their logo. The logo is a forest plot depicting the combination of 5 or so non-significant trials (on steroids and infant mortality, IIRC), the horizontal lines crossing the null, whose pooled estimate (the diamond) was statistically significant, showing the drug works.
Ryan M
Mar 31 2022 at 12:33am
I think the elephant in the room is that our governments have forced masks and vaccines on their populations. Both are interventions with somewhat shaky track records (and, in the case of masks, with studies that show them to do more harm than good).
This is not about science. It is not about health. It is about control, and that should make us very concerned.
Ryan M
Mar 31 2022 at 12:48am
Let me add to this:
Perhaps more concerning is the fact that those individuals who most strongly oppose the use of Ivermectin also support vaccine and mask mandates.
And remember that we’re not debating whether something is or is not particularly effective – even less are we debating whether that thing is somehow dangerous. As Henderson points out, there are medications with less evidence of effectiveness that are widely used, and not simply for off-label purposes.
Yet, with Ivermectin, we’re not simply hearing arguments that “hey, this may not be as effective as you think.” Rather, we see the drug being banned, pharmacies refusing to fill prescriptions, hospitals refusing the treatment to patients who want it, doctors being fired (or worse) for prescribing it.
All of which further illustrates the point of this article: what we’ve seen in response to Ivermectin is in no way justified by the drug itself, or by its effectiveness (or ineffectiveness) in treating a particular illness – none of that is “based on science.”
mm
Mar 31 2022 at 12:21pm
Dang, I read posts here to learn more about economic issues; I hope your econ analysis is FAR superior to your medical analysis. The website you list has serious flaws: it includes multiple studies KNOWN to be fraudulent. Additionally, it includes at least one preprint whose author has publicly stated it will not go into publication because of flaws in methodology, and who has asked a number of similar people to quit using his study to support ivermectin. The table where he shows his analysis appears to leave out important negative studies, and if you look at the I² data, it implies there is a lot of heterogeneity in the studies (i.e., you are comparing apples to oranges and should have low confidence in the overall result). Every high-quality RCT has failed to show ivermectin is effective. A couple more will report next month (COVID-OUT, ACTIV-6), and if they are negative, ivermectin for COVID will join phrenology as discredited medical science.
Comments are closed.