I.
In November 2021, I posted Ivermectin: Much More Than You Wanted To Know, where I tried to wade through the controversy on potential-COVID-drug ivermectin. Most studies of ivermectin to that point had found significant positive effects, sometimes very strong effects, but a few very big and well-regarded studies were negative, and the consensus of top academics and doctors was that it didn’t work. I wanted to figure out what was going on.
After looking at twenty-nine studies on a pro-ivermectin website’s list, I concluded that a few were fraudulent, many others seemed badly done, but there were still many strong studies that seemed to find that ivermectin worked. There were also many other strong studies that seemed to find that it didn’t. My usual heuristic is that when studies contradict each other, I trust bigger studies, more professionally done studies, and (as a tiebreaker) negative studies – so I leaned towards the studies finding no effect. Still, it was strange that so many got such impressive results.
I thought the most plausible explanation for the discrepancy was Dr. Avi Bitterman’s hypothesis (now written up here) that ivermectin worked for its official indication of treating parasitic worms. COVID is frequently treated with steroids, steroids prevent the immune system from fighting a common parasitic worm called Strongyloides, and sometimes people being treated for COVID died of Strongyloides hyperinfection. Ivermectin could prevent these deaths, which would mean fewer deaths in the treatment group than the control group – which would look like ivermectin preventing deaths from COVID in high-parasite-load areas (like the tropics) but not low-parasite-load areas (like temperate zones). This explained some of the mortality results, with the other positive endpoints likely explained by publication bias.
Alexandros Marinos is an entrepreneur, long-time ACX reader, and tireless participant in online ivermectin arguments. He put a very impressive amount of work into rebutting my post in a 21-part argument at his Substack, which he finished last October (if you don’t want to read all 21 parts, you can find a summary here). I promised to respond to him within a few months of him finishing, so that’s what I’m doing now.
I’ll be honest – I also didn’t want to read a 21-part argument. I would say I have read about half of his posts, and am mostly responding to the summary, going into individual posts only when I find we have a strong and real disagreement that requires further clarification. I have also had a bad time trying to discuss this with Alexandros (not necessarily his fault; I can be sensitive about these kinds of things) and am writing this out of obligation to honor and respond to someone who has put in a lot of work responding to me. It is not going to be as comprehensive and well-thought-out as Alexandros probably deserves.
I’ll go through each subpart of his argument, as laid out in the summary post.
Alexandros is completely right about one of these studies, partly right about a few others, and I still disagree with him on several more. On the one where I was wrong, I was egregiously wrong, and I apologize to the study authors and to you.
In the original post, I went through a list of 29 studies, trying to decide whether or not I trusted them. I dismissed 13 studies as untrustworthy (which didn’t necessarily mean fraudulent, just that I wasn’t sure they had good methodology). Then I dismissed 5 more studies that epidemiologist Gideon Meyerowitz-Katz didn’t like (even though I didn’t have strong objections to them myself), just to get a list of studies everyone agreed seemed pretty good. This part is about my study-keeping decisions. It won’t have a very big impact on the final result, since both Alexandros and I agreed that regardless of which study-dismissing criteria you use, the final list supports ivermectin efficacy. But I still tried to get this right and mostly didn’t.
Alexandros critiques many of my study interpretations, but includes four in his summary. I’ll go over those four in detail, and make less detailed comments on the rest.
Biber et al (Alexandros 100% right)
The study I am most embarrassed about here is Biber et al, an Israeli study which found that COVID patients who received ivermectin had lower viral load. In the original post, I wrote:
This is an RCT from Israel. 47 patients got ivermectin and 42 placebo. Primary endpoint was viral load on day 6. I am having trouble finding out what happened with this; as far as I can tell it was a negative result and they buried it in favor of more interesting things. In a “multivariable logistic regression model, the adjusted odds ratio of negative SARS-CoV-2 RT-PCR negative test” favored ivermectin over placebo (p = 0.03 for day 6, p = 0.01 for day 8), but this seems like the kind of thing you do when your primary outcome is boring and you’re angry.
Gideon Meyerowitz-Katz is not a fan. He notes that the study excluded people with high viral load, but the preregistration didn’t say they would do that. Looking more closely, he finds they did that because, if you included these people, the study got no positive results. So probably they did the study, found no positive results, re-ran it with various subsets of patients until they did get a positive result, and then claimed to have “excluded” patients who weren’t in the subset that worked.
I’m going to toss this one.
You can find Alexandros’ full critique here. His main concerns are:
- I claimed that the primary outcome results were hidden, probably because they were negative. In fact, they were positive, and very clearly listed exactly where they should be in the abstract and results section.
- That makes my dismissing their secondary outcomes as “the kind of thing you do when your primary outcome is boring and you’re angry” incorrect and offensive. The correct thought process is that their primary outcome was positive, and their secondary outcome was also positive, which they correctly mention.
- Gideon Meyerowitz-Katz objected to the researchers changing the (previously preregistered) procedure partway through. But the researchers had good reasons for doing that, they got the IRB’s permission, and they couldn’t have been cherry-picking, because they hadn’t seen results yet and didn’t know whether this would make ivermectin look more vs. less effective.
- Gideon (correctly) phrased this as a non-sinister albeit potentially weird misstep by the study authors, but in trying to summarize Gideon, I (incorrectly) phrased it as a sinister attempt to inflate results.
After looking into it, I think Alexandros is completely right and I was completely wrong. I sometimes get details wrong, but this mistake was especially disappointing because I incorrectly tarnished the reputation of Biber et al and implicitly accused them of bad scientific practices they were not engaging in. I believed I was relaying an accusation by Gideon (who I trust), but I was wrong; he was not accusing them of that. I apologize to Biber et al, my readers, and everyone else involved in this.
My only reservation is that I don’t want to say too strongly that Gideon’s critique is wrong: I haven’t looked through the study documents enough to say with certainty that Alexandros’ reanalysis of the protocol issues is correct (though the superficial check I’ve done looks that way). But my mistakes are completely separate from anything Gideon did and definitely real and egregious.
Cadegiani et al (Alexandros 50% right)
Flavio Cadegiani did several studies on ivermectin in Brazil; I edited this section in response to criticism by Marinos and others, but the earliest version I can find on archive.is (I can’t guarantee it was the first I wrote) said:
A crazy person decided to put his patients on every weird medication he could think of, and 585 subjects ended up on a combination of ivermectin, hydroxychloroquine, azithromycin, and nitazoxanide, with dutasteride and spironolactone “optionally offered” and vitamin D, vitamin C, zinc, apixaban, rivaroxaban, enoxaparin, and glucocorticoids “added according to clinical judgment”. There was no control group, but the author helpfully designated some random patients in his area as a sort-of-control, and then synthetically generated a second control group based on “a precise estimative based on a thorough and structured review of articles indexed in PubMed and MEDLINE and statements by official government agencies and specific medical societies”.
Patients in the experimental group were twice as likely to recover (p < 0.0001), had negative PCR after 14 vs. 21 days, and had 0 vs. 27 hospitalizations.
Speaking of low p-values, some people did fraud-detection tests on another of Cadegiani’s COVID-19 studies and got values like p < 8.24E-11 in favor of it being fraudulent. Also in Cadegiani news: he apparently has the record for completing one of the fastest PhDs in Brazilian history (7 months), he was involved in a weird scandal where the Brazilian government tried to create a COVID recommendation app but it just recommended ivermectin to everybody regardless of what input it got, and he describes himself as:
…the only author of the sole book in Overtraining Syndrome, the prevailing sport-related disease among amateur and professional athletes. He is also responsible for approximately 70% of the articles published in the field in the world in the last 05 years, and reviewer for more than 90% of the manuscripts in the field.
And, uh, he’s also studied whether ultra-high-dose antiandrogens treated COVID, and found that they did, cutting mortality by 92%. Which sounds great, except that it looks like most of this is that the control group had a shockingly high mortality rate, much higher than makes sense even in the context of severe COVID. I think the charitable explanation here is that he made this data up too. But the Brazilian Parliament seems to be going with an uncharitable explanation, seeing as they have recommended that Cadegiani be charged with crimes against humanity.
Anyway, let’s not base anything important on the results of this study.
You can find Alexandros’ full critique here, but again I’ll try to summarize it as best I can.
1. Alexandros is unhappy with my portrayal of Cadegiani’s background. I cite details that make him look strange and maybe fake, but there are other details that make him seem more impressive, like that he won gold medals at a Brazilian Scientific Olympiad.
2. I mention Cadegiani’s “involvement” in a scandal where the Brazilian government created a COVID recommendation app that recommended ivermectin to everyone. Marinos points out that it did alter its recommendations based on the patient (e.g. what other drugs it recommended, what dose of ivermectin to use), and although it had some problems it was overall an okay app whose only “crime” was operating on the assumption that ivermectin was a great COVID drug.
3. More to the point, although the app cited Cadegiani’s research, he was not involved in creating it, and in fact criticized it (yes, there is some tension between Alexandros defending the app and defending Cadegiani for criticizing it; he argues that he has no position on the app’s quality but does not think it has been shown to be a “scandal”).
4. The Brazilian Parliament did recommend that Cadegiani be charged with crimes against humanity for his trial, but this was for not giving the drugs to the control group, not for excess mortality in the control group (is this nonsensical? Doesn’t this mean that the medical establishment wants to blame Cadegiani both for giving drugs that don’t work, and for not giving them to enough people? Alexandros argues that yes, the establishment really is that dumb).
5. Alexandros doesn’t dispute that one of Cadegiani’s trials had some impossible-seeming statistics, but says we shouldn’t jump to allegations of fraud, shouldn’t let this unduly influence our opinion of Cadegiani’s other trials, and also accuses Kyle Sheldrick, the person who discovered the discrepancy, of doing other bad things.
My responses:
Alexandros’ Point 1 is fair-ish. Since this person appears to be committing pretty substantial fraud and doing some strange things, I thought it was useful to highlight the ways in which he is weird and suspicious, rather than the ways he is prestigious and impressive. But I probably went too far in this.
His Points 2 and 3 are completely fair, and I’m sorry for getting this wrong. I may have unthinkingly copied it from forbetterscience.com, which made this mistake before me, or I might have just failed at reading comprehension on this translated Portuguese-language article I linked. In either case, I apologize to Cadegiani. This has been on my Mistakes page since June 2022, when Alexandros wrote his original article.
His Point 4 is correct, although based on information that came out after I wrote my article. All that was available in English when I wrote was that the Brazilian government was considering accusing Cadegiani of crimes against humanity. I think I did an okay job noting that I was guessing at their reasoning (rather than reporting a known fact), and as written I did make clear that I thought he was innocent of the specific charge. Still, I appreciate the clarification.
His Point 5 is – I do feel like Alexandros is having a sort of missing mood about the fact that one of Cadegiani’s big pro-ivermectin studies contains impossible data. While this is not proof of fraud or incompetence, it is some Bayesian evidence for both. And while fraud or incompetence in one of your studies supporting ivermectin is not proof that your other studies supporting ivermectin are also fraudulent/incompetent, it is, again, Bayesian evidence.
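To make “Bayesian evidence” concrete, here is a toy update, with every number invented purely for illustration (none of these are estimates of anything real):

```python
# Toy Bayesian update: how much should impossible data shift us toward "fraud"?
# All probabilities are invented for illustration only.
p_fraud = 0.10           # prior suspicion of fraud before seeing the data
p_data_if_fraud = 0.50   # chance a fraudulent study contains impossible stats
p_data_if_honest = 0.05  # chance an honest-but-sloppy study does

posterior = (p_data_if_fraud * p_fraud) / (
    p_data_if_fraud * p_fraud + p_data_if_honest * (1 - p_fraud))
print(f"{posterior:.0%}")  # 53% -- a real shift in belief, but nowhere near proof
```

Under these made-up numbers, a 10% prior rises to about 53%: the evidence moves the needle without settling the question, which is all I mean by “Bayesian evidence” here.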
Alexandros makes a big deal of there being four corrections in the BMJ article attacking Cadegiani, as if the BMJ has now admitted they were wrong all along. But these were mostly on unrelated details, and the BMJ definitely did not correct the quotes about how his study was “an ethical cesspool of violations” or how “in the entire history of the National Health Council, there has never been such disrespect for ethical standards and research participants in the country”. I feel like if his Science Olympiad medals are an important part of the story, these kinds of things are an important part too.
Still, several of Alexandros’ points were entirely correct, and I appreciate the corrections.
Babalola et al (still disagree with Alexandros)
OE Babalola (I incorrectly wrote this name as “Babaloba” in the original) did a Nigerian study which found that ivermectin decreased the amount of time it took before people tested negative for COVID. I described this study as:
This was a Nigerian RCT comparing 21 patients on low-dose ivermectin, 21 patients on high-dose ivermectin, and 20 patients on a combination of lopinavir and ritonavir, a combination antiviral which later studies found not to work for COVID and which might as well be considered a placebo. Primary outcome, as usual, was days until a negative PCR test. High dose ivermectin was 4.65 days, low dose was 6 days, control was 9.15, p = 0.035.
Gideon Meyerowitz-Katz, part of the team that detects fraud in ivermectin papers, is not a fan of this one. He doesn’t explain why there, but elsewhere he tweets [this figure highlighting how the study has “Numerous impossible numbers”]
I think his point is that if you have 21 people, it’s impossible to have 50% of them have a headache, because that would be 10.5 people. If 10 people have a headache, it would be 47.6%; if 11, 52.4%. So something is clearly wrong here. Seems like a relatively minor mistake, and Meyerowitz-Katz stops short of calling fraud, but it’s not a good look.
I’m going to be slightly uncomfortable with this study without rejecting it entirely, and move on.
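(The granularity arithmetic is easy to check yourself. Here is a minimal sketch in Python – the function is my own illustration, not anyone’s published code, though it is the same idea as Brown and Heathers’ GRIM test: for a given sample size, list the whole-number counts that could round to a reported percentage.)

```python
# Sketch of the granularity check behind the "impossible numbers" objection.
# (My own illustration; the same idea as the GRIM test.)

def counts_matching(n: int, reported_pct: float, decimals: int = 1) -> list[int]:
    """Return every whole-number count k (out of n) whose percentage
    rounds to the reported value at the given precision."""
    return [k for k in range(n + 1)
            if round(100 * k / n, decimals) == round(reported_pct, decimals)]

print(counts_matching(21, 50.0))  # [] -- no count out of 21 rounds to 50%
print(counts_matching(21, 47.6))  # [10] -- 10/21 = 47.6%
print(counts_matching(21, 52.4))  # [11] -- 11/21 = 52.4%
```

An empty result for a reported percentage is the whole substance of the objection; by itself it could mean fraud, a typo, or (as turned out to be the case here) missing data.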
Alexandros calls this The Sullying Of Babalola Et Al, and says I “followed Gideon Meyerowitz-Katz off a cliff” by unfairly “lambasting” the innocent Babalola. I “[made] a mountain out of a molehill”.
Alexandros quotes a commenter who found that the most likely explanation for the “impossible numbers” in Babalola was missing data, and notes that usually-anti-ivermectin researcher Kyle Sheldrick had evaluated the raw data and found no fraud. Alexandros concludes:
As far as I can tell, Scott discarded a good study here, and besmirched the reputation of the researchers by amplifying flimsy allegations that were known to be off-base at the time that the article was written.
I don’t think I did anything especially wrong here. There was a chart that didn’t make sense. It turned out not to make sense because some data was missing. I said “[this] seems like a relatively minor mistake, and Meyerowitz-Katz stops short of calling fraud, but it’s not a good look. I’m going to be slightly uncomfortable with this study without rejecting it entirely, and move on.”
I was right that it was a minor mistake, I was right that it wasn’t fraud, and I was right not to reject the study. I didn’t have the exact explanation (missing data), so I did not mention it, but I think I made the correct guess about the sort of explanation it was. I don’t understand why Alexandros acts like I said the study wasn’t worth keeping, or that there was no innocent explanation, or that I was accusing the researchers of fraud, when in fact I said the opposite of all those things, pretty explicitly.
Carvallo et al (Alexandros 25% right)
This was an Argentine study. I described it as:
This one has all the disadvantages of Espitia-Hernandez, plus it’s completely unreadable. It’s hard to figure out how many patients there were, whether it was an RCT or not, etc. It looks like maybe there were 42 experimentals and 14 controls, and the controls were about 10x more likely to die than the experimentals. Seems pretty bad.
On the other hand, another Carvallo paper was retracted because of fraud: apparently the hospital where the study supposedly took place said it never happened there. I can’t tell if this is a different version of that study, a pilot study for that study, or a different study by the same guy. Anyway, it’s too confusing to interpret, shows implausible results, and is by a known fraudster, so I feel okay about ignoring this one.
Alexandros responds here. Attempting to summarize his points:
- He agrees this study is extremely confusing.
- The other Carvallo paper was accused of fraud, but not actually retracted.
- The fraud accusation (primarily described in this BuzzFeed article, which Alexandros believes is unfair) was for a study done in four hospitals. One of the hospitals denied knowing anything about it or authorizing it. But the main hospital said they did know about it and authorized it, and (according to Carvallo) it is considered okay in Argentina to let hospital staff enroll in trials without telling the hospital.
- The study does have a lot of data collection issues, and Alexandros agrees we shouldn’t take it seriously; he just disagrees with calling it fraud.
This is a good place to note that I have a very poor memory of what I was thinking two years ago, and am having to reconstruct my arguments as I go. Still, reading the BuzzFeed article, I notice things like:
- Different sources contradict each other (or give seemingly impossible numbers) about when the study happened, how many patients were involved, and how old they were.
- Dr. Carvallo said that another researcher, Dr. Lombardo, had reviewed these data. But Dr. Lombardo denied ever having been involved.
- Not only did a hospital claim that they weren’t formally involved in the study, but the infectious disease doctor at the hospital said none of his colleagues that he knew of had participated, and he had never seen any ads inviting people at that hospital to participate. After this, all references to that hospital in the paper were changed to “other peripheral medical center”. Carvallo said he changed it only because the hospital wanted to do a trial of some other drug; the infectious disease doctor from the hospital said that was “an absolute lie”.
- Carvallo refused to release the data from the study to anyone who asked for it. When one of his coauthors asked Carvallo for the data, Carvallo claimed to have given it to him, but actually only gave him “summaries of the results and a written narrative of how the study was carried out”. His collaborator was so disgusted that he withdrew his coauthorship and asked for his name to be removed from the article.
- Carvallo said that zero people in the treatment group of his study got COVID, compared to 58% of people in the control group. This is an implausibly big effect, even by the standards of other pro-ivermectin studies, although I don’t know if anyone else tried the exact same preventative protocol as Carvallo.
I think this is a more nuanced story than Alexandros’ version, where BuzzFeed just doesn’t know that sometimes studies happen at more than one hospital.
Is fraud the best explanation? I think Alexandros thinks of Carvallo as just not keeping very good records, so he doesn’t have raw data, and probably mixed up his numbers a few times or gave false numbers, and didn’t have anything to send his collaborators when they asked. I think this is maybe possible, although it seems suspicious that he falsely said Dr. Lombardo was involved, falsely claimed the hospital involved was doing a different trial, and got very implausible results. I can imagine weird chains of events that would cause all of these things through honest misunderstandings. But they don’t seem like the best explanation.
After I discussed this with Alexandros, he objected to my use of the term “known fraudster”. Perhaps I should have said “highly credibly suspected fraudster” instead, although in a Bayesian sense nothing can ever be 100%, and at some point plausibility shades imperceptibly into knowledge. Still, I feel like my description here was more accurate than Alexandros’, which just mentions the hospital approval issue and says nothing about any of the rest of this in a thousand-word subsection about this study in particular.
I did err in saying the Carvallo paper was retracted. According