Dr. Rob Knight studies how skin bacteria jump from person to person. In one 2009 study, meant to simulate human contact, he used a Q-tip to swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero – it was the equivalent of kissing another person’s hand.
His IRB – i.e. the Institutional Review Board, the committee charged with keeping experiments ethical – disagreed. They worried the study would give patients AIDS. Dr. Knight tried to explain that you can’t get AIDS from skin contact. The IRB refused to listen. Finally Dr. Knight found some kind of diversity coordinator person who offered to explain that claiming you can get AIDS from skin contact is offensive. The IRB backed down, and Dr. Knight completed his study successfully.
Just kidding! The IRB demanded that he give his patients consent forms warning that they could get smallpox. Dr. Knight tried to explain that smallpox had been extinct in the wild since the 1970s, with the only remaining samples kept in US and Russian biosecurity labs. Here there was no diversity coordinator to swoop in and save him, although after months of delay and argument he did eventually get his study approved.
Most IRB experiences aren’t this bad, right? Mine was worse. When I worked in a psych ward, we used to use a short questionnaire to screen for bipolar disorder. I suspected the questionnaire didn’t work, and wanted to record how often the questionnaire’s opinion matched that of expert doctors. This didn’t require doing anything different – it just required keeping records of what we were already doing. “Of people who the questionnaire said had bipolar, 25%/50%/whatever later got full bipolar diagnoses” – that kind of thing. But because we were recording data, it qualified as a study; because it qualified as a study, we needed to go through the IRB. After about fifty hours of training, paperwork, and back-and-forth arguments – including one where the IRB demanded patients sign consent forms in pen (not pencil) but the psychiatric ward would only allow patients to have pencils (not pens) – what had originally been intended as quick record-keeping had expanded into an additional part-time job for a team of ~4 doctors. We made a tiny bit of progress over a few months before the IRB decided to re-evaluate all projects, including ours, and told us to change twenty-seven things, including re-litigating the pen vs. pencil issue (they also told us that our project was unusually good; most got >27 demands). Our team of four doctors considered the hundreds of hours it would take to document compliance and agreed to give up. As far as I know that hospital is still using the same bipolar questionnaire. They still don’t know if it works.
Most IRB experiences can’t be that bad, right? Maybe not, but a lot of people have horror stories. A survey of how researchers feel about IRBs did include one person who said “I hope all those at OHRP [the bureaucracy in charge of IRBs] and the ethicists die of diseases that we could have made significant progress on if we had [the research materials IRBs are banning us from using]”.
Dr. Simon Whitney, author of From Oversight To Overkill, doesn’t wish death upon IRBs. He’s a former IRB member himself, with impeccable research-ethicist credentials – an MD + JD, a bioethics fellowship, and two years on the Stanford IRB. He thought he was doing good work at Stanford; he did do good work. Still, his worldview gradually started to crack:
In 1999, I moved to Houston and joined the faculty at Baylor College of Medicine, where my new colleagues were scientists. I began going to medical conferences, where people in the hallways told stories about IRBs they considered arrogant that were abusing scientists who were powerless. As I listened, I knew the defenses the IRBs themselves would offer: Scientists cannot judge their own research objectively, and there is no better second opinion than a thoughtful committee of their peers. But these rationales began to feel flimsy as I gradually discovered how often IRB review hobbles low-risk research. I saw how IRBs inflate the hazards of research in bizarre ways, and how they insist on consent processes that appear designed to help the institution dodge liability or litigation. The committees’ admirable goals, in short, have become disconnected from their actual operations. A system that began as a noble defense of the vulnerable is now an ignoble defense of the powerful.
So Oversight is a mix of attacking and defending IRBs. It attacks them insofar as it admits they do a bad job; the stricter IRB system in place since the ‘90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing life-saving studies. It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.
Before the 1950s, there were no formal research ethics. Doctors were encouraged to study topics of interest to them. The public went along, placated by the breakneck pace of medical advances and a sense that we were all in it together. Whitney focuses on James Shannon’s discovery of new anti-malarials during World War II, as US troops were fighting over malarial regions of Southeast Asia. Shannon tested his theories on ambiguously-consenting subjects, including the mentally ill. But:
At a time when clerks and farm boys were being drafted and shipped to the Pacific, infecting the mentally ill with malaria was generally seen as asking no greater sacrifice of them than of everyone else. Nobody complained, major strides were made in the treatment of malaria, and Shannon received the Presidential Order of Merit.
Physicians of the time followed a sort of gentleman’s agreement not to mistreat patients, but the details were left to individual researchers. Some individual researchers had idiosyncratic perspectives:
Looking back on that era, hematologist David Nathan remembered that he applied a rough version of the Golden Rule to experiments: He would never do anything to a research subject that he would not do to himself. Once an experiment passed this threshold, however, his idea of informed consent was to say, “You are the patient. I am Doctor Nathan. Lie down.”
I believe Dr. Nathan when he said he wouldn’t do anything to patients he wouldn’t do to himself – he once accidentally gave himself hepatitis in the course of an experiment. Still, this is not the sort of rule-abidingness that builds complete confidence.
A few doctors failed to follow even the flimsiest veneer of ethics. The most famous example is the Tuskegee Syphilis Study, but this happened towards the end of the relevant era. The debate at the time was more shaped by Dr. Chester Southam (who injected patients with cancer cells to see what would happen) and the Willowbrook Hepatitis Experiment, where researchers gave mentally defective children hepatitis on purpose. Two voices rose to the top of the froth of outrage and ended up having outsized effects: Henry Beecher and James Shannon.
Henry Beecher was a prominent Harvard anaesthesiologist and public intellectual, known for exploits like discovering the placebo effect.
Being well plugged into the research community, he was among the first to learn about studies like Southam’s and Willowbrook, find them objectionable, and bring them to the public eye. Through public appearances and papers in prestigious journals, he dragged the issue in front of a sometimes-reluctant medical community. But he thought regulation would be devastating, and had no proposal other than “researchers should try to be good people”, which everyone except him realized wasn’t actionable.
Shannon was less brilliant, but unlike Beecher he was a practical and experienced bureaucrat. His own history of dubiously-consensual malaria research left him without illusions, but as he grew older he started feeling guilty (and also, more relevantly, became head of the National Institutes of Health). Having no time for Beecher’s delusions of self-regulation, he ordered all federally-funded research to su