Disclaimer: None of the opinions expressed in this letter should be construed as statements of fact. They only reflect my experience with the research process, and my opinion regarding Francesca Gino’s work.
I am also not claiming that Francesca Gino committed fraud: Only that there is overwhelming evidence of data fabrication in multiple papers for which she was responsible for the data.
On September 30th, 2023, the New Yorker published a long piece on “L’affaire Ariely/Gino”, and the role I played in it.
I am grateful for the messages of support I received over the past few weeks.
In this post, I wanted to share more about how I came to discover the anomalies in Francesca Gino’s work, and what I think we can learn from this unfortunate story.
How it all began
I started having doubts about one of Francesca Gino’s papers (Casciaro, Gino, and Kouchaki, “The Contaminating Effect of Building Instrumental Ties: How Networking Can Make Us Feel Dirty”, ASQ, 2014; hereafter abbreviated as “CGK 2014”) during my PhD.
At the time, I was working on the topic of networking behaviors, and this paper is a cornerstone of the literature.
I formed the opinion that I shouldn’t use this paper as a building block in my research.
Indeed, the idea that people would feel “physically dirty” when networking did not seem very plausible, and I knew that many results in Management and Psychology published around this time had been obtained through researchers’ degrees of freedom.
However, my advisor had a different view: The paper had been published in a top management journal by three prominent scholars… To her, it was inconceivable to simply disregard this paper.
I felt trapped: She kept insisting, for more than a year, that I had to build upon the paper… but I had serious doubts about the trustworthiness of the results.
I didn’t suspect fraud: I simply thought that the results had been “cherry picked”.
At the end of my third year in the program (i.e., in 2018), I finally decided to openly share with her my concerns about the paper.
I also insisted that given how little we knew about networking discomfort, and given my doubts about the soundness of CGK 2014, it would be better to start from scratch and launch an exploratory study on the topic.
Her reaction was to vehemently dismiss my concerns, and to imply that I was making very serious accusations.
I was stunned: Either she was unaware of the “replication crisis” in psychology (showing how easy it is to obtain false-positive results from questionable research practices), or she was aware of it but decided to ignore it.
In both cases, it was a clear signal that it was time for me to distance myself from this supervisor.
I kept digging into the paper, and arrived at three conclusions:
- The paper presents serious methodological and theoretical issues, the most severe being that it is based on a psychological mechanism (the “Macbeth Effect”) that has repeatedly failed to replicate.
- The strength of evidence against the null presented in Study 1 of the paper made it extremely unlikely that the result was p-hacked: It is statistically implausible to obtain such a low p-value under the null, even when using researchers’ degrees of freedom (a brief simulation illustrating this point follows this list).
- Francesca Gino had many other papers that appeared equally implausible (i.e., untrustworthy psychological mechanisms leading to large effects with very low p-values).
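To make the second point concrete, here is a minimal simulation sketch (my illustration, not part of CGK 2014 or of my original critique). It assumes a study with no true effect and a hypothetical researcher who exploits two common degrees of freedom: testing several dependent variables and repeatedly peeking at the test while collecting more participants, then reporting the smallest p-value found.

```python
# Minimal sketch: how small can a p-value get under the null when a researcher
# exploits common degrees of freedom? Assumed hacking strategies: several
# dependent variables (DVs) plus "optional stopping" (re-testing as more
# participants come in), reporting the best (smallest) p-value obtained.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def best_hacked_p(n_start=20, n_max=60, step=10, n_dvs=3):
    """Smallest p-value reachable in one 'study' with no true effect."""
    # Draw full samples up front for each DV; the researcher peeks at growing subsets.
    control = rng.normal(size=(n_dvs, n_max))     # true effect is exactly zero
    treatment = rng.normal(size=(n_dvs, n_max))
    best_p = 1.0
    for n in range(n_start, n_max + 1, step):     # optional stopping: re-test as data accumulate
        for dv in range(n_dvs):                   # flexibility: several outcome measures
            p = stats.ttest_ind(control[dv, :n], treatment[dv, :n]).pvalue
            best_p = min(best_p, p)
    return best_p

sims = np.array([best_hacked_p() for _ in range(10_000)])
print(f"P(best p < .05)  = {np.mean(sims < 0.05):.3f}")    # far above 5%: hacking easily "works"
print(f"P(best p < .001) = {np.mean(sims < 0.001):.4f}")   # already rare
print(f"P(best p < 1e-6) = {np.mean(sims < 1e-6):.5f}")    # essentially never under the null
```

Under these particular assumptions, flexibility inflates the false-positive rate well beyond 5%, but it does not produce the kind of extreme p-values reported in the paper: evidence that strong points either to a genuinely large effect or to something other than p-hacking.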
It was at this point that I started suspecting that part of the evidence presented in CGK 2014 was not just p-hacked but based on fabricated data.
At the time, I wasn’t sure how warranted these suspicions were, or how best to share them with anyone, as I did not have enough tangible evidence to support or prove them.
What I knew, however, was that I had accumulated enough theoretical and empirical arguments to seriously question the conclusions of CGK 2014, and that these arguments might be of interest to the scientific community.
Indeed, CGK 2014 is an unavoidable building block for anyone studying networking behavior: It is authored by influential scholars, was published in a prestigious journal, and received the Outstanding Publication Award in OB at the 2015 Academy of Management annual meeting for its “significant contribution to the advancement of the field of organizational behavior”.
Sharing these concerns (again, independently of my suspicion of fraud) appeared like the right thing to do: It would bring awareness to the issues in CGK 2014, and hopefully spur novel investigations into the root causes of networking discomfort.
I therefore decided to write a 10-page criticism of the paper and to include it in the first chapter of my dissertation.
This criticism summarized my argument against CGK 2014, explained why I chose not to rely on this paper in my research, and justified my choice of studying different psychological underpinnings of networking discomfort.
This criticism, again, only focused on the theoretical and methodological issues in CGK 2014.
Shooting the messenger
The story so far is very banal.
I, a (very) early-career researcher, took a deep dive into a famous paper and discovered inconsistencies.
These stories always start with “that’s odd…”, “it doesn’t make any sense…”, or “there is something off here…”.
Then, I second-guessed myself, a lot.
After all, the authors are famous, serious people; and the paper is published in a prestigious peer-reviewed journal.
So I thought “I must have misunderstood,” “I must be missing a part of the puzzle,” “it was probably addressed during the peer review process”… Then, as I finally grew more confident that the issues were real and substantial, I decided to write about them.
What should happen then (if science were, as many people like to say, “self-correcting”) is that, after a peer-review of some form, my criticism would get printed somewhere, and the field would welcome my analysis the same way it welcomes any other paper: Another brick in the wall of scientific knowledge.
As revealed in the New Yorker piece, this is not at all what happened.
The three members of my committee (who oversaw the content of my dissertation) were very upset by this criticism.
They never engaged with the content: Instead, they repeatedly suggested that a scientific criticism of a published paper had no place in a dissertation.
After many frustrating exchanges, I decided to write a long letter explaining why I thought it was important to document the issues I had discovered in CGK 2014.
This letter stressed that I was not criticizing the authors, only the article, and encouraged the members of my committee to highlight anything in my criticism that they viewed as inaccurate, insufficiently precise, or unfair.
The three committee members never replied to this letter.
Given this lack of response, I decided to keep the criticism in the dissertation draft that was shared with them before my defense.
On the day of the defense, external committee members called the criticism “unusual,” “unnecessary,” and argued that since I had not run a replication of the study, I could not criticize it.
Only one committee member found it “brave and interesting.”
After the defense, two members of the committee made it clear they would not sign off on my dissertation until I removed all traces of my criticism of CGK 2014.
Neither commented on the content of my criticism.
Instead, one committee member implied that a criticism is fundamentally incompatible with the professional norms of academic research.
She wrote that “academic research is like a conversation at a cockt