Normally, I avoid blogging on anything topical; I want to write for the ages, not our daily hysteria. But I’m going to make an exception for… “Sokal 2.0.” The Chronicle of Higher Education provides an overview:
Three scholars — Helen Pluckrose, a self-described “exile from the humanities” who studies medieval religious writings about women; James A. Lindsay, an author and mathematician; and Peter Boghossian, an assistant professor of philosophy at Portland State University — spent 10 months writing 20 hoax papers that illustrate and parody what they call “grievance studies,” and submitted them to “the best journals in the relevant fields.” Of the 20, seven papers were accepted, four were published online, and three were in process when the authors “had to take the project public prematurely and thus stop the study, before it could be properly concluded.” A skeptical Wall Street Journal editorial writer, Jillian Kay Melchior, began raising questions about some of the papers over the summer.
Beyond the acceptances, the authors said, they also received four requests to peer-review other papers “as a result of our own exemplary scholarship.” And one paper — about canine rape culture in dog parks in Portland, Ore. — “gained special recognition for excellence from its journal, Gender, Place, and Culture … as one of 12 leading pieces in feminist geography as a part of the journal’s 25th anniversary celebration.”
The scholars behind the hoax describe their master plan here:
Our paper-writing methodology always followed a specific pattern: it started with an idea that spoke to our epistemological or ethical concerns with the field and then sought to bend the existing scholarship to support it. The goal was always to use what the existing literature offered to get some little bit of lunacy or depravity to be acceptable at the highest levels of intellectual respectability within the field. Therefore, each paper began with something absurd or deeply unethical (or both) that we wanted to forward or conclude. We then made the existing peer-reviewed literature do our bidding in the attempt to get published in the academic canon.
Examples, in the authors’ own words:
Sometimes we just thought a nutty or inhumane idea up and ran with it. What if we write a paper saying we should train men like we do dogs—to prevent rape culture? Hence came the “Dog Park” paper. What if we write a paper claiming that when a guy privately masturbates while thinking about a woman (without her consent—in fact, without her ever finding out about it) that he’s committing sexual violence against her? That gave us the “Masturbation” paper. What if we argue that the reason superintelligent AI is potentially dangerous is because it is being programmed to be masculinist and imperialist using Mary Shelley’s Frankenstein and Lacanian psychoanalysis? That’s our “Feminist AI” paper. What if we argued that “a fat body is a legitimately built body” as a foundation for introducing a category for fat bodybuilding into the sport of professional bodybuilding? You can read how that went in Fat Studies.
Needless to say, active practitioners of “grievance studies” were displeased. But quite a few more traditional academics have been equally quick to dismiss this project’s importance. Thus, my old friend Jacob Levy remarks, “I am so utterly unimpressed by the fact that an enterprise that relies on a widespread presumption of not-fraud can be fooled some of the time by three people with Ph.D.s who spend 10 months deliberately trying to defraud it.”
Well, unlike Jacob, I am impressed. Deeply impressed. Why? Because, on reflection, Sokal 2.0 amounts to an Ideological Turing Test. As I originally explained in 2011:
Mill states it well: “He who knows only his own side of the case knows little of that.” If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct. And if ability to correctly explain a position leads almost automatically to agreement with it, that position is more likely to be correct. (See free trade). It’s not a perfect criterion, of course, especially for highly idiosyncratic views. But the ability to pass ideological Turing tests – to state opposing views as clearly and persuasively as their proponents – is a genuine symptom of objectivity and wisdom.
My idea has inspired multiple actual tests. But frankly, none of them are in the same league as Sokal 2.0. Three scholars who held a vast academic genre in low regard nevertheless managed to master the genre’s content and style expertly enough to swiftly publish enough articles to earn tenure! Frankly, if that doesn’t impress you, I don’t know what would.
The main question in my mind: Does Sokal 2.0 primarily show that the authors are intellectually strong… or that “grievance studies” is intellectually weak? Both can be partly true, of course. But the harder the authors had to toil to achieve their goal, the less they impugn the honor of their target. So how hard did they toil? The authors’ self-account:
[W]e spent 10 months writing the papers, averaging one new paper roughly every thirteen days… As for our performance, 80% of our papers overall went to full peer review, which keeps with the standard 10-20% of papers that are “desk rejected” without review at major journals across the field. We improved this ratio from 0% at first to 94.4% after a few months of experimenting with much more hoaxish papers.
In other words, they barely broke a sweat. You could accuse the authors of false modesty, but that is a rare human failing. When we succeed, most of us like to highlight our own awesomeness, not the ease of our goals. While most people would have been less successful than the hoaxers, what they did was far from superhuman. And that, in turn, amply supports their main theses: The fields they hoaxed have low intellectual standards and don’t deserve to be taken seriously.
Does this mean that the subjects of race, gender, sexual orientation, body image, and so on don’t deserve to be taken seriously? Not at all. You shouldn’t blame subjects just because the fields that study them fall short. Identity is too important to be left to people who embrace their own identity. Still, until the researchers who study these subjects calm down, speak clearly, and treat dissent with civility, they will continue to produce little knowledge.
P.S. My main caveat about my positive evaluation of Sokal 2.0: I’ve seen too many hoax movies not to wonder if there’s a hoax within a hoax. Probably not, though.
READER COMMENTS
Michael Keenan
Oct 4 2018 at 7:09pm
The Ideological Turing Test point holds for the “fat bodybuilding” paper, but some of the other papers (e.g. the dog park paper) were helped along with fake data.
I’m unimpressed by the papers that use fake data, such as the dog park paper. The core of it is a fake observational study showing that humans are more uncomfortable with gay dog sex than straight dog sex, attached to a silly discussion section that mentions “dog rape culture”. It looks to me like the journal accepted it based on the interesting observational study, but the coverage is calling it the “dog rape culture” paper.
Only the papers that didn’t use fake data count as passing Ideological Turing Tests.
Thomas Sewell
Oct 4 2018 at 9:29pm
Reading their description, the “fake data” papers should count as well, because the authors deliberately constructed them to ‘present very shoddy methodologies including incredibly implausible statistics (“Dog Park”), making claims not warranted by the data (“CisNorm,” “Hooters,” “Dildos”), and ideologically-motivated qualitative analyses (“CisNorm,” “Porn”).’
Thus any that were accepted do demonstrate the thesis of low scholarship levels and academic rigor in these fields.
Jay
Oct 5 2018 at 8:18pm
I’d say that the inability to recognize a terrible data set counts as evidence of a lack of understanding by the peer reviewers.
I’ll use an example from my own field of training, chemical crystallography. If a paper was submitted that showed a crystal structure with two carbon atoms 110 picometers apart, at least one reviewer should catch it. Chemists know that neighboring carbons should be about 120-155 pm apart. A C-C bond distance of 110 pm probably isn’t real, and any paper claiming such a thing should at least be written as if the authors know it’s really unusual and have lots of data designed to convince the reader of that specific fact.
Likewise, if you wrote an economics paper saying that a minimum wage increase caused employment to rise, any economist would be very skeptical. Economists know that it doesn’t usually work that way, and anyone claiming that it did work that way would be expected to have sufficient evidence to convince an extremely skeptical audience.
If an amateur can make unrealistic claims to supposed experts about the experts’ claimed field of expertise without challenge, that’s pretty strong evidence that the expertise is deficient.
Robert EV
Oct 6 2018 at 12:40pm
Analytical tools develop.
The fabricated data sets may have been bad, but were they bad enough to be outside of the range expected from such data sets in this era?
Jay
Oct 7 2018 at 8:11am
If they’re not, what does that tell us about the reliability of conclusions based on data sets in this era (i.e., about all the other papers in those fields)?
S D
Oct 5 2018 at 3:38am
This all hinges on the extent to which these are ‘high-impact’ papers, or just part of a for-profit network with low standards that helps researchers pad their CVs.
It would be useful to see how many papers from these journals ever get cited, on average. I am not an expert on this, but I suspect the number is very small.
Alan Goldhammer
Oct 5 2018 at 7:19am
Citation analysis is fraught with peril. Back in the days when I was still a research biochemist, everyone used a couple of standard assays for proteins. They were always referenced in the submitted manuscripts, and as a result Oliver Lowry and Ulrich Laemmli ended up as two of the most highly cited biochemists of all time because of the methods they developed. Though dated, here is a nice article on citation analysis in the sciences.
Hazel Meade
Oct 5 2018 at 2:31pm
That was my thought as well. There are a lot of low-quality for-profit journals. I’d be interested in seeing the impact factor of the journals they published in.
Heck maybe they invented fake journals too (hoax within a hoax theory).
Dave Smith
Oct 5 2018 at 11:14am
It is quite interesting that the journals that were fooled and not fooled did not seem to be random. As Tyler Cowen reports, Sociology journals were not fooled. This means that serious editors and serious referees are hard to fool, even once in a while. People at the Chronicle who are critical of this need to consider this fact and update their opinions.
Robert EV
Oct 5 2018 at 12:22pm
I understand your point, and can definitely see some truth to it, but:
I’m sure many blackface comedians studied black people enough to know them well, and thus take ‘new’ behaviors and distort them in such a way as to get laughs from a white audience. This doesn’t change the fact that blacks at the time were engaged in the same noble struggle against life as whites (though from a socially lower stratum).
Knowing a viewpoint enough to state and then reject it has no bearing on whether one’s own unspoken biases in that matter are justifiable.
E.g.: http://thesubjectsupposedtoknow.us/paul-blooms-against-empathy-is-a-right-wing-trojan-horse/
Jay
Oct 5 2018 at 8:28pm
Their papers were accepted by leading* journals in the fields, so this is more like the blackface comedian of your example being offered tenure in an African-American Studies department.
*The hoaxers claim that these were leading journals. I have no idea whether this claim would be widely accepted within the fields involved; if it turned out that these were generally regarded as crappy journals, the hoax would lose most of its sting.
Robert EV
Oct 6 2018 at 12:50pm
And Fat Studies may be the leading journal for the study of overweight people, but it may also be the only journal specializing in overweight people.
It looks like a total of six colleges, at its height, may have taught fat studies courses. That, plus the occasional interested faculty member or desperate grad student elsewhere, is likely barely enough to fill a quarterly journal if they’re going all out (Fat Studies published three issues in 2017, with a total of 20 ‘articles’, plus miscellaneous non-article reviews, commentaries, and the like).
Jay
Oct 7 2018 at 8:05am
My understanding of the results, which is admittedly minimal, is that some of the disciplines came out looking better than others. From what I’ve heard, the mainstream sociology journals came out looking pretty good, in the sense that they published far fewer of the fake papers and explicitly rejected more. The gender studies journals came out looking a lot worse. Gender studies is farther from the mainstream than sociology per se, but a lot more mainstream than “fat studies”. My impression of the results is that the field of sociology* showed relatively rigorous standards (emphasis on relatively) but that the subfields the hoaxers referred to as “grievance studies” came out looking pretty bad.
*as constructed, i.e. the journals which were generally called sociology journals by people who called themselves sociologists and worked in departments called “Sociology”
Bab
Oct 5 2018 at 5:31pm
I think that it may be possible to publish a fake article in a scientific journal, but probably not a hoax article along the lines of Sokal Squared. The point was not just that the articles were untrue, but that they were absurd and were published anyway. Likewise, the point of the Sokal article was not merely that it was flawed or untrue, but that it was complete and arrant nonsense.
The point is that gender and critical studies are fields that by design uncritically accept someone’s uncorroborated personal testimony as true, as long as it aligns with their worldview. Moreover, they react with great hostility whenever the validity of someone’s testimony is questioned – this is generally termed the “erasure” or “invalidation” of someone’s existence. As long as those things remain the case, the field will always be vulnerable to these sorts of hoaxes. I think that it is very important to study racism, but it needs to be done in a way that is empirical and rigorous.
Tiffany
Oct 8 2018 at 7:10pm
Where are these scholars teaching? Every single article about this topic credits these three people as scholars, but the only academic link thus far is to an assistant professor. Can anyone clarify who these three are and also explain why these academic journals are significant? I have never heard of any of them, and I have many years of postgraduate training. A good story would do more background checking than any national news outlet has bothered to do.