Suppose someone sends you a new article claiming X. Intuitively, you might think, “This will either make me more likely to believe X, or have no effect.” Once you understand Bayesian reasoning, however, this makes no sense. When someone sends you an article claiming X, you should ask yourself, “Is this evidence stronger or weaker than I would have expected?” If the answer is “stronger,” then you should become more likely to believe X. However, if the answer is “weaker,” then you should become less likely to believe X.
Thus, suppose you initially consider X absurd. When someone sends you some evidence in favor of X, you should update in favor of X if the evidence is less awful than expected. You should update against X, in contrast, only if the evidence is even more awful than expected.
Similarly, suppose you initially consider X absurd, but your brilliant friend nevertheless defends it. The fact that a brilliant person believes X is evidence in its favor. Given his brilliance, however, his arguments should only persuade you if they are even better than you would have expected from one so brilliant. When a great mind offers mediocre arguments, you shouldn’t merely be unmoved; you should be actively repelled: “That’s the best you can do?!”
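To make the direction of these updates concrete, here is a minimal numerical sketch in Python. All the numbers are invented for illustration; the only structural assumption is that a capable advocate shows you the best evidence available, so the quality of what you are shown is informative about X:

```python
# A minimal sketch of the update rule above. All numbers are invented
# for illustration; the structural assumption is that a capable advocate
# shows you the best evidence available, so the quality of what you see
# is informative about whether X is true.

prior_X = 0.5

# Likelihoods P(argument quality | X) and P(argument quality | not-X).
# From a brilliant advocate, you expect a strong argument if X is true.
p_quality_given_X = {"strong": 0.9, "mediocre": 0.1}
p_quality_given_not_X = {"strong": 0.3, "mediocre": 0.7}

def posterior(quality: str, prior: float = prior_X) -> float:
    """P(X | observed argument quality), by Bayes' rule."""
    joint_X = p_quality_given_X[quality] * prior
    joint_not_X = p_quality_given_not_X[quality] * (1 - prior)
    return joint_X / (joint_X + joint_not_X)

print(posterior("strong"))    # 0.75  -- better than expected: update toward X
print(posterior("mediocre"))  # 0.125 -- worse than expected: update against X
```

The exact numbers don’t matter; what matters is that a brilliant advocate makes a strong argument much more likely if X is true than if it is false, so a mediocre showing drags the posterior below the prior.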
Example: One of the smartest people I know routinely sends me pro-“social justice” links on Twitter. As a result, I think even less of the movement than I previously did. If even he fails to defend his view effectively, the view is probably truly devoid of merit.
What, however, should I conclude if this mighty intellect simply stopped sending me links? One possibility, of course, is that he’s given up on me. Another possibility, though, is that he’s exhausted his supply of evidence. At this point, he’s got nothing better than… nothing.
The strange upshot: While Bayesian reasoning seems to imply that persuasive efforts are, on average, ineffective, there is a reason to keep arguing. Namely: Failure to argue is, on average, an admission of intellectual defeat. And by basic Bayesian principles, this in turn implies that the continuation of argument is at least weak evidence in favor of whatever you’re arguing.
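The “basic Bayesian principle” doing the work here is conservation of expected evidence: your prior must equal the probability-weighted average of your possible posteriors. Writing $A$ for “my interlocutor keeps arguing” and $S$ for “my interlocutor falls silent”:

$$P(X) = P(X \mid A)\,P(A) + P(X \mid S)\,P(S)$$

So if silence would lower your credence, $P(X \mid S) < P(X)$, then continued argument must raise it: $P(X \mid A) > P(X)$.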
Stepping back, you can see a somewhat depressing conclusion. When people are perfect Bayesians, argument is a kind of Prisoners’ Dilemma.
If your opponent keeps arguing, you want to keep arguing so it doesn’t look like you’ve run out of arguments.
If your opponent stops arguing, you want to keep arguing to emphasize that your opponent has run out of arguments.
As a result, both sides have an incentive to argue interminably. Which, as you may have noticed, they usually do.
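To see the Prisoners’ Dilemma structure explicitly, here is one hypothetical payoff matrix (reputational payoffs to You, Opponent; the numbers are invented, and only their ordering matters):

```
                  Opponent argues    Opponent stops
You argue            (-1, -1)           ( 2, -2)
You stop             (-2,  2)           ( 1,  1)
```

Arguing strictly dominates stopping for each side, yet mutual silence at (1, 1) beats the mutual interminable argument at (-1, -1) that both sides end up in.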
Is there any ejector seat out of this intellectual trap? Yes. You could build a credible reputation for talking only when you have something novel to add to the conversation. Then instead of interpreting your silence as, “I’ve got nothing,” Bayesian listeners will interpret it as, “I’ve rested my case.”
[silence]
READER COMMENTS
Loquitur Veritatem
Jul 29 2019 at 11:32am
All of which has nothing to do with the truth or falsity of X, which is a binary choice (i.e., not probabilistic), and everything to do with the state of mind of the person who is thinking about X.
Andre
Jul 29 2019 at 12:50pm
“Failure to argue is, on average, an admission of intellectual defeat.”
For some things, yes. For others, there may be a substantial social cost (whether with that person or the broader observing group) to “being on the wrong side” of an issue – even when one is correct. In those circumstances, failure to argue (or desisting) is a form of social self-preservation.
Philo
Jul 29 2019 at 12:52pm
I suggest that, instead of falling silent, you say explicitly, “I refer you to the argument I made at [link supplied];” i.e., “I rest my case.”
John Hall
Jul 29 2019 at 1:59pm
I found this post a little hard to follow…
Your argument is predicated on the point about brilliant people and them making good arguments (you seem to mix up arguments and evidence at certain points). I don’t really see how it matters how brilliant they are. Originality might be more relevant than brilliance for me. Regardless, I found it difficult to cast this in a Bayesian lens. Maybe what you’re trying to do is compute the probability of some X as p(X | argument) * p(argument | brilliance of arguer).
RPLong
Jul 29 2019 at 2:23pm
For me, this argument reads like a case against Bayesian reasoning. A very strong argument can nonetheless be weaker than the one you expected; that doesn’t call for revising your prior, though, because it’s still a strong argument. A weak argument can likewise be stronger than the one you expected, but if it’s still a weak argument, there’s no reason to update your prior. A third argument could be either strong or weak, but hearing it might trigger a chain of reasoning in your own mind that causes you to learn something about the subject matter and update your prior.
Basically, there are all manner of possibilities here. If a Bayesian restricts himself to the behavior Caplan describes above, he’ll miss out on a lot of important information.
nobody.really
Jul 29 2019 at 2:25pm
Arguably people have learned to exploit this dynamic strategically:
“Damn with faint praise, assent with civil leer,
And without sneering, teach the rest to sneer;
Willing to wound, and yet afraid to strike,
Just hint a fault, and hesitate dislike.”
— “Epistle to Dr. Arbuthnot” by Alexander Pope (1688–1744)
nobody.really
Jul 29 2019 at 2:44pm
Also this:
“Favorinus [of Arelata, 85-155 CE], the philosopher, used to say that faint and half-hearted praise was more dishonoring than loud and persistent abuse.”
—Noctes Atticae (Attic Nights), Book 1, by A. Cornelius Gellius (123-170 CE)
Chris
Jul 29 2019 at 2:39pm
You seem to be putting a strong emphasis on prolonged, ‘brilliant’ arguments. However, often the best argument is fairly simple and the best evidence fairly straightforward, and if someone doesn’t find it persuasive, the fault lies more with them for not being rational about the topic.
An example would be flat earthers versus a video taken from a satellite or weather balloon launch. The evidence takes less than 5 minutes of video to unequivocally demonstrate the roundness of the earth, and yet flat earthers continue to exist.
Also, I’m not sure if you’re trying for humor or not, but your view on social justice seems to be so rigid and irrational that you’re unwilling to even contemplate an argument against it. The fact that your friend may have given up on you is more likely due to you being a lost cause than to them having run out of evidence or argument. At some point you have to decide that the opportunity cost of continuing to develop new, persuasive arguments and finding additional evidence is larger than the reward for proving your point to someone who ignored the merits of the previous evidence and arguments because they weren’t above a subjective threshold of brilliance.
nobody.really
Jul 29 2019 at 3:04pm
Yeah … kids won’t get that. Maybe if you append a GIF of someone dropping a microphone…?
Kevin Jackson
Jul 29 2019 at 6:49pm
You mention a brilliant friend of yours sends you mediocre arguments on Twitter. This makes you think less of social justice because “If even he fails to defend his view effectively, the view is probably truly devoid of merit.”
Let’s say, however, that tomorrow your friend doesn’t share the link with you, but he does share it with the pro-social justice crowd. One of those people, someone you consider to be a complete idiot, sends the link to you. Do you then say, “If even a complete moron can make a coherent argument for social justice, there must be something to it”?
This is the problem with the method you describe. The most brilliant people will be the least effective at convincing you, because you have such high expectations, and vice versa for the least brilliant people.
I think there is a better system: if a brilliant person makes a mediocre argument for a position you consider false, it should raise your opinion of the matter and lower your opinion of the person.
Mark Z
Jul 30 2019 at 1:59am
“If the answer is “stronger,” then you should become more likely to believe X. However, if the answer is “weaker,” then you should become less likely to believe X.”
I’m not sure this is true. It assumes a strong positive relationship between the average quality of arguments made for a position and whether it’s true, but I can imagine plenty of positions where arguments made for and against the correct position are on average equally bad. People with poor reasoning don’t tend toward bad ideas, imo; they are more like intellectual coin flips. And the more complex a dispute is, the fewer people with the intellect and knowledge to reason to the right conclusion, leaving the general quality of arguments on each side to the mercy of the many coin flips of the mass of people without the right knowledge to understand it.
Then there’s the fact that people (smart and dumb alike) often pick their side and structure their arguments in some way for reasons that are unrelated to truth or falsehood. Falsities can be useful, and thus picked up disproportionately by intelligent pragmatists who recognize their utility; or they can be diseases that spread more effectively than truths because of their greater emotional appeal. Though it’s tempting to say, “your argument is so dumb, I’m more convinced of my rightness for having heard it,” I don’t think it’s a logical conclusion. It also likely tempts people to deliberately or subconsciously select the dumbest of their opponents to engage with so they may reassure themselves, “if my enemies are this dumb, I must be right.”
I wholeheartedly agree though about only expressing a strong opinion when one’s confidence in it is equally strong. Unfortunately, I’ve found people find confidence (and disdain) persuasive in their own right, or at least socially compelling. When someone assertively remarks, “Can you believe those idiots actually believe X?” few people have the social fortitude to respond, “Actually, I’m one of those idiots who thinks X.” So confidence can be, and routinely is, misused to short-circuit the faculties of one’s interlocutors.
Matt C.
Jul 30 2019 at 6:33am
Alas, I think a large proportion of society conflates volume with quality. If one reads 100 articles supporting X and 10 opposing X, they are more likely to support X even if all 100 articles supply the exact same weak evidence.
John Fembup
Jul 30 2019 at 9:31pm
Isn’t it more than articles, or assertions, or even personalities? Somehow one must decide whether the evidence presented is stronger or weaker.
Here’s the great Richard Feynman on evaluating a hypothesis
https://m.youtube.com/watch?v=OL6-x0modwY
Peter Gerdes
Aug 3 2019 at 6:09pm
I think part of the problem people are having with this piece is that the wording suggests an incorrect interpretation. You say: “When someone sends you an article claiming X, you should ask yourself, ‘Is this evidence stronger or weaker than I would have expected?’”
I think one (the?) natural reading of this is: “I heard a new study supporting X. Is the evidence for X stronger or weaker than I would have expected *for* a new study supporting X?”
Yet that’s not quite correct. It could be that studies supporting X are relatively common, and this one is less strong than you would have expected *for* a study supporting X; nevertheless (because it arrived before you would have expected) conditioning on its existence should increase your confidence in X.
Regarding continuing to argue, and your friend on Twitter: you seem to suggest adopting a model on which the quality of the arguments people make is largely determined by their intelligence and the best case that can be made for the position. But we know that’s virtually never true. People virtually never make the best case they can for positions they hold, and what prompts them to send messages or not is mostly about how doing so affects their social status.
For instance, consider arguments for abortion. On social media you hear lots of bad arguments for abortion that are easily dismissed (e.g. claims of absolute bodily autonomy…claims the speaker isn’t willing to endorse if asked about a conjoined twin who wants to be separated knowing it will kill the other twin) but there are a bunch of really really good arguments you never hear (no one will cite the abortion-crime link or raise the very real net utility increases from aborting disabled fetuses and replacing them with non-disabled ones).
In short, I fear this post itself makes Bayesianism look bad, as it sorta naively applies this deeply misguided model on which people present arguments to make their best persuasive cases lest others draw negative Bayesian inferences. I think many people who don’t identify this as a problem with your assumptions will nevertheless sense that it’s wrong somehow and falsely attribute the problem to Bayesianism rather than to your failure to take into account the fact that social media serves a primarily coalition-building and signaling function.
F. E. Guerra-Pujol
Aug 4 2019 at 8:12pm
Even without the Bayesian layer, does this mean philosophers are trapped in a never-ending and pointless Prisoner’s Dilemma?
See, e.g., https://priorprobability.com/2019/07/30/are-philosophers-trapped-in-a-prisoners-dilemma/
Franco Bertucci
Aug 24 2019 at 9:17pm
I won’t understand this article until I look up “Bayesian” reasoning. But I do know that when I stop offering arguments to my three-year-old daughter, it isn’t out of intellectual defeat. It’s a very different type of defeat.