Here’s Mike Huemer’s second set of responses to you and me.
About Bryan’s Comments
Thanks again to Bryan, and the readers who commented on his post, for their thoughts about Part 2. This is all cool and interesting. I’ll just comment on a few questions and points of disagreement.
1. Real World Hypothesis (RWH) vs. Brain-in-a-Vat Hypothesis (BIVH)
Why, though, couldn’t we race the Real World theory against the Simulation-of-the-Real-World theory?
Good question. We can think of it like this: we have a theory, B (for “brain in a vat”), and some evidence E. B doesn’t predict E very well, because there are so many other things that are about equally compatible with B. What you could do is add some auxiliary assumptions onto B, producing (B&A). And you could pick A specifically so that it entails E, given B.
In Bayesian terms, this has the effect of increasing your likelihood (P(e|h) in Bayes’ Theorem). It also reduces your prior (P(h)) by the same ratio (or more). So there’s no free lunch.
In other words, now the problem is just going to be that the new theory (B&A) has a low prior probability, because it stipulates that the scientists program the computer in a particular way, where this is one out of a very large number of ways they could program it (if there were such scientists).
Btw, this might not initially sound to you like it’s a super-specific stipulation (“they program a simulation of the real world”). But I think it really is very specific, comparatively speaking. You have to include the stipulation that they make a simulation that is perfect, with no glitches or program bugs or processing delays that would reveal the simulation. This is possible but is true of only a very narrow range of simulations that could exist. You have to stipulate that they decide to simulate an ordinary life in a society much less advanced than their own. Again, possible, but only a narrow range of the intentions they could have, and not one that makes a whole lot of sense. By comparison, if we found that our lives maximized enjoyment, or aesthetic beauty for observers, or intellectually interesting situations, or anything else other than “looking just like a normal mundane life”, then we’d have some evidence for the BIVH.
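A toy calculation can make the no-free-lunch point concrete. All the numbers below are invented purely for illustration (the prior of the bare BIV theory, the number of possible programs): the point is only that whatever factor the auxiliary assumption A adds to the likelihood, it removes from the prior, so the product of prior and likelihood cannot improve.

```python
# Made-up numbers illustrating the Bayesian "no free lunch":
# padding theory B with auxiliary assumption A raises the likelihood
# but lowers the prior by at least the same factor.

p_B = 0.01                    # prior of the bare BIV theory (assumed)
n_programs = 10**6            # ways the scientists could program the sim (assumed)
p_E_given_B = 1 / n_programs  # bare B barely predicts our specific evidence E

# Auxiliary assumption A: "they run a perfect simulation of the real world",
# one program out of the n_programs they could have written.
p_A_given_B = 1 / n_programs
p_BA = p_B * p_A_given_B      # prior of the conjunction B&A
p_E_given_BA = 1.0            # B&A entails E, so the likelihood is maximal

# The unnormalized posterior (prior x likelihood) is identical either way:
print(p_B * p_E_given_B)      # bare theory
print(p_BA * p_E_given_BA)    # padded theory -- same number
```

The likelihood jumps by a factor of a million, but the prior drops by exactly the same factor, so the padded theory (B&A) is no better supported than the bare one.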
To what extent is this approach [direct realism] compatible with just saying that the reasonable Bayesian prior probability assigns overwhelming [probability] to the Real World story?
Also a good question. It’s different from saying we assign a high prior to the RW Hypothesis, because it depends on having sensory experiences. In the “high prior” story, you should believe the RWH before having any sensory experiences (if you could somehow still understand the RWH at that time).
The direct realist approach is more like assigning a high prior probability to “Stuff is generally the way it appears” (or “if there appears to be a real world, then there is”). I think it might be a requirement of rationality that one assign a high prior to this.
2. The definition of “knowledge”
[W]hat’s wrong with the slightly modified view that when we call X “knowledge,” we almost always mean that X is a “justified, true belief”? … [W]e can think of “justified, true belief” as a helpful description of knowledge rather than a strict definition. What if anything is wrong with that?
I think “justified, true belief” is a fair approximation to the meaning of “knowledge”. (A closer approximation is “justified, true belief with no defeaters”.) That is to say that when we call x “knowledge”, we mean something close to that. I don’t think we ever mean exactly that, though. E.g., when you say that someone knows something, part of what you’re saying is that they’re not in a Gettier case. That’s always implied by your statement (even if Gettier cases aren’t something anyone is thinking about at the moment), so you never merely mean that the person has a JTB.
About Reader Comments
part of the problem with questions like “Is there a God?” is not that they are meaningless or that they have no answer. Rather, it’s that they are unanswerable.
I don’t see why that’s unanswerable. That is, I think we can and do have evidence for or against the existence of God. Granted, none of it is conclusive evidence. But then, we also don’t have conclusive evidence for or against any scientific theory, yet we shouldn’t say that scientific questions are “unanswerable” (should we?).
The second example [goodness of polygamy], however, is not a factual question, and will depend on what each particular culture considers good and bad.
It is a factual question! I can’t find that passage right now (it’s not on 87-8 in my copy), so I’m not sure what point I was making. But I explain my arguments against moral relativism in ch. 13.
I too found the discussion of ‘polygamy is wrong’ to be ignoring the ambiguity of the proposition
I doubt that I was doing that. Just take whatever interpretation of the phrase you want, assume that is understood, and then read the passage with that sense. If you think there are three senses of “wrong”, say, wrong1, wrong2, and wrong3, just substitute “Polygamy is wrong1”, and then read the rest of the passage as normal. Again, I’m not sure where this passage is, so I am not sure what its actual point was.
…Phenomenal Conservatism, which says that we are entitled to presume that whatever seems to us to be the case is in fact the case, unless and until we have reasons to think otherwise. This sounds exactly like Popperian fallibilism, since you are admitting that the moment you get a good reason to think that what seems to you to be the case is false, you should doubt it, and thus the original “foundation” is still fallible.
This isn’t Popper’s point. Popper’s point is not merely that we are fallible and should give up our beliefs if we find evidence against them. Pretty much everyone agrees with that for almost all beliefs, and so that would not be a distinctively Popperian point (nor would Popper have gotten famous for saying that). What is distinctive of Popper is that he thinks that you never get any reason at all to believe that any scientific theory is true. (Most people can’t believe that Popper thinks that, because it’s so crazy, and so they just refuse to interpret him as saying that, no matter how clearly he says it. If you don’t believe me, see the Popper quotations in this post: https://fakenous.net/?p=1239.)
“BIV is a bad explanation because from it anything goes and so is not really an explanation” … Michael makes an unnecessary argument (not even completely expressed in the book) with made-up probabilities
I thought it would be too complicated for undergrad students. In case you’re interested, this is where the argument is explained more fully: https://philpapers.org/rec/HUESTA.
The Deutsch argument doesn’t sound adequate to me, since it doesn’t explain why the BIV theory is unlikely to be true. The remark, “from it anything goes” is indeed the start of the explanation, but it sounds like Deutsch does not give the rest of the explanation. He infers that the BIV theory isn’t an explanation, which doesn’t follow at all. If there were a BIV, and its experiences were really caused by scientists stimulating it, then that, trivially, would be the explanation of its experiences.
I don’t think my argument was unnecessary, because what I did was to actually explain why the BIV theory should be rejected. As far as I can tell (not having read Deutsch’s book, but just from Benjamin’s comment), Deutsch doesn’t actually say why we’re not likely to be BIVs.
But just because “your belief is not justified” is not a good reason to change it, especially if the belief to which you should change is also not justified.
I think this is a misunderstanding. I didn’t mean that you should change to another unjustified belief. I meant that, according to the skeptic, you should change from believing to not believing (whatever they’re saying is unjustified). Why? That’s just what “unjustified” means. If you think that it can be rational to hold an “unjustified” belief, then I just don’t know what you mean by “unjustified”.
In order to create an accurate description of “only” the brain in its vat, the scientists, and the brain apparatus — as if that were all that existed, without relying on the simple rules of physics playing out from a (presumably simple) original condition — you would need an absolutely absurd quantity of description.
I think this argument is assuming that (i) the BIV theorist has to give a detailed description of the actual state of the brain, the scientists, etc., without stating the laws of nature (?), but (ii) the Real World theory only states the general laws, and doesn’t have to specify any boundary conditions. (?)
But I can’t figure out why Hellestal assumes that. Why wouldn’t the BIVH and RWH both assume the same laws? And in order to get any empirical predictions, both would of course have to add some information about the configuration of the physical world (some initial conditions). The BIV theory would have to add information about the state of the BIV. The RWH would have to add information about the state of the real world.
Maybe you’re assuming that the BIVH says that there is only a BIV, and nothingness outside the BIV’s lab. Of course that’s a ridiculous theory. But that’s not how anyone understands it. The BIV theory specifies that there is a BIV (and stuff for stimulating it, etc.), and it doesn’t make claims about whatever else is going on outside the BIV’s room.
the low entropy starting condition of our universe … I don’t know why this puzzles anyone. A low entropy starting condition is a mathematically simpler starting condition. That is literally what “low entropy” means: simplicity.
I don’t agree with the last statement. Entropy is more precisely defined in terms of the measure of the phase space region that corresponds to a given macroscopic condition. Low entropy corresponds to a small region, and high entropy to a large region. Almost all of the phase space is occupied by the highest-entropy state, thermal equilibrium. On the standard way of assigning probabilities in thermodynamics, “low entropy” basically means “improbable”.
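A minimal coin-flip model (my own illustration, not from the original discussion) shows the connection between entropy and probability. Identify a macrostate with “number of heads among N coins”, let its “phase-space region” be the set of coin sequences realizing it, and take entropy to be the log of the region’s size (Boltzmann’s S = k log W, with k = 1). The low-entropy macrostate occupies a vanishingly small fraction of the space, i.e., it is improbable under the uniform measure.

```python
from math import comb, log

N = 100  # number of coins; a macrostate is "k heads"

def region_size(k):
    # number of microstates (coin sequences) realizing macrostate k
    return comb(N, k)

def entropy(k):
    # Boltzmann entropy S = log W, with the Boltzmann constant set to 1
    return log(region_size(k))

total = 2 ** N  # size of the whole "phase space"

# Low-entropy macrostate: all tails (k = 0) -- a single microstate.
# High-entropy macrostate: k = 50 -- the "thermal equilibrium" analogue.
for k in (0, 50):
    print(k, entropy(k), region_size(k) / total)
```

Here the all-tails macrostate is a region of size 1 out of 2^100, so its probability is about 8 × 10⁻³¹, while the maximum-entropy macrostate (k = 50) by itself covers roughly 8% of the whole space: low entropy means a small region, and a small region means improbable.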
For our world: You need a universe, which should ideally have simple rules of physics and a simple (low-entropy) starting condition. Then the simple rules of physics need to play out to, eventually, create us. And that’s it.
But that’s not enough to explain your evidence. To explain your evidence, you need there to be, e.g., tables, and giraffes, and 8 planets, and Mount Rushmore, etc. Because you have experiences of all those things, so the RWH says that all those things really exist. For the BIVH, you still only need the scientists and the brain-stimulating equipment to explain all those experiences. There doesn’t have to actually be, say, a Mount Rushmore.
To my surprise, Prof. Huemer largely neglects social epistemology.
True. I was trying to keep the book to a manageable length. However, I plan to do an epistemology text next, and it will at least include testimony and peer disagreement.
The burning question I have: is the typo in the footnote on p.95 on purpose?
No. In fact, I don’t even see the typo. (?)
I think the skeptic’s claim that we cannot know anything with 100% certainty must be correct.
Do you really mean that, or do you just mean that we can’t know controversial beliefs with certainty? (E.g., progressives do not in fact know the optimal minimum wage.) Would you say that you’re uncertain whether you exist? Is it uncertain that A=A?
…our most solid cases of knowledge are built from cooperation. In the example with the octopus, if my friend is next to me and is also seeing the octopus, and we both talk about what we are seeing, and our descriptions match, the likelihood that I am truly seeing the octopus goes up.
True, but notice also that this is not actually a likely case. If you see a normal physical object in normal conditions, you’re not going to be asking your friend if he sees it, etc. I’ve never done that in my life. Why not? Because I don’t need to, because I already know what I see.
The cases in which you actually need to check with other people are theoretical claims. Like, you’ve just given an argument against the minimum wage, and you ask your friend who is an economist to check it. I’ve done that sort of thing all the time. And yes, that definitely increases the likelihood of being correct.
This undermines the first claim above (“…our most solid cases of knowledge…”). No, our most solid cases of knowledge are things like immediate observations, made in normal conditions (good lighting, no hallucinogens, etc.). The theoretical knowledge that is commonly produced cooperatively (like science) is typically much less solid (less likely to be true, more likely to be revised in the future), even after we’ve gone through that cooperative process. Of course, that’s not because the cooperative process is bad; it’s because theoretical claims are inherently harder to know and easier to be wrong about. Which is why we feel the need to work together on them in the first place.
READER COMMENTS
MarkW
Jun 14 2021 at 8:18pm
The BIV hypothesis is one of those things that, to me, seems so obviously wrong/useless that it puzzles me that very smart people can take it seriously, even as a thought experiment. How do we know that the BIV hypothesis is silly?
Absurd level of detail. For BIV to be true, the simulation would have to include all physical properties down to subatomic particles (because, of course, physics experiments work).
Your brain (and mine) couldn’t be separate from the simulation; they would have to be part of it. Why? You could have brain surgery (say, severing the corpus callosum) and observe the expected results in yourself (‘split brain’ symptoms). And then you could do the same operation on others and observe the same results. So if there were a simulation, your brain and mine would have to be included in it, not sitting separate in a vat.
So, we’re talking about either A) TRW or B) A fully-detailed simulation down to subatomic particles and that includes all brains and their operations. At that point TRW vs simulation (an all encompassing simulation that works exactly the same way in the finest detail) is a distinction without a difference.
Henri Hein
Jun 16 2021 at 1:51am
Being wrong and useless are two different things. The absurdity of the BIV scenario is part of the point of it. It’s not posited as a serious alternative to TRW. It’s posited as a model that, if we cannot rule it out, there must be lots of other models we also cannot rule out.
Your level of detail point makes it less likely to be true, yes, but it still doesn’t rule it out.
MarkW
Jun 16 2021 at 11:33am
“It’s posited as a model that, if we cannot rule it out, there must be lots of other models we also cannot rule out.”
But I think we definitely can rule it out. At least we can rule out a brain in a vat that is outside and separate from the simulation. On the other hand, if the brain is being simulated along with everything else, then we’re down to semantics — is a world ‘simulation’ built out of atoms meaningfully different in its implications from a computer simulation of those atoms? I say no, and given that, I’d say BIV is not only wrong, but also not useful.
Henri Hein
Jun 16 2021 at 11:14pm
I don’t think we can. Your example with the brain surgery doesn’t really refute the BIVH. When we examine our brains, we are still looking at spoofed signals from the machinery that runs the vats. Presumably it’s going to make us find what it wants us to find.
I found the line of reasoning in Huemer’s book more compelling. I continue to think that skepticism in some form is useful and necessary, but I also realize that it becomes counter-productive at some point, since we end up spending too much time and energy on pointless disputes. I don’t really know where the optimal line cutting through that tension runs.
Hellestal
Jun 15 2021 at 8:32am
“Maybe you’re assuming that the BIVH says that there is only a BIV, and nothingness outside the BIV’s lab. Of course that’s a ridiculous theory. But that’s not how anyone understands it.”
I think that was a fair interpretation of how the description in the book currently reads. But fair enough.
If you allow the rest of the world outside the lab, the conclusion is even easier.
You can’t just say BIV posits “nothing” outside the lab. It posits an entire world outside the lab that creates conditions that allow that lab to exist. You can’t, for instance, posit a black hole two feet outside the door.
Those conditions? That allow the lab to exist? Those same conditions will ALSO allow regular people who aren’t brains in vats to exist. The same conditions that allow the lab also allow a world of people to exist without the lab and apparatus.
The BIV theory needs: a world + the lab/apparatus/techs
The non-simulation theory needs: a world
BIV still gets its ass kicked.
“I don’t agree with the last statement. Entropy is more precisely defined in terms of the measure of the phase space region that corresponds to a given macroscopic condition. Low entropy corresponds to a small region, and high entropy to a large region. Almost all of the phase space is occupied by the highest-entropy state, thermal equilibrium. On the standard way of assigning probabilities in thermodynamics, “low entropy” basically means “improbable”.”
This is confusing thermodynamic processes with the information-theoretic measure of information.
“Entropy” as an information-theoretic measure literally means “complexity”.
This is part of the point of the Maxwell’s Demon hypothetical. If you ARE such a demon, then the system is not actually in a high entropy state. You know when to open and close the shutter. High entropy = highly probable (thermodynamically) because of our relative state of ignorance, not complexity in the rules of physics. But just because we personally don’t know which particles are whizzing around more quickly, and which not, doesn’t mean that we posit that the laws of physics governing the paths of those particles (or — and this is key — their likely starting conditions) are suddenly more complex. We don’t assume that. We rely on the simplest (lowest entropy) rules we can find that accurately describe what we can see.
…I doubt this is very clear, so I’ll drop this line of argument here. But I appreciate the response.
TGGP
Jun 15 2021 at 9:09pm
The bit about not being 100% certain of anything may be drawing on this reasoning:
https://www.overcomingbias.com/2008/01/0-and-1-are-not.html
Benjamin
Jun 16 2021 at 2:23am
Thanks Michael for the answer. Let me see if I can make my points a bit clearer.
First of all, let me clarify that I have never read Popper directly, and only know his ideas from David Deutsch, so maybe all this is more Deutschian than Popperian.
Yes, I agree that is what is distinctive of Popper. I also agree with him that there is never a positive reason at all to believe that any scientific (or non-scientific) theory is true.
I also think that inductive arguments are logically flawed, and that inference to the best explanation has some more merit but is also ultimately flawed. Maybe this comes from my mathematical training, which made me believe that the only positive arguments you can make are only true if the axioms are true, and there is never a positive reason for why the axioms are true, so all mathematical truths are conditional.
But Popper/Deutsch don’t limit themselves to denying positive arguments for truth; I think their deeper point is that this is not a problem. They hold that it is rational to believe any conjecture, even if you don’t have positive arguments for it, so long as you don’t have negative arguments against it. And it would be irrational not to want to change your belief once you have found a negative argument. Then is the time to search for a better one. In a way it is “belief by survival”: anything that has not been killed by criticism is OK to keep.
There is no positive logical argument for the Theory of Relativity. And it could be false. But we don’t have any negative argument against it; the most we have is the knowledge that it contradicts Quantum Mechanics, so one of them must be false, but we don’t have reasons to say which one is causing the problem, or whether both are. That is no reason for believing that the GPS system is just working by magic.
This is more of a fault in my summary of Deutsch’s argument than in the argument from Deutsch himself. Any attempt I make is going to fall short.
What Deutsch says is not that the BIV theory is not an explanation, but that it is a bad explanation, and so we can discard it. And it is a bad explanation because it has superfluous components. The vat is not doing anything in the explanation; you could replace it with a computer simulation, or with a fridge. Similarly, you could replace the scientists with gods, or with teenagers playing around. Explanations with “moving parts” are bad (you should remove all the moving parts).
I don’t think the argument from probability was achieving that, because as I mentioned the probabilities are all made up. Not just the priors, but all the likelihoods (like the likelihood that we observe this if we live in RWH). So it is again a conditional argument on those probabilities being true.
Just to reiterate, I still find the book one of the best in philosophy I have ever read. I am recommending it insistently to everybody. Thanks for writing it, and for engaging in these discussions.
Henri Hein
Jun 16 2021 at 2:28am
In my copy, the last sentence reads “Why You Are Probably Not a Brian in a Vat.”
Yes. Of course, I cannot deny something exists to which awareness of my present is attached. I’m not positive what that means or what I can infer from it. Maybe I am an agent in a super AI, my identity and my memories are all fake, I was created to produce this sentence, and when I’m done writing it, I will cease to exist. Maybe I am the figment of some strange being’s imagination whose thoughts somehow produce consciousness. In those scenarios, “I exist” means something different from what our language normally conveys.
No, but it doesn’t tell me anything about the real world. It’s true because we have defined the symbols with a meaning that makes it true. I grant that’s not nothing. Having a solid logic framework to build models and make inferences is incredibly useful. I still need some other evidence before I can use it to draw conclusions about the world.
That’s strange to me, because I do that. I mean, if I am driving with a friend on a route we have been on 30 times before, I won’t call out “oh look, the house on the corner is still red!” But whenever I have seen something as fantastically gorgeous as an octopus – and I mean in real life, not on the screen – I have exchanged remarks with my companion(s) about it every single time.
That’s interesting. You would know better than me, so I’m prepared to take your word for it. I would have thought our senses would have to be confirmed with others before they can be trusted. In my own case, as an example, my hearing is somewhat compromised, so I get tinnitus, and the way I hear sounds from the environment will be different from how a human with normal hearing experiences them. When I first started hearing extraneous sounds, it sounded a bit like a quiet AC or ceiling fan. I had to check with others whether they were hearing it. At home, I heard a different sound, more akin to the sound some lamps make, and it was loud. I was sure my neighbor was running some serious hydroponics operation. I remember an epiphany I had when I walked around the neighborhood one evening and realized the intensity of the sound was exactly the same in a several-block radius, which finally made me realize it was internal. So it’s not obvious to me that our immediate observations are reliable.
John Alcorn
Jun 16 2021 at 9:48am
Re: Karl Popper
Popper’s philosophy of science might be wrong in 17 different ways, but it is right in a crucial way, which Michael Huemer mentions in passing in his blogpost, “You Don’t Agree with Karl Popper”:
Crucial, because unfalsifiable grand theories can readily become fashionable, influential, and entrenched — partly because they are unfalsifiable. Consider, for example, social theories based on hermeneutics of suspicion, such as Freudian theories of unconscious complexes and Critical Race Theory. These theories are unfalsifiable. To make matters worse, such theories provide conceptual warrant to interpret disagreement as evidence for the theory! If you challenge the theory, ipso facto you exemplify the theory.