Tuesday, November 7, 2017

Boolean-Value Beliefs vs. Probabilistic Beliefs and Their Relationship to the Accountability of Scientists and Fact-Checkers



          On the Pro-Truth Pledge's website, it says:

          Misinformation is anything that goes against reality.  It can mean directly lying, lying by omission, or misrepresenting the truth to suit one’s own purposes. Sometimes misinformation is blatant and sometimes it’s harder to tell. For those tough calls we rely on credible fact-checking sites and the scientific consensus.

          Now, I've noticed a question that seems to come up a lot from potential signers of the Pro-Truth Pledge. They ask, "How do you fact-check a fact-checking site?"

          And the response, "By asking other fact-checkers to fact-check them."

          Then they ask, "If a fact-checker checks another fact-checker and finds that a statement the first fact-checker made is false, how do you know whether the second fact-checker is right about that, or whether the first fact-checker was right after all?"


          And the response: "Check the scientific consensus."


          And they ask, "Can you fact-check the scientific consensus?"


          And the response: "Scientists can, but regular people don't have enough expertise, so they can't."

          And then they ask, "Then how do you know if the scientific consensus is correct?"

          And the response, "Because science has been pretty reliable in general compared to everything else so far, so scientific consensus is really unlikely to be wrong."

          And then they ask, "But how do you know if scientific consensus is correct in this particular case?"

          And then they also ask, "And how do we know that scientists never collectively decide to lie about their findings? Like, if the physics community had conspired in 1945 to hide some dangerous knowledge so that no one would find out the secret of how to make nuclear weapons, how would we know they were lying?"
          
          Although I suppose in that case it could be argued that scientists basically never collectively decide to lie about their findings unless there's a REALLY GOOD REASON. They're really smart, their findings are how they get paid, and going through all that investigative work without getting to brag about finally solving the mystery, and without advancing science further for all mankind, really, really sucks.

          And they might also ask, "What if the scientific consensus is just plain wrong about something, simply because the current measuring tools aren't precise enough to get the right answer on a particular research question, and no one will realize it until maybe a century later, when we have better tools?"
 
How would we know?

          And the response?


          The Pro-Truth Movement doesn't seem to have made that response yet. And I know a lot of people are going to want one. Just saying "trust the scientific consensus" isn't enough. It's probably going to get people good, reliable results most of the time, but many laypeople won't know that in advance of signing the Pledge.

          So I, a bona fide Pro-Truth Pledge signer and advocate, will now take it upon myself to provide that response. Bear in mind, the response is a bit complicated and depends on a decent amount of background knowledge of advanced critical thinking methods, in particular how you can apply probability theory to your own thought processes. I'm going to walk you through it and make this as simple and easy to understand as I can, so please bear with me.

          Let's begin, shall we?

          Let's start with the way a typical layperson who signs the pledge is likely to think. Here is something posted on Facebook by someone I know who had signed the PTP:


          "We can overcome some biases but not all because we, each of us, is always on the inside of our own selves looking out. We must come to agree as to rules to describe objective phenomena because we have no guarantee that our subjective perceptions of those phenomena are identical or that any of us is empirically correct in our perception. In fact, the odds are that neither they nor we are factually correct. Analogy: We both agree that the sky is blue, but the way your optical system processes color may not be the same as the way mine does? (Some of us can see more colors, some less and to a greater or lesser degree.) And to ice the cake, when we get down to the empirical reality -- the science of it -- we find that that sky is actually colorless and what we're perceiving as color (i.e., a pigmented object reflecting certain optical wavelengths) is actually a result of the prismatic effects of the atmosphere -- refraction vice reflection. We're fooled due to our own subjective limitations -- limitations that we can't be objectively aware of but that we can assume we have."

          Let's break this down. Here is what seems to be the core claim of the above quote: "We must come to agree as to rules to describe objective phenomena because we have no guarantee that our subjective perceptions of those phenomena are identical or that any of us is empirically correct in our perception."
 

          The above argument has three component claims:


                    - We have no guarantee that our perceptions are consistent with each other.

                    - We have no guarantee that our perceptions are correct (that they match reality).

                    - Therefore, we must agree on outside rules to evaluate whether specific beliefs accurately describe objective phenomena, rather than relying on our own perceptions.



          These statements sound reasonable at first. Someone who describes all of their beliefs in terms of pure Boolean values (0 or 1, yes or no, etc.) would not be able to notice the flaw in the above argument.


          But why shouldn't we use Boolean values to describe our beliefs? With Boolean values it's a simple 0 or 1, yes or no, true or false. And in reality, a claim is ALWAYS either true OR false.


          Yes, but how much do you know about reality? Sure, in reality the claim that "Sailor Vulcan loves cereal" must necessarily be either 100% true OR 100% false. But you don't know which it is.

          Now suppose you got some finite amount of evidence, some clue as to what the answer to that question "Does Sailor Vulcan love cereal" might be. For instance, you discover that:
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."


          You should DEFINITELY update on this evidence! Clearly this evidence favors the hypothesis that Sailor Vulcan doesn't love cereal.


          But wait a moment. Even after seeing this evidence, you still don't really know whether Sailor Vulcan loves cereal or not. Sure, the "loves cereal" hypothesis seems less likely now, but you still don't really know.

          How much evidence would it take to really know?

          Well you could ask me if I love cereal. And I might tell you "yes" or "no". But how would you know if I was telling the truth? Maybe I don't know if I love cereal. Maybe I haven't tried it before. Hey, it's possible, isn't it? So then you could ask me if I've eaten cereal before. And I can say "yes" or "no". 

          How would you know if I was lying?
          
          How do you know that cereal exists?

          No really, how do you know?

          Because you eat it for breakfast every day?

          How do you know you eat it for breakfast every day?

          Because you remember eating it?

          How do you know you're remembering that correctly? Is there an experimental test or some other outside indicator that you can rely on to be 100% sure that your cereal-eating memories weren't confabulated/hallucinated?


          Now, you might be tempted to give up at this point and say "Well that could never happen."

          But how do you know that for certain? Haven't you been wrong before? A typical layperson isn't bothered by this question, but they don't really know why they aren't bothered by it. They will just reiterate that those hypothetical scenarios are ridiculous nonsense and obviously false, which is circular reasoning. I ask them, "How do you know that claim X is false?" and they come up with a counterexample in the literature or their own experience that shows claim X to be false. And then I ask "How do you know that the counterexample you just gave isn't false?" and they say "Because claim X is obviously false, therefore you can't falsify the examples that falsify claim X".

          How do you know that for certain? 

          I'm not bothered by this question any more than the typical layperson. However, I know (or at least I'm reasonably certain) why I'm not bothered by it. The reason?
          
          Because the chances of claim X being true are EXTREMELY low. I don't need to round off my estimate of those chances to the nearest whole number.
 
          Yes that's right, it's really that simple.

          In the case of whether Sailor Vulcan loves cereal, you could make an estimate in your head of how likely you think that is. You would base this estimate on your previous baseline estimate, modified by whatever evidence you've gathered since that baseline was established. Maybe your baseline estimate of the probability that someone loves cereal is "Most people love cereal, and therefore, in the absence of any more specific information about this individual's food preferences, I estimate a high probability (if you're really smart you'll give it a number; let's say 70% for now) that Sailor Vulcan loves cereal."


          Then, you acquire the evidence: "Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."

          Let's say that the probability that this evidence is genuine (and not me lying, or being mistaken about my own food preferences, or you misremembering something I said or did in your presence, etc.) is really high, maybe around 80%.

          Since your prior probability that Sailor Vulcan loves cereal is 70%, that means that if you looked at 100 people who were sufficiently similar to me, you would expect the number of them who love cereal to be ~70 people, and the number of them who don't love cereal to be ~30 people. The prior probability of Sailor Vulcan being in the group of 70 people is 70%, and the prior probability of Sailor Vulcan being in the group of 30 people is 30%.
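          If it helps to see that frequency picture concretely, here is a minimal toy simulation in Python (nothing from the Pledge or this post itself; it just feeds the 70% baseline from above into a random sample):

```python
import random

# Toy simulation of the frequency reading of a 70% prior:
# out of 100 people "sufficiently similar to Sailor Vulcan",
# we expect roughly 70 to love cereal and roughly 30 not to.
random.seed(42)  # fixed seed so the sketch gives the same output every run
simulated_people = [random.random() < 0.70 for _ in range(100)]
lovers = sum(simulated_people)
print(f"{lovers} of 100 simulated people love cereal, {100 - lovers} don't")
# Expect something in the neighborhood of 70/30; the exact split wobbles
# from sample to sample, which is what the "~70" above means.
```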

          Now here's where things get really interesting. You can use the evidence: "Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal," to update your original mental probability estimate. To do this, multiply your original estimate by how likely that evidence would be if the hypothesis were true, and then rescale the result so that your estimates for "loves cereal" and "doesn't love cereal" still add up to 100%.

          For instance, multiply your estimate of the likelihood that "Sailor Vulcan loves cereal" by how likely you would be to discover that "Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal" if Sailor Vulcan really did love cereal.
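          For readers who do want to see the arithmetic spelled out, here is a minimal sketch of that update in Python. The 70% prior is the baseline from above; the two likelihood numbers (how expected the evidence would be if I do or don't love cereal) are my own illustrative assumptions, since the post doesn't pin them down:

```python
# Minimal Bayes-update sketch with illustrative numbers.
# H = "Sailor Vulcan loves cereal"
# E = "Sailor Vulcan lives an unusually healthy lifestyle and only rarely
#      eats a cereal-sized serving of carbohydrates in one meal"

prior_loves = 0.70           # baseline estimate from the post
p_evidence_if_loves = 0.20   # assumed: a cereal lover rarely lives like this
p_evidence_if_not = 0.70     # assumed: a non-lover quite plausibly does

# Bayes' theorem: multiply prior by likelihood, then rescale (normalize)
# so the two hypotheses still add up to 100%.
numerator = prior_loves * p_evidence_if_loves
denominator = numerator + (1 - prior_loves) * p_evidence_if_not
posterior_loves = numerator / denominator

print(f"Updated estimate that Sailor Vulcan loves cereal: {posterior_loves:.0%}")
# -> 40%: the evidence drags the 70% baseline downward,
#    but nowhere near 0% -- you've updated, not flipped a Boolean.
```

          The exact numbers don't matter much; the point is that the estimate slides from 70% down to something like 40% instead of snapping straight to "false."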
          
          Don't look at the numbers. You can do this without consciously doing any math!!

          Imagine the hypothetical you that estimated a 70% baseline likelihood that Sailor Vulcan loves cereal. Put yourself in their shoes, try to imagine what that version of you is thinking and feeling, what their psychological state is.

          Imagine your psychological state when you believe that Sailor Vulcan is highly likely to love cereal.
          Now imagine precisely how surprised or unsurprised you are when you find out that
"Sailor Vulcan lives an unusually healthy lifestyle, and only every once in a while consumes an amount of carbohydrates similar or close to what can be found in a bowl of cereal during a single meal."


          Now imagine your psychological state after this. How likely do you feel it is that Sailor Vulcan loves cereal? Do the chances feel higher? Do they feel lower?

          Remember this feeling. Memorize it. Pay attention to it. This feeling of being surprised or unsurprised by the evidence tells you what your mental likelihood-estimates were before encountering that evidence. And if you know in advance how surprised or unsurprised you would be to find out that a certain claim is true or false, then that level of hypothetical surprise or unsurprise IS your current mental likelihood-estimate that said claim is true or false. You don't need to round this estimate off to the nearest whole number. This mental estimate IS your currently held belief.

          Now, you're never going to be able to get infinite certainty in this new kind of belief. 0% and 100% are not actually probabilities but absolute certainties, aka "estimates of infinite certainty," so they can't be updated by the evidence. Anything multiplied by 0% is 0%, and anything multiplied by 100% is still itself (100% equals 1 and 0% equals 0; percentages just express the numbers from 0 to 1 as fractions with a denominator of 100).
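          To see that concretely, here is a small sketch using the same kind of Bayes update as before (the evidence strengths are made-up numbers). A prior of exactly 0% or 100% comes out of the update completely unchanged, no matter how lopsided the evidence, while any in-between prior can still move a great deal:

```python
def update(prior, p_e_if_true, p_e_if_false):
    """Posterior probability of a claim after seeing evidence E (Bayes' theorem)."""
    numerator = prior * p_e_if_true
    return numerator / (numerator + (1 - prior) * p_e_if_false)

print(update(0.0, 0.99, 0.01))  # evidence strongly favors "true",  but a 0% prior stays 0.0
print(update(1.0, 0.01, 0.99))  # evidence strongly favors "false", but a 100% prior stays 1.0
print(update(0.7, 0.01, 0.99))  # the same "false"-favoring evidence moves a 70% prior to ~0.02
```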

          Do you see why it was often so hard to change your mind before? When you round all your beliefs to the nearest whole number and treat them like they are all 100% likely to be true or 0% likely to be true, it becomes really, really hard to change your mind. It's also worth noting that "50% likely to be true" isn't quite the same as "I have no idea whatsoever." With a 50% mental likelihood-estimate that a claim is true, you would not be surprised if the claim was true, nor if it was false. But if you really had no opinion, if you really had no idea and no way to make a guess besides eenie-meenie-minie-moe or flipping a coin, then it wouldn't be at 50%. You simply wouldn't have any idea how surprised or unsurprised you would be at all.

           Since we don't have an infinite amount of evidence to draw from, there will always be some level of uncertainty about whether our beliefs are true or false, and about how closely our perspectives match reality.

          Just because different people's brains are sometimes unreliable in slightly different ways doesn't mean that you should always ignore your brain and only listen to experts. Experts have limited data sets too, and they aren't with you 24/7 to make all your decisions and judgements for you in your everyday life. Scientific consensus is definitely more reliable than individual judgements, but science is used more for generalized and reproducible knowledge, and there are types of knowledge that aren't scientific, like the fact that my keys are in my pocket, or the fact that I am currently writing a post on my blog.

          If you wanted to rely on scientific consensus to figure out where your keys were, you'd need to give the scientific community quite a bit of data about yourself and your personal organizational habits. Then you'd probably need a large, representative sample of Everett branches containing a "you" who sometimes misplaces their keys, so that researchers could run a behavioral study: have each "you" look in each location, record how many of the yous found their keys in each particular spot, and predict from that where your keys are most likely to be right now.

          Or at the very least (since this is real life), you'd need a large sample of people sufficiently similar to you that they habitually tend to leave their keys in the same spots you do when they lose them.

          You don't need a 100% certain "guarantee" that your perceptions are accurate to act on them or to use them as models to help make further predictions. Having a finite amount of data/observations means you can have a finite amount of likelihood/expectation that your hypotheses are true or false.
          
          And this brings us full circle back to the Pro-Truth Pledge, and to our original question: how can we know if and when scientists or fact-checkers collectively lie or are collectively wrong?

          To be honest, it is my opinion that the Pro-Truth Pledge oversimplifies rational epistemology by framing rational beliefs in terms of Boolean values. Potential states of reality can be Boolean, but a rational belief in the absence of infinite evidence will always be merely probabilistic.

          However, I think it does this for a very good reason: most likely because most people don't have the time or desire to learn any probability theory, and yet we still need a way for regular people to distinguish between:

                    1. claims that are significantly higher than 50% likely to be true based on the evidence available
          and
                    2. claims that are significantly lower than 50% likely to be true based on the evidence available.

          Most people already use Boolean values to describe their beliefs about a thing, rather than using a percentage that measures how surprised or unsurprised they would be by certain hypothetical observations they could make about that thing.

          And it would be a lot harder to start a movement educating everybody about probability BEFORE starting a movement getting people to value truth more.

          Rest assured, using Boolean values can still be good enough a lot of the time. Generally, I would say that Boolean value-beliefs are probably good enough for most purposes when you're dealing with subjects that have larger data sets, because, as the sketch after this list illustrates, doing more tests tends to increase the disparity between:

                    1. the probability of a claim being true given the evidence available
          and
                    2. the probability of that same claim being false given the evidence available.
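          Here is the sketch promised above (a toy model, not anything from the Pledge; the 60%/40% evidence strengths and the 95% labeling threshold are numbers I made up for illustration). Each weak test nudges the probability estimate a little, and after enough tests the gap between "probably true" and "probably false" becomes enormous, which is when a Boolean label finally becomes safe to slap on:

```python
# Toy illustration: repeated, independent tests widen the gap between
# P(claim is true | evidence) and P(claim is false | evidence).
def update(prior, p_e_if_true, p_e_if_false):
    """Bayes update, same rule as in the earlier cereal sketch."""
    numerator = prior * p_e_if_true
    return numerator / (numerator + (1 - prior) * p_e_if_false)

def boolean_label(p, threshold=0.95):
    """How a Boolean-value believer might report the same estimate."""
    if p >= threshold:
        return "true"
    if p <= 1 - threshold:
        return "false"
    return "I don't know"

belief = 0.5                              # start maximally uncertain
for test in range(1, 11):
    belief = update(belief, 0.6, 0.4)     # each test only mildly favors "true"
    print(f"test {test:2d}: estimate {belief:.2f} -> Boolean report: {boolean_label(belief)}")

# The estimate climbs 0.60, 0.69, 0.77, ... up to ~0.98, moving with every
# single test, while the Boolean column sits at "I don't know" for the first
# seven tests and then jumps straight to "true".
```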

          Unfortunately, using Boolean value-beliefs makes it harder to independently evaluate the reliability or honesty of particular fact-checkers and scientists/scientific organizations. If you're using Boolean value-beliefs, then when it comes to science it's really hard to change your mind with evidence, because you won't be able to form an opinion until the scientists have already gathered enough evidence to make a really high or really low likelihood estimate of whether a claim is true or false. Checking for yourself would mean re-gathering the evidence that was used to form your consensus-given baseline estimate in the first place.

          If you're using Boolean value-beliefs, any probability estimate too close to 50% is simply labeled "I don't know." So no matter how much evidence you gather, you're basically stuck there until you've gathered enough evidence to be overwhelmingly on one side or the other. And even then it's hard to update, because you've already run so many tests, and they all resulted in "I don't know." If all the tests you ran felt "inconclusive," it's going to be hard to make the jump to "this is really likely/unlikely to be true." But if you can't update your beliefs incrementally with the evidence, then you can't properly check a scientist's work. You would only be able to evaluate whether new results matched old, well-established results, and anything really surprising would be thrown out the window simply because it is really surprising.

          So laypeople who don't have any knowledge of probability will just have to take our word for it and hope that fact-checkers and scientists are trustworthy.

          Which is a bit of a bummer, because if they were thinking about their beliefs in terms of percentages of expectation, as levels of surprise/unsurprise, rather than just a yes/no, they would be able to update incrementally. And that would allow them to form opinions based directly on the evidence scientists gather and publish in journals, rather than just taking their word for it.


          And the thing is, while scientific consensus is a LOT MORE reliable than individual perception, it still isn't infallible, and in the infrequent and unlikely event that the scientific consensus gets something wrong, a layperson with no knowledge of probability would be unable to notice it. This could have consequences for new or emerging fields of study that are highly relevant to people's lives but don't yet have very large data sets. If any public reporting is being done on a relatively new field without many (or perhaps any) established, well-supported theories, then a layperson who uses Boolean-value beliefs instead of probabilistic beliefs is going to be unable to form any rational opinions about it. And if it's a field they really need to know about for their own safety or wellbeing, or the safety or wellbeing of society, then not being able to form an accurate understanding of the current state of the field is not a good thing.

          However, every cause has to start somewhere, and since currently most people aren't interested in learning about probability, we'll just have to stick with the Pro-Truth Pledge as it is for now.
