Monday, September 4, 2017

Are people innately good? Or...

A while back, I posted this on my Facebook page:

"
I currently think that most people are inherently good but crazy. Specifically, I think that people turn off their empathy for perceived aggressors even when the perceived aggressors aren't actually aggressing, and that this is sufficient to explain the phenomenon of why people do evil, why they act against their innate human pro-social values. It probably wouldn't be evolutionarily advantageous to empathize with someone who you think is threatening you, because then you might try to be friends with something that wants to eat you, or hesitate to kill them before they eat you. And if you're taught throughout your life that another person or group is threatening you or people you care about...
Perhaps this mindset can even become habitual if it is in place long enough?
Anyways, it doesn't seem necessary to postulate that people are at all inherently evil, that people have an innate anti-social value. That seems to be unnecessary extra detail.
Does anyone have any counterexamples to this theory?"

Nobody responded to the comment with any counterexamples, and I got two likes.

I recently saw a comment in an earlier thread in one of the Facebook groups I frequent which basically said (unless I'm misunderstanding) that there's a conspiracy of mega-rich people to perpetuate ignorance amongst the populace. I didn't think it was very likely, because it seemed like there were simpler explanations which don't involve that kind of coordination. The comment in question:

"True, however Zappa understood back then that there was a movement towards the celebration of ignorance and the denigration of intelligance. Most likely designed by the wealthy elite to further separate us into two classes instead of three, giving more to power to them. Yes?"

My response at the time:

"
Unnecessary extra detail imo. We already know that people have a natural tendency to dislike it when someone says things they don't already agree with and don't want to hear. If someone is really ignorant, then any new info that they learn could cause them to react like that. Maybe people are less educated now on average than they were before, for whatever reason. Or perhaps humanity's knowledge, and the common use of and reliance on that knowledge, has grown faster than most people can keep up with, so someone who is ignorant today might have been considered well educated before? Do you have any evidence that distinguishes the wealthy-elite-conspiracy hypothesis from the many other possible explanations?
Edit: there is some knowledge that isn't likely to cause that sort of reaction in someone learning it for the first time, like "the sky is blue" or "the Earth is round" or "1+1=2". But I'm guessing it depends on the knowledge. I think my point still stands though"


However, since then I have had a change of perspective.

Twice today I have been confronted with the subconscious, practically-almost-deliberate unreasonableness of other human beings. The first time, someone made a comment implying that cis-straight-males are more likely to be anti-intellectual than women or homosexuals. When I told him that actually no, women and homosexuals can be just as anti-intellectual as heterosexual cis-men, the conversation somehow devolved into an argument in which everything I said was responded to as if it were coming from a misogynist, and I was even ad-hominemly compared to the "All Lives Matter" movement. It took hours for one of the two people I was talking with to realize that they were putting words in my mouth, while the other one ended up conveniently forgetting what we were talking about. When I reminded him, he appeared to confabulate some other bullshit explanation about how I was supposedly bringing up men's rights issues in the midst of a discussion about feminism, even though that wasn't what happened at all: we hadn't even been talking about feminism when I brought up men's rights issues, and I felt like he should have already known that because he had been participating in that entire discussion. It was as if he had forgotten the contents of the entire discussion on the spot, at a time when it would have been most convenient for him to do so.

What's arguably even worse is that he probably didn't consciously decide to do that. It probably happened subconsciously, automatically, without him even having to think about it, so it's not exactly his fault. As far as I can tell, pretty much everyone does this when faced with information that threatens their self-image, unless they learn how not to, and that takes a lot of time and practice.


Later, I got into an argument with some people on the Slate Star Codex Discord channel about Gleb Tsipursky, the president and a founding member of Intentional Insights, a nonprofit organization that promotes rationality and effective decision-making. He has recently been the target of a lot of misinformation and slander, and those attacks were effective enough that a large portion of the secular community got suckered in by them. When I tried to talk about this on the Discord channel, I was flooded by a rapid series of claims about Gleb's supposed dishonesty. When I tried to specifically debunk one of them, one person said that my response was irrelevant because it didn't refute some other claim about Gleb which I hadn't gotten to yet, and at one point someone asked me why I hadn't responded to any of the other points yet, and I said "I can only respond to one point at a time!"


They were literally jumping from one claim to another and barely giving me enough time to respond in between. At one point they accused me of actually being Gleb Tsipursky using a sock puppet account, and when I tried to explain that NO, I am not Gleb, they just didn't believe me and went on to say that that's exactly what Gleb would say in those circumstances if I were him using a sock puppet account. Like, regardless of what I said, they would still have believed that I was Gleb Tsipursky. Like those old witch trials where anything the defendant could possibly say or do would be taken as evidence that she was a witch.

"
Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis ('prudence in criminal cases') in which he bitingly described the decision tree for condemning accused witches:  If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away."

And THIS was in an online community for rationalists--people who (one would expect) actually care more about reason and truth than the average person.

For the sake of transparency, I also mentioned that "
while I do have a vested interest in Intentional Insights and the Pro-Truth Pledge, I only started volunteering with them after investigating some of the claims made by that post on EA. But if I thought there was sufficient evidence that Gleb was dishonest, I would leave InIn in a heartbeat, because that kind of thing would defeat the entire point of volunteering with them. Unfortunately I'm not that great at arguing, especially when I don't have time to plan out what I'm going to say ahead of time, because I have a social developmental disability. And Gleb's post about this is a lot more comprehensive regarding the things that have been said about him, so it would be much easier if you read through that post and then checked the claims he makes there"

And then I posted the link to that post for the second time in that conversation, hoping that other people in the discussion would actually read it before coming back to the discussion. Instead they just stopped talking about it altogether and moved on to other topics, and the subject of Gleb Tsipursky did not come up again.

Twice today I have been confronted with people being semi-deliberately unreasonable with me in an ad hominem manner, and both times it took hours and hours of my time to defend my own character and integrity from ad hominem arguments, in discussions where my own character and integrity shouldn't have had anything to do with what was being discussed!

And on the heels of all that came the breaking point: a couple of hours ago I read this article, which basically describes how medical research in the U.S. is being controlled and censored on a massive scale by an organization that isn't accountable to anyone. And as I read it, I felt this growing sense of something shifting in my mind. And suddenly I realized that a lot of what I said in the two Facebook posts about human nature which I quoted at the beginning of this blog post seems like excuses, like I was trying to say that "actually people are innately good because of this nifty theory" rather than relying more on actual observations of human behavior.

I'm not entirely sure if I'm just falling prey to the horns effect, or if learning of the evidence in that article is prompting me to integrate a lot of prior evidence that I never got around to properly integrating, which in turn causes me to update on this evidence more than I would normally expect to.

Either way, I'm now noticing all of the holes in my nice, shiny theory of human nature which I described earlier. People don't really have built-in utility functions--our terminal values are probably just an implied byproduct of how our values in the moment fit together. If humans did have built-in utility functions then yes, you could say that humans are innately good, because a desire to optimize for the life, wellbeing and preferences of other people as well as yourself falls out of/is implied by how we tend to value things in the moment under consistent self-reflection. For a perfectly rational agent, how you value things in the short term should always match up with how you value things in the long term. If you have an innate desire to be kind to others, then, if you're a perfectly rational agent, you wouldn't have an in-the-moment preference to not be kind, because then your preferences would contradict each other and you would not be able to really satisfy both.
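To make that last point a bit more concrete, here is a toy sketch in Python (everything in it is made up purely for illustration; the options, the preference names, and the numbers don't correspond to anything real) of what contradictory preferences look like: each preference, taken on its own, points at a different option, and no option on the menu fully satisfies both at once.

    # Toy model of an agent with two preferences that pull in opposite
    # directions on the same choice. Everything here is invented for
    # illustration only.
    # Each option is scored by two hypothetical preferences:
    #   "be_kind"  - how well the option treats the other person
    #   "lash_out" - how much the option vents hostility at them
    options = {
        "help them":        {"be_kind": 1.0, "lash_out": 0.0},
        "insult them":      {"be_kind": 0.0, "lash_out": 1.0},
        "half-hearted mix": {"be_kind": 0.4, "lash_out": 0.4},
    }

    def best_for(preference):
        """Return the option that maximizes one preference on its own."""
        return max(options, key=lambda option: options[option][preference])

    # Taken separately, the two preferences recommend different options...
    assert best_for("be_kind") != best_for("lash_out")

    # ...and no option fully satisfies both of them at the same time.
    satisfies_both = [
        option for option, scores in options.items()
        if scores["be_kind"] == 1.0 and scores["lash_out"] == 1.0
    ]
    print(satisfies_both)  # prints [] -- whatever you pick, one preference loses out

The point of the sketch is just that "satisfying both" isn't one of the available options; a self-consistent agent would have to pick a trade-off, while an inconsistent one keeps acting as if no trade-off were needed.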

However, a lot of people aren't very good at consistent self-reflection. Sure, if someone were perfectly rational they wouldn't have goals and values that contradict each other. And in any case, people are just as capable of having unethical long-term goals as ethical long-term goals. It's not just a matter of pro-social conscious reason vs. anti-social subconscious instinct. Otherwise no one would ever plan to do unethical things in advance; they would only commit crimes of passion, only do wrong when they lose control of themselves.


What all this means is that if someone has extremely anti-social preferences, it doesn't matter that they would be more ethically-minded if they were more self-consistent, because they're not that self-consistent. If someone has both a preference to be good and treat other people with fairness and compassion, and a preference to lash out at their hated outgroup and at people who disagree with them, it doesn't matter that they can't have it both ways, that they can't really satisfy both values. They will try to satisfy both of them anyway, at the cost of not being able to really satisfy either of them, and will automatically, subconsciously, and yet carefully and kind of deliberately not notice the inconsistency.

If evil people are only able to be evil because they're crazy, it doesn't change the fact that they're evil. To say that they are innately good implies that they don't really know what they really, ultimately want, which isn't necessarily true. They know what they want; it's just that some of their desires are inconsistent with each other, and they're just ignoring those inconsistencies for whatever stupid reason.


And what's even worse, as far as I have been able to tell, is that probably everyone is susceptible to these kinds of inconsistencies in their preferences unless they learn to pay attention to them and consciously decide on trade-offs that fix the inconsistencies. Like, in a way it is their fault when people make these sorts of inconsistent and harmful decisions. After all, they know what they are doing and they choose to do it anyway. And yet in a way it also isn't their fault. They don't know that they know what they are doing, and the decision sometimes happens automatically at a subconscious level, too quickly to be consciously deliberated over. And even when a wrong is planned out in advance or continuously perpetuated for a really long time, they just automatically and carefully avoid really questioning it. If they actually questioned what they were doing they would have to change their minds and their behaviors, and that takes extra cognitive effort, as well as potential emotional pain, so they just avoid questioning it.

They try to avoid looking at the man behind the curtain, even though they ARE the man behind the curtain. And yet even though they know that they are the man behind the curtain, they don't know that they know that they are the man behind the curtain.

I think that about sums up why my father treated me the way he did when I was a kid. He really did still think of himself as a moral, rational person, even as he bullied and belittled me and called me names and threw tantrums at me and controlled everything and even, on rare occasions, was physically violent with me. Everything had to be his way or else, and his demands were oftentimes contradictory or unclear or in some other way impossible to meet. Obviously, he will never admit that he did anything wrong, not even to himself. He will never even put things in anything approaching such terms. Always he will think that he was merely being harsh, that it was for my own good, that if he hadn't done what he did I would be much worse off. If he ever seriously questioned that line of reasoning, he would be confronted with the truth and have to change his mind. And to admit that he had committed a wrong of that magnitude would ruin his self-image as a moral and rational person.

When combined with his other self-esteem issues, his long-term chronic depression, and his generally hopeless attitude towards life, admitting to himself that he had abused me would likely break him utterly. I'm not sure what he would actually do if he broke like that, whether he'd start being even nastier, or have some sort of meltdown, or commit suicide, or just stop functioning in his everyday life, or what. I don't think I want to find out. Not anymore.

And isn't it telling that, in spite of me saying that I value all human life and that no one deserves to die, in spite of all my talk about how people should be rational and have self-consistent preferences, a part of me still wishes to somehow force him to understand and admit what he did to me, even though I know that would kill him on the inside whether or not his brain was still processing information. Because if someone can't satisfy any of their values and preferences because they're so utterly broken that they don't have the motivation to do anything, and if they're permanently stuck like that, then isn't that kind of like dying? I would never want that to happen to anyone. So why does a small part of me still want to make him feel the pain he put me through? And even if I correct the inconsistency in my preferences with a trade-off--I don't want to break another human being utterly, I just want him to feel the pain he inflicted on me, wouldn't it be nice if it was possible for that to happen without utterly breaking him?--it would still be inconsistent. I shouldn't want to inflict that amount of suffering on anyone. It is inconsistent with my other desires. It practically goes against everything I stand for. So why do I still feel that way?

Note1: it has been pointed out to me that the claim that women and homosexuals are just as capable of anti-intellectualism as straight cis-men is not incompatible with the claim that men are more likely to be anti-intellectual. That being said, I still think it's unlikely that there is a significant difference between men and women in the likelihood of being anti-intellectual. Anti-intellectualism seems to me to be strongly related to a lot of general human nature stuff, and I wouldn't expect that sort of thing to apply significantly more to one sex than the other. Still, there's a difference between statistical tendencies and natural laws: one often has exceptions and the other doesn't. Even if one sex is more anti-intellectual on average than the other, anti-intellectuals might still make up less than 50% of each sex. I would need more supporting evidence to change my mind about this.

Note2: afterwards I spoke with the person who had made the "All Lives Matter" comment and he said,

"Well, obviously your unresolved conflicts with your father is central to hidden unconscious issues.

As far as your reactions to me, you did a bit of projecting.  Yes, I lost the intent of the thread since I was on a few (similar) at the time.  Yet, you didn't believe me.  No, I wasn't attacking you (or even criticizing) since I don't know you yet.  Yes, I had a point to make (which you may or may not have "gotten") and yes, your reactions seemed to be more about you then me.  Seriously.  And so the issues with your father may be involved.

I sure don't know!

But yes, I had similar issues with my father and yes, he thought he was "good" even though he hurt me emotionally."

This was enough to raise my probability estimate that my understanding of what he was saying was wrong up to somewhere between 45% and 55%. The stuff he was saying about being in multiple similar conversations at the same time did kind of make sense, but I have no idea how well it explains the behavior I actually observed. I'm not sure what to think, except that he didn't intend any harm, even if he did mean his ad hominem remarks on some level. And if I'm being perfectly honest with myself I've made the same kind of mistake before which I got upset with him for. I've probably made it lots of times. It's not just other people who are sometimes irrational, it's basically everyone including me. I have also sometimes gotten defensive and started hearing people say what I expected them to say rather than what they were actually saying.

It's definitely something to think about.

Note3: As has been pointed out in the comment section, I seriously misunderstood the person who thought I was Gleb. I thought they were making an accusation and then doubling down on it with witch trial logic. But it seems that what actually happened is that they expressed an uncharitable thought, took it back and then explained why they had been thinking it. And yes, this is another example of me being defensive and hearing what I expected to hear rather than what was actually being said.

4 comments:

  1. I thought this was a very insightful post overall. It's also a good illustration of why, for all my interest in rationality, I’ve been very hesitant to engage with EA, which seems to be, among other things, a movement of people heroically determined to resolve inconsistencies in their preferences to an extent that I find exhausting just thinking about it. My personal approach to cognitive dissonance, inspired by my (admittedly superficial) understanding of Buddhism, Stoicism, and the work of Joshua Greene, is to notice that most of these internal conflicts concern matters about which I am not, in the present moment, in a position to decide anything of consequence. So I try to let them go and focus on the moment-to-moment narrow object-level ethics of whatever decisions life throws my way, most of which turn out not to be all that hard. (And even when they are hard, at least I can actually *do* something about them.)

    OTOH, I don’t think someone following my approach would ever found something like givewell.org, nor does it offer any useful guidance on the AI value-alignment problem, as far as I can tell. But I give myself permission to beg off from that sort of work on the grounds that I am still learning how to get along in life despite being afflicted with treatment-resistant depression and anxiety.

    Speaking of taking the easy path, I tl;dr’ed out repeatedly while following various chains of links about the Gleb Tsipursky affair (that I’d never heard of before today). Nevertheless, I’m going to render an opinion, which is that, even when I try to interpret what I learned as charitably as I can manage, I still find it odd how *vigorously* you defend Mr. Tsipursky, and I think I know why, having just acquired the right conceptual tools courtesy of these posts from Don’t Worry About the Vase.

    It seems as though Tsipursky, at best, could be said to be approaching his advocacy work with an “easy mode” ethos, focused on optimizing metrics of social-media engagement and that sort of thing, as opposed to the “hard-mode” ethos of just being too good and original to ignore (see, e.g., Eliezer Yudkowsky or Scott Alexander). He then takes a similar rules-lawyery approach to defending himself from his critics, which won’t cut any ice with them, if, as I suspect, what their objection really boils down to is that (a) he should just be playing on hard mode already, and (b) he should understand this intuitively and not need to be warned more than once. Maybe part (b) is unfair to the neurodiverse, and I try to take such considerations seriously, so I won’t endorse it. I certainly do endorse part (a) though, so I hope I’m being helpful by spelling it out. I can see how this could resemble damned-if-you-do-damned-if-you-don’t witch-trial logic, if one is looking at it from way down in the weeds of particular allegations of astroturfing or whatever, but I think that misses an important larger point: in matters more consequential than video games, I subscribe to the “Play on hard mode or stay out of the way of those who do” school of thought, as do many other people. We tend to regard playing on easy mode as a form of defecting or plundering the commons, and people can be quite harsh in their attempts to deter such behavior.

    Anyway, thanks for the food for thought. Try not to get too wrapped up in all the bullshit.

    P.S. “ad hominum” should be “ad hominem.”

    1. Yay my first comment! Thanks. I'll correct the spelling error later. Just to be clear, you know the witch trial example I referenced was in relation to the accusation that I was Gleb Tsipursky using a sock puppet account, right?

      I'm not sure exactly what you mean by easy mode and hard mode. Do you mean he's doing too much self-promotion? Or something. I'm kind of confused here.

  2. I was in the SSC Discord watching the Gleb argument, and as someone who never heard of Gleb before and still doesn't care one way or the other, I feel qualified to act as an impartial observer.

    I think you're taking a very hostile interpretation of the person who said they thought you might be Gleb. After the initial comment, they said that they assumed someone else probably knew otherwise, and that you were a better writer anyway. They later said it felt like arguing with Gleb because of content and style similarities.

    All that is indeed consistent with "You're a Gleb sockpuppet", but if that were the case I would expect things to be far more accusatory than they were. Where you saw an accusation followed by doubling down and witch trial logic, I saw someone confessing an uncharitable thought, immediately walking it back, and then trying to explain why they had it. They certainly didn't invoke the witch trial logic you're implying was in play.

    Some of this is up for interpretation depending on your level of charity, but I think you're making some verifiably contradicted statements, like how disagreement with the suggestion was answered with "Yeah, I assumed someone else probably knew otherwise."

    This sounds a lot like, as you put it, getting defensive and hearing what you expected people to say.

    1. You're right, I didn't realize that. Thanks for bringing this to my attention, I made a note of it in the post.
