Nine bad ways of thinking

Logic is the screwdriver of thought. It’s good for solving precise problems, of which there are more lately, but human thinking through the centuries has developed a much wider set of tools.

Accordingly, here are some common ways to think poorly. These are not logical fallacies, but broader ways of getting things wrong.

False shortcuts
A shortcut is when you have a large problem to solve, like evaluating many possibilities or weighing many factors, and as you try to abstract it in your head you think “X has to have that relationship to Y, so the answer can’t be over there”. For example, you think two ways are equivalent, or one choice is no better than the other, so there’s no point exploring some part of the space. Well, when you get that abstraction wrong it’s a false shortcut and you make mistakes. Happens to me all the time.

Category arguments
Arguing that X is a Y, or questioning whether X is a real Y, is almost never a helpful philosophical method. There are well-meaning situations where we’re trying to be inclusive, but more often than not it’s an attempt to exclude someone for not being a “real” member of a group. It also carries the suspicion of frame shifting. What is it about Y (the frame) that you want to pin on X (the individual), and is that helpful? Could you deal with the question at hand universally, or individually?

Baseline calibration
Making choices is hard. We’re used to ranking alternatives against each other, but do we consider how much difference the choice makes overall? When we scale the options by their baseline importance, the difference may be tiny. Sure, this thing is better than that thing, but how much difference is a thing going to make to my life or my happiness? People make this mistake with money and buy things that are too cheap or too expensive. The same mistake shows up in business, politics, and everywhere else.
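The scaling step above can be sketched in a few lines. A toy illustration with made-up numbers: two phone plans look meaningfully different compared head-to-head, but almost identical once scaled against a yearly budget.

```python
# Hypothetical numbers: option B looks ~7% worse than option A
# head-to-head, but the gap nearly vanishes against the baseline.
plan_a = 29.0        # monthly cost of option A
plan_b = 31.0        # monthly cost of option B
yearly_budget = 30_000.0  # the baseline that actually matters

# Head-to-head comparison: the usual way we rank alternatives.
relative_gap = (plan_b - plan_a) / plan_a

# Scaled by baseline importance: share of a year's spending at stake.
absolute_gap = 12 * (plan_b - plan_a) / yearly_budget

print(f"head-to-head: {relative_gap:.1%}, of yearly budget: {absolute_gap:.2%}")
```

The point isn’t the arithmetic, which is trivial; it’s that we habitually compute the first number and rarely the second.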

Learnt constraints
A corollary of the false shortcut is when you have a real constraint, such as “X isn’t practical” or “person Y would never have it”, that used to be true but no longer holds, because you changed, the times changed, or something else did. A learnt constraint is the miserable condition of having excluded some possibilities without noticing the constraint isn’t actually there. It holds people back, and it’s also why innovation is difficult.

Taking people at face value
To take people at face value is to process what they say and not why they say it. For example, “how is Claire?” means I want information about Claire, but it also signals that I care about her; or I may be trying to show that I’m a caring person, or feeling unease about something that happened with Claire in the past. Here it’s obvious that there’s a social subtext, but all speech carries subtext all the time. The proper way to parse statement S is to think “Person X is telling me S; what might be going on in X’s head for them to say that?”. Men are especially clumsy with subtext and women better at detecting it. I hope AI is female.

Apparently you’re supposed to take people at face value to avoid the ad hominem attack. That’s nonsense. Ad hominem is saying the witness is a slut. Face value is ignoring that the witness has material interests. A genuinely irrelevant objection would be that the lawyer is only defending the accused because they’re the lawyer. Of course in trusted situations we can take people at face value, but not all the time.

Toxic topics
One of the worst ways to have moral arguments is to bring up examples that make people altogether uncomfortable or unwilling to defend a side. Think of the children, the Nazis, the latest atrocity. That’s obviously prone to manipulation. It also produces a toxic debate among the thick-skinned, as those with more empathy, often the victims themselves, are driven out. To deal with these issues, create a safe space, take your time, focus on patterns rather than specifics, and frequently affirm shared principles.

Wanting to be right
A great way to be wrong is to care about being right, in other words to be emotionally invested in the ideas you already have. You’re likely to think deeply to yourself, avoid sharing or exposing your true beliefs, and seek confirming evidence and like-minded people. In fact being wrong has almost the same symptoms as being intelligent, and there’s no shortage of smart but misguided people. The mistake is valuing thought over experiment.

The right way to be right is to test your ideas by convincing others, applying them in practice, or comparing expectations with facts. Drop the ideas that don’t work; that’s the hard part. Then take in some new ideas, combine, and repeat. Intelligence is an evolutionary process, not a mathematical one. The price of being right is that you have to face being wrong often, and you have no choice over what you’re right about.

Classical thinking
A less common error is trying to grasp the objective truth about some matter that’s inaccessible. Is this person guilty or good? What transpired behind closed doors? What’s in X’s mind? Well, you don’t know. It’s like quantum mechanics. You can’t probe the truth, and all you can do is maintain your knowledge in a superposition of states consistent with the evidence. Person X could be trustworthy or not. The error is not getting a wrong outcome; it’s thinking classically that they are one way. Western fiction is classical and Asian cultures understand superposition, it seems.
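One way to make the superposition concrete, if you like formalisms, is Bayesian updating: hold a degree of belief and shift it with each piece of evidence, never collapsing it to a verdict. A minimal sketch with hypothetical numbers:

```python
# Keep belief about an unobservable fact ("X is trustworthy") as a
# probability updated by evidence, rather than a classical yes/no.
def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Bayes' rule: posterior probability that the hypothesis is true,
    given how likely the new evidence is under each hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

p = 0.5                  # no idea whether X is trustworthy
p = update(p, 0.9, 0.5)  # evidence mildly favouring trustworthiness
p = update(p, 0.3, 0.6)  # later evidence leaning the other way
# p remains a degree of belief, not a verdict
print(f"P(trustworthy) = {p:.2f}")
```

The classical error, in these terms, is rounding p to 0 or 1 before the evidence warrants it.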

The unitary intelligence
People don’t interact with the world and with others rationally. We interact, first, emotionally: Do I like this person, these ideas, this situation? When we perceive a new thing we ask all the ideas in our head what they think of it. If our ideas mostly approve, we admit the new information or new argument. Otherwise we exclude it. If our existing ideas rebel, we exclude it violently. Wanting to value facts, question assumptions, or entertain something new are also ideas we might have. They play a big part in what we admit.

We’re not a unitary intelligence; we’re a forum of ideas. The reason your arguments with religious people don’t work is that you present ideas that they (their other ideas) don’t like, and they don’t carry the ideas of openness or objectivity as strongly as you do. Or they carry only openness and get seduced by quack theories. Once you see people as a forum of ideas, it affects discourse and also ethics. Do you write people off, or try to get their better ideas to prevail?