I do a lot of Sudoku. They’re logic puzzles, using the numbers one through nine in a nine-by-nine grid. The goal is to use each of the nine numbers once and only once in every nine-cell row, column and box. If you find yourself with two sixes in the same row, for example, you’ve blown it.
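(If it helps to see that rule as code, here is a minimal sketch in Python. The function names and the zero-for-empty encoding are purely illustrative assumptions, not any particular solver’s convention.)

```python
# A minimal sketch of the "once and only once" rule: a 9x9 grid is consistent
# so far if no row, column, or 3x3 box repeats a digit. Names and the
# zero-for-empty convention are hypothetical, purely for illustration.

def unit_ok(values):
    """True if the filled cells of one row/column/box contain no duplicates."""
    filled = [v for v in values if v != 0]          # 0 marks an empty cell
    return len(filled) == len(set(filled))

def grid_ok(grid):
    """grid: a 9x9 list of lists holding digits 1-9, or 0 for empty."""
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r][c]
              for r in range(br, br + 3)
              for c in range(bc, bc + 3)]
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return all(unit_ok(u) for u in rows + cols + boxes)
```

Two sixes in the same row would make that duplicate check fail for the row, which is the “you’ve blown it” condition.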
It’s pure deductive reasoning, no probability, no induction (if it happened this way in nine of the last 10 puzzles, it’s probably the case here!), no guessing. You do not want to fill in a number unless you are absolutely sure it’s the only place it could go. As in bet-your-life-on-it sure. If there’s more than one place for it, you do not guess. It’s like Highlander: “There can be only one!”
I find it relaxing — there is no ambiguity, no argument. You either know for sure and know why you know, or you don’t know.
What a contrast to the information environment of the last few years, where there are so many people, wittingly or not, feigning certainty while espousing bullshit. Compromised fact-checkers relying on captured science and medicine, misleading the demoralized and credulous, desperate for a narrative to hold off the tidal wave of fear and doubt. A giant chorus of the obedient protecting themselves from dangerous dissent with their various incantations: “Conspiracy Theory!” “You Are Not An Epidemiologist!” “Trust the Science!”
But if the 6 can’t be here and here, and the 5 must be there, then there’s nowhere else for the 7 except there. And once we know the 7 is there, the 7 in the bottom left box must be there. That means the 8 is in the lower row, so we can eliminate the eights from the upper and middle rows of that box…
Even when you can’t immediately solve a Sudoku, when the lightning bolt of insight that opens up the whole puzzle hasn’t yet struck, at least you know for sure what you don’t know. No matter how difficult the puzzle, there is such simplicity, order and peace in the realm of pure deductive logic.
. . .
The harder puzzles require you to make assumptions. That is, you might have a box with two possible candidates, and you conditionally assume one is correct and work through its implications.
For example, if a box can contain only a 1 or a 3, you might assume it’s a 1, see what happens to the boxes around it, and then alternatively assume it’s a 3, and see the effects of that. If some other box has three candidates, say 3, 4 and 7, and the 1-assumption makes it a 3, and the 3-assumption makes it a 7, you can be sure it’s not a 4.
Eliminating the 4 doesn’t tell you what the answer is, either for it or for the initial box, but it narrows down the possibilities and enables you to play out further conditional scenarios more easily. In other words, by running scenarios for both “if x is true” and “if x is false,” you can find out that “z must be false” even though you still don’t know the truth with respect to x.
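To make that concrete, here is a minimal sketch in Python of the case-based elimination just described. Everything in it is hypothetical and only illustrates the 1-or-3 example above: the toy propagation function stands in for whatever chain of deductions follows from each assumption.

```python
# Case-based elimination: a value survives for cell B only if at least one
# assumption about cell A can still lead to it. All names are illustrative.

def eliminate_by_cases(candidates_a, candidates_b, deduce_b):
    """Return the candidates for cell B that remain possible under at least
    one assumption about cell A.

    candidates_a : set of values cell A could still hold, e.g. {1, 3}
    candidates_b : set of values cell B could still hold, e.g. {3, 4, 7}
    deduce_b     : maps an assumed value of A to the set of values B could
                   then take, after propagating that assumption
    """
    possible_b = set()
    for a in candidates_a:            # "assume it's a 1... now assume it's a 3"
        possible_b |= deduce_b(a)     # collect what B could be in that scenario
    return candidates_b & possible_b  # anything impossible in every scenario is eliminated

# Toy stand-in for the propagation in the example: A=1 forces B=3, A=3 forces B=7.
def toy_deduce(a):
    return {3} if a == 1 else {7}

print(eliminate_by_cases({1, 3}, {3, 4, 7}, toy_deduce))   # -> {3, 7}: the 4 is gone
```

Note that the result still isn’t a single answer for either box: the 4 is simply gone, while the conditional findings themselves (the 3 and the 7) stay unfilled.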
It’s important, though, not to treat your conditional findings as true. In the example above, assuming a 1 in the first box yielded a 3 in the second. If you actually fill in that 3, you’re just making a 50/50 gamble that might screw up the entire puzzle.
That sounds obvious, but that’s only because my example contains two boxes. Imagine you’re five or six boxes down the cascading chain, making assumptions (and then assumptions within assumptions, like the dreams within dreams of Inception), and you can see how easily you could get confused.
There is a big difference between “this is the case if x is true” and “this is the case, period.” But sometimes when you’re deep down the assumption rabbit hole, you forget that the entire edifice is based on a conditional. That’s how you wind up with two 6s in the same box and realize the entire 40 minutes you’ve spent wrestling with this puzzle was wasted. You thought you were making breakthroughs, but it turns out you were living a lie the entire time!
. . .
The real world is infinitely more complex than even the hardest Sudoku. It requires us to make conditional assumptions within conditional assumptions all the time. Assuming the data from this study in the Lancet is correct, assuming that its design is not flawed, assuming it hasn’t been influenced by its funding sources, assuming the subjects in the study don’t differ in some material respect from me (lifestyle, genetics, etc.), you should consider its findings.
But if you forget the assumptions involved and simply fill in the box based on the conclusions therein, you run the risk of making a serious error. Many people adopted low-fat, high-sugar diets to defeat cholesterol, got on statins, avoided the sun, became vegan for health. It’s pretty obvious where I stand on those practices, but irrespective of whether they’re in fact beneficial, the decision to adopt them is based on many (dubious) assumptions being true.
. . .
The beauty of Sudoku is that not only do you know — or find out soon at minimal cost — what you don’t know, but also your assumptions (at least initially) are explicit. You say to yourself, “assume this is a 1, and let’s see what it does, and assume it’s instead a 3, and let’s see what that does.” You are therefore capable of untangling the results of your conditional experiments and drawing sound conclusions from them.
Often when I talk with people about health, civil liberties, medicine or other matters of import, the conversation gets derailed due to conflicting assumptions. Can we examine a question and in so doing untangle the conditional beliefs informing it? Can we agree that what one is saying is only true if one buys into particular premises, and that those premises themselves cannot be taken as a given, but also should be examined?
If you support giving endless weapons to Ukraine no matter the cost, is that because you believe it is an innocent being attacked without provocation by the evil Vladimir Putin? Is that assumption beyond scrutiny? It doesn’t matter where you come out on that question so much as recognizing you are filling in a box based on that assumption, and should that assumption be false, your entire policy prescription is bankrupted.
Let’s say you do believe Putin invaded Ukraine because he is evil and will move on to the rest of Europe next, i.e., he’s basically Hitler. Why do you believe that? Do you have first-hand knowledge of him, or is it something you read in the New York Times? If the latter, then the truth of your belief about Putin depends on the Times being reliable with respect to geopolitics.
If you follow this sort of reasoning to its logical conclusion, you end up looking for first principles. What can I trust? How can I be sure to fill in the boxes accurately so as not to find out later I was living a lie?
It’s difficult in the real world to find certainty. Even René Descartes, who settled on “I think, therefore I am,” didn’t get far*. The best we can do, in my opinion, is via the scientific method, offering a hypothesis that purports to fit the facts, and scrapping it as new facts and better-fitting (more explanatory) hypotheses come along. The point of the allegory of a perfectly deductive realm like Sudoku, then, isn’t in completing the puzzle. It’s in not filling in the boxes inappropriately. Forget about believing what’s true; take great pains to avoid the lie.
Do you fear it's only going to get worse?
Any data you yourself didn't collect has to be suspect, with the replication issues and outright fraud proving many studies aren't what they purport to be. Also, with AI fake photos (and, I'm sure, videos) getting better, what can be trusted? Even if, say, you could talk to Putin or Zelensky, how do you know they aren't lying to you? If you could perform your own blind randomized trial, how do you know your biases didn't lead you to set up the experiment in a way that leads to your desired conclusion? And that would be if you had the access and ability to get your own facts, but "ain't nobody got time for that".
Somewhere I saw "it's not what you believe, it's who you believe," and as bad as things are now, everything is going to get even more tribal.
"The best we can do, in my opinion, is via the scientific method, offering a hypothesis that purports to fit the facts, and scrapping it as new facts and better-fitting (more explanatory) hypotheses come along."
Exactly right. What seems true today can appear false tomorrow (and then maybe truer again later!) depending on how evidence mounts (or not) over time. Either way, we need to remain humble and open-minded to the shifting sands.
I like the example you give about why anyone should believe the Putler narrative just because the NYT tells us it's true. If anything, if the NYT (and similar MSM outlets) want readers to believe Putin's plan has always been an imperial one, i.e. invade Europe after his "unprovoked aggression" in Ukraine, then we can feel reasonably secure in believing the opposite is much closer to the truth. After all, the NYT are PROVEN liars.
How often in life do we trust proven liars, be they among family, friends or acquaintances, to be reliable sources of truth/information? No matter how trusting a nature one might possess, only fools do that.