Continued from: https://pashanomics.substack.com/p/post-systematic-and-systematic-identity
S: One of the main appeals of rationality is the theoretical possibility of establishing the one true state of knowledge. One could never actually achieve it, but the theoretical ability to do so may be appealing. The post-rational idea feels a lot like the post-modern rejection of objective truth and the view of truth as nothing more than an instrument of power.
P: Well, the rejection of objective truth is certainly part of the deal, as is understanding the struggle for the labels of "science" and "truth" as a power struggle. The problem with post-modernism, and the reason much of post-modern deconstruction is unappealing, is that it circles away from the point of "no objective truth" and right back into treating "the absence of a system at all" as the objective truth, where the "system" is, for example, capitalism. As such, 2020s post-modernism is only half postmodernism and half totalitarianism, rejecting any challenges to itself instead of incorporating those challenges into a new system.
S: The issue I have with admitting there is "nothing right" is that by this standard there is nothing "wrong" either. But there are clear examples of people being wrong on a factual and mathematical level. A person claiming that both drowning and non-drowning are evidence of witchcraft is wrong on every foundation. In general, being wrong implies a certain standard by which a set of beliefs can be judged. This standard needs to be coherent and consistent – hence Bayesianism, utilitarianism, etc.
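This example can be made precise with Bayes' rule. By conservation of expected evidence, the prior is the expectation of the posterior over possible outcomes, so a consistent reasoner cannot raise P(witchcraft) on both drowning and non-drowning. A minimal sketch (the likelihood numbers here are made up purely for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.1               # P(witch) – illustrative prior
p_drown_if_witch = 0.8    # hypothetical likelihoods
p_drown_if_not = 0.5

p_drown = p_drown_if_witch * prior + p_drown_if_not * (1 - prior)
post_drown = posterior(prior, p_drown_if_witch, p_drown_if_not)
post_float = posterior(prior, 1 - p_drown_if_witch, 1 - p_drown_if_not)

# Conservation of expected evidence: the expected posterior equals the prior,
# so if drowning raises P(witch), then floating must lower it.
avg_posterior = post_drown * p_drown + post_float * (1 - p_drown)
```

Whatever likelihoods you plug in, `avg_posterior` equals `prior`, so "everything confirms witchcraft" is incoherent under any consistent assignment.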
P: Of course, the "drowning and not-drowning are both evidence of witchcraft" guy is wrong. Of course, a standard of rightness and wrongness needs to be consistent. The trouble arises from two directions – when there are explicit challenges to the theory from within the theory, and when the conclusion conflicts with common sense. The theoretical trouble with Bayesianism is that it is possible to break the abstraction of probability with something as simple as the Sleeping Beauty problem. Suddenly the neat question of "what is the probability of heads" is not so simple anymore, and we must rely on other common-sense arguments to see how to approach the situation. There are also several theoretical problems with approximating Bayesian updates and actions in an embedded, limited agent that must also reason about logical facts, which in turn raises the question of whether this theoretical framework is a good starting place.
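For readers unfamiliar with the problem: Sleeping Beauty is awakened once if a fair coin lands heads and twice if it lands tails, with no memory between awakenings. A quick simulation (a sketch, not from the original post) shows why both the "halfer" answer of 1/2 and the "thirder" answer of 1/3 have a claim to being "the" probability of heads:

```python
import random

def heads_fraction_per_awakening(trials=100_000):
    """Fraction of awakenings at which the coin came up heads."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        n_awake = 1 if coin == "heads" else 2  # heads: one awakening, tails: two
        total_awakenings += n_awake
        if coin == "heads":
            heads_awakenings += n_awake
    return heads_awakenings / total_awakenings

# Per experiment, heads happens half the time; per awakening, only about a
# third of the time. Which count is "the probability of heads" is exactly
# the ambiguity the problem exposes.
```

Depending on whether you count per experiment or per awakening, the "same" probability comes out as 1/2 or about 1/3 – the abstraction of a single number for "the probability of heads" breaks.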
From a post-rationalist and common-sense perspective, it’s also possible to break probability abstractions through the ideas of “belief as a tool” or “self-fulfilling confidence,” where a seemingly irrational belief in success can be helpful.
S: Belief as a tool sounds like a good way to shoot yourself in the foot by trying to outsmart your own updating process – supplying the "belief" as the answer instead of running the update procedure.
P: Just like rationality can be a good way to shoot yourself in the foot by trying to force a procedure rather than following the existing meta-procedure. Now, I don’t mean to be too negative here. Post-rationality is not “anti-rationality”. As the name implies, it’s still important for people to first learn rationality, or a similar systematic way of thinking, well enough to pass the Ideological Turing Test for it. Then, and only then, is it a good idea to move on to a fuzzier way of reasoning. However, when the logical conclusions of the system start really pushing against common sense, the idea is not to begin to view "common sense" as irrationality.
S: Coming back to the original question. Besides “conflict causes identity,” “super-rationality” and “descriptive mathematics”, what other positive formal ideas does post-rationality propose?
P: If I had to carve out the mathematical foundation of post-rationality, it would be roughly equivalent to all of math minus Bayesian probability theory and the other foundations of systematicity. I can imagine a parallel universe with a different form of “logical rationality”, where people have the idea that thinking is about making correct logical inferences over beliefs. They would view a probabilistic statement like “it’s going to rain with 40% probability” as something to assign truth or falsehood to, rather than assigning probabilities directly. You can implement Bayesian reasoning on top of this framework if you really want.
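To illustrate (a toy sketch; all names here are invented for the example, not drawn from the post): in this “logical rationality” frame, a belief is a truth-valued proposition *about* a probability, and a Bayesian update is implemented on top by retracting the old proposition and asserting a new one:

```python
def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    """Standard Bayesian update of P(H) on observing evidence E."""
    return p_e_given_h * p_h / (p_e_given_h * p_h + p_e_given_not_h * (1 - p_h))

# Beliefs are logical facts: propositions about probabilities, held as true.
beliefs = {"P(rain) = 0.40": True}

# New evidence (dark clouds, with illustrative likelihoods): derive the new
# proposition, retract the old one, and assert the new one as true.
new_p = bayes_update(0.40, p_e_given_h=0.9, p_e_given_not_h=0.3)
beliefs.pop("P(rain) = 0.40")
beliefs[f"P(rain) = {new_p:.2f}"] = True
```

The probabilistic content is the same; what changes is the bookkeeping – probabilities live inside propositions that are themselves simply true or false.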
S: But this is an extremely inefficient way to accomplish probabilistic reasoning.
P: But making an argument about the “efficiency” of human computation is moving away from the idea of “laws of reasoning” and towards the idea of “the kinds of laws of reasoning that work on human hardware,” which is a more meta worldview. More broadly, you can use a lot of math to model processes inside a human or a society and create different visions of “proper” reasoning based on those models. The important piece, however, is still a feedback loop of “common sense” that lets you know when you have gone too far. Otherwise, you could end up with such silly ideas as destroying all of nature to signal your “compassion”.
On a more positive note, the simplest analogy here is:
post-rational thinking about thinking : all of math :: rational thinking about thinking : Bayesian probability theory.
Post-rational thinking about thinking is not the whole of post-rationality either. Some people are better theoreticians, some are better decision makers.
S: It still seems that some of post-rationality comes from a desire to socially distance oneself from particular aspects of the rationality community that do not fit one's inner aesthetic.
P: Well, yes! In fact, this is one part of the post-rational insight. The other part is that such a desire for distancing is not evidence against the validity of the theory, but rather evidence for it. There are frequent tensions within the community – people feel neglected, or feel that the competition for prestige is too undermining. You could trace these to certain “causes” – selecting from pools of people with certain personality quirks, or certain forms of social organization.
You can also look a little deeper and see that it’s not just the frame of “beliefs with probabilities” that can get you in trouble. The frame of the “individual” itself can become problematic when overused. This is a thread that cuts across all systematic / individualist philosophies, which define people by the extent to which they fit into the system. While not relying on handouts and being able to fit into the system are useful skills – just as statistics is a useful thing to know – the “end game” isn’t really there. The ability to fit into the system should not supersede the ultimate goal of building a healthy, long-term functional society, yet the systematic over-emphasis on "careers" and other individual accomplishments has done exactly that. There are other organizational problems as well: for example, over-emphasizing the individual’s capacity for action pushes leaders away from defining clear roles or directly assigning tasks, removing the group's ability to act coherently.
A “free” marketplace of status, untouched by either strong norms or strong leadership, is different from a marketplace of money and has many more ways to fail and degenerate into ever-escalating conflicts over minute things. Certainly, a group of motivated people using lots of willpower can still accomplish a project in between the status fights and the rounds of going even more meta, but at the end most of them are going to be asking themselves – how could we have done this better?
The point here is not to dismiss individualism entirely. It is yet another raft you use to cross the river of the modern world. The point of post-rationality is to show a person who has learned rationality, found themselves attuned to its premises, but averse to its “logical” and “practical” conclusions, that there isn’t something wrong with them – rather, there are better philosophies to follow.