On Epistemology: Philosophy & Practice


“If knowledge can create problems, it is not through ignorance that we can solve them.”

Isaac Asimov

“Studies have shown that more information gets passed through water cooler gossip than through official memos. Which puts me at a disadvantage because I bring my own water to work.” 

Dwight Schrute



Epistemology is the branch of philosophy concerned with the theory of knowledge. However, it is my opinion that epistemology is pretty much the basis of the entire philosophy game — possibly preceded only by logic. With this in mind, I conceive of it as a root, rather than a branch.

Epistemology boils down to: What do you think you know, and why do you think you know it? It is the study of mapping beliefs to evidence and evidence to beliefs. One would find it very difficult to gain any ground in other philosophical disciplines, such as ethics or metaphysics, without some epistemological principles being established first. For this reason, I think epistemology is very important.

As I see it, reality is going to exist whether we want to believe in it or not — in fact, Philip K. Dick said “Reality is that which, when you stop believing in it, doesn’t go away.” Additionally, we are going to form beliefs, create models of the world and cast judgement, irrespective of whether we want to or not. It is precisely because of these two factors — that we are going to form beliefs continuously, and that an objective version of reality will exist regardless of those subjective beliefs — that, in my opinion, we should value epistemology. Why not ensure that our unavoidable beliefs are aligned with unavoidable reality?

Care for epistemology helps us achieve that. However, epistemology, like all philosophical disciplines, possesses many more questions than answers. For instance, a recurrent question in epistemology is: What is knowledge?

My answer to that question — for the moment, at least — is to ignore it. Whilst deep inquiry is important, it is also important to maintain some approximation of relevance. When you skew too far towards the philosophical, you lose practicality. And it is at this exact point that most people lose interest. No matter how important the matter at hand is, once the content surpasses a particular threshold of abstractness or uncertainty, people’s interest declines. Epistemology is subject to this.

My general interest in philosophy comes from the desire to think about, explore and create the world in the most well-reasoned, moral, productive and “correct” way possible. Philosophy is a significant component of this process, though not everything. Roughly speaking, philosophy is concerned with Why? This differs from science, which is the practice of addressing questions relating to How? On the application side, we have what might be considered engineering, which takes the knowledge of science and asks What?

To me, it is the commingling of these different disciplines and mental stances that leads to the best outcomes — at least, in the “real world” sense. As evidence of this, we could look to one of the principles of systems engineering, for example, which states in essence: The optimisation of a single component within a system tends to detract from the performance of the entire system. If we conceive of our thinking, and our resulting behaviour, as a system — a complex one at that — then the optimisation of, or exaggerated influence by, any one component — be it philosophy, science or engineering — will detract from our total effectiveness.

With all of this in mind, I think avoiding the extremes of any of these endeavours should be the common practice. No doubt some time should be spent at the extremes, but it should be a minority. Go to the Land of Thought Experiments when you have a chance of finding something that solves, or provides clarity for, a Real World scientific or engineering issue. Don’t just live there indefinitely, only revisiting the Real World when absolutely necessary.

Continuing in this vein of thought, I believe a more practical and comprehensible alternative to philosophical epistemology is epistemic rationality. This might seem like nothing other than a language game, but I personally see the distinction as important. Philosophical epistemology can get bogged down in: What do I know? How do I know that? How do I know that I know that? … Ad infinitum. I am of the opinion that my proposed alternative can help avoid this. As I said, it might seem like a trivial distinction, but please, humour me.

Rationality operates a few levels above the dark and vague depths of philosophy, and is concerned with optimising outcomes, or dealing with uncertainty, at the level of the organism or agent — a practical level. Philosophical epistemology, on the other hand, can get you a little stuck, always looking for what is underneath, wading around in the darkest depths of theoretical inquiry where it is difficult to see clearly or be sure of anything. However, in all subjects and fields you need to make at least a few assumptions in order to get the ball rolling — even math has assumptions, and this is important because math is about as close as a human can get to The Pure Truth.

Because of all this, you have my permission to politely walk away from someone who thinks they are beating you in the philosophical equivalent of a rap-battle by continuously questioning your assumptions. Don’t get me wrong, you need to question assumptions to some extent, but that extent isn’t infinite and should be in proportion to other questions also being asked (like I touched on above).

It might seem impressive to some — mostly philosophy undergrads, I would wager — to continually ask “Why?”, but you also need to ask questions such as “What is stopping us?”; “How have things come to be this way?”; “Would you mind if I thought this over for a bit first?”; “Should we actually do that?” and “Have we lost sight of the big picture?”. These questions won’t signal how deep, critical and thoughtful you are to the same extent as asking “Why?” endlessly, but they often produce more important answers and insights. Related to this, rationality accepts a few basic assumptions, off-loading some of the weight of uncertainty, helping us to step out of the vacuity that unrelenting epistemic investigation can pin us in, and allowing us to actually get on with it.

In my opinion, rationality is something that basically everyone can, and should, try to foster — in both themselves and others. While it doesn’t rule out learning for learning’s sake, it does place a greater emphasis on acquiring accurate knowledge for its utility: the ability of knowledge to provide benefits or satisfaction over time through informed behaviour. One of the basic assumptions of rationality is that you know something when you can predict outcomes reliably — at least, better than chance, anyway. When you have the knowledge to produce or predict outcomes, you can make choices and change variables so that what occurs results in higher overall utility. This, in a very real sense, is the act of steering the future in a direction you prefer.

Sounds great, does it not?

What’s The Deal With Rationality?

While I have addressed rationality previously, I will give a quick recap or primer for the sake of today’s discussion, which will build on some of these ideas. In short, “rationality,” as the term is used in cognitive science, encompasses two subcomponents: (1) epistemic, and (2) instrumental rationality. Use of the word rationality may be describing either, or both of these. They are related, but have slightly differing definitions.

1. Epistemic rationality is the process of updating our beliefs so that they more accurately represent the structure of reality. It entails acquiring knowledge and information in a systematic and objective manner so that our view of the world becomes more reliable over time.

2. Instrumental rationality is the process of achieving our values by acting in the ways that make our desired outcomes most probable. This is the component of rationality that comprises behaviour, and it is about steering reality/the future in favourable directions. This may be altruistic, entirely malevolent or something in between, but ultimately there are better and worse ways to achieve whatever you wish to achieve.

Alternatively, the more concise and simple explanation of these concepts is that rationality comprises “What is true?” and “What to do?” — which are epistemic and instrumental rationality, respectively.

Today’s article — if you couldn’t tell from the title — will focus on the former, epistemic rationality. From this point onwards we will be mostly concerned with how to go about learning things and updating our beliefs incrementally over time, as they are exposed to new evidence, so that we can hold the most probable belief possible (at that point, given the information we have).

The Map & The Territory

Epistemic rationality is the notion of obtaining a map that matches the territory. While no map perfectly matches the territory that it describes — all maps are an abstraction — that is not to say that all maps are of equivalent value. This is because the purpose of a map is to describe a certain territory, which makes a more accurate map inherently more valuable (i.e. it has higher utility). For instance, let’s say you’re thinking about renovating your house. In order to help with the planning process, you decide to get some pictures of your current house drawn up. One set of pictures is completed by a seasoned architect, and the other by the four-year-old who lives down the street.

One of those sets is going to have higher utility, providing you with a stronger foundation for informed behaviour beyond that point.

The same, too, can be said of our brains and how they draw reality. Our beliefs are our mental maps or blueprints. When we say we believe something, we are describing how we think the world is — where the lines are, at what angle something connects to something else, and what parts are shaded in. Sometimes these beliefs are accurate and they make good maps. Many times, though, they are not.

Beliefs that better represent reality are more valuable, primarily because they allow for more profitable action. Or phrased in the inverse manner: Less accurate beliefs will be more costly to maintain and act upon than their alternatives. For example, an individual who continues to hold anti-vaccination beliefs is going to cost themselves more from a personal-health standpoint than someone who holds a more accurate view of reality (namely that vaccines, while imperfect, are very helpful).

The major underlying contention of epistemic rationality is that our beliefs don’t change reality, no matter how much we want them to. Believing vaccines are harmful doesn’t make it so, yet it does lead to behaviours that harm health outcomes. This is something we obviously want to avoid. As such, we need to learn how to hold correct beliefs, with appropriately calibrated certainty, so that we can act in the most effective (rational) way possible. Having a reliable map of the territory is the basis for effective navigation of said territory — the epistemic potentiates the instrumental.

By this point you can probably accept that — at least in theory — it is important to have maps that match the territory. So rather than spending any more time trying to convince you that it is important, I will now shift focus to trying to explain how you can do that.

Before we do that, though, we first must consider some of the factors that make creating accurate maps difficult.

Our Imperfect Mapping Software

There is a vast array of factors that make it difficult to obtain the most accurate beliefs possible, so I won’t be able to discuss them all. However, I will try to summarise a few of the major ones here. At the very least this section should induce some humility in regard to what we believe about our own beliefs.


Mapping Issue #1: Perceptual limits


Before we consider any factors that would skew or bias our perceptions of the world in any particular direction—creating a consistent shape to our beliefs—we first need to recognise that even if the information that we take in is received perfectly, without any tints or shadings, it is still incomplete.

The easiest example I can use here is the human eye. We act as if our eyes allow us to see everything, but this is far, far from the truth. For starters, we can only see light with wavelengths between approximately 380 and 740 nanometers, which covers the colours ranging from violet to red. Anything beyond that—such as ultraviolet, or infrared—we do not perceive. This does not mean that things in this range do not exist, however, and just because we cannot see them doesn’t make them any less important. For example, thanks to the scientific process, and more specifically the role that Johann Wilhelm Ritter played within it, we are now aware of ultraviolet (UV) rays and can factor those into our conceptual frameworks. If we relied only upon what our eyes can see, and held the idea that it is only important to concern ourselves with what we can see, skin cancer would be wildly more problematic than it currently is.

Continuing on with the flaws of our visual perception, next is how minute the size of our focal point is. Roughly speaking, what we see clearly is an area that is comparable to the size of our thumbnail held at arm’s length in front of us. The rest is “filled in” by our brains. Everything outside of the focal point is mostly blurry because it is just what our brain expects to be in that area — and the brain isn’t going to waste resources on computing a more precise image.

The limits of the visual system have been the subject of numerous psycho-perceptual experiments, so I won’t dive too much further into them here. It should be recognised, though, that there certainly are more limitations — and that our other senses are far from perfect! All of our senses provide incomplete, and at times misleading, information.

Hopefully the above at least gives you some awareness that there is stuff you are missing. This is a very important consideration, because as the saying goes: seeing is believing. As an initiated rationalist, however, you can now adopt the more accurate belief that “seeing may be believing, but seeing isn’t knowing.”


Mapping Issue #2: Social forces

The origin of human intelligence is a hotly debated topic. The leading theory currently—based on my reading at least—is the Social Brain Hypothesis, put forward by Robin Dunbar. Dunbar has drawn attention to the link between neocortex size and group size across various mammals, and posits that the abilities described by the concept of intelligence evolved to deal with highly complex social environments.

The idea that intelligence arose for these purposes can seem somewhat counterintuitive to us. Commonly we tend to perceive intelligence and social skills as separate domains. This apparent separation seems even more evident in certain subpopulations, such as those with Asperger’s Syndrome, who tend to present with higher than average levels of intelligence, yet suffer from social awkwardness and have impaired theory of mind.

The more intuitive idea, as far as we are concerned, is that intelligence evolved in order to deal with ecological problems. Because in modern times we use our intelligence to solve problems, we tend to think that it evolved for that purpose. However, purpose—or I should say, intended purpose—is a human-created construct. Purpose is not a consideration of evolution; it cares very little about anything other than reproductive fitness, and only selects for what helps get an individual’s genes into the next generation (relative to other members of the species). This is why tool-building or ecological problem-solving theories of intelligence tend to fall flat on their face.

As another example of evolved traits/behaviours and their “purpose,” let us turn to sexual activity for a moment…

(Warning: I actually use sex-related examples in this piece multiple times. I’ve found these kinds of examples seem to get people to re-focus more and they are more likely to remember them over the long-term. Just an anecdotal observation, though).

In modern times, sexual activity is engaged in for a huge variety of reasons. People have sex for stress relief, to express themselves or to make money—and I’m sure there are many other uses for it that I’m not creative enough to think of. The point, however, is that engaging in sexual activity for its evolved “purpose”—reproduction—is now a far less prevalent motivation than all the others. It appears that something similar is true of our intelligence. Just as sex evolved for one thing, and we now use it for many other things, the same goes for the functioning of our mind and intelligence.

This vein of research has been supported by two researchers, Hugo Mercier and Dan Sperber, who posit that we developed the ability to reason (a subcomponent of intelligence) as a means of deciphering who is trustworthy — a necessity for functioning well in a highly complex social environment.


The point of all this is to say: At our base level, we are social creatures. We may like to think that we use our 21st century Homo sapiens intelligence and reasoning skills to see the world for what it is, and discard the idea that we are susceptible to a herd mentality, but this simply does not appear to be (entirely) true.

Our intelligence, our reasoning skills, and thus the beliefs we hold, all appear to stem from a foundation that was primarily concerned with what others were thinking, who to align ourselves with, and how to manage successful political factions. While we cannot, of course, rid ourselves of our evolutionary history, we can at least be aware of it. Recognising the social basis of our complex thoughts is important for ensuring that social norms and pressures don’t shape them too significantly without us being aware of it.


Mapping Issue #3: The necessity of congruence between belief & behaviour

Another factor which makes having and holding the most accurate beliefs a challenge is our need to reconcile our beliefs with our behaviour. Up until this point I have basically stated that forming accurate beliefs is important for effective behaviour, thus indicating that the direction of causation is:

Belief → Behaviour

However, it is not a unidirectional relationship. While our beliefs do govern some of our behaviour, our behaviour also feeds back into our beliefs, imparting a non-epistemic influence.

To illustrate this point, let us consider sex-example number 2: an extramarital affair.

Let’s say John thinks adultery is wrong, and that you’re a bad person if you do it. A simplified version of John’s beliefs might look something like this:

Premise 1: Adultery is wrong because it causes emotional pain.
Premise 2: Causing emotional pain is bad.
Premise 3: Bad people do bad things.
Conclusion: Adulterers are bad people.

In addition to this, John considers himself to be a good person (because he doesn’t commit adultery, among other criteria).

Up until the afternoon of John’s work Christmas party, he had no issue maintaining those beliefs. As the events of the evening unfolded, however, John may have visited the bar one—or ten—too many times.

*Gasp*

I will spare you the specifics, but what alcohol does is interfere with neurotransmitters in the brain—the source of both its desired and undesired effects. The issue of concern here mostly relates to its disruption of the frontal lobes. Generally speaking, the frontal lobes are responsible for most of our more recently evolved behaviours, such as reasoning, long-term planning and impulse inhibition. Introducing alcohol into the system reduces how successfully these processes can be carried out. Overconsumption of alcohol, therefore, promotes a short-term mindset. In addition to this, the reduced ability to inhibit impulses puts you more at the whim of more basic biological desires, such as — you guessed it — sex.

Without giving you all the graphic details, let’s just say John spent some of the early hours of the next morning engaging in some heavily intoxicated activities with Wendy, who is Head of Administration at John’s workplace.

As you also might have guessed, Wendy is not John’s wife.

The result: John now has some incompatible beliefs. He cannot continue to believe that adulterers are bad people and that he is a good person if his premises remain the same. John needs either to redefine things so that he isn’t a bad person for committing adultery, or to accept that he isn’t a good person. Either way, what John believes, and his epistemic process, have been interfered with by his own behaviour. He is no longer just aligning his beliefs with what most effectively matches the evidence, but also with what helps to ease cognitive dissonance. John is, in part, trying to construct a map that isn’t too confronting to look at.

In case it wasn’t clear already: Don’t be like John.


How We Can Be Better

In short, the processes we use to update our maps will ultimately determine their effectiveness.

The first thing I want to say is that holding more accurate beliefs — improving your epistemic rationality — does not, and likely should not, start with seeking and acquiring more (factual or declarative) knowledge. Just let knowledge of the Real World come to you — which it will, if you let it. All you’ve got to do, initially, is remove the roadblocks. Knowledge and information can be an unstoppable force; but if you attempt to preserve certain ideologies and bias-laden views of the world, the supposedly unstoppable force will meet its match in an immovable object.

The key here is to first reduce tension within the system, not increase it. The initial port of call should be to remove the blindfold, or whatever else may be obstructing your view of reality, not to begin by researching how to construct an artificial set of eyes as a workaround.

Take that for what you will.


The Bayesian Ideal

Things are going to get slightly technical at this point, for just a moment. I promise it’s for a good cause.

In 1763 a work of great importance was published, titled “An Essay towards solving a Problem in the Doctrine of Chances”. This essay was the work of Thomas Bayes, and it was published two years after his death thanks to the efforts of his friend Richard Price.

While the history is not of great relevance to our purposes here, the mathematical theorems contained in that paper are. In the essay, Bayes laid out work of such great importance to the field of conditional probability that it was named after him. And thus, Bayes’ theorem was born.


Bayes’ theorem looks like this:

P(A|B) = [ P(B|A) × P(A) ] / P(B)

Where:

A, B = events
P(A|B) = the probability of A, given B is true
P(B|A) = the probability of B, given A is true
P(A), P(B) = the probabilities of A and B on their own


The equation reads as follows:

The probability of A given B is equal to the probability of B given A, multiplied by the probability of A, divided by the probability of B.
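For anyone who prefers code to symbols, the same relationship is a one-liner. Here is a minimal sketch in Python (the function name and the example numbers are mine, purely for illustration):

def bayes(p_b_given_a, p_a, p_b):
    """Return P(A|B): the probability of A, given that B is true."""
    return p_b_given_a * p_a / p_b

# With P(B|A) = 0.7, P(A) = 0.01 and P(B) = 0.05, the posterior P(A|B) is 0.14.
print(bayes(0.7, 0.01, 0.05))  # ~0.14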

Now, as important as this equation is, it’s not super critical that you understand the exact mathematics of it. So rather than spend another 10 pages trying to explain an equation that packs such a punch that it is on the path to dethroning Karl Popper’s Theory of Falsification in science, I will make haste and get to why it’s relevant here.

The most important thing you need to understand about the equation is that it represents the mathematical ideal for updating your beliefs when new evidence arises. If you can see the general gist of how the equation might do that, as well as remember the key takeaways that follow, then you’re golden.

(Unless of course you are a computer scientist trying to construct the world’s first Artificial General Intelligence — in which case you’ll probably need to understand Bayesian probability a little more deeply than I’ve explained here. Maybe try a YouTube video or two as well.)

The major takeaway from Bayes in relation to epistemic rationality, is that new information doesn’t completely eradicate the relevance of old information. It tweaks it, altering the probability, but doesn’t negate it entirely.

I’ll use an example to illustrate this point… actually, you know what, let’s use John again. I don’t think we are done throwing him under the bus just yet.

Here goes:

After his rendezvous with Wendy, John was experiencing some pain while going to the bathroom. A few days later the pain had still not yet subsided, so John decided to go and get himself checked for a sexually transmitted infection (STI). Having explained his situation to the doctor—whilst leaving out a few details—John was informed that his symptoms suggested he might have something known as drunkidiotitis, a rare STI that affects 1% of the population. The doctor then administered the test for drunkidiotitis, which came back positive.

“So I have drunkidiotitis?!” asked John, despairingly.

“Not necessarily,” said The Doctor. “The test’s sensitivity is only 70%, which means it only picks up 7 out of 10 people who actually have drunkidiotitis, and like any test it can also return the occasional false positive. The test is a piece of evidence, and its sensitivity tells us how much weight to give it, so we update our beliefs—using Bayes’ rule—in proportion to the strength of that evidence.”

“Oh ok, thanks Doc” said John, looking hopeful. “So the probability at this point is that it’s 70% likely I do have it, but there’s still a decent chance of good news, as there’s a 30% chance that I don’t.”

“No John, that’s not right either,” sighed The Doctor. “To get an accurate idea of how likely it is you have drunkidiotitis, we need to adjust our beliefs incrementally, not just substitute in a whole new set of numbers. The test is one piece of evidence, but not the only piece. Bayes’ rule posits that we take the prior probability (that drunkidiotitis is a 1 in 100 infection), then factor in the new evidence provided by the test, calibrated by its sensitivity, which then allows us to calculate the posterior probability: the probability that you actually have drunkidiotitis. The test doesn’t completely rule out all other information; it is just another piece of evidence that will either increase or decrease the likelihood that you have drunkidiotitis.”

In the end, John did have drunkidiotitis. But he learned a valuable lesson that day: You need to update probabilities (and beliefs) incrementally. New evidence, whether it be confirmatory or contradictory, doesn’t make everything else that is known irrelevant. You shift from your previous position towards what the new evidence suggests, and stronger evidence will cause larger shifts, but you still keep in mind where you came from—your prior probability.
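To make the Doctor's point concrete, here is a rough sketch of the calculation in Python. The story supplies the 1% base rate and the 70% sensitivity; the 5% false-positive rate below is an assumption of mine, added purely so the numbers can be run:

def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """P(infection | positive test), via Bayes' rule.

    prior: base rate of the infection, P(infection)
    sensitivity: P(positive test | infection)
    false_positive_rate: P(positive test | no infection)
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate and 70% sensitivity come from the story; the 5% false-positive rate is assumed.
print(posterior_given_positive(prior=0.01, sensitivity=0.70, false_positive_rate=0.05))
# ~0.12, i.e. roughly a 12% chance, not the 70% John first assumed, because the 1% prior still carries weight.

The exact figure depends entirely on that assumed false-positive rate; the point is only that the prior and the new evidence get combined, rather than the test result replacing everything that came before it.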

Overall, Bayes’ theorem is about what constitutes evidence and how heavily we should weigh that evidence. Because of this, Bayesian reasoning is the pinnacle of rationality.

And now is the point where I tell you that learning all that was pointless. Well, not pointless, but outside of medical or mathematical settings it rarely gets used in its exact, numerical form. In everyday life, where the probabilities are mostly unknown and there aren’t numbers attached to everything, the important thing to understand is the general concept, not the actual maths.

This point was made by Philip Tetlock and Dan Gardner in their book Superforecasting: The Art and Science of Prediction. Tetlock and Gardner have spent extensive time studying “superforecasters,” a group of individuals who show consistent prowess when it comes to predicting the outcomes of highly complex events or situations (due to their highly accurate mental “maps” and epistemic rationality). In regards to doing the mathematics of Bayesian calculations, they had this to say:

“The superforecasters are a numerate bunch: many know about Bayes’ theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.”


If it’s good enough for the superforecasters, then it’s good enough for me — and, hopefully, you. Keep that core insight in mind and you’ll very likely find yourself becoming more and more acquainted with reality.

It is my thinking that Bayes’ rule is mostly useful because it provides a template for processing information within the realm of conscious thought — or through the use of System 2, to use the popular cognitive psychology parlance. As I said, the exact mathematics aren’t the important thing. The important thing is, first, noticing evidence; then, bringing your model of the world to mind; and, finally, thinking — effortfully — about how that evidence alters the model. This is the principle of Bayesian updating, and you can do it without any numbers or exact details.

Some Other Useful Things

This is not to say that the following factors are less important than Bayes’ theorem; only that they are much, much easier to explain. They are, in my opinion, some of the most impactful things you can do in order to improve your epistemic rationality. I warn you: Do not underestimate their power.


1. Introspection

I am unsure why introspection isn’t promoted more. Perhaps because it can often be very confronting and challenging. However, I have found it to be a very fruitful activity. By paying attention to your own thoughts and emotional reactions to certain situations, you get a look at how you expected the world to be. This is one of the clearest views you’ll ever get of your mental map — don’t neglect to pay attention to it when the rare chance arises.

When someone tells you something and it immediately prompts pleasure signals in your brain, or you read something and it immediately induces pushback, notice that and pick up that thread. Once you have that thread in hand, you can follow it and see where it leads. When done correctly, introspection allows you to be your own teacher. You can learn about the world simply by examining the imprint it leaves upon you.

It’s also a hell of a lot cheaper than textbooks, let me tell you!

(I love textbooks by the way, that wasn’t an anti-textbook remark…)


2. Find Map & Territory Contact Points

Interacting with reality is the only way to determine whether your map is actually valid.

Unfortunately, an all too common occurrence is that people prefer their own map of the world to an accurate one. Because of this, many keep their maps and theories insulated and sectioned off, not daring to expose them to the ravages of reality.

This exposure, however, is the best thing you can do to find your map’s weak points. That is why it is unpleasant: it homes in on what you don’t know, rather than continuing to tell you how much you do. If you silo off your models and theories, you can continue to feel proud of them, feeling that they make you smart. This comes at a cost, however: you can’t be sure that they are reliable. To find out whether they are, you must seek contact points with reality.

How can you do this? By making predictions or recognising instances which would test your theory.

Predictions give you an opportunity to be wrong, and being wrong is what allows you to learn and rule something out. Make them, big or small, and adjust. Avoid the temptation to rationalise or explain-away when the results come in.
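If you want a low-tech way to hold yourself to this, keep a simple log of your predictions and score them once they resolve. A minimal sketch of what that might look like in Python (the structure and the example entries are mine, purely illustrative):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    confidence: float                  # how likely you think it is, from 0 to 1
    came_true: Optional[bool] = None   # fill this in once reality has weighed in

log = [
    Prediction("The renovation finishes on budget", 0.8, came_true=False),
    Prediction("It rains on Saturday", 0.6, came_true=True),
]

resolved = [p for p in log if p.came_true is not None]
hit_rate = sum(p.came_true for p in resolved) / len(resolved)
avg_confidence = sum(p.confidence for p in resolved) / len(resolved)
print(f"hit rate {hit_rate:.0%} vs average confidence {avg_confidence:.0%}")

If your average confidence runs consistently ahead of your hit rate, that gap is exactly the kind of contact point between map and territory this section is about.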

Additionally, think about what other instances would beat your map into shape. Rather than making a prediction, think about what kind of evidence you could potentially discover that would allow for aggressive and view-changing contact between your map and the territory. If you can’t think of an instance that would allow for map and territory interaction, the question remains: How accurate can your current map possibly be?

Either you find contact points or you don’t. Either outcome is informative.


3. Be Curious

I’ve left this one for last, as I think it is vitally important and I hope it leaves a lingering impression.

Nothing, and I mean nothing, sniffs out the truth—for better or worse—like curiosity. If there were an epistemic supplement that I could recommend everyone take, it would be a daily dose of curiosity. Sure, Bayes’ rule helps us update our beliefs when we encounter new evidence, but curiosity is the driving force behind seeking evidence and searching for answers. Bayes’ rule may be the finely-tuned steering wheel of our epistemic process, but curiosity is the engine.

The power of curiosity over information processing was beautifully demonstrated by Kahan and colleagues — in the rationally stunted arena of politics, mind you. An impressive feat. The paper is titled Science Curiosity and Political Information Processing, and it was interesting for a number of reasons, but one in particular is relevant here.

To start with, there is a growing literature supporting the notion that as intelligence and education go up, people have a greater and greater ability to rationalise their preconceptions — and this is what they do, rather than shift their understanding towards the truth. This is known as the Sophistication Effect.

What Kahan and colleagues observed, however, is that science curiosity actually promotes less biased views and more truth-seeking behaviour. They showed that, with high scientific curiosity, evidence prompts an adjustment of views in a uniform direction (as it should from a perfect Bayesian reasoning perspective). This differs from the effect of high scientific comprehension, which promotes a greater divergence of opinions when people are confronted with the same evidence.

In addition to this, those with high science curiosity were more likely to consume sources of information that were surprising and that conflicted with the opinions they already held, whereas participants low in science curiosity were shown to much more strongly favour information that supported their currently held beliefs.

This is exciting stuff.

Curiosity is one of the most powerful solvents we can apply to our personal or group-level biases. Curiosity is the feeling that the world is exciting, and that you don’t currently know all there is to know. You must be willing to be wrong, and to accept that the world may surprise you, in order to learn. Curiosity is the fuel that propels us towards understanding, so you should try to generate it whenever you can. If you don’t, you’ll inevitably revert to biased preconceptions about how the world is and should be. As you should be well aware by now, this takes you further from the truth.

Conclusion

We covered a lot today. The major points to keep in mind are:

– Rationality in general is made up of two components, epistemic and instrumental rationality. These components are concerned with what is true and what to do.

– There are a number of forces that work against us when it comes to epistemic rationality. These include perceptual limits, social pressure and the cohesion of beliefs with our own behaviour.

– Bayes’ rule suggests that any new piece of evidence should shift our prior probabilities, but not discard them entirely. The posterior probability is calculated from the prior and the strength (or weight) of the newly found evidence.

– There are a number of ways we can improve how epistemically accurate we are, including introspection, making predictions and being curious.

And with that, I will leave you with a quote by decision theorist, Eliezer Yudkowsky, regarding what it is like to improve our epistemic rationality — though, not from an objective, ideal, outside perspective, but from within our own heads:

There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing.

And with that, I bid you good luck. Go, run, make predictions! Learn about the world and update your beliefs in a Bayesian manner. Oh, and when you come into contact with Reality, tell it I say “Hi”.

I am fascinated by the power of knowledge; in particular, how through its implementation we can build a better life for ourselves and others. Most specifically, I am interested in ideas related to rationality and morality. I believe we can all be benefited by having a concern for both probability as well as people. As a student, I am studying Artificial Intelligence. As a professional, I work in mental health case management. When I am not doing one of these things, I am very likely writing for my blog, recording an episode for the "PhilosophyAu" podcast, hanging out with my nan, reading a book or, occasionally, attending a rave. A previous version of myself obtained a bachelors and a masters degree in sport science and was the Manager of Educational Services for a leading health and fitness company.
