Complexity: Bias and Reasoning
The word “bias” is richly imbued with implications of censure. It is not, after all, a good thing to be told you are biased. And, while there are extraordinarily destructive biases — like racial or gender prejudices — in recent years accusations of bias have expanded to cover nearly any topic. (It seems a quick way to dismiss arguments without engaging in discussion.)
Yet, to be human is to be biased. More accurately, to be human is to have limited or inaccurate knowledge in many ways — one of which is bias. Many of our knowledge limitations are not bias. For example, witnesses to an event each see only part of that event because each inhabits a different physical location. Thus, their limited knowledge is merely fact. And all observation and thinking is filtered within the human mind based on what we expect to see, what we have seen before, our experiences, and by the focus of our attention. Sometimes biases do that filtering but there are many other filters.
Arguments about bias imply, though, that perfect knowledge is possible. There is no such thing. Our limitations, including bias, are inherent in the human condition. They are also inherent in how our brains work. But before we explore the brain, let’s get more clarity on the idea of “bias.”
What is Bias?
The word bias is tricky — its apparently clear, specific meaning evaporates as we move closer. Merriam-Webster’s first definition starts with:
Bias: an inclination of temperament or outlook.
especially : a personal and sometimes unreasoned judgment : prejudice
So a bias is a “personal judgment” and sometimes an “unreasoned judgment.” It also relates to prejudice — both with a small “p” and the destructive, hurtful, full-on damaging Prejudice (with a capital P) against groups. That’s all a little vague as it covers a broad continuum from innocent to horrific.
Oh, we also find a bias may be only a tendency. It can also be error introduced into statistics. And then there’s the bias in fabric — a diagonal which cuts across warp and weft.
Going back to the first definition, consider the term “reasoned.” While we aspire to reasoned judgment, our subconscious does a very large part of our processing of the world around us and the challenges we face. Since reasoning is a conscious process, things which emerge from the subconscious are not reasoned — so are they all biased? And is “unreasoned” always a problem? I once worked for a boss who used ironclad logic to arrive at wrong answers.
The Brain Making Sense of the World
Let’s consider bias in what we physically see for a moment. Kevin Mitchell’s Innate looks at how the human brain develops and ends up wired. I found this passage quite interesting.
[S]eeing (or perception more generally) occurs through the act of inferring what it is that is causing the sensory stimuli. That is thought to happen when our brains compare the incoming sensory information to an internal “model” we have (or that our brains have, at least) of the current state of the world around us. The idea is that the brain then adjusts the model to accommodate those signals. The brain is, in essence, predicting the state of the world and deriving a measure of the error of that prediction by comparing it with sensory data. It then tries to reduce that error to zero by updating the model… This comparison requires information to flow in both directions — not just bottom-up, from the sensory periphery to higher and higher regions of the cortex, but also top-down, to carry the information about the current model.
Mitchell notes that most connections around our visual cortex are NOT those carrying the input stimuli (what we are seeing) but those bringing feedback “from the higher visual areas” — in other words those trying to make sense of what the stimuli presents.
And then there’s this: Our brains do not fully process input stimuli (e.g. sort out all the details of an image) before we begin to interpret the input — to decide what was sensed and what it means. Wait. Really? Using far broader terminology than that employed by Mitchell, that sounds like our biological development is tuned toward deciding meaning — in a sense tuned in a way which encourages, or at least doesn’t discourage, bias.
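Mitchell’s description of prediction-error minimization can be caricatured in a few lines of code. This is a toy sketch, not the brain’s actual algorithm — the function names, the single number standing in for a “model,” and the update rate are all my own illustrative assumptions:

```python
# Toy illustration (my assumption, not from Mitchell's book): an internal
# "model" is repeatedly adjusted to shrink its prediction error against
# incoming sensory evidence.

def update_model(estimate: float, sensory_input: float, rate: float = 0.5) -> float:
    """Move the internal model toward the sensory evidence by a
    fraction of the prediction error."""
    error = sensory_input - estimate   # top-down prediction vs. bottom-up signal
    return estimate + rate * error     # adjust the model to reduce the error

estimate = 0.0          # the brain's prior "model" of the world
sensory_input = 10.0    # what the senses actually report

for _ in range(10):
    estimate = update_model(estimate, sensory_input)

print(round(estimate, 2))  # 9.99 — the model has converged near the input
```

The point of the sketch is the loop: perception here is not a one-way flow of data in, but a repeated comparison between what the model predicts and what the senses deliver, with the model doing the adjusting.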
This system is different within each species — adapted to each species’ needs for what it can sense, the quality with which it senses, and the speed at which it perceives. Interestingly, each species is also adapted in the way it determines what sensory inputs mean. In this way, for example, a single odor may mean very different things for different species and those varied meanings are important to species survival.
The Human Value of Bias?
It rather seems that we are always teetering on the edge of our knowledge limits — including bias. Many conclude that we evolved this way for the savannah and bias is a negative hold-over (or artifact) from what we needed while hunting and gathering.
I don’t agree. I suspect the way the human mind makes sense — including arriving at bias — plays an important role for us as group animals and that this value continues today.
While humans are more similar to each other than they are to other species, Mitchell’s book indicates that each human brain develops differently. Thus, each of us perceives different things in the same situations because our brains vary in the sense-making they apply. This sense-making is also, somehow, influenced by our own experiences.
Many see this glass as half-empty as they focus on the limitations of individual humans. I think that’s short-sighted.
The Handbook of Evolutionary Psychology observes “Where one error is consistently more damaging to fitness than the other… a bias toward making the less costly error will evolve — this is because it is better to make more errors overall as long as they are relatively cheap.”
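The Handbook’s point can be made concrete with a bit of arithmetic. The probabilities and costs below are hypothetical numbers I chose purely for illustration — the point is the asymmetry, not the values:

```python
# Hypothetical numbers (my own, for illustration): a "jumpy" strategy makes
# MORE errors overall yet costs less, because its errors are the cheap kind.

COST_FALSE_ALARM = 1.0   # fleeing a rustle that was only wind (cheap)
COST_MISS = 100.0        # ignoring a rustle that was a predator (deadly)

def expected_cost(p_false_alarm: float, p_miss: float) -> float:
    """Expected cost of a strategy given its two error rates."""
    return p_false_alarm * COST_FALSE_ALARM + p_miss * COST_MISS

unbiased = expected_cost(p_false_alarm=0.10, p_miss=0.10)  # 20% total errors
jumpy    = expected_cost(p_false_alarm=0.30, p_miss=0.01)  # 31% total errors

print(round(unbiased, 2))  # 10.1 — fewer errors, but the expensive kind
print(round(jumpy, 2))     # 1.3  — more errors overall, yet far cheaper
```

The biased, jumpy strategy commits half again as many errors, yet its expected cost is a fraction of the “unbiased” one — exactly the trade the Handbook describes evolution making.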
This might be particularly true within groups as it is not costly for each of us to have errors in our thought as long as we are in a group — and it might even be helpful. Before discussing this more thoroughly by looking at group reasoning, let’s clarify the idea of reasoning.
Reason, not Logic
Reasoning is a complex human activity searching for answers in situations of uncertainty by relying on our full human abilities. If we demand that reasoning be logic, we are eliminating many excellent, highly evolved human abilities. Reasoning, then, is a process which draws on logic along with instinct, emotion, and observation, as well as our ability to interpret in the context of the challenges or decisions we face.
The idea of reason is used this way in the US court system, where jurors find someone guilty only if guilt appears “beyond reasonable doubt.” Tin-hat conspiracy theories of innocence aren’t enough for a jury to conclude innocence as long as the prosecution has presented a strong case. Yet when the humans on the jury arrive at a sense that there are reasonable doubts about the prosecution’s case, they are obliged to find the defendant not guilty.
As Groups Reason Amid Uncertainty
When confronted by a situation of uncertainty a group needs to build a larger shared knowledge than that possessed by any single individual. We can gain this when each individual discusses and argues their own unique perception. In a sense, we are looking at the reverse of a Venn diagram. Where we tend to focus on what is shared in a Venn diagram, amid uncertainty the group benefits when each group member adds something new to the whole knowledge of the group — and each individual’s bias just might be part of their value.
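The “reverse Venn diagram” idea can be sketched with sets. The people and the facts below are invented purely to illustrate the shape of the argument:

```python
# Invented example: what a group knows is the UNION of individual
# perceptions, not the intersection a Venn diagram usually highlights.

alice = {"supplier is late", "budget is tight", "team is tired"}
bob   = {"budget is tight", "customer is unhappy"}
carol = {"budget is tight", "new competitor appeared"}

shared   = alice & bob & carol   # the overlap a Venn diagram emphasizes
combined = alice | bob | carol   # what the group gains by discussing

print(len(shared))     # 1 — agreement alone yields very little
print(len(combined))   # 5 — every member added something unique
```

Focusing only on what everyone already agrees about leaves the group with a single fact; pooling each partial view gives it five.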
Thus the group becomes far likelier to find successful ways forward — and certainly more successful than by following best practices, international standards, or other arbitrary approaches. Some of my thinking on this came from reading Chris Mowles’ Complexity where we find discussions like this:
According to the interactions approach, reason didn’t evolve to enhance thinking on one’s own but as a tool for social interaction. We produce reasons to justify ourselves and to convince others… Why not envisage, then, the exchange of reasons and the mechanism of reason could have evolved for the benefit of the group, rather than for the benefit of individuals?
Evolution is known today to find value in cooperative behavior and not merely competition. So the idea that behaviors and biology develop to help humans work together in groups is in line with evolutionary cooperation.
This is important because “…all points of view on the situation are partial and informed by particular social positions.” Whether partial from bias or from viewpoint doesn’t matter as long as those in the group don’t allow bias to become entrenched. By discussing and arguing we build a much broader sense of the situation.
Strict logicians, though, would prefer that people become fully informed before attempting to find solutions. Except:
“[T]o better resolve complex social situations… we need to bring all the partial points of view into play, including our own, rather than denying them.”
“…it is impossible to separate the knowledge we need from the specific context in which it arises. Abstracting and systemizing of course has its value but not… for developing insight into situations which develop from a particular set of circumstances and are a product of them.”
It is important to hear what is perceived through the biases of those around us in order to arrive at a broader and bigger understanding of a complex situation. Each of us sharing and arguing our interpretations can lead to excellent results.
A More Practical View of Bias
The ultimate end of a skeptic focused on bias is to silence others, since anything they say might be biased without their knowing it. The opposite would be for each individual to commit fully to their biases no matter how much damage they do. Neither of these approaches is workable in life.
A sad alternative we have seen too often in recent years is that of the echo chamber — where humans choose the people with whom they group as well as what they read, watch, or hear in order to avoid encountering different views. We have now experienced how destructive this is.
Yet there is a very positive alternative if we pay attention. When we work within groups where we are all committed to shared goals, our groups end up knowing far more — and knowing it more clearly — when they discuss and argue what they see. The resultant knowledge is more than any individual could ever know on their own. And, when facing uncertainty, a group will arrive at far better understanding and better decisions. Key to this process, though, is accepting that we need the viewpoints of others.
This does require both that we each argue with passion and that we do not argue to win the argument — but to expand our group’s knowledge leading to better decisions.
So enjoy the ride.
Innate, Kevin Mitchell, Princeton, ©2018. Quotes and discussion starting on page 129.
Complexity: A Key Idea for Business and Society, Chris Mowles, Routledge, ©2022. Quotes and discussion starting on page 93.
©2022 Doug Garnett — All Rights Reserved
Through my company, Protonik LLC, I consult with companies as they design and bring to market new and innovative products. I am writing a book exploring the value of complexity science for driving business success. Protonik also produces marketing materials including documentaries, websites, and blogs. As an adjunct instructor at Portland State University I teach marketing, consumer behavior, and advertising.
You can read more about these services and my unusual background (math, aerospace, supercomputers, consumer goods & national TV ads) at www.Protonik.net.