As I’ve described before, one of the benefits of logic-checking (that catch-all term for the set of critical-thinking practices you’ve been learning about on this site) is that it helps control for the kinds of biases that distort reasoning.
In that piece linked above, I talk about cognitive biases that all human beings are vulnerable to due to the way the human brain seems to be wired. The most significant of these is confirmation bias – the tendency to accept as true things that conform with what we already believe, while rejecting information that contradicts those pre-existing beliefs.
Confirmation bias is behind the tribal politics responsible for so many of today’s controversies, a tribalism enhanced by our tendency to sort ourselves into communities of the like-minded (both in the real world and online) where those biases are never challenged.
We also find it easy to identify biased thinking when it leads to condemnable behaviors, such as bigotry, sexism, or other forms of hatred.
But when it comes to navigating ethical dilemmas, like those making up this year’s Ethics Bowl national cases, biases that tap into our better natures can also limit the options we consider when we try to find solutions to problems with no obvious answers.
I realized this last year when a Texas doctor asked me to join a videocast he created to help patients better navigate health-care choices using the critical-thinker’s toolkit. When we talked about steps patients could take to make better decisions, I started not with methods for translating and analyzing logical arguments, but with identifying one’s own biases.
In that discussion, I realized that a bias I am vulnerable to derives from my trust in professionals, such as lawyers, accountants, and doctors, who I assume are in a better position than I am to know what I should do when making important life decisions regarding legal, financial, or medical matters.
In many ways, this is a positive bias, or at least one most professionals would find acceptable (as opposed to biases that make me instinctively distrust them). But that trust also means I am more likely to follow their advice without examining all of my options, even when they suggest I start taking potentially risky medicines or undergo dangerous surgery.
We have all likely had experience with, or at least heard stories of, professionals making errors – including deadly ones – which should temper positive biases such as trust without turning us into unthinking cynics.
Where this intersects with Ethics Bowl is that many of this year’s cases (like many ethical choices we face in life) are about issues we come to with perfectly reasonable, even laudable, positive biases.
For example, cases #1 (“The Medical Brain Drain,” about doctors leaving poor countries to work in rich ones), #2 (“Clothing of Calamity,” about poor nations seeing their textile industries collapse under huge donations of second-hand clothing from rich countries, donations many locals also find humiliating), and #12 (“Left Behind at Warp Speed,” a case about global distribution of COVID vaccines discussed last time) all deal with the poor getting shafted.
I would guess that most people reading this have a visceral dislike of unfairness, especially unfairness that involves wealthy people or countries benefiting at the expense of people living in impoverished places. This is not only a valid concern but a commendable instinct, one we should cultivate in ourselves and our children rather than dismiss as an unreasonable bias.
But, as noted in that discussion of “Left Behind,” there are other factors that need to be taken into account when making ethical choices. Should the needs of the state dictate the choices of its citizens, such as the choice to practice medicine outside of their homeland? Should people resist the urge to be generous (by donating unneeded clothing to the poor, for example) if that generosity might have negative as well as positive consequences? As with the distribution of COVID vaccines, there are factors in play that need to be considered, even if they point towards options that might contradict our natural concern for those less fortunate than ourselves.
Case #6 (“Man’s Best Friend,” which covers animal health problems that result from selective breeding) likely cuts against another automatic emotional bias: the desire never to see animals suffer. And the instinctive rejection of racism I’ve been cultivating in my kids since they started talking likely gives them a ready-made opinion on Case #14 (“Digital Blackface,” also discussed previously).
But, as noted in that discussion of Case #14, there is a difference between loathing bigotry and deciding that a specific practice falls into that category. To make an ethical choice in this sort of situation, we need to maintain our natural aversion to racism while also using our reason to determine what falls into that category and what does not.
It would be great to live in a world where our natural inclinations towards fairness, concern for others (both human and non-human), and aversion to all forms of hatred gave us everything we need to make good choices. But when such instincts fail to provide all the answers (as is common when we must decide between competing goods, or select from only bad options), ethical principles – informed by critical thinking – are what we can turn to for guidance.