The Good Argument


A new colleague I met at last week’s American Philosophical Association (APA) conference in cyberspace recommended that I share an example with LogicCheck readers that illustrates what a strong argument over a controversial issue looks like.


Given the topic on everyone’s mind (discussed here and here), the fallout from January 6th seems an appropriate subject to approach with as much thoughtfulness as possible. Which is why I chose this editorial from the Houston Chronicle (provided in full at the end of this piece) to show the value of bringing reason to the toughest topics.


The editorial takes on the hot debate over social media companies like Facebook and Twitter banning Donald Trump and other users whose communication might have played a role in the attack on Congress, including Amazon’s decision to wipe the alternative social media site Parler off its servers. The issues those decisions raise are highly contentious, but, more importantly, they provide exactly the type of problem critical thinking can help us work through.


Since we can’t know the future or know with certainty what is going on in people’s minds, claims about what to do cannot simply be fact-checked as true or false. We can only present arguments about those unknowns and then determine if the arguments are any good. Critical-thinking principles can also help us navigate an issue like the one covered in the editorial, where there are no obvious answers, just competing goods (or, in this case, competing bad choices) that we must choose between.


Specifically, there is no question that social media played a role in events that rightly outraged the nation and put lawmakers' lives at risk, and that we must take steps to ensure such events never happen again. At the same time, giving private companies like Facebook, Twitter and Amazon the power to decide who gets to participate in the online Commons that has become central to public discourse means placing enormous power in the hands of for-profit organizations and hoping for the best.


One of the things that makes the Houston Chronicle editorial both reasonable and persuasive is that it does not minimize these problems or dilemmas in any way. The prevalence of misinformation on the Internet, the provocations that came from the president’s Twitter account, and the fact that sites like Parler became places where attacks on the government were both planned and celebrated are all acknowledged at the start of the editorial, making clear that the writers take seriously the problems that have resulted from an unregulated Commons.


At the same time, they do not shy away from an equally troubling set of problems that emerge when speech is squelched, especially when that squelching is performed by unaccountable organizations with unclear rules over what gets you banned and Byzantine or non-existent methods of appeal. I also like how the writers avoided getting caught up in debates over the First Amendment (which only restricts government censorship of speech) by pointing out that anyone (whether the government, the majority or private companies) controlling who gets to say what constitutes a legitimate threat to our freedoms.


The editorial also offers an example of someone they claim deserved to be banned (Alex Jones) for specific behavior (tormenting the families of murdered children). While there are legitimate arguments that the banning of Jones represented the thin end of a wedge, establishing the principle that people could be banned from the Internet over the content of their speech, the example provides readers a benchmark illustrating where the editorialists feel a line can legitimately be drawn.


Given that this editorial contains a linked set of arguments, it might be easiest to map out the logic using one of the methods for organizing an argument visually (Toulmin diagrams or argument maps) you can learn more about on this site and elsewhere. However, I will take a stab at a simple informal argument that sums up the logic that structures this editorial:


Premise 1: The Internet, especially social media sites, includes disinformation and hate speech, some of which rises to the level of threat (best illustrated by the January 6th riots).

Premise 2: Much of this dangerous communication takes place on platforms controlled by large corporations, such as Facebook, Twitter and Amazon.

Premise 3: Those companies have the power to regulate what appears on their platforms and a responsibility to minimize the chances that what users post will lead to harm.

Premise 4: It is unclear what types of speech directly lead to violence or other forms of real danger.

Premise 5: Corporations regulating speech on their platforms also represents a threat to people’s freedom to say what they like.


Conclusion: Corporations should be allowed to control what appears on their platforms but should do so carefully based on clear rules and with some kind of oversight.


The editorial fleshes out the conclusion by proposing a set of specific deliberative (i.e., future-oriented) steps that might help balance the need to protect the public from harm with the need to protect free speech from abusive restrictions, including:


  1. Users should take more responsibility for their own content by using facilities within social media platforms (such as blocking or voting down bad comments) to minimize the spread of disinformation and hatred online.

  2. Platforms should require users to comply with terms of service, especially with regard to false or malicious types of communication, but should do so judiciously, even-handedly and with methods of appeal clearly spelled out and made available to all.

  3. Government should consider regulation of Internet companies, or at least take steps to eliminate their monopolies over sectors of the Internet that involve important freedoms (especially freedom of speech).


Naturally, I would like it if the editorial had also expanded recommendation #1 to call on users to stop using the most important means of communication ever created to abuse, manipulate and wallow in confirmation bias, and instead to use it to engage with others, seek and spread the truth, and expand the range of things we find acceptable to discuss. But hey, that’s just me.


So five dumbbells for the strong argument written by the editorial team of the Houston Chronicle:





Editorial: Purge of Trump, Parler show Big Tech firms have too much power – The Houston Chronicle (January 17, 2021).


There’s no question that disinformation — outright lies or the misrepresentation of facts — is a worsening plague on our democracy.


It is not limited to any party, ideology or sector — nor do its purveyors respect any boundaries of basic decency and fairness.


Because of this, mounting pressure from concerned citizens and government officials to rid the internet of the worst offenses, and offenders, has led Twitter, Facebook and other social media companies to take strong action.


Often, these severe steps are welcome, as was the case with Alex Jones, the Austin-based creator of InfoWars.com whose loathsome videos were banned by Twitter and YouTube in 2018. A menace for decades, Jones’ reach wasn’t curtailed until he engaged in a prolonged harassment campaign against the grieving parents of children who were murdered at Sandy Hook Elementary. Jones branded one of the worst mass killings in American history a hoax and their parents liars, causing some to receive death threats.


It’s hard to fathom a more deserving recipient of the social media death sentence than Jones.


Yet, the recent response to President Donald Trump’s ban from Twitter, Facebook and YouTube was a cacophonous mix of cheers and outrage, even though the move came only after the president’s relentless posting of false claims about voter fraud spurred thousands to storm the U.S. Capitol in a deadly clash that rattled the underpinnings of American democracy.


The logic of banning Trump amid escalating threats of violence is clear. But so is the reason for concern. America is a country where censorship is viewed as an Orwellian harbinger of tyranny, a country where the commitment to free expression is so strong that words of hate enjoy the same protection as words of prayer.


The companies’ decision to silence Trump is “a simple exercise of their rights, under the First Amendment and Section 230, to curate their sites. We support those rights,” the Electronic Frontier Foundation said in a statement last week.


So do we. We also agree with the EFF’s concerns that the giant internet companies have accumulated so much power over public discourse that any actions to silence users ought to be taken with extraordinary care.


“Just because it’s private censorship doesn’t mean it’s not censorship,” EFF legal director Corynne McSherry told the editorial board.


Texas flag-burning


Few statements from the legal history of free speech have been more expressive than the opinion issued in 1989, authored by U.S. Supreme Court Justice William Brennan.


As Republicans gathered in Dallas for the 1984 national convention, Gregory Lee Johnson set fire to an American flag in front of city hall as protesters chanted “America, red, white and blue. We spit on you.”


Onlookers were rightly appalled and Johnson was arrested. Five years later, writing for the majority, Brennan explained why Texas could not prosecute Johnson without betraying the Constitution: “If there is a bedrock principle underlying the First Amendment,” he wrote, “it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.”


Of course, the First Amendment doesn’t actually apply to decisions made by Facebook and other firms. As private companies they are not subject to its constitutional guarantees. The Bill of Rights guarantees individual rights only against government infringement.


It’s also true that the ideas that led the framers to adopt the First Amendment were bigger than just concern about government overreach. The amendment was meant to safeguard individual rights from the whims of the majority, who in our democracy would have the power to write the laws and, by extension, control the exchange of ideas. In the next century, the English philosopher John Stuart Mill would argue for near-total freedom of expression, warning that only a society free from the “tyranny of the majority” is truly free.


Must balance harm


The commitment to that principle is tested daily on the internet, where lies and hate proliferate. We understand why companies are eager to rein in that speech.


But as they do, they must balance the harm that the speech poses with the harm — real and perceived — of censoring it. The companies must appreciate the value of freedom of expression in our society and also the unique, and unfortunately out-sized, role they play in facilitating it.


And that’s the biggest concern here. The president getting ousted from a platform might not be such a concern if there were more platforms. But the enormous swaths of public discourse controlled by Facebook and other companies amount to monopolies that can stifle the exchange of ideas.


For good reason, their decisions prompt scrutiny and suspicion from users about their motivations, their biases, their allegiances. Such concerns have led authorities in Europe and the U.S. to take steps toward breaking up Big Tech, including Google and Facebook.

We support more competition among social media companies, which we will generously assume was the original appeal of the start-up app Parler.


As Twitter began cracking down on false claims made by Trump, attaching warnings to many of his tweets, conservative users rebelled and flocked to Parler, which promised fewer rules about hate speech and less strict oversight of the truth of claims made in posts.


Quickly, the fledgling social network became a haven for far-right extremist views and conspiracy theories, and it was among the sites used to plan the deadly riot at the Capitol. Street directions to avoid police were exchanged in comments, the New York Times reported, and people posted about carrying guns into the halls of Congress.


Within days, Google and Apple had banished Parler from their app stores. Amazon’s web-hosting service suspended the company indefinitely and took the entire network, not just its offending users, offline. That prompted fresh cries from conservatives about bias and civil liberties violations.


Parler has sued, alleging “political animus.”


That companies providing the infrastructure for the internet are now more actively moderating content raises alarms. It’s clear that Parler’s administrators were too lax in enforcing rules, but such shutdowns should be the last resort. Big Tech companies need to be transparent about rules and consequences and enforce them fairly.


Ideally, users themselves should be the first line of defense in moderating content — offensive comments can be ignored, blocked or voted down. Failing that, platforms such as Facebook and Instagram are best positioned to moderate since they’re the closest to the users, who have already agreed to terms of service.


The idea of the internet was an audacious one. Users would be granted a spot and invited to hang out their shingle — and to bear the responsibility for how they used it. The ISPs and other firms that provide access would be seen as conduits, not publishers, and except in limited cases not be liable for the content users create.


The model still has merit. And as Congress, the FTC, and state attorneys general consider reforms, they should look for ways to boost competition, make algorithms and privacy trade-offs more visible to all, and, when users are punished, to provide due process to those who wish to appeal.


No doubt, the insurrection at the Capitol prompted a national security crisis that’s still ongoing. A company should be given leeway for taking emergency action in response to explosive speech promoting violence. But as more of us rely on private companies to share our thoughts publicly, respect for the old principles of freedom of expression is needed more than ever.