Algorithmic Filtering and Technology May Exacerbate Censorship Worries

Mitchell Nemeth

The war on misinformation, or "fake news" as it is famously dubbed, began immediately after the 2016 election amid revelations that foreign powers had used social media in an attempt to sway the electorate. After roughly four years of discussion, we have seen various institutions, politicians, business leaders, and prominent individuals call for the regulation of technology companies, more specifically social media platforms. The argument runs as follows: these social media platforms have become de facto sources of information, similar to the public square of days past, and the misinformation sometimes present on them has become a public threat. Some go as far as calling this issue a "threat to our democracy."

Two things can be true at once: 1) social media platforms now have significant influence over public opinion and information, and 2) misinformation has always existed and is only likely to increase as more news outlets are created. The tricky aspect of a democracy is that by encouraging broader participation in public affairs and public dialogue, we simultaneously introduce opportunities for falsehoods or "bad opinions" to spread.

Democracy has proven messy in most places it has been tried. As an advanced society, we hope to encourage public participation while retaining some semblance of control over public information. The Founders understood the threat posed by those with the power to suppress valuable insight conveyed through speech or the press, which is precisely why they protected this natural right from government intrusion in the First Amendment of the Bill of Rights. But the Constitution is over 200 years old, and the Founders did not envision a 21st-century economy in which multinational corporations would present the greatest threat to a free press by acting as "judge, jury, and executioner."

The immense power held by these corporations cannot be overstated. This by no means suggests we should oppose these companies, but it does imply that we must remain vigilant and hold these entities accountable. That standard holds regardless of external circumstances, such as the current public health crisis caused by SARS-CoV-2.

Troubling Times

Beginning with Congress's interventions after the September 11th attacks, we have been slowly conditioned to accept the framing of major issues as crises or wars. As I wrote at The Mises Wire, "in times of crisis, governments have a tendency to overcompensate for risk." The same holds true for those tasked with disseminating or hosting information in the "public interest," a notoriously vague descriptor for whatever benefits society as a whole. We were told that the USA PATRIOT Act was likewise in our interest, despite the broad "spying" or surveillance tools it granted the United States Intelligence Community. In essence, we have come to associate crisis and war with dramatic increases in government intervention and a willingness to subvert our civil liberties.

Since the implementation of the USA PATRIOT Act, our society has been dramatically shaped by our increasing reliance on technology and data. As I wrote at the Foundation for Economic Education, the way we now operate has in many respects created a "potential surveillance apparatus." That apparatus runs on the many data points collected, both knowingly and unknowingly, as we use various technological tools. Combining these data points with the vast market power of major technology companies produces a somewhat Orwellian dynamic, especially when we consider how everyday industries are affected.

News media has been forced to alter its business model in order to survive in a social media-centric society. Unlike with newspapers or magazines, modern-day news is often discovered by scrolling through a seemingly endless Facebook or Twitter feed. That feed is powered by complex sets of algorithms that parse vast amounts of data to determine which content maximizes engagement. Often, these algorithms are tweaked without our knowledge; there have been instances of platforms like Facebook making changes that resulted in steep declines in traffic for less "authoritative" sources.
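To make the dynamic concrete, below is a minimal, hypothetical sketch of engagement-based ranking in Python. The signal names, the weights, and the "source_authority" field are invented for illustration; no platform's actual ranking system is this simple or publicly documented.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int               # hypothetical engagement signals
    shares: int
    comments: int
    source_authority: float  # hypothetical 0.0-1.0 score assigned by the platform

def engagement_score(post: Post) -> float:
    """Toy ranking function: weight engagement signals, then scale by
    the platform-assigned 'authority' of the source. The weights are
    invented for illustration; a real feed uses far more signals."""
    raw = 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments
    return raw * (0.5 + 0.5 * post.source_authority)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts surface first; quietly changing the weights
    # (or the authority scores) reshuffles everyone's feed.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", likes=120, shares=5, comments=10, source_authority=0.3),
        Post("b", likes=40, shares=30, comments=25, source_authority=0.9),
    ]
    for p in rank_feed(feed):
        print(p.post_id, round(engagement_score(p), 1))
```

Quiet changes to weights or authority scores in a scheme like this would reshuffle what users see first, which mirrors the traffic swings described above.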

Today’s Problems Disrupt our Relationship to Big Tech

Today’s public health crisis throws an even bigger wrench into the system. Freedoms are being curtailed under the guise of public health. We are suffering an unprecedented economic recession as a result of a government-mandated shutdown of “non-essential” businesses. Each of us is told that we must do our part to “flatten the curve” and save lives.

Facebook, Google, and other platforms have taken "unusually bold steps to keep misinformation about COVID-19 from circulating," according to The Atlantic. This sort of aggressive action may please public health authorities, but it risks seriously damaging public trust by casting these technology companies as "arbiters of truth."

We have been in a complex situation since the very beginning of this pandemic. The virus at the center of this crisis is novel, so information about its true transmissibility, fatality rate, severity, and potential treatments is either nonexistent or still under study. Some facts and measures have certainly demonstrated a degree of success in combatting the virus, but the question becomes: why should a social media platform dictate what is fact and what is fiction? From the onset of the crisis, renowned public health authorities like the World Health Organization have parroted Chinese Communist Party narratives.

Unfortunately, through major missteps, renowned institutions and authoritative figures have been wrong about the threat of this virus since its emergence. Political opportunists have seized every opportunity to expose their opponents as underprepared, but the reality is that most major countries were remarkably unprepared. That lack of preparedness has created an environment ripe for demagoguery and extreme actions.

Actions Taken

Facebook, Twitter, and YouTube are relying on automated tools to remove misinformation, wherein "speech becomes collateral damage in the mobilization around the pandemic, and a concession to the exigencies of the moment." For example, Twitter and Facebook removed posts from Brazilian President Jair Bolsonaro that allegedly violated their policies against posting false or misleading information.
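To illustrate why automated enforcement produces this kind of collateral damage, here is a deliberately crude, hypothetical sketch of keyword-based flagging in Python. The pattern list and function are invented for illustration; real moderation pipelines combine machine-learned classifiers, fact-checker databases, and human review.

```python
import re

# Hypothetical phrase list; invented for illustration only.
FLAGGED_PATTERNS = [
    r"miracle cure",
    r"guaranteed to prevent",
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any flagged pattern.
    A crude sketch of why automated removal sweeps up legitimate speech:
    it cannot distinguish a false claim from a post debunking that claim."""
    return any(re.search(p, text, re.IGNORECASE) for p in FLAGGED_PATTERNS)

if __name__ == "__main__":
    print(flag_post("This tea is a miracle cure for the virus"))     # True
    print(flag_post("No, this tea is NOT a miracle cure. Beware."))  # True as well
```

Note that the second example post, which debunks the claim, is flagged just as readily as the first; that is how speech becomes "collateral damage."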

YouTube CEO Susan Wojcicki has stated that anything that goes against WHO recommendations would be a violation of the platform's policy. Wojcicki has also said that YouTube would remove information "that is problematic," including "anything that is medically unsubstantiated." Similarly, the popular blogging site Medium is "aggressively taking down viral posts under a new policy on COVID-19 content, despite the site's mission to be a platform for 'whatever you have to say.'" Another popular content-aggregation site, Reddit, has "added warning messages on two subreddits for boosting misinformation." The popular event management website Eventbrite has been unpublishing "gatherings from its platform that encourage people to violate social distancing guidelines"; some government officials have pushed the idea that protesting is a non-essential activity and therefore a violation of social distancing guidelines.

Platforms dictate the terms of usage by writing legalese-filled policies and terms of service, which grant the platform the ability to evaluate the content at issue (judge), decide its outcome (jury), and impose punishment (executioner). Typically, these platforms offer some form of appeals process, but shelter-in-place orders have made this difficult; many are temporarily eliminating their appeals processes instead.

Individuals have little to no legal recourse, as these companies are not bound by the First Amendment. In Prager University v. Google, a panel of the U.S. Ninth Circuit Court of Appeals held that "despite YouTube's ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment." William McGeveran, a law professor at the University of Minnesota, says it is difficult to define a clear "boundary between fake news and alternative viewpoints" and that "you may know it when you see it, but it is a danger to impose limits on speech."

Taken together, all of these actions create a perilous environment for free speech and a free press. As public health experts, foreign and domestic, have failed at multiple levels, there are serious concerns about censoring those who may have alternative solutions to today's problems. Less "authoritative" public health officials and statisticians may be the key to overriding some of the inherent biases of the broader public health establishment. While the concern about spreading misinformation is very real, we risk censoring those who may hold the key to alleviating this crisis.

Conclusion

Rahm Emanuel's line "never let a crisis go to waste" resonates because it is ingrained in the minds of government officials and powerful institutions. Past crises have demonstrated that short-term "solutions" are often cemented into long-term operations. Edward Snowden, the former intelligence contractor turned whistleblower, famously exposed the Intelligence Community's expansive use of programs initiated under the USA PATRIOT Act, a law sold to a panic-stricken America. Similarly, these technology companies have received praise for their actions despite the potentially devastating effects on a culture that prizes freedom of speech and shuns censorship.

Many of these corporations are publicly traded, so their shareholders may come to see these "temporary" measures as being in the long-term interest of their respective companies. Politicians may likewise recognize that, unlike governments, these platforms are not beholden to the First Amendment. This raises the question: what constraints exist to hold these platforms accountable for the free flow of speech? Do these corporations truly exist as mere social media platforms, or are they a new phenomenon requiring new constraints?

As Evelyn Douek writes at The Atlantic, “Users have no way of forcing platforms to answer any of these concerns. Indeed, the state of emergency throws into sharp relief what is always true about the majority of regulation of online speech: the powers of rule making, enforcement, and review are all concentrated in the same hands. What’s happening during the pandemic is just an accentuated version of the norm.” We may accept some level of platform moderation to ensure that we save lives and “flatten the curve,” but as this public health crisis subsides, we must address the long-term questions surrounding non-governmental censorship.

Proponents of freedom of speech must continue to push back on questionable, "arbitrary and capricious" uses of filtering algorithms, even as many of these platforms take steps to address criticism. For example, Facebook has established a new, largely independent Oversight Board to "hear appeals from individuals objecting to the removal of individual pieces of content for violations of Facebook's Community Guidelines." While the initial scope of this board is rather narrow, it is a step in the right direction.


Executives at Twitter have appeared on popular YouTube shows like Joe Rogan's Joe Rogan Experience to answer questions about past practices and the future of the platform. Facebook's Mark Zuckerberg has made a concerted effort to reach out to various ideological groups to address their concerns about potential censorship. These executives understand the delicate balance that sustains public trust in our media and technology institutions. To ensure continued public trust and fundamental fairness between these platforms and individuals, we must continue to hold these institutions accountable through dialogue and constant individual pushback.