Social Media’s Novel Epistemic Challenges

Hussain Khalil
Apr 13, 2021

Unlocking the awesome epistemic potential of the Internet will require disentangling it from its equally awesome risks.

A Sumerian Cuneiform Tablet, c. 3100–2900 BCE. Credit: The Metropolitan Museum of Art, New York

Historically, technological advances have been epistemically advantageous, often allowing knowledge to spread further and faster, to more people, while reducing the cost of acquiring true beliefs. Such advances and inventions have greatly affected the course of human history, cementing the legacy of their creators in the process.

The earliest civilizations, those that emerged in Mesopotamia, Egypt and China roughly 5,000 years ago, were marked by their widespread use of the first writing systems. The intimate entwinement of this technology — used to record transactions, laws, decrees, events, and mythologies — with the development of early human settlements suggests that writing and recording are not merely central to modern society but among its defining qualities.

Johannes Gutenberg’s printing press, which is widely credited with altering the course of Western history by facilitating the rise of the Reformation and the European Enlightenment, represents a more recent example of the kind of advance that has shaped modern society.

Sir Tim Berners-Lee, considered the inventor of the World Wide Web

Recently, the advent of the digital computer, its miniaturization and growing accessibility, and the subsequent development of the Internet and of Sir Tim Berners-Lee’s World Wide Web have contributed to a state of unparalleled public access to knowledge. Surely, such inventions could only be considered an epistemic good?

However, these digital technologies have also been associated with the spread of misleading information. The plethora of fake and misleading content circulating on the Internet, and the potentially damaging consequences of this phenomenon, have sparked debate about the technology’s true epistemic impact. Could it be that the Internet and similar digital technologies harm rather than facilitate the spread of true beliefs?

The Framework of Relevant Alternatives

In Inquiry, philosopher Christopher Blake-Turner describes the relevant alternatives framework, an account of what it means for us to have knowledge. The core idea of the framework, as Blake-Turner states it, is:

S knows that p if and only if S is in a position to rule out all the relevant alternatives to p.

He makes the intuitive argument that we are right to question a knowledge claim if a credible, contradicting alternative exists: we cannot claim to know something if we are unable to rule out all such alternatives.
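
For readers who prefer the idea in symbols, the biconditional can be sketched in first-order terms as below. This is a minimal sketch of my own; the predicate names K, RA, and RO are illustrative shorthand, not Blake-Turner’s notation.

% A minimal sketch of the relevant alternatives condition.
% Predicate names are illustrative shorthand, not Blake-Turner's notation:
%   K(S, p)  : S knows that p
%   RA(q, p) : q is a relevant alternative to p
%   RO(S, q) : S is in a position to rule out q
\[
K(S, p) \iff \forall q \, \bigl( RA(q, p) \rightarrow RO(S, q) \bigr)
\]

Written this way, the universal quantifier makes the framework’s vulnerability plain: a single relevant alternative that S cannot rule out is enough to defeat the knowledge claim.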

Blake-Turner’s framework helps explain why misleading information on the Internet can be such a hindrance to users’ acquisition of justified beliefs. Because the Internet, and the social media platforms within it, offers access to a vast range of viewpoints, representing nearly every niche and unsupported view held by its diverse users, a relevant alternative to almost any particular belief one may hold is nearly guaranteed to exist and to be findable.

The nature of the Internet therefore threatens the traditional process of using testimony to acquire and support justified beliefs. Propaganda spread through novel channels such as fake user profiles, false and misleading news given a spotlight by the open nature of the web, and fabricated evidence generated by newly developed tools may all contribute to a general degradation of the Internet’s epistemic environment, making it less conducive to the gathering of justified beliefs and prone to reducing the quality and robustness of users’ knowledge.

Propaganda has existed for as long as governments and their rulers have tried to influence their citizens, but the Internet has amplified governments’ ability to surreptitiously alter individuals’ views on a particular topic. In The Internet of Us, Michael Lynch describes how “sock puppet” accounts are employed to create the illusion of popular support for views deemed favorable by scheming governments. This form of propaganda is not employed solely by authoritarian regimes: Lynch references Operation Earnest Voice, a US-government effort to influence the views of social media users in the Middle East.

Such a case demonstrates how the Internet distorts or abuses a traditional means of ruling out relevant alternatives: seeking testimony from others in our community. Whereas we can be sure of the identity of the person we are listening to or speaking with in person, no such guarantee can be made about Internet users. The efficacy of this particular means of spreading propaganda lies in the unjustified assumption, held by many Internet users, that the apparent identity of a social media account, expressed through properties such as the user’s name and profile photo, is equivalent to meeting someone and receiving testimony in person.

Why it matters: we are currently seeing how relevant alternatives, in the form of misleading claims shared on social media, can undermine the efficacy of public health measures and prolong the human and economic cost of the pandemic. Not only does the Internet give these relevant alternatives access to millions of social media users, but platforms designed to promote outrageous and extraordinary content often inadvertently accelerate their spread.

A false claim about the safety of vaccines presents a salient, relevant alternative to the consensus among health experts.

When users are confronted by salient alternatives to the official view of vaccines as a safe and effective means of reducing the toll of the pandemic, the relevant alternatives framework suggests they may be justified in taking this sort of testimony seriously and challenging their previously held views.

But far from suggesting that the problem of misleading information on the Internet is intractable, this example demonstrates how the prevalence of relevant alternatives on the Internet heightens the need to independently verify testimony with trusted sources of authority. In the case of the image above, the CDC’s statement that no flu vaccine existed at the time of the 1918 pandemic renders the image’s claim impossible, allowing readers to quickly rule out this relevant alternative.

Such a cautionary step was rarely required in the past, when the information accessible to readers, typically through magazines, journals, television programs, and other published sources, would be fact-checked by editors or face other scrutiny before publication. Social media users can no longer make this assumption about the content they consume, as it may come from individuals or groups with no relevant authority in the subject matter. This is yet another adaptation users must make to minimize the epistemic risks posed by the Internet.

But even this step may be less effective in some instances. During the pandemic, widely shared content has purported to come from medical authorities while espousing relevant alternatives to the consensus held by experts.

For example, Carrie Madej, ostensibly an osteopathic doctor, appeals to medical authority when she claims, in a series of viral YouTube videos, that the vaccines alter recipients’ DNA, and suggests that the true purpose of the vaccines is to “hook us all up to an artificial intelligence interface.” Faced with the salience of her claimed medical authority, users may be more disposed to challenge official views about the safety of the vaccines.

Carrie Madej, who has made claims on social media about the risks of COVID-19 vaccines

A further bias is at play here: the consensus held by the vast majority of medical and epidemiological experts can be undermined by a salient alternative presented by an apparent authority. In the case of Carrie Madej, the alarming claims put forth seem to amount to a credible repudiation of the belief in the safety of the vaccines, even though such a view has very little support among the relevant experts. The Internet’s contribution here is allowing such a niche, unsupported claim to be viewed by and shared with millions of social media users, but the true damage is caused by users’ unjustified assumption that this view represents a credible, well-supported alternative to expert opinion.

The Arbiters of Truth

The challenges of fake and misleading news on the Internet, though unprecedented in scale and breadth, mirror epistemic obstacles that arose with the widespread use of broadcasting technologies such as radio and television. In the United States, these domains are regulated by the Federal Communications Commission, or the FCC. Radio and television broadcasters, from amateur to professional, are required to register with the FCC and are therefore scrutinized for false or misleading claims. The FCC prohibits, for example, “broadcasting false information that causes substantial ‘public harm.’” However, such regulations, including the since-repealed Fairness Doctrine, which required broadcasters to present both sides of controversial public issues, have come under fire as impinging on individuals’ First Amendment rights. In any case, current federal law does not classify social media platforms as broadcasters, so such regulation can do little to minimize the risks of misleading content. Democratic lawmakers are attempting to change this by modifying the scope of Section 230, a contentious provision that currently shields Internet platforms from liability for content posted by their users.

Twitter has recently added warning labels to potentially misleading content

Similarly, the companies behind the social media platforms have proven unable or unwilling to stem the spread of misleading content. Whether because the sheer volume of content is impossible to review effectively, because the platforms are unwilling to invest in a sufficiently robust review system, or simply because they benefit from the increased user activity associated with viral, outrageous posts, social media platforms have become ground zero for the fabrication and dissemination of false news on the Internet. Another critical problem with allowing social media companies to regulate misleading content is that these firms are composed of people whose own biases may create a conflict of interest if the objective is to maximize epistemic benefit. Facebook’s Mark Zuckerberg validly points out that his firm “cannot be the arbiters of truth.”

Whether or not social media platforms can be the sole arbiters of truth, as the unintentional facilitators of misinformation they have a critical role to play in stemming it. Recent moves by these companies are a promising step in the right direction. Twitter’s use of warning labels on potentially misleading content, for example, has so far been at least somewhat effective. And by still giving users the option to view the disputed content and make their own judgments, this solution partially avoids the intrinsic conflict of interest described above. It is particularly commendable because it gives individual users a practical tool for minimizing the epistemic risks of misleading information on the Internet.

This matters because, ultimately, the epistemic responsibility of gaining knowledge rests with individual users, and they must be given the tools to evaluate opposing viewpoints. As a society, we must educate individuals to be critical of the beliefs they are exposed to, particularly on the Internet. Ensuring that they understand how the Internet functions, and the underlying phenomena that enable the viral spread of salient alternatives, will allow them to critically assess the information recommended or shared with them. Individual users must reflect on the biases of the viewpoints they encounter on social media, as well as their own, and recenter their habits on the epistemic objective of acquiring justified beliefs.

The very same characteristics of the Internet and the web that have made these technologies powerful tools for the spread of knowledge also create outsized epistemic risks. Anonymity and accessibility mean that small-scale operations can distribute misleading information, undermining users’ understanding of crucial topics. Widespread access to the means of publishing and sharing information means that unjustified beliefs can spread widely and rapidly with little epistemic scrutiny, while the prevalence of salient, shocking challenges to traditional sources of authority misrepresents the true consensus held by experts. Furthermore, the rapid evolution of digital manipulation means that existing methods of verifying testimony and ensuring robust, justified beliefs can no longer provide the reliability they once did.

However, despite the novel epistemic challenges brought forth by the Internet and associated digital technologies, such epistemic risks are not endemic to the platforms in and of themselves. The Internet can be a potent facilitator of, rather than a threat to, the spread of true beliefs. But achieving this requires a re-examination of how evidence-gathering has shifted online and of the methods we use to inform our beliefs.

Maximizing epistemic benefit in the digital age will mean abandoning or reshaping the assumptions that knowledge-seekers could once depend on to guide them to true, justified beliefs. Government regulators, social media platforms, political leaders, scientific experts, and, most of all, individual users will need to update their modes of seeking knowledge: to be more resilient against unsupported and misleading information, wary of distortions in the testimony they receive caused by the underlying mechanisms of the Internet, and thoughtful about the sources of authority from which they derive their beliefs.

Discussion Questions

  1. Does the Internet, and technologies such as social media, present greater epistemic liability than benefit?
  2. How must social media users adapt to the unique epistemic environment of the Internet and social media?
  3. Can such adaptations offset the epistemic risks posed by the Internet?
  4. Should the responsibility of verifying or ruling out relevant alternatives fall on the individual user, social media platforms, or government regulators?

Sources

  1. Broadcasting False Information, Federal Communications Commission (2021)
  2. Coronavirus: False and misleading claims about vaccines debunked, Jack Goodman and Flora Carmichael, BBC (2020)
  3. Fake news, relevant alternatives, and the degradation of our epistemic environment, Christopher Blake-Turner, Inquiry (2020)
  4. Here are some instances where Facebook has been an arbiter of truth, Salvador Rodriguez, CNBC (2020)
  5. History of 1918 Flu Pandemic, Centers for Disease Control and Prevention (2018)
  6. Sir Tim Berners-Lee, Paul Clarke, Wikimedia Commons (2014)
  7. Sumerian Language, Ignace Gelb, Encyclopedia Britannica
  8. The Internet of Us: Knowing More and Understanding Less in the Age of Big Data, Michael Lynch, W. W. Norton (2016)
  9. The radio drama that shocked America 80 years ago and the modern birth of fake news, DW.com, Deutsche Welle
  10. Twitter put warning labels on hundreds of thousands of tweets. Our research examined which worked best., Megan Brown et al., The Washington Post (2020)
  11. Updating our approach to misleading information, Yoel Roth and Nick Pickles, Twitter (2020)
