Abstract

With the constant advancement of technology and the advent of a global pandemic, it is safe to say that, as of 2023, our lives are as affected by the virtual spaces we inhabit as by our physical surroundings.

Social media and search engines constitute a large part of how people use technology to aid their decision-making processes (“Social Media and News Fact Sheet” 2022). The current state of social media and search engine regulation in the United States is one of stalemate: while public discourse continues and experts’ concerns keep rising with each iteration of existing platforms or introduction of new tools, policymakers have made little progress in regulating the actions of the companies that own these platforms, despite their awareness of the impact these tools have on users’ decision-making abilities. In this context, the spread of dis- and misinformation is a particularly dangerous phenomenon that can only be tackled with a radical change in the way platform administrators are held accountable for the content they allow to circulate.

This paper aims to serve as a reminder of the effects of social media and search engine usage on the public’s agency and decision-making processes, and ultimately of the real-life consequences of exposure to and interaction with online misinformation and disinformation.

A specific focus on how two platforms, Reddit and Google, handled the spread of COVID-19 disinformation (false information deliberately intended to mislead) and misinformation (false or inaccurate information) (“Misinformation and Disinformation,” n.d.) will serve as a magnifying glass on the aforementioned issues. Analyzing the prominent design and ethical concerns in this landscape, I propose a comprehensive set of heuristics to inform policy-oriented solutions to a set of issues that has been largely misunderstood and underestimated at the policy level in the United States. I argue that these concerns will only grow in complexity if left untackled, and that the introduction of Artificial Intelligence (AI) tools to the general public may have already complicated the position of United States governmental entities exponentially: while explicitly declaring their intention to ensure people’s well-being, governmental bodies are struggling either to understand the policy and regulatory options available to them or to get them passed.

Methodology

  • Literature review (psychology, communication)
  • Comparative analysis of European and United States regulations in context
  • Review of newspaper archives and current coverage of social media, misinformation, disinformation, recent court cases against big tech companies, and recent developments in Artificial Intelligence
  • Literature review (Reddit COVID-19 misinformation)
  • Literature review (Google COVID-19 misinformation)

User Psychology

The flow of information users are subject to on social media and search engines is characterized by a lack of editorial supervision and quality control. These features, along with the high speed at which information can spread worldwide, make these platforms’ subsistence directly dependent on user behavior.

Platform administrators, and the labor of those who maintain these platforms, operate in ways consistent with established knowledge of user psychology while maximizing revenue for the businesses the platforms represent or belong to. The public has largely ignored the business component of today’s very large social media platforms (such as Facebook [www.facebook.com], Twitter [www.twitter.com], and Instagram [www.instagram.com]) (Zuckerman 2023); what makes these platforms peculiar is that they place users in a large room-like context where information shared by other users anywhere in the world is readily available and spreads according to group dynamics and actions guided by the many frailties of human judgment.

Misinformation and disinformation, oftentimes consisting of low-credibility content, spread particularly quickly. This depends on several factors, including but not limited to the typical features of content containing mis- and disinformation: provocative claims, emotionally charged language, common-man appeals, ad hominem attacks, conspiratorial reasoning, and “spectacle” and narrative writing (Molina et al. 2019). Also known as “fake news,” this kind of content generates high engagement across platforms, which leads the platforms’ algorithms, all driven by engagement, to push these falsities further (Ciampaglia 2018). Algorithmic decisions only exacerbate a problem that stems from the actions of potentially clueless individual users in concert with those of malicious content creators.
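
To make the dynamic concrete, the sketch below (a toy model in Python; the field names, weights, and numbers are illustrative assumptions, not any platform’s actual ranking code) shows how a purely engagement-driven ranker surfaces provocative false content ahead of accurate content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # ground truth; an engagement-only ranker never sees this

def engagement_score(post: Post) -> float:
    # Illustrative weights: comments and shares are treated as stronger
    # engagement signals than likes, so they contribute more to reach.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Accuracy plays no role in the ordering; engagement alone decides.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Measured report on vaccine trial results", 120, 4, 10, True),
    Post("OUTRAGEOUS claim 'they' don't want you to see!!!", 80, 60, 95, False),
]

for post in rank_feed(feed):
    print(engagement_score(post), post.text)
# The provocative false post scores 665 versus 170 for the accurate one,
# so it is shown first: visibility tracks engagement, not accuracy.
```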

The long-lasting effects of exposure to mis- and disinformation have been extensively researched (Sanderson, Farrell, and Ecker 2022). Even after users learn that a piece of information is false, their decision-making capabilities continue to be influenced by content they may have simply skimmed, or potentially shared and engaged with (Gordon et al. 2017). This alone should be a strong enough incentive to re-evaluate these platforms and the legislative limbo they currently operate within in the United States. What follows, while in no way comprehensive, is a deeper discussion of some of the biases and psychological processes at play in this context.

Biases

The amount of information returned by a single search, or the seemingly infinite feed of any social media platform, requires individuals to operate according to a set of heuristics to accelerate their decision-making and to form credibility judgments about the content they interact with online. People’s cognitive capabilities are limited, and the use of heuristics is often unconscious; while heuristics are necessary to navigate these platforms efficiently, when applied in the wrong context they have the potential to become biases (Ciampaglia 2018).

About three in ten US adults say they are almost constantly online (Perrin and Kumar 2022). This means their activity is constantly monitored, and their data stored and analyzed, feeding user profiles that influence what content they see, in what order, when, and sometimes even the form in which the same content appears. User profiling is one of the most profitable practices for social media and search engines, as it allows platforms to harness the power of confirmation bias: the tendency of people to prefer engaging with content they already agree with. The same search query does not yield the same results for different people; several factors influence search results, most prominently geographic location, and profiling is chief among them. The echo chamber effect is one product of profiling, with users interacting mostly with like-minded users and consuming content that will keep them engaged. This phenomenon can shape users’ content selection by suggesting material similar to what they are usually exposed to, considerably limiting the amount of information they consume on a given topic (Cinelli et al. 2021).
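
As an illustration of the mechanism (a deliberately simplified sketch; real profiling systems are proprietary and far more complex, and every vector and label here is invented), personalized re-ranking can be thought of as scoring each candidate result by its similarity to a profile accumulated from the user’s past engagement, which mechanically favors agreeable content:

```python
# Toy sketch of profile-based re-ranking: the profile is the mean of
# topic vectors the user previously engaged with, and candidate results
# are ordered by cosine similarity to it. All data here is invented.
import numpy as np

def build_profile(engaged_vectors: np.ndarray) -> np.ndarray:
    # A user profile as the average of past engagement vectors.
    return engaged_vectors.mean(axis=0)

def personalize(results: dict[str, np.ndarray], profile: np.ndarray) -> list[str]:
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(results, key=lambda r: cosine(results[r], profile), reverse=True)

# Hypothetical topic axes: [vaccine-skeptical, pro-public-health]
history = np.array([[0.9, 0.1], [0.8, 0.2], [0.95, 0.05]])
candidates = {
    "WHO guidance on vaccines": np.array([0.05, 0.95]),
    "Blog post doubting vaccine safety": np.array([0.9, 0.1]),
}

print(personalize(candidates, build_profile(history)))
# For this user the skeptical blog post ranks first: the same query,
# a different ordering, driven entirely by the stored profile.
```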

Among the most common shortcuts driving users’ judgment formation and decision-making is the expertise heuristic, which leads users to believe more strongly in content shared by an individual or entity they deem a subject-matter expert. Many social media platforms have systems in place to ensure accounts are attributed to their real-life counterparts (the blue checkmark being one of the main cues indicating a high level of credibility). In recent developments, however, many of these systems have allowed anyone to purchase a credibility mark and associate it with an account. These unclear identity-attribution dynamics mean that users may believe and act upon false information on the basis of the expertise heuristic, because said information appears to have been shared by a trustworthy entity (Sundar 2008).

Another well-known heuristic at play in this context is the similarity heuristic, which leads decision makers to judge the likelihood that an instance belongs to one category rather than another by the degree to which it resembles other instances in that category (Read and Grushka-Cockayne 2010). The similarity heuristic can easily translate into confirmation bias, which some if not most social media platforms and search engines seem to be optimized for. As discussed previously, users unconsciously act upon confirmation bias to select content to interact with, based on pre-existing beliefs or on a number of similar instances of content they may have seen before or simultaneously.

Group Decision-Making Dynamics

The social feedback provided by social media platforms (in the form of likes, shares, votes, and comments) is a prominent factor in determining users’ judgment of the content they come across (Pennycook and Rand 2021).

Group decision-making (GDM) dynamics influence the way people make sense of, and potentially apply in real life, what they encounter online. Salehi-Abari, Boutilier, and Larson (2019) have shown that information diffusion is biased toward individuals who share a similar political, ideological, and social leaning. In this context, the anonymity granted by social media cannot be ignored; the content we see is often influenced by malicious users who spread false information (disinformation) (Ureña et al. 2019), masquerading as members of specific communities or acting as individuals with similar interests engaging with popular content.

The GDM dynamics of social media and search engines can be considered unique in the quantity of information users interact with in order to reach a decision. The interaction between users and trust-based mechanisms is central to this process; as outlined above, trust placement and credibility judgments are often challenging on virtual platforms, yet users must perform them in a matter of seconds, given the speed at which information flows in these digital spaces.

A study by Flaxman, Goel, and Rao (2016) points out that social networks and search engines are associated with an increase in the mean ideological distance between individuals, alongside an increase in an individual’s exposure to material from the less preferred side of their ideological spectrum. The former effect can be explained by an empathetic social choice framework, according to which individuals make decisions about sharing and acting upon content based on their intrinsic preferences and on empathetic preferences grounded in the satisfaction of their ideologically proximal peers (Kovačević, Maljugić, and Taborosi 2022).
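
A minimal formalization may help here. The display below is a sketch in the spirit of Salehi-Abari, Boutilier, and Larson’s empathetic-utility model; the notation and the weighting scheme are illustrative assumptions, not the paper’s exact definitions:

```latex
% User i's overall utility for item x blends an intrinsic preference
% p_i(x) with the utilities of network neighbors N(i); the weights
% \alpha_i and w_{ij} are illustrative, with the w_{ij} summing to 1.
u_i(x) = \alpha_i \, p_i(x) + (1 - \alpha_i) \sum_{j \in N(i)} w_{ij} \, u_j(x),
\qquad \sum_{j \in N(i)} w_{ij} = 1
```

When the weights w_ij concentrate on ideologically proximal peers, content that satisfies like-minded neighbors dominates an individual’s choices, reproducing the biased diffusion described above.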

Online Mis-/Disinformation

Misinformation pervades the virtual spaces we inhabit. Trolls, fake accounts, social bots, and organized fake networks (Menczer 2022) have penetrated the substratum of most if not all tech platforms, in some cases creating mass illusions about hot topics in different fields, such as healthcare (COVID-19 vaccines), politics (presidential elections), knowledge creation (conspiratorial material), and institutional authority (the legitimacy of well-established institutions like the World Health Organization [WHO]).

Consistent with the above discussion of user psychology, a 2023 study by Hadlington, Harkin, Kuss, Newman, and Ryding demonstrated that social media networks served a social purpose during the COVID-19 pandemic, and users saw them as “necessary for social connection and interactions with information”. The different sources of information available on social media, and the opportunity to publicly share opinions about the sensitive subject of COVID-19, were factors of confusion for many participants in the study. Ultimately, the study revealed that while participants engaged in some fact-checking activities, biases and common-sense assumptions guided the consumption of information online during the pandemic (Hadlington et al. 2023).

What follows is a focus on how the social media platform Reddit (www.reddit.com) and the search engine Google (www.google.com) handled the spread of COVID-19-related mis- and disinformation in their respective digital domains.

COVID-19 Mis-/Disinformation on Reddit

Reddit is a social media platform categorized as a many-room social network (Zuckerman 2023). The platform has a set of base rules that apply to every subreddit, a sub-topic group in which conversations on the platform are organized; additionally, each subreddit has its own set of rules that apply to the conversations within it. The rules for a specific subreddit are stipulated and enforced by the subreddit’s moderators, that is, one or more Reddit users.

In 2021 Reddit, a platform over eight percent of Americans use to get daily news, banned the subreddit r/NoNewNormal and quarantined 54 others in an attempt to respond to groups spreading COVID-19 misinformation on the platform. Reddit administrators were pushed to make this decision after a number of subreddits went private to protest the administration’s unresponsiveness to the misinformation that had been spreading on the platform since the beginning of the pandemic. It is worth noting that r/NoNewNormal and the other subreddits were not sanctioned because of the misinformation they contained, but because they were found guilty of violating Reddit’s policy against bullying and threats of violence toward other users.

While Reddit’s attempt to mitigate the harm is commendable, the administrators’ actions do not constitute an effective way to combat misinformation. The banned subreddits were only de-platformed over a year after the beginning of the pandemic. Moreover, Reddit administrators did not condemn the spread of misinformation on the platform; they simply acted when other existing rules were broken. Is misinformation about COVID-19 less damaging than bullying and threats of violence toward other users? Reddit as a platform did not choose to cooperate with any governmental or non-governmental organization to promote the spread of factual information about COVID-19. Under the current state of social media legislation, a governmental authority could not have enforced such a measure even if it had wanted to. Instead, people without the necessary expertise in specific legal and ethical matters, in this case Reddit’s administrators, took the issue of COVID-19 misinformation into their own hands, moderating it on their own terms.

COVID-19 Mis-/Disinformation on Google

In March 2020 Google released a brand-new COVID-19 search experience in response to the increased use of the search engine and other tech platforms during the pandemic. “What is COVID-19” was the top searched topic in the US in February 2020 (“Finding Patterns in Our Need for Knowledge” 2020), with COVID-19-related queries topping search trends in other countries as far back as 2019.

The update completely restructured results for COVID-19-related search queries, which now contained authoritative information from health authorities, along with new data visualizations. Google’s efforts to promote the diffusion of factual information included links to pages from health authorities, a carousel of official health authorities’ social media accounts, and links to local civic organizations (Southern 2020).

Google’s efforts in this context, while not particularly quick and initially limited to the United States, had a definite impact on the spread of factual information from government authorities and the WHO around the world. This tacit cooperation between Google and governmental authorities in a time of crisis such as COVID-19 is unprecedented, and it sets a good precedent for future developments in regulating misinformation and identifying malicious actors within short time frames, that is, before misinformation spreads beyond the originating individuals’ range of action.

The Present of Online Misinformation Regulation

As of April 2023, the political discourse on misinformation on tech platforms seems as relevant as ever, with various political and civilian parties actively engaged. Regulation on the matter, however, is stuck decades behind and struggling to move forward in the characteristically polarized political climate of the United States.

In other contexts, Europe for example, progress is being made in passing legislation that targets misinformation spreading on social media and search engines, increases transparency demands on tech companies, and establishes a system of accountability for who should be held responsible for mis- and disinformation circulating online. What follows is an overview of two coexisting realities, the United States and Europe, and the fundamental differences that characterize their approaches to regulating online misinformation.

United States

Section 230

In 1996 the United States Congress passed the Telecommunications Act, which contains the Communications Decency Act. Section 230 of the Communications Decency Act is considered the most sensitive piece of legislation concerning big tech companies and the way social media and search engines currently operate. Section 230 states the following:

“no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (“FOREWORD: Section 230: A Hands-Off Approach to Big Tech Could Be Changing” 2022).

This single statement prevents tech companies from being held liable for malicious content shared on their platforms, obliging them to remove said content only once they become aware of it.

This legislation can be considered the most recent attempt at regulating the flow of information online in the United States. While some modifications to Section 230 were passed in 2018, specifically the “Fight Online Sex Trafficking Act” (“What Congress Is Doing on Content Moderation: The Two Parties Can’t Agree How to Reform Section 230” 2022) and the “Stop Enabling Sex Traffickers Act,” which hold platforms accountable for the sex-related ads they allow to circulate, no other federal regulation addressing the spread of other harmful content, specifically misinformation and disinformation, has been passed into law.

First Amendment

The First Amendment of the United States Constitution is often invoked in political debates on online misinformation regulation. The amendment protects freedom of speech, which underpins the country’s democratic institutions. It is hard, if not impossible, to regulate what people post without running into the roadblock the amendment represents. The only way to bypass First Amendment entanglements is to prove malicious actors’ intent to deceive, that is, whether they willfully and knowingly disseminated false claims.

The First Amendment prevents Congress and other governmental entities from passing any legislation that might limit freedom of expression. However, regulation of online misinformation could be framed in many different ways: protecting citizens’ agency and preventing harmful content from influencing life-or-death decisions are not actions prohibited by the Constitution, and it is arguably the duty of any government to protect citizens’ well-being.

Public Discourse

Several entities have been discussing the impact of mis- and disinformation circulating on social media and search engines on people’s lives. While this conversation has been ongoing since as far back as 2010 (“The Google Algorithm” 2010), as of April 2023 not much has been done at the federal level.

Many states, in particular California, Illinois, Texas, and Florida, have been drafting and in some cases passing legislation on social media, mostly concerning privacy violations. In 2022 California passed a controversial Medical Misinformation Law (Payerchin 2022) holding physicians accountable for spreading COVID-19 misinformation online. The law was ultimately blocked by a federal court for violating First Amendment protections and for having an “unconstitutionally vague” definition of misinformation (Myers 2023).

Many events, such as the January 6, 2021, Capitol riot, the Gonzalez v. Google court case, Twitter’s ban of former President Donald Trump’s account, and the misinformation spreading online during the COVID-19 pandemic between 2020 and 2022, have involved various governmental entities, from the Federal Communications Commission (FCC) and the Food and Drug Administration (FDA) to the US President, Donald Trump at the time.

FDA Commissioner Robert Califf recently declared that online misinformation, specifically that related to COVID-19 vaccines, significantly worsened the COVID-19 death toll in the United States. Califf asked that FDA-related claims be moderated differently from other misinformation claims, as they carry significant life-or-death implications (Brody 2023). While the preoccupation with health-related misinformation is well founded and effectively demonstrated by research on the impact of COVID-19-related claims, an FDA-only online misinformation policy would be not only ineffective, as there is no system in place to enforce potential restrictions on the operation of tech platforms, but also inconceivable at a time when online misinformation as a whole remains unregulated in the country.

Europe

While the US is seemingly stuck with polarized and vague discussions of online misinformation, the European Union (EU) has been leading the tech platform regulation wave for the past few years. With the launch of the General Data Protection Regulation (GDPR) in 2018, European countries had already set themselves apart from every other nation, being the first to address the problem of data privacy and hold big tech platforms accountable for collecting and sharing users’ data without their consent. More recently, the EU passed the Digital Services Act (DSA), again the most forward-looking regulation of tech platforms on the topic of online misinformation at the time. The regulation calls for increased transparency on “algorithmic risks” from 19 platforms, including Alphabet’s Google Maps, Google Play, Google Search, Google Shopping, and YouTube, Meta’s Facebook and Instagram, Amazon’s Marketplace, and Apple’s App Store (Chee 2023). The DSA was prepared under Margrethe Vestager, the current Executive Vice President of the European Commission for A Europe Fit for the Digital Age, who oversees the Commission’s work on media and information issues such as telecoms and IT. The DSA official webpage states the following:

“While there are many benefits of the digital transformation, there are also problems. A core concern is the trade and exchange of illegal goods, services and content online. Online services are also being misused by manipulative algorithmic systems to amplify the spread of disinformation, and for other harmful purposes. These challenges and the way platforms address them have a significant impact on fundamental rights online.

Despite a range of targeted, sector-specific interventions at EU level, there were still significant gaps and legal burdens to address in the beginning of the 2020s. For example some large platforms control important ecosystems in the digital economy. They have emerged as gatekeepers in digital markets, with the power to act as private rule-makers. These rules sometimes result in unfair conditions for businesses using these platforms and less choice for consumers” (“The Digital Services Act Package” 2023).

The very existence of the European Commission for A Europe Fit for the Digital Age is proof of the EU countries’ willingness to rely on scientific evidence to protect and enhance the well-being of their citizens. Despite the different political leanings of the 27 countries that make up the European Union (see the current Italian conservative right-wing Prime Minister Giorgia Meloni (“Giorgia Meloni,” n.d.); the current Chancellor of Germany, Olaf Scholz, affiliated with the Social Democratic Party (“Olaf Scholz,” n.d.); and the current centrist Prime Minister of France, Élisabeth Borne (“Élisabeth Borne,” n.d.)), the Commission was able to pass a set of legislation that applies to all EU countries, united by their democratic foundations and the fundamental duty to protect their citizens’ well-being.

The Future of Online Misinformation Regulation

The protection of citizens’ well-being can be considered one of the fundamental duties of any democratic government. Having established the impact of online misinformation on the public’s decision-making capabilities and overall well-being, and taking the aforementioned governmental duty as a given, democratic societies can no longer depend on companies doing the right thing if and when they wish, independently of any rules and democratic systems of accountability (Floridi 2021b).

Thriving on Dis- and Misinformation by Design

The algorithms of search engines and social media platforms have an unparalleled degree of power in shaping the infosphere, that is, the information users come across online. In an experiment conducted by Agudo and Matute (2021), the researchers showed that, in the context of political decisions and in the absence of additional knowledge about a political candidate, an algorithm was able to influence participants’ voting preferences through one simple explicit recommendation. Tech platforms constantly update their algorithms with the goal of maximizing user engagement and providing content that is “most relevant” to users, which does not equate to the most accurate or factual content. This constant updating seems to reinforce existing biases and exploit the power of the data that makes up users’ digital footprints on and across platforms. It has been pointed out that newsfeed algorithms mostly “amplify our worst instincts” (Meserole 2022), and the higher reach that non-factual posts tend to have (see the case of Canadian journalist Natasha Fatah (Meserole 2022)) is evidence of that.

A leading user experience design company, Nielsen Norman Group, corroborates the claim that big companies should take responsibility for users’ information-seeking behaviors, as these are directly shaped by design changes: new designs can trigger new behaviors and eye-tracking patterns, particularly when they are encountered frequently (as with Google). In this ever-evolving context, even a single design decision can be impactful in the long run (“How Search Engines Shape Gaze Patterns During Information Seeking,” n.d.).

Self-Regulation Pitfalls

Platforms like Google and Facebook have established a number of internal regulatory boards to address various scandals and political developments from the early 2010s to today. However, these attempts at self-regulation have clashed with the platforms’ business models. Self-regulation simply does not work, because of the business architecture of these companies and because of the structural changes that confronting the pressing ethical debates around misinformation and disinformation (just one of the many debates surrounding these platforms’ activities) would require (Floridi 2021a). Social media and search engines are free to use, but as businesses they generate revenue by selling user attention to brands, governments, and non-governmental organizations (Brown 2021). As identified by both the public and governmental authorities, it is this very business model that hinders the possibility of self-regulation. Relying on these platforms’ administrators to re-imagine the attention-driven model they operate on would be tantamount to asking them to go out of business.

Can the US government do something? — Yes

I believe the US can act upon these issues, given what is being done in Europe and the similarities in governmental structures: EU countries and the US are unquestionably capitalistic democracies, more or less progressive, at the pioneering fringes of technological development.

Necessary Steps

My analysis evidences a few critical issues that I believe need to be addressed in order to move forward with a robust, fundamentally democratic, nationwide effort to regulate tech platforms:

*Clearly define misinformation and disinformation under the umbrella term “fake news”: semantic looseness leaves room for interpretation and reduces the ability to enforce future policies. This must be an international discussion, and a venue like the UN would be appropriate.

*Educate government officials on technology: in various instances, Congress (“TikTok Hearing Proves Congress Still Ignorant About Social Media” 2023), the FCC (Coldewey 2020), and other governmental entities have shown they are not always aware of the context in which they are tasked to operate. An increase in media literacy is paramount for these institutions to appropriately address the diverse range of issues, both those they are aware of and those they may be unfamiliar with at this time.

*Define a system of accountability at the governmental level: at this point in time, individual states are acting on tech platform regulation because there is no clear system of accountability at the national and federal level. While the FCC looks like the most appropriate starting locus for the effort to regulate social media and search engines, this governmental body has declared it has “reason to not be part of this discussion”. This must change, and adequate expert support must be provided to the FCC so it can not only participate in the discussion of the harms of social media and search engine misinformation but also act upon the aforementioned issues.

*Require transparency on the tech companies’ side: following in the footsteps of the European DSA, which requires 19 platforms to disclose information on their algorithms and other content moderation policies, I believe the US can and should demand that companies disclose how and why their algorithms influence the information users have access to.

*Invest in a large-scale research effort into how misinformation spreads: technology is constantly evolving; therefore, to make accurate, timely, and informed decisions, an investment in multidisciplinary research is paramount.

*Appropriately communicate the newly established apparatus to citizens: developing and maintaining public trust is critical when passing new policies, particularly in a case like that of tech platforms (Hyland-Wood et al. 2021), which represent a significant part of how people in the United States interact with each other and the world.

*Distribute decision-making power: it is known that government oversight of media outlets is not well received by citizens of democratic countries, who see it as an easy way to silence opposition and criticism of the government, potentially undermining the freedom of speech so fundamental to any democratic political system. I suggest the government focus on regulating tech companies’ design decisions (both at the interface and algorithmic levels) rather than what people share on these platforms.

New Frontiers: Artificial Intelligence and Misinformation

Section 230, discussed above, was authored in 1996 by Reps. Ron Wyden and Chris Cox (Lima 2023). Both Cox and Wyden have declared that while discussions on the matter are still ongoing, AI chatbots will not enjoy the protections afforded by the law. Particularly interesting is Wyden’s statement: “Section 230 is about protecting users and sites for hosting and organizing users’ speech… (it) has nothing to do with protecting companies from the consequences of their own actions and products.” Where does this leave us?

AI Misinformation

The case of an AI-generated picture of Pope Francis in a white Balenciaga jacket has been regarded as “the first real mass-level AI misinformation case,” as it caused widespread confusion; shortly afterward came the case of AI-generated pictures of former President Donald Trump being arrested, which circulated online worldwide at remarkable speed. Instances of scammers using AI to replicate people’s voices and blackmail friends and family had already been reported in the US and Australia as of April 2023. More recent is the case of AI-generated news anchors in Venezuela, which are spreading misinformation about the country’s economy on social media (“These ‘News Anchors’ Are Created by AI and They’re Spreading Misinformation in Venezuela” 2023). On April 15, 2023, a German magazine published an “exclusive” interview with Michael Schumacher, only to reveal shortly afterward that it had used AI to generate Schumacher’s quotes (Salt 2023).

In the relatively short time since AI chatbots became extensively available to the public (ChatGPT was released as a prototype only on November 30, 2022 (“ChatGPT,” n.d.)), these tools have been used to generate and spread a significant amount of misinformation.

Blurring the Line Between Real and Fake

Artificial Intelligence has been described as “agency without intelligence” by influential researchers like Prof. Luciano Floridi, an expert in the ethics of information technology. At this time, AI tools increase users’ agency but do not possess semantic capital, the quality that allows human intelligence to construct meaning by tapping into past experiences and contextual cues (Floridi 2018). Unethical practices with respect to AI are not limited to its use; the design and development phases of these tools must be the starting point of any thorough ethical analysis of AI aimed at minimizing the negative impact of this technology on users’ real-life decisions and online experiences. Reactive legislation is sometimes the only option, but I believe the potential dangers of AI tools have been foreshadowed by the evidence available on the impact of other tech platforms (such as social media and search engines) on the public’s decision-making capabilities and overall well-being.

Acknowledgments

This work is very far from being complete, for more reasons than I can list.

– This is a field that continuously evolves. I had to stop collecting information as of April 22, 2023, and I am sure much will change by the time anyone gets to read this report.

– My analysis is multidisciplinary by nature. My breadth of knowledge across the different subjects involved in this research (Psychology, Philosophy of Information, Tech Policy, US Law, EU Law, Data Science, History of Computer Science…) varies widely, and I have found myself wondering whether I could do a good job in every domain.

– This report is the result of four months’ worth of work (of which one was spent exclusively writing and making sense of the knowledge I have acquired and recorded). I estimate that a report I’d be fully satisfied with would take at least double the time I had.

I want to thank the following people for their invaluable advice, help, and perspective: Prof. Arlen Moller, Prof. Carly Kocurek, Prof. Mar Hicks, Prof. Hannah Ringler, Prof. J.D. Trout, Prof. Sonja Petrovic, the librarians at the Library Learning Center, and the staff of the Writing Center at Illinois Institute of Technology.

My admiration for Prof. Luciano Floridi’s work informed part of this research.

Finally, thanks to my father, Ciro Petrone, a software engineer with a 35-year-long career, who inspired me to expand the scope of this research to include an analysis of the challenges posed by generative AI tools.

References

Agudo, Ujué, and Helena Matute. 2021. “The Influence of Algorithms on Political and Dating Decisions.” PLOS ONE 16 (4). https://doi.org/10.1371/journal.pone.0249454.
Brody, Ben. 2023. “‘Truth Is Losing the Battle’: FDA Commissioner on Grappling with a Wave of Health Misinformation.” STAT, March. https://www.statnews.com/2023/03/09/fda-covid19-vaccine-misinformation-health/.
Brown, Sara. 2021. The Case for New Social Media Business Models. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/case-new-social-media-business-models.
“ChatGPT.” n.d. Wikipedia. https://en.wikipedia.org/wiki/ChatGPT#:~:text=ChatGPT%20launched%20as%20a%20prototype,identified%20as%20a%20significant%20drawback.
Chee, Foo Yun. 2023. “EU Singles Out 19 Tech Giants for Online Content Rules.” Reuters, April. https://www.reuters.com/technology/google-amazon-meta-microsoft-15-others-subject-eu-content-rules-2023-04-25/.
Ciampaglia, Giovanni Luca. 2018. Biases Make People Vulnerable to Misinformation Spread by Social Media. Scientific American. https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/.
Cinelli, Matteo, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2021. “The Echo Chamber Effect on Social Media.” Proceedings of the National Academy of Sciences 118 (9). https://doi.org/10.1073/pnas.2023301118.
Coldewey, Devin. 2020. “FCC Commissioner Disparages Trump’s Social Media Order: ‘The Decision Is Ours Alone.’” TechCrunch, June. https://techcrunch.com/2020/06/17/fcc-commissioner-disparages-trumps-social-media-order-the-decision-is-ours-alone/.
“Élisabeth Borne.” n.d. Wikipedia. https://en.wikipedia.org/wiki/%C3%89lisabeth_Borne.
“Finding Patterns in Our Need for Knowledge.” 2020. In Searching COVID-19. https://searchingcovid19.com/.
Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320. https://doi.org/10.1093/poq/nfw006.
Floridi, Luciano. 2018. “Semantic Capital: Its Nature, Value, and Curation.” Philosophy & Technology 31 (4): 481–97. https://doi.org/10.1007/s13347-018-0335-1.
———. 2021a. “The End of an Era: From Self-Regulation to Hard Law for the Digital Industry.” Philosophy & Technology 34 (4): 619–22. https://doi.org/10.1007/s13347-021-00493-0.
———. 2021b. Trump, Parler, and Regulating the Infosphere as Our Commons. Philosophy & Technology. https://doi.org/10.1007/s13347-021-00446-7.
“FOREWORD: Section 230: A Hands-Off Approach to Big Tech Could Be Changing.” 2022. Congressional Digest 101 (6): 1–2.
“Giorgia Meloni.” n.d. Wikipedia. https://en.wikipedia.org/wiki/Giorgia_Meloni.
Gordon, Andrew, Jonathan C. W. Brooks, Susanne Quadflieg, Ullrich K. H. Ecker, and Stephan Lewandowsky. 2017. “Exploring the Neural Substrates of Misinformation Processing.” Neuropsychologia 106: 216–24. https://doi.org/10.1016/j.neuropsychologia.2017.10.003.
Hadlington, Lee, Lydia J. Harkin, Daria Kuss, Kristina Newman, and Francesca C. Ryding. 2023. “Perceptions of Fake News, Misinformation, and Disinformation Amid the COVID-19 Pandemic: A Qualitative Exploration.” Psychology of Popular Media 12 (1): 40–49. https://doi.org/10.1037/ppm0000387.
“How Search Engines Shape Gaze Patterns During Information Seeking.” n.d. Nielsen Norman Group. https://www.nngroup.com/articles/google-baidu-serp-comparison/.
Hyland-Wood, Bernadette, John Gardner, Julie Leask, and Ullrich K. Ecker. 2021. “Toward Effective Government Communication Strategies in the Era of Covid-19.” Humanities and Social Sciences Communications 8 (1). https://doi.org/10.1057/s41599-020-00701-w.
Kovačević, Aleksandra, Biljana Maljugić, and Srdana Taborosi. 2022. “THE ROLE OF SOCIAL MEDIA IN THE DECISION-MAKING PROCESS.” In Conference Proceedings: XII International Symposium Engineering Management and Competitiveness, 197–91. https://www.researchgate.net/publication/361232296_THE_ROLE_OF_SOCIAL_MEDIA_IN_THE_DECISION-MAKING_PROCESS.
Lima, Cristiano. 2023. “Ai Chatbots Won’t Enjoy Tech’s Legal Shield, Section 230 Authors Say.” The Washington Post, March. https://www.washingtonpost.com/politics/2023/03/17/ai-chatbots-wont-enjoy-techs-legal-shield-section-230-authors-say/.
Menczer, Filippo. 2022. “Facebook Whistleblower Frances Haugen Testified That the Company’s Algorithms Are Dangerous – Here’s How They Can Manipulate You.” The Conversation, December. https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420.
Meserole, Chris. 2022. “How Misinformation Spreads on Social Media-and What to Do about It.” Brookings. https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/.
“Misinformation and Disinformation.” n.d. American Psychological Association. https://www.apa.org/topics/journalism-facts/misinformation-disinformation.
Molina, Maria D., S. Shyam Sundar, Thai Le, and Dongwon Lee. 2019. “‘Fake News’ Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content.” American Behavioral Scientist 65 (2): 180–212. https://doi.org/10.1177/0002764219878224.
Myers, Steven Lee. 2023. “A Federal Court Blocks California’s New Medical Misinformation Law.” The New York Times, January. https://www.nytimes.com/2023/01/26/technology/federal-court-blocks-california-medical-misinformation-law.html.
“Olaf Scholz.” n.d. Wikipedia. https://en.wikipedia.org/wiki/Olaf_Scholz.
Payerchin, Richard. 2022. “California Enacts COVID-19 Misinformation Law for Physicians.” Medical Economics, October. https://www.medicaleconomics.com/view/california-enacts-covid-19-misinformation-law-for-physicians.
Pennycook, Gordon, and David G. Rand. 2021. “The Psychology of Fake News.” Trends in Cognitive Sciences 25 (5): 388–402. https://doi.org/10.1016/j.tics.2021.02.007.
Perrin, Andrew, and Madhu Kumar. 2022. “About Three-in-Ten U.S. Adults Say They Are ‘Almost Constantly’ Online.” Policy Commons, December. https://policycommons.net/artifacts/616696/about-three-in-ten-us/1597382/.
Read, Daniel, and Yael Grushka-Cockayne. 2010. “The Similarity Heuristic.” Journal of Behavioral Decision Making 24 (1): 23–46. https://doi.org/10.1002/bdm.679.
Salehi-Abari, Amirali, Craig Boutilier, and Kate Larson. 2019. “Empathetic Decision Making in Social Networks.” Artificial Intelligence 275: 174–203. https://doi.org/10.1016/j.artint.2019.05.004.
Salt, Nathan. 2023. “Michael Schumacher ‘Exclusive’ Interview Slammed After Fake AI Quotes Are Revealed.” Daily Mail Online, April. https://www.dailymail.co.uk/sport/formulaone/article-11989707/German-magazine-slammed-promoting-exclusive-interview-Michael-Schumacher.html.
Sanderson, Jasmyne A., Simon Farrell, and Ullrich K. Ecker. 2022. “Examining the Role of Information Integration in the Continued Influence Effect Using an Event Segmentation Approach.” PLOS ONE 17 (7). https://doi.org/10.1371/journal.pone.0271566.
“Social Media and News Fact Sheet.” 2022. Pew Research Center’s Journalism Project, September. https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/.
Southern, Matt G. 2020. “Google Launches COVID-19 Info Site & New Search Experience for Coronavirus Queries.” Search Engine Journal, March. https://www.searchenginejournal.com/google-launches-COVID-19-info-site-new-search-experience-for-coronavirus-queries/356350/.
Sundar, S. Shyam. 2008. “The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility.” In Digital Media, Youth, and Credibility, edited by Miriam J. Metzger and Andrew J. Flanagin, 73–100. https://doi.org/10.1162/dmal.9780262562324.073.
“The Digital Services Act Package.” 2023. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.
“The Google Algorithm.” 2010. The New York Times. https://www.nytimes.com/2010/07/15/opinion/15thu3.html?_r=3.
“These ‘News Anchors’ Are Created by AI and They’re Spreading Misinformation in Venezuela.” 2023. CNN, April. https://www.cnn.com/videos/business/2023/03/30/venezuela-ai-avatars-misinformation-jc-jg-orig.cnn-business.
“TikTok Hearing Proves Congress Still Ignorant About Social Media.” 2023. MSNBC, March. https://www.msnbc.com/the-reidout/reidout-blog/tiktok-hearing-proves-congress-still-ignorant-social-media-rcna76405.
Ureña, Raquel, Gang Kou, Yucheng Dong, Francisco Chiclana, and Enrique Herrera-Viedma. 2019. “A Review on Trust Propagation and Opinion Dynamics in Social Networks and Group Decision Making Frameworks.” Information Sciences 478: 461–75. https://doi.org/10.1016/j.ins.2018.11.037.
“What Congress Is Doing on Content Moderation: The Two Parties Can’t Agree How to Reform Section 230.” 2022. Congressional Digest 101 (6): 16–17. https://web-s-ebscohost-com.ezproxy.gl.iit.edu/ehost/detail/detail?vid=1&sid=f90c5518-1d16-40bb-82d0-41e8d6fd8510%40redis&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#AN=157059457&db=ulh.
Zuckerman, Ethan. 2023. “A Social Network Taxonomy.” New Public, February. https://newpublic.substack.com/p/a-social-network-taxonomy?ref=everything-in-moderation.