Policy Memo: We Need Cross-Sectoral Collaboration To Tackle Infodemics
Originally written for The Fletcher School’s Cyber in the Civilian Sector: Threats & Upheavals course, November 2020.
Prompt: Compare and contrast the online disinformation environment of the 2016 and 2020 U.S. presidential elections. Describe (1) the different factors contributing to disinformation during each election, (2) how regulators and content platforms changed their approaches to disinformation in the aftermath of the 2016 election and what impact, if any, those changes had in 2020, and (3) what lessons should be drawn from the 2020 election about online disinformation and how regulators and/or tech companies should act on those lessons.
Social media’s U.S. election “disinfodemic” was spawned in 2016, a year that seemed to present online disinformation’s worst-case scenarios. But as we’ve seen, remnants are still with us today. A noteworthy differentiator is the source of the threats: 2020’s were distinctly homegrown, whereas 2016 was dominated by influence operations exported by foreign adversaries. As former FBI agent and disinformation expert Clint Watts put it: “Nothing that Russia or Iran or China could say is anywhere near as wild as what the president is saying. We cannot say this time that Russia, Iran or China interfered in a significant way. They don’t need to write fake news this time — we’re making plenty of fake news of our own.”
Rather than deploying troll farms through its Internet Research Agency, hacking DNC emails and fueling a disruptive narrative around Clinton’s emails, or targeting Black Americans with voter-suppression efforts encouraging them to stay home, Russian operatives in 2020 were largely relegated to reposting screenshots of President Trump’s tweets claiming election fraud and declaring victory. State-run media operation RT reported that “Trump calls results ‘big WIN’ & accuses opponents of ‘trying to STEAL’ election, gets ‘misleading’ label from Twitter.”
The rise of “fake news,” brought forth by the Trump campaign and sown through infamous tweets since his first presidential bid, prompted the widespread sharing of false and hyper-partisan news stories. This fueled the polarization that has come to define our political environment and pushed platforms to reevaluate the algorithmic amplification enabling the spread of divisive and deceptive content, a stained legacy they continue to grapple with today.
Russian bots were found to have produced and disseminated troves of disinformation in 2016, going so far as to recruit Americans to pen and peddle incorrect and distorted claims. Digital platforms were simply used as designed by bad actors, who didn’t need to game their systems to go viral quickly. Ahead of this year’s elections, tech companies found that success in clamping down on bots did not necessarily translate into success against a new type of inauthentic behavior coming from U.S.-based trolls.
The pandemic-driven shift to mail-in ballots also opened the door to opportunities to sow disinformation offline as well as online: deceptive robocalls, faulty voting guides and misleading text messages attempted to dissuade voters from going to the polls or spread false rumors about Democratic candidate Joe Biden. Chris Krebs, former director of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, told reporters that making sure Americans had the facts was “one of the best tactics and techniques we have right now to counteract these disinformation operations and influence ops.” A Pew Research study found that the politicization of mail-in ballots and the question of their legitimacy, an unfounded claim heavily pushed by Trump, left Americans divided, with almost half of Republicans viewing mail-in ballot fraud as a significant concern.
Informed by 2016 blunders and the non-stop accusations of content-moderation mismanagement since, social media platforms braced for 2020 election mishaps with updated policies, expert new hires and a seemingly genuine commitment to fact-check their “informationscapes.” Pre-election reporting centered on efforts to add friction to their products and services, namely their highly coveted and profitable ad businesses, with Facebook putting in place more stringent approval processes for political advertisers and blocking new political ads in the period around election day. Twitter, for its part, had already banned political ads outright the previous year, and it disabled sharing features on tweets containing misleading information about election results. Facebook later followed suit on advertising and suspended political ads indefinitely.
Anticipating that election results could be contested or inconclusive for days, Facebook noted that it would pin a message to the top of Facebook and Instagram feeds telling users that vote counting was still underway, then reflect consensus results from Reuters, the Associated Press, and six independent decision desks at major media outlets once they had been officially reported. In the event a candidate declared victory before authoritative results were in, the company confirmed it would not remove those statements but would pair them with a notice that votes were still being counted and that no official result had yet been released. Twitter stated it would rely on reports from ABC News, AP, CNN, CBS News, Decision Desk HQ, Fox News and NBC News, and similarly explained it would remove or attach warning labels to such premature claims. Labels on some of these posts also redirected users to the platforms’ voting information centers for details.
A quick count by CNN reporter Brian Fung found that “50% of Trump’s tweets from the last 24 hours (original tweets, not counting RTs) have received a contextual label from Twitter.” Beyond rapid-response labeling, Twitter also reduced moderated tweets’ ability to spread by requiring users to click “View” before seeing them and by limiting replies, and it even suspended an account promoting a deepfake ballot-burning video that Eric Trump had quote-tweeted. YouTube was arguably the most lax: it left up a pro-Trump OAN video proclaiming his victory, finding that it did not violate community standards but labeling and demonetizing it nevertheless, even though its policy explicitly bans content “encouraging others to interfere with democratic processes, such as obstructing or interrupting voting procedures.”
In terms of more holistic changes, both platforms hired additional personnel to assist with the heavy lift, with Facebook dubbing its operation the Elections Operations Center, composed of threat intelligence, data science, engineering and legal leaders serving as traffic control for problematic content. Additionally, the platform developed partnerships with independent fact-checkers, sidestepping the charge of acting as “arbiters of truth” by relying on outside support to identify and review potential misinformation, which enabled it to take targeted action as cases arose. In tandem, it debuted a “virality circuit-breaker” to give fact-checkers enough time to evaluate suspicious stories. The company also demoted News Feed content containing election-related misinformation, making it less visible, and limited the distribution of election-related Live streams.
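To make that mechanism concrete, the sketch below shows one way a virality circuit-breaker could work: posts that spread faster than a set rate are paused from algorithmic amplification and queued for human review. This is a minimal illustration, not Facebook’s actual system; the share-rate threshold, class names and review flow are all hypothetical assumptions.

```python
from dataclasses import dataclass, field
from collections import deque
import time

# Hypothetical threshold: a post spreading faster than this loses algorithmic
# amplification until a fact-checker reviews it (illustrative value only).
SHARE_RATE_THRESHOLD = 500  # shares per hour


@dataclass
class Post:
    post_id: str
    share_timestamps: list = field(default_factory=list)
    amplification_enabled: bool = True

    def shares_in_last_hour(self) -> int:
        cutoff = time.time() - 3600
        return sum(1 for t in self.share_timestamps if t >= cutoff)


class ViralityCircuitBreaker:
    """Sketch: fast-spreading posts are paused from algorithmic amplification
    and queued for human fact-checking before they can keep snowballing."""

    def __init__(self) -> None:
        self.review_queue: deque = deque()

    def record_share(self, post: Post) -> None:
        post.share_timestamps.append(time.time())
        if post.amplification_enabled and post.shares_in_last_hour() > SHARE_RATE_THRESHOLD:
            post.amplification_enabled = False  # trip the breaker
            self.review_queue.append(post)      # hold for fact-checker review

    def resolve_review(self, post: Post, verdict: str) -> None:
        # Fact-checkers decide the outcome: cleared posts regain distribution,
        # while false or misleading ones stay demoted (and would get a label).
        if verdict == "accurate":
            post.amplification_enabled = True
```

The point of this design is that review happens while the post is throttled, so fact-checkers are not racing an algorithm that is actively amplifying the content they are trying to assess.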
Some digital rights experts affirm that companies should consider making some of these changes permanent, which leads us to what lessons we can draw from this election cycle on social media, and what regulators and private companies still need to learn to effectively protect future elections. Vanita Gupta, chief executive of the Leadership Conference on Civil and Human Rights, cautioned that “these are important steps but we’re going to be vigilant about how these play out in real time.”
It is clear that social media platforms will continue to struggle in the fight against disinformation, and they must keep striving to protect the integrity of the election process and the democratic deliberation it affords. Some experts concede that the rush to install emergency processes this time around may not be sustainable or scalable, and that while such precautions prevented catastrophic collateral damage, it remains an open question whether this hypervigilance will last. Regulation like the bipartisan Honest Ads Act is a good start: it subjects paid political ads to further scrutiny, specifically calling for closer checks that they meet codes of conduct and for increased transparency through the maintenance of public databases of all political advertisements.
To this point, continuing to closely monitor for and identify domestic and foreign trolls who pay to target certain demographics will also be crucial. From a civil society perspective, interrogating the overarching business model and probing the engagement-driven algorithms behind platforms’ feeds can further help illuminate problem areas to address when it comes to egregiously false, conspiratorial and hyper-partisan content. Providing contextual cues around questionable content is an equally important task for companies, while media literacy efforts take shape on the policy end.
UNESCO has published policy and strategy guidelines that propose governance processes and harmonize critical needs around information access and interpretation, interactions with the right to free expression, and solutions that enable users to gain the critical competencies needed to navigate the 21st-century communication and information ecosystem. The burden of combating disinformation often falls on unsuspecting individuals, so thoughtful action to arm them with the knowledge and tools to discern truth from falsehood, distinguish reputable news sources from disingenuous blogs and accounts, and diversify their media diets is one way to begin the public service effort.
In the private sector, Twitter recently began adding fact-checking and context to its trending topics, giving viewers more information about why a topic has become the subject of widespread conversation; it plans to launch a similar feature on its U.S. For You page this month. Tools like the “Spot the Troll” quiz have also been created to raise awareness about such ills. Policymakers can empower their constituents with critical media and information literacy through partnerships with public schools and libraries, and through sponsorships from news publishers, who can offer subject-matter expertise on disinformation and on local and national elections, in order to revitalize our fragile information environment.
Cross-sectoral collaboration will be essential to tackle this existential crisis and rebalance what information gets disseminated online and how. Social media platforms did not invent disinformation and manipulation campaigns, but they have the power to adjust the infrastructure that favors and enables their virality. Policymakers should hold them to that.