Policy Memo: Proposal to Facebook on Content Moderation Strategy Changes

Adriana Lamirande
7 min read · Nov 21, 2020


Originally written for The Fletcher School’s Cyber in the Civilian Sector: Threats & Upheavals course, October 2020.

Prompt: Suppose you are a policy and communications adviser at Facebook. You are asked to draft a memo proposing changes to the existing content moderation strategy that will improve the company’s ability to respond to flagged content more quickly and effectively and minimize user dissatisfaction and negative publicity. Identify 2–3 specific content moderation decisions that have caused public outcry and propose how the company could have handled them better and what steps they should take to improve their approach moving forward.

Last week, leading disinformation scholar and Shorenstein Center research director Joan Donovan was quoted in the New York Times: “Policies are a guide for action, but the platforms are not standing behind their policies. They are merely reacting to public pressure and therefore will be susceptible to politician influence for some time to come.” This sentiment encapsulates the onslaught of criticism we at Facebook have dealt with since November 2016, and it compels me to highlight two specific instances where we missed the mark, examine what we can learn from them, and propose how we can adapt our content moderation decisions to avoid similar blowback. The first case study concerns our failure to respond promptly to reports that right-wing militia groups were organizing and encouraging violence in private Groups ahead of the Kenosha shooting. I will present tangible policy changes we must make to avoid further scrutiny around incitement to violence voiced on Facebook and perpetrated offline shortly thereafter.

On the intensifying disinformation front ahead of the 2020 U.S. elections, the second case study centers on our ill-conceived delay in banning QAnon accounts and content. Our final call came months after an initial but nominal effort, and this miscalculation has cost us: we are accused of enabling the spread of dangerous conspiracies and saw the departure of many employees as a result. Concerns that hate speech and disinformation are profitable for Facebook, and thus not properly stymied, should ring alarm bells for C-level leadership, Trust & Safety teams, and content moderation managers. We have the opportunity to answer the call for action from the public, policymakers, and civil society, and I hope to see this realized. Our company faces an unprecedented moment in history, and we can, and should, try to do the right thing by addressing the specific ills of our platform features, strengthening our Community Guidelines to tamp down on incitement to violence and disinformation, and regenerating governing values we can be proud of.

The debut of the “Real Facebook Oversight Board” was spurred by, among other issues, our failure to act swiftly to curb our role in facilitating the organization of militias. The “Protecting Americans from Dangerous Algorithms Act,” a Democratic bill introduced this week, proposes amending Section 230 on the grounds that algorithms used by social media platforms, namely Facebook, facilitate extremist violence that deprives U.S. citizens of constitutional rights. A related lawsuit specifically accuses Facebook of abetting violence in Kenosha by “empowering right-wing militias to inflict extreme violence” and depriving plaintiffs of civil rights. Gizmodo reported that a Facebook event started by the Kenosha Guard issued an explicit “call to arms” and encouraged vigilante violence at protests against racial injustice (one user stated: “I fully plan to kill looters and rioters tonight”). The event was flagged by users 455 times but never taken down. Deemed an “operational mistake” by CEO Mark Zuckerberg, this failure cost the victims of the Kenosha shooting their lives.

Clearly, the guardrails we have in place are neither high nor strong enough, and instead of constantly dodging claims of content mishandling, it is time we implemented drastic harm reduction measures. Regarding Groups, we have repeatedly been criticized for making commitments concerning white nationalist organizations that we did not follow through on: notably, a March 2019 public statement that we would ban all related groups, followed by a wait until June of that year to remove nearly 200 accounts with white supremacist ties. An August 2020 policy restricting the activities of “organizations and movements that have demonstrated significant risks to public safety,” including “US-based militia organizations,” was criticized as coming too late and leaving many problematic pages up. As such, I recommend Facebook disable the ability for users to create private and public Groups and Pages altogether. Banning all groups rather than selectively removing a few will ensure parity between right and left, quell accusations of bias and censorship, and circumvent our supposed role in U.S. political polarization through an objectively non-discriminatory and universal policy tool.

Next, it is imperative that authorization around our Events feature become more restrictive. While it is impossible to monitor all Events for violations, we must prioritize the removal of those that display clear calls to action that could cause real-world harm. To handle this gargantuan task, we propose creating a dedicated in-house moderation management team (rather than outsourcing to third-party content moderation staffing agencies) that will scan upcoming U.S.-based events. A good first step toward narrowing search queries and surfacing the batch of highest concern is to mirror the Trending Topics (and related hashtags) that appear on our News Feed. Additionally, creating a library of key search terms, including tags for BLM counterprotests, fringe groups with white supremacist ties, and anti-mask rallies, will further sharpen enforcement and help identify where event activity veers from free speech into incitement to violence. If egregious behavior is identified, a one-time notification can be sent to the user, and if reported posts are not deleted or edited to remove hate speech or incitement to violence within 24 hours, the event will be deleted within 48 hours; the sketch below illustrates this escalation logic.
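For illustration only, here is a minimal sketch of that escalation timeline, assuming a simple flag-notify-delete workflow. The names (FlaggedEvent, next_action) are hypothetical, and only the 24/48-hour thresholds come from the proposal above; nothing here reflects an existing Facebook system or API.

```python
# Hypothetical sketch of the proposed notify-then-remove escalation for Events.
# Names and structure are illustrative only; the 24h/48h thresholds mirror the memo.
from dataclasses import dataclass
from datetime import datetime, timedelta

NOTIFY_GRACE = timedelta(hours=24)     # window for the organizer to remove flagged posts
DELETE_DEADLINE = timedelta(hours=48)  # event is deleted if violations persist

@dataclass
class FlaggedEvent:
    event_id: str
    flagged_at: datetime               # when moderators confirmed a violation
    organizer_notified: bool = False
    violation_resolved: bool = False

def next_action(event: FlaggedEvent, now: datetime) -> str:
    """Return the moderation step the proposal calls for at time `now`."""
    if event.violation_resolved:
        return "no_action"             # organizer removed the offending content
    if not event.organizer_notified:
        return "send_one_time_notification"
    elapsed = now - event.flagged_at
    if elapsed >= DELETE_DEADLINE:
        return "delete_event"
    if elapsed >= NOTIFY_GRACE:
        return "queue_for_deletion"    # grace period expired; schedule removal
    return "await_organizer_response"

# Example: an event flagged 30 hours ago whose organizer ignored the notice
if __name__ == "__main__":
    flagged = FlaggedEvent("evt_001", datetime.utcnow() - timedelta(hours=30),
                           organizer_notified=True)
    print(next_action(flagged, datetime.utcnow()))  # -> "queue_for_deletion"
```

The point of the sketch is simply that the escalation is deterministic and auditable: every flagged event has exactly one prescribed next step at any moment, which makes the policy easier to enforce consistently and to explain publicly.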

Our approach to disinformation must also be substantially revised, particularly around political content or posts with political connotations, by expanding the scale at which we fact-check and directly confront misinformation. According to German Marshall Fund research, engagement with misinformation on Facebook has nearly tripled since 2016, and content from websites that “fail to gather and present information responsibly” has increased by 293%. It has become clear that we operate in a contested epistemic environment, in which Americans on our platform have the right to choose, and will choose, what to believe about current events and civic matters. Trust in mainstream media for some reflects trust in smaller niche outlets for others. As far as Facebook is concerned, we have inadvertently become a top source for news, with Pew Research finding that 43% of Americans get their news on social media, though many express concerns about its accuracy. Knowing this, the human curators behind our News tab initiative should actively promote reputable sources and factual content, reviewing stories with “news judgement” as a traditional editor would. Fact-checking remains crucial in newsrooms, and failing to do so (as in the case of myriad QAnon posts from both everyday users and public figures) lends the information an unearned air of truth, setting a dangerous precedent seen in the meteoric rise of the conspiracy movement.

Our initial crackdown on QAnon’s popularity was not far-reaching enough, as members adopted tactics such as renaming groups and toning down messaging to make it seem less jarring in order to evade our community violation rules. We must adopt a more nimble strategy around such fringe groups and go beyond simply policing activity that indicates calls for violence. Capturing how bad actors shift dissemination tactics to circumvent new barriers should also be top of mind. Our final call to take down nearly 100 groups and pages, many of which held thousands of followers, demonstrates we have the power to slow the spread of disinformation, but our delay empowered members by consolidating a dedicated follower base through Groups and Pages, lending an air of legitimacy to their own “research” and propaganda, and supporting the movement’s monetization through the display of related advertisements. We also inadvertently emboldened affiliated political candidates like avowed “truther” and Congressional candidate Marjorie Taylor Greene, who leveraged her hefty following to posit that “Q is someone who very much loves his country.” In the longer term, we should look to our consistent removal of recruitment propaganda for ISIS and other Islamist extremist groups, and deploy similarly aggressive methods against fringe conspiracy and white supremacist groups.

Finally, doubling down on accurate labelling of reputable news stories (featuring perspectives from across the aisle), offering geolocalized context, and limiting ads with questionable or ambiguous political affiliation can help bolster a healthier and more diverse forum for online political, social and cultural debate. To this end, there are certain editorial guidelines we must systematically abide by, and user-generated content that goes viral or is supplied by an influencer must be stringently monitored and checked for accuracy, given the power that millions of followers and empty conspiracy theories can have over political discourse and the democratic process. Such updates must also be transparent and easily accessible to users. Comprehensive political speech policies that strengthen an information environment in which we are a major player will not only demonstrate our commitment to acting ahead of ad hoc platform law, but also serve as a shield when the inevitable charges of bias arise.

Facebook remains the face of the Stop Hate for Profit campaign. We don’t want this to be our legacy. To combat this narrative, we must abandon what has become an absolutist stance on free speech, untether from the many features that force us to act as an arbiter of speech and truth in the online public square, and recognize the potential of a techno-solution while we await inevitable regulation. Recentering on fostering healthy political debate online, protecting the digital rights of marginalized communities, and connecting users with crucial civic information from reputable and verifiable news sources should define our mission today.
