What is the greatest challenge prompted by social networking? Why?

Adriana Lamirande
6 min read · Apr 9, 2020

Originally written for Harvard Kennedy School’s Digital Platforms, Journalism & Information course, February 2020.

The advent of the internet and digital media has introduced a new paradigm: a richer stream of information that travels quickly across time and space, in which the role of traditional news media has been casually co-opted by major social media platforms. An accompanying lack of constraints on user-generated content, and its potential for monetization through algorithmic distribution models, have ignited a crucial debate around how to tackle flourishing disinformation and hate speech campaigns without infringing upon freedom of speech.

While online communication networks have positively expanded the court of public opinion and equipped us with myriad information sources to better understand our world, they have also perversely transformed a news landscape we used to trust. The key qualm with this “changing normal” is the absence of the knowledge workers through whom information was once processed before it reached audiences. That structural collapse has left a gaping hole that untrained and unsupervised online conspirators and extremist “citizen journalists” are all too ready to fill. A participatory movement defined mainly by racism, sexism and alt-right philosophies is seemingly satiating a wide swath of users alienated by coastal elites’ ownership of information, and providing those seeking community in the face of social fragmentation their own version of a safe space. Fringe ideologues have started a firestorm in our hyper-partisan society, forcing tech companies, government regulators and civil society to confront this new reality and ponder whether there is anything they can individually or collectively do about it.

YouTube presents one “polluted” news stream, where celebrity influencer-style authenticity and relatability are highly valued, content is self-reinforced by shared values and commentary (never mind algorithmic recommendations), and social ties are strong. Should we, and can we, close this reactionary Pandora’s box? Can we prove that YouTube’s business model enables the same algorithms of oppression its parent company has been accused of building? While there are certainly power and accountability mechanisms worth investigating to curb “fake news” and hate speech that incites violence against vulnerable and marginalized groups, they remain out of reach due to amorphous Community Guidelines and Terms of Service and a historically uneven response from Big Tech to reported cases. Regulators, too, have struggled to identify feasible oversight rules for platforms to abide by, alongside incentives to monitor content more carefully in a manner that doesn’t violate the First Amendment.

In Albright’s “Untrue-Tube,” we are introduced to the idea that YouTube’s search and recommendation algorithms surface conspiratorial content, enabling the perpetuation of rumors and disinformation and allowing content creators to financially benefit from harmful material. His concluding policy proposals suggest adding optional filters and human moderators to protect children and others from shocking and vile content, though he doesn’t go far enough in mapping how the existing backend technology operates, or in poking holes in it given its lack of efficacy thus far.

Additionally, his suggestion to hire more moderators could be countered by recent reports of skyrocketing PTSD rates among these workers, and by last month’s disclosure that YouTube moderators were forced, under dubious circumstances, to sign a document acknowledging the role’s adverse mental health impact. Could such reports hint that the moderator pipeline will dry up? Hard to say. While Albright briefly touches on the fact that the algorithmic optimization process makes it harder to counter disinformation campaigns with factual content, he doesn’t elaborate on why, nor on how a backend fix could help curb their depth and breadth. So it remains unclear whether moderation could be automated more effectively than it is today. Add the challenge of combing through video and images rather than text, and the situation feels both overwhelming and pretty hopeless.

Surveying today’s “saturated attention economy” and digging deeper into the type of content, and the profile of creators, that thrive on YouTube, Lewis touches upon the most nefarious aspect of their stark rejection of progressive politics, feminism and the social justice warrior ethos: the abandonment of the integrity, objectivity and fact-checking systems that define news institutions. In “Twitter and Tear Gas,” Zeynep Tufekci posits the potential of social media platforms to unite dissidents with shared grievances through digital technologies and communications networks, and to empower citizen journalists on the ground. Often, these individual organizers are filling a gap in the reporting of actual events left open by state-sponsored media sources and propaganda.

As Tufekci states, computer networks present a “reconfigured logic of how and where we can interact, with whom, and at what scale and visibility.” From the right to petition one’s government, to freedom of the press and of assembly, they put forth a new approach to building democracy. Accessible to anyone with a WiFi connection, modern communication favors sovereign individuals over sovereign governments by transferring information away from elites and empowering everyone on the network to both originate and receive messages, affording free association that brings both autonomy and influence.

Flipped on its head, one could argue that AIN-affiliated YouTube stars are performing similarly “noble” work to that of activists by calling out authorities who conceal the “truth” about such issues as economic recession trends and employment statistics, elections rigged to favor certain subgroups, and the peddling of PC culture and multiculturalism by elites bent on brainwashing and subjugating the masses. As a demographic with an amorphous understanding of what constitutes news, and for whom being “very online” is a key mode of peer interaction, the audiences of these channels either innocuously or intentionally feed the cycle, sharing and reposting content from a person they feel they know. As viewers, they can themselves be led to extremism. Another parallel: the outrage alt-right account owners would voice if “de-platformed” may not be dissimilar from the heated (and warranted) reactions of activists whose voices are shut out, and whose arrests are enabled, when social networking technology is leveraged by authoritarian surveillance regimes.

A final underlying and critical consideration is who can and should define what falls under the categories of hate speech, fake news, disinformation and outright violent provocation. For example, the notorious online figure Black Pigeon, profiled by Exley, has mastered an innocuous tone, measured language and a subtle toeing of the line on race and gender compared to his more overtly countercultural peers, and may easily escape scrutiny if never flagged or labelled under any of these banners. We don’t know, because as things stand today there is no single definition or clear set of parameters within which all social media sites operate; instead, they make up ad hoc rules as they go.

Given the status quo, there is no guarantee their content moderation and removal choices aren’t further engendering discriminatory behavior and stifling certain political views over others, though we can assume many such mistakes are made, and often. We must also keep in mind Facebook and YouTube’s motivation for keeping their accountability mechanisms vague: audience-driven monetization incentives that spell cold, hard cash. And lots of it. If their coffers are filled by rewarding “bad actors,” their attitude seems to boil down to: so be it. That stance is getting stickier, however, as Zuckerberg and others’ recent calls for more stringent federal regulation illuminate the difficulties of being such arbiters, and the PR and legislative headaches that can follow.

Without active oversight from regulators and civil society, or a transparency playbook all parties can reference to better understand, and perhaps improve, current best practices, we can expect further battles over these issues on the horizon. As regulatory frameworks take shape abroad, pressure is growing for American legislators to take such discussions more seriously as tech behemoths gain more power and capital, especially amidst the chaotic political climate. In the face of continued inaction, Facebook has recently taken matters into its own hands by creating an internal “Supreme Court,” which faces struggles ahead in its quest to prove legitimacy, stay nimble enough to tackle hate speech and problematic content before it goes viral, and weigh free speech principles that encompass vastly different standards abroad and at home. At this point, all we can do is wait and see.

