Facebook has made itself a front-runner in the race to bring accountability, ethics and human rights to social media platforms with its twin values and oversight board announcements of the last two weeks. If 2018 was the year of discovery—when we started to realize the extent of how social media platforms are being used to further violence and conflict—then 2019 has been the year of trying to figure out what we should do about it. And during this time Facebook has endured the brunt of our collective criticism for its hesitance in joining that conversation, let alone leading it.
That arguably changed this month with two important developments. First, the social media giant announced an update to the values that form the basis of its Community Standards. The updated values now explicitly incorporate international human rights standards as a guide to making content moderation decisions. Facebook subsequently unveiled the founding charter of an independent oversight board, its so-called content moderation “Supreme Court.”
In the wake of these announcements, stakeholders from across the spectrum—from free expression activists to mainstream bloggers—have weighed in with a litany of critiques. Some point out that the not-yet-formed board could imminently implode, à la Google’s attempt at an ethics board; others dismiss the move as insufficient given the scope of the challenges Facebook faces; and some worry that the company is effectively outsourcing its own human rights and other corporate social responsibilities.
After spending the better part of the last two years calling for greater oversight and accountability in the tech industry, we at JustPeace Labs are optimistic about the Facebook updates. We are convinced that this new development is a step in the right direction. At the least, it represents incremental change toward the company’s acceptance of the responsibilities to which it is beholden, namely to respect the human rights of all its users and of the global citizens affected by its products.
The Oversight Board could have far-reaching power. Its decisions “will shape the definition of acceptable discourse on the world’s largest social network. They’ll define what sort of speech constitutes hatred and violence and will have a say in whether or not it’s permitted to spread.” So it’s important that the Oversight Board gets it right, and that it takes a truly inclusive and global approach to its decisions and policy recommendations. And part of that means looking beyond the Facebook Community Standards.
With the almost-contemporaneous announcement of its values update, we know from Facebook itself that it will now be looking to international human rights standards to guide it in difficult content moderation calls. Human rights is a vital framework for assessing and directing corporate behavior in the area of content moderation, but it is only one of several relevant frameworks. If it is relied on exclusively, at the expense of a broader, more holistic view that also incorporates ethical duties (beyond those codified in legal or human rights instruments) and, perhaps most importantly, a conflict sensitivity framework, the result will likely be insufficient to secure real, meaningful change.
Need for Standards
And even that—the onerous journey of determining the best framework for reviewing potentially dangerous speech—is only the first step. Frameworks provide the lens through which the problem is viewed and perhaps (at best) a method for its resolution, but they do not provide a set of standards for review, the actual rules that will govern the company’s decision making. Such standards are still missing in this conversation, unless you count Facebook’s one-size-fits-all community standards. But because those standards reference neither human rights, conflict sensitivity, nor ethics, we simply cannot consider them sufficient for the scale of the problem at hand. We believe that the best standards can only be found through a consultative and iterative process involving not just industry (including Facebook’s biggest competitors) but all relevant stakeholders.
Likely only time will tell whether this commitment delivers on all it has promised, but it doesn’t seem like mere public relations window dressing, nor just a run-of-the-mill greenwashing attempt, although the company has been accused of both. Facebook has spent considerable time and expense on intensive, global consultations about the Oversight Board. It has also engaged civil society groups in a Civil Rights audit, and has made important updates to its content moderation tools to help improve the way it implements its policies.
However you view this announcement from Facebook—a revolutionary assumption of responsibility by the world’s most powerful social media company or a desperate attempt to rehabilitate a tarnished reputation—we at JustPeace Labs view it as an important first step in solving the seemingly intractable problem of how unchecked social media content can catalyze mass human rights violations and inflame violent conflict. It is also an indication that the time is now right for the collaborative, multi-stakeholder process that we’ve been calling for.
Facebook has already jumpstarted this effort with its recent announcements. It is creating the Oversight Board because, as its CEO Mark Zuckerberg has said, private companies should not “be making so many important decisions about speech on our own.” Facebook vice president Nick Clegg claims that the Oversight Board is “a model for our industry.”
All we need now is for other members of industry, along with academics and civil society, to follow suit and join Facebook in its endeavor so that all relevant stakeholders can come together to collaboratively and collectively fill in the gaps in governing standards and learn from one another.
Imagine that Facebook’s independent oversight board wasn’t just Facebook’s. Imagine that it applied not an exclusively Facebook-focused set of rules, but a broad framework for protecting human rights and fragile democracies, arrived at through a consultative and iterative process involving experts in the fields of business and human rights, peace and conflict and ethical tech from across industries and sectors. Imagine that we had open, transparent processes about how the industry as a whole was addressing dangerous social media content.
Facebook has clearly subjected itself to a lengthy and expensive iterative process in the development and launch of its Oversight Board and its values update. The company is now undeniably leading the pack of tech companies that have proven slow to make meaningful policy changes in an industry that prides itself on moving at the speed of light. So why would Facebook stop short of working toward industry-wide standards that would equally constrain its competitors and spread both the responsibility for and the risk of unforeseen consequences?