This is the first in a series of blog posts highlighting how ICT platforms can foment violent conflict. JustPeace Labs is working to reverse this trend and, together with industry leaders, create ethical standards and conflict sensitive approaches to doing business.
When Facebook is the Internet
Software companies working in emerging markets face significant challenges, especially when those markets involve armed conflict, a developing economy, or a history of human rights abuses. These complex environments raise particular ethical and security challenges that companies need to address to maintain their presence and protect their interests.
Take, for example, Facebook. With the recent news about Cambridge Analytica using massive amounts of Facebook data to influence elections, the company is under intense scrutiny. Many users are indignant and have joined a massive #deletefacebook campaign. But in many contexts, it’s not that simple, and users don’t have the privilege of deleting the platform.
In many emerging markets, Facebook is the de facto internet. It is quickly becoming the primary source of news and information for millions of users. While this obviously creates business opportunities for the corporation, it also raises the stakes, because the platform heavily influences politics and public knowledge. And, in some contexts, it is used as a driver of violent conflict.
Driving Violent Conflict
Facebook has recently come under fire for passively enabling human rights abuses in Myanmar, where there has been an explosion of internet users in the past few years. The UN has concluded that hate speech and rumors posted on Facebook contributed to brutal crimes against Rohingya, playing a critical role in what some say may be a genocide.
Facebook was also accused of censoring activists and journalists documenting and raising awareness about ethnic cleansing of the country’s Rohingya minority. Several Facebook users who posted reports about the crimes committed against the Rohingya said they had their posts, and sometimes accounts, removed or blocked by Facebook. In some instances the posts showed graphic violence, but in others, they did not. In one case, the post was a poem protesting the government’s actions.
On the other hand, anti-Rohingya posts that included misinformation intended to incite violence easily went viral, and Facebook has become a breeding ground for anti-Rohingya sentiment. These include misinformation campaigns created by the government.
Facebook’s algorithmic content curation and targeting give greater visibility and credibility to posts promoting ethnic cleansing and spreading misinformation. These posts widen longstanding ethnic divisions and stoke violence against the Rohingya ethnic group.
Relying on the Community to Flag Hate Speech is Not Enough
Facebook relies on a set of “community standards” and user reports to monitor the billions of posts uploaded every day. Reported posts are manually assessed and, sometimes, removed.
The vagueness of Facebook’s internal policies, and of how they are applied, leads moderators to flag or delete posts documenting military actions against the Rohingya while inflammatory misinformation continues to circulate.
Because the system relies heavily on user input, government supporters can flag activist posts and accounts, leading to disproportionate scrutiny of those accounts while anti-Rohingya posts slip through porous protocols. The government itself runs a social media monitoring program looking for individuals who “threaten the country’s stability.”
Conflict is Bad For Business
The prolific spread of hate speech on social media is not limited to Myanmar. Posts on Facebook have fueled violent conflict and human rights abuses in South Sudan, the Philippines, and Sri Lanka.
Not only does this create grave human and societal costs, but it’s also bad for business. In Sri Lanka, the government shut down Facebook and the platforms it owns, including WhatsApp and Instagram, along with other social media platforms, in an effort to curb an outbreak of violence. Last year, India blocked 22 social networking services, including Facebook, WhatsApp and Twitter, for one month to curb street protests in the disputed territory of Jammu and Kashmir.
Moreover, it opens Facebook to widespread criticism–and potential legal liability–for its role in spreading violence, terrorism, or human rights abuses.
Facebook claims it is taking steps to better evaluate flagged posts and says it will only remove graphic content or content that celebrates violence. It is navigating a delicate line between what may be considered newsworthy and what may be considered inflammatory, and it says it is working with governments and civil society to identify and remove such content.
Facebook has taken other steps to help keep communities safe. For example, it created illustrated print copies of its community standards in Burmese, launched a Facebook safety page for Myanmar, and partnered with local civil society groups.
We Need to Do More: Developing an Ethical and Conflict Sensitive Approach to ICT
Nevertheless, Facebook could do more to grapple with the harms its platform can contribute to when used in complex settings like Myanmar. Its efforts are reactive; it should be more proactive to avoid being used as a conflict driver. Doing so would help protect the corporation from potential legal liability for profiting from violence or acting negligently or recklessly. It would also, we argue, simply be the right thing to do.
We’re aiming to help companies like Facebook better understand and approach business in complex settings. Working with key stakeholders, this year we will create a comprehensive set of guidelines and standards for software companies operating in complex settings. We will show companies the business case, as well as the moral case, for approaching these new markets with a heightened duty of care, striving to foster peace rather than providing a platform that foments conflict.