The last installment in this series focused on account suspensions as one means of limiting the potential of social media to contribute to extremist violence. Since then, Nigeria suspended Twitter after Twitter froze its president’s account, and Facebook removed a network of accounts ahead of Ethiopia’s June elections. Social media platforms are still being widely misused in conflict settings. Pro-military forces in Myanmar used TikTok to threaten opponents of the country’s recent coup, and Indian Prime Minister Modi strong-armed Twitter into banning several hundred accounts there in the wake of widespread protests.
However, observers continue to question what value account suspensions offer in the fight against online extremism and the real-world violence it fuels. Account suspensions usually come after the damage has been done and are therefore better understood as an accountability measure. If a company acts only after dangerous speech has already led to violence, a massive gap remains in the tools needed to prevent that violence in the first place.
Algorithmic Integrity
As one preventative measure, some (including within the industry’s own research teams) have suggested that the solution may lie with the algorithms on which social media companies’ business models rely. So what does algorithmic integrity mean in the context of the weaponization of social media?
The challenge has long been presented as a tension between unfettered free expression online and protecting people from the violence resulting from that speech. It is now clear that this purported tension may be a smokescreen to avoid addressing the real challenge: that social media platforms’ business models rely on algorithms that amplify divisive speech. Divisive speech tends to keep users engaged, and therefore more exposed to advertisements, so the algorithms promote it more.
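A toy sketch can make that incentive concrete. The ranker below is a hypothetical illustration, not any platform’s actual system; the field names and weights are invented. It scores posts purely by predicted engagement, so content that reliably provokes reactions rises to the top whether it informs or inflames.

```python
# Toy illustration only: a hypothetical engagement-maximizing ranker.
# Real platform ranking systems are far more complex and not public;
# the signals and weights here are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model-estimated probability of a click
    predicted_comments: float  # model-estimated probability of a comment
    predicted_shares: float    # model-estimated probability of a share

def engagement_score(post: Post) -> float:
    # Every signal is weighted positively: a post that provokes angry
    # comments and shares scores just as well as one that informs.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_comments
            + 3.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement; nothing in this objective
    # distinguishes divisive or dangerous speech from anything else.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because nothing in that objective distinguishes divisive or dangerous speech from anything else, amplification simply follows engagement wherever it leads.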
Moreover, the algorithms are often a “black box,” inaccessible to external evaluation and audit. If users and civil society cannot know how the algorithms work or challenge their decisions, it is difficult to prevent dangerous speech on social media from fueling violence and conflict. The lack of public oversight and transparency also entrenches unjust asymmetries of power between those deploying the algorithms and those (often unknowingly) using them.
Algorithmic Accountability
Although compelling and greatly needed, algorithmic integrity is difficult to achieve in practice. From a social media perspective, regulating which speech may be amplified, and in what manner, is extremely challenging and varies radically by context.
Algorithmic accountability is one way to ensure algorithmic integrity and mitigate the harms stemming from dangerous speech on social media. Several academics and civil society organizations have raised the complex challenges of ensuring algorithmic accountability. It is also the subject of legislation in the US, EU, and Canada.
From a practical perspective, algorithmic impact assessments (AIAs) are one tool for achieving algorithmic accountability and integrity. As Data & Society explains in a recent report:
Algorithmic impact assessments have emerged as a centerpiece of the conversation about algorithmic governance. Impact assessments integrate many of the chief tools of algorithmic governance (e.g., auditing, end-to-end governance frameworks, ethics reviews) and speak to the challenges of algorithmic justice, equity, and community redress.
However, algorithmic impact assessments seem to mean different things to different people. It is also unclear how AIAs will interact with human rights impact assessments (HRIAs), since most AIA models propose including human rights in their assessments but not limiting their analysis to them.
AIAs, though, must do more than touch on implicated human rights. They must be conducted in concert with specifically tailored human rights assessments that measure the impacts of these algorithms on individuals’ human rights. And, as we’ve noted elsewhere, doing this in high-risk or fragile markets adds complexity and requires inclusive, conflict-sensitive approaches. There are also quite a few difficult questions that remain unanswered:
How do industry and regulators prioritize which algorithms to assess, given that it is not feasible to evaluate every algorithm? The proposed EU regulation, for example, will only require third-party auditing and oversight for what EU-based regulators determine are “high risk” AI systems. But how do we know what “high risk” means for users across radically diverse contexts? As noted above, even the algorithms used to generate your Twitter feed or next YouTube video can lead to violence and death. How can we ensure that the assessments and criteria of AIAs are based on, and reflect, the lived experience of harms?
If algorithmic impact assessments are to measure potential harms, how, and by whom, are those harms defined? Determining what a “harm” is requires value judgments and therefore is not objective. Processes for determining what counts as a harm must therefore include traditionally marginalized voices and account for collective harms and harms to society as a whole. Those perceived harms may also relate to conflict drivers and, even if inclusively determined, can play into conflict dynamics. In the absence of a targeted approach, we risk not only the algorithms but also the AIAs reinforcing existing asymmetries of power and influencing conflict.
What are the other impacts of these algorithms in fragile and conflict-affected settings (FCS)? What is required for true algorithmic integrity in those contexts? The findings of Data & Society reinforce the need for bespoke approaches to AIAs: “a singular, generalized model for AIAs would not be effective due to the variances of governing bodies, specific systems being evaluated, and the range of impacted communities.”
In conclusion, algorithms are a significant part of the problem of the weaponization of social media. The solution likely lies in some form of algorithmic integrity or accountability, which would include (among other things) regular and robust algorithmic impact assessments; what exactly those assessments should look like and how they should be conducted to be effective remain largely open questions. As with human rights impact assessments, there are good and bad ways of conducting these exercises. Too frequently they devolve into box-checking and fail to consider the perspectives of local communities and the unique characteristics of each context.
To address these challenges, JustPeace Labs is currently developing research and tools for heightened human rights due diligence for when powerful emerging tech is used in FCS, and algorithmic impact assessments are a part of that. Stay tuned for more.