
Kenya is less than a year away from the 2022 general elections. The role of social media in the forthcoming polls has been the subject of dialogue in recent weeks, and for good reason. In electoral contexts, social media platforms have been lauded as equalisers, levelling the playing field for politicians. Providing instantaneous peer-to-peer communication while dispensing with traditional gatekeeping has made social media a potent tool for grassroots organising, one-to-many communication, and broad engagement. Its potency in Kenya is only amplified by the number of internet subscriptions, which stood at 43.7 million as at March 2021, approximately 83 per cent of the total population.

Social media in democracy: a boon?

The benefits afforded by social media are not only enjoyed by politicians. Social media has also provided citizens the space for civic engagement that is not readily available offline. One need not look far to identify the tangible effects of this democratisation. Kenyans recently took to Twitter under the hashtag #JusticeForKianjokomaBrothers to protest the tragic death of two brothers, Emmanuel and Benson Ndwiga, who were in police custody for an alleged curfew violation.

A section of Kenyans also held digital protests under the hashtag #LightsoutKE, going offline for half an hour from 9 p.m. every Sunday night in remembrance of victims of police brutality. Shortly after the online uproar, the Independent Policing Oversight Authority (IPOA) announced it was launching an investigation into the brothers’ deaths at the hands of the police. The investigations resulted in indictments of the police officers involved. While there is not enough evidence to draw a causal relationship between these digital protests and the resulting action, the very fact of such online organising highlights the potency of social media in civic engagement and political participation. It would also not be far-fetched to assume that the public outcry online influenced IPOA’s decision to act promptly.

This is also not the first time that digital protests have supplemented offline complaint mechanisms. Earlier this year, students at the Kenya School of Law protested the Council of Legal Education’s (CLE) procedurally flawed decision to go ahead with bar exams despite giving short notice and facing numerous logistical challenges. One of these challenges was a government-imposed lockdown of certain areas to stem the spread of COVID-19 that made it difficult for students outside the affected areas to access the designated examination venues.

Using the hashtag #CLEwi (a clever play on words merging the abbreviation CLE with sielewi, the Swahili word for “I do not understand”), students voiced their concerns while at the same time pursuing offline channels, in this case, an anonymous complaint to the Commission on Administrative Justice (CAJ). The CAJ eventually intervened, directing that the CLE postpone the exams. These examples highlight the increasingly seamless integration of online and offline spaces. Unfortunately, this integration also extends to the more nefarious elements of human interaction, in some cases exacerbating their effects.

Double-edged sword: harmful content

The very characteristics that make social media such a potent tool for civic engagement and political participation also make it an effective vector of harmful content. Over the past few years, the nexus between social media and democracy has featured prominently in news reports, academic articles, and general discourse. Part of this trend is attributable to the perceived failings of social media in democracies around the world. Reports of harmful content such as misinformation, disinformation (both sometimes wrongly conflated and labelled as “fake news”), and hate speech appear to now be commonplace during elections.

Recent experiences in Brazil, Qatar, and the United States provide some examples of these challenges. Such harmful content is not novel. However, recent events such as the 2016 US elections have widely popularised concepts such as “fake news”, with former US President Donald Trump going as far as to claim he invented the term (he did not). The tangible outcomes of such content spread through social media platforms have understandably resulted in calls for accountability, and for the regulation of these platforms. For example, in Myanmar, Facebook was reportedly used by military personnel to spread inciteful rhetoric against the Rohingya Muslim minority in the country, contributing to violence against the Rohingya.

These calls for accountability, and the broader concern with the unchecked power of the largest technology companies (Big Tech), have been referred to as the “techlash”. In recent years, countries have been grappling with law reforms aimed at mitigating the spread of harmful content online. For example, Germany passed the Network Enforcement Act (popularly, NetzDG), which seeks to impose large fines on social media platforms that fail to take down illegal content promptly; Facebook has been fined under this law. At the same time, social media platforms have sought to respond to the techlash by implementing their own transparency and accountability mechanisms, such as Facebook’s recently established Oversight Board.


The urgency of finding a solution to this problem escalated rapidly in early 2020, when the World Health Organisation (WHO) declared COVID-19 a global pandemic. Perhaps there is nothing more emblematic of the promise and peril of social media than the range of behaviour witnessed in the early days of the pandemic, and even more recently with the development of vaccines. While public health officials were able to widely disseminate accurate and up-to-date information regarding the virus, individuals were equally able to spread false information. In some cases, this information was inciteful, fuelling anti-Asian sentiment and, in a few instances, resulting in violence. More recently, the spread of such information threatens global efforts to inoculate against COVID-19. This inundation with information, both false and true, was termed an “infodemic” by the WHO.

The public health measures which have been adopted to mitigate the impact of COVID-19, such as social distancing and the wearing of masks, have served to enhance the role played by digital platforms in our lives. People are increasingly reliant on these platforms for, among other things, work and school. With such high levels of online activity, it is expected that the problem posed by exposure to harmful content will only worsen.

Kenyan perspective

The challenges posed by the spread of harmful content are not far removed from Kenya. Disinformation was reportedly spread through social media during the 2017 general elections. Cambridge Analytica, a political consulting firm accused of using improperly acquired personal data from Facebook to engage in political microtargeting, was reportedly active in Kenya during those elections, providing its services to one of the political parties. Since then, Kenya has made attempts at regulating online speech and the use of personal data, enacting the Computer Misuse and Cybercrimes Act in 2018, and the Data Protection Act in 2019. Despite these efforts, it is apparent that Kenya is yet to overcome the spread of harmful content online. For example, a recent report revealed a whole industry in Kenya dedicated to the spread of disinformation through social media.

Increasingly, government entities and some politicians have taken to social media to disavow content attributed to them on the basis that the content is fabricated. At the same time, a number of social media accounts have been engaging in what appears to be a coordinated campaign to disparage certain political actors, with the hashtags #RutosViolencePlan and #RailaHatesMtKenya most recently trending. These developments are quite concerning, and the National Cohesion and Integration Commission (NCIC) has previously warned against the trajectory of the country’s politics.

On 26 August 2021, the Cabinet Secretary for Interior & Coordination of National Government, Fred Matiang’i, cautioned Kenyans against misusing social media ahead of the general elections. The Cabinet Secretary highlighted the use of vulgar language, insults, and the spread of “fake news” as conduct which the government intends to clamp down on. Speaking at a youth forum, he reiterated that any excesses would be met with “equal force”. In a region where there have been increasing concerns about internet shutdowns by governments during elections, the Cabinet Secretary’s words may raise concern. To his credit, the Cabinet Secretary has publicly assured Kenyans that the government would not shut down social media over hate speech although, in the same breath, he affirmed that the government would deal “ruthlessly” with those purveying hate speech. Now, the spread of hate speech should never be tolerated, particularly in Kenya where inciteful rhetoric resulted in election-related violence in 2007/8.


Regulating speech on social media to prevent the spread of harmful content necessarily means impacting the freedom of expression and the right to assemble online, both of which are constitutional rights and crucial during elections. The approaches that governments and social media platforms adopt significantly shape the balance that is ultimately achieved. Put another way, in attempting to stop the spread of content that undermines healthy democratic activity, governments or private platforms may inadvertently subvert healthy online engagement. The entire endeavour of regulation therefore requires a balance that must be carefully struck. The delicate nature of this balance is further complicated by the COVID-19 pandemic.

Search for balance

Recognising the link between conduct on social media and electoral integrity, Kofi Annan, through his foundation—the Kofi Annan Foundation—established the Kofi Annan Commission on Elections and Democracy in the Digital Age in 2019. Consisting of leading experts drawn from different disciplines and jurisdictions, the Commission synthesised the concerns around social media in elections into five focus areas: polarisation, hate speech, disinformation, political advertising, and foreign interference. In its report, the Commission put forth practical recommendations for various stakeholders involved in the electoral ecosystem – governments, businesses, and civil society.

What is apparent from these recommendations is the importance of a collaborative approach to safeguarding electoral integrity in the digital age and achieving the earlier mentioned balance. The nature of the problem at hand is such that actions taken in isolation may not be very effective, especially where clear links exist, such as between regulation of personal data use and the activities of political advertisers. This is particularly important to consider as various stakeholders in Kenya commence preparations for the 2022 elections. For example, the National Cohesion and Integration Commission announced a plan to keep tabs on social media activity in the run-up to the elections while the Kenya Editors’ Guild commenced a series of elections preparedness trainings.

This is the first of a five-part op-ed series exploring the use of personal data in campaigns, the spread of misinformation and disinformation, social media censorship, incitement to violence and hate speech, and the practical measures various stakeholders can adopt to safeguard Kenya’s electoral integrity in the digital age ahead of the 2022 elections. The series is produced in partnership with the Kofi Annan Foundation and made possible through the support of the United Nations Democracy Fund.