In recent years, it has become relatively common for public entities and politicians in Kenya to disavow certain content shared through social media, claiming it to be false. This year alone, the Independent Electoral and Boundaries Commission (IEBC) has had to issue two statements dismissing election-related content as fabricated. In July 2021, it was reported through mainstream media that the Directorate of Criminal Investigations (DCI) had arrested an individual involved in online fraud. According to the report, the police suspected that the individual had hacked into the IEBC’s database and accessed personal details relating to 61,617 registered voters. Shortly after the news broke, the IEBC dismissed the report as false. Even more recently, in September, the IEBC had to issue a public statement clarifying that a call for applications for jobs in its voter education programme was fake.

These instances have not taken place in isolation; there is a broader, discernible upward trend in false or inaccurate content in Kenya. In a 2018 survey of 2,000 Kenyans by Portland Communications, 90 per cent believed they had interacted with false information relating to the 2017 Kenyan elections, while 87 per cent believed that such content was deliberately false. The issue of deliberately spreading false information was the subject of Odanga Madung and Brian Obilo’s research for the Mozilla Foundation. In their report, they highlighted the extent to which the spread of disinformation in Kenya through Twitter was coordinated and well-organized.

These local developments also occur against a backdrop of global trends with striking similarities. Several governments—including Kenya’s—have attempted to rein in the spread of false or inaccurate information through regulation. For example, in 2018 Kenya enacted the Computer Misuse and Cybercrimes Act (CMCA), which criminalizes the sharing of false information. Given that some of these governments have directly linked their regulatory objectives to the safeguarding of their democracies, it is worth exploring the ways in which false or inaccurate content compromises democracies and, in particular, electoral integrity.

To a post-truth world

Undoubtedly, the ability to agree on basic facts is a core tenet of democracy. To make a collective decision optimally, voters ought to have access to the same accurate information. While arriving at a single “objective truth” is not always possible due to mediation in communication, it is important for the citizenry to at least have access to, and acknowledge, the basic facts that underpin the political processes in which they are participating. The increasing spread of false or inaccurate content in recent years points to the solidifying of a “post-truth” age in which political rhetoric often appeals to emotion and sentiment with little regard for factual rebuttals.

This post-truth concept is not entirely novel. For example, climate change denial and anti-vaxxer sentiments have long persisted despite the widespread availability of evidence to refute them. But in recent years, it has gained significant popularity, perhaps due to the increasingly populist nature of political campaigning in the digital age. In the year Donald Trump won the US election, Oxford Dictionaries’ word of the year was “post-truth”. In his campaign, Trump made a habit of dismissing mainstream news reporting as “fake news” when it contradicted his narrative, and later falsely claimed to have coined the term. Likely emboldened by these trends, other populist leaders around the world began dismissing news reporting as fabricated when it did not suit their narratives – Jair Bolsonaro of Brazil and Rodrigo Duterte of the Philippines have both accused journalists of spreading “fake news”.

Rather dangerously, some leaders who sound the alarm over false or inaccurate content are often either linked to the deliberate spread of such content or have benefited from it. Trump’s campaign was boosted by a group of Macedonian teenagers who, driven by advertising revenue on Facebook, generated several seemingly genuine news articles that either directly supported Trump or discredited his opponent, Hillary Clinton. The combination of these leaders casting aspersions on the integrity of traditional media and the spread of “alternative facts” on social media results in a political environment where voters are highly distrustful of each other and of core institutions such as the media. The danger is exacerbated by the nature of social media and how third parties—often with the aid of social media platforms—are able to subtly curate the type of content users are exposed to, as we discussed in our previous article.

Distrusting institutions is not the only risk to democracies. In some cases, false content results in violence. In 2016, following claims that Hillary Clinton was running a ring that sexually exploited children in the basement of a pizza restaurant, a man armed with a gun broke into the restaurant to find out if the claim was true – it was not. More recently, a large-scale attack on the US Capitol took place following the outgoing president’s false claims through social media that the election had been stolen. In this case, it was reported that Facebook was aware of the potential for violence arising from the false claims but failed to limit their spread. With real dangers like these in mind, the desire to “regulate truth” is understandable. However, attempts to do so have raised a novel set of challenges. For one, it is extremely difficult to define truth, let alone purport to regulate it.

Getting the terminology right

The terms “fake news”, “disinformation”, and “misinformation” have featured prominently in discourse on the spread of false content. While these terms are generally used to assert that something is untrue, they are sometimes wrongly conflated. This conflation then impairs any attempts at regulation. The term “fake news” does not necessarily refer to one specific type of content. Claire Wardle of First Draft has rightly noted that it is an entire ecosystem that includes both misinformation and disinformation. Elsewhere, one of us has categorized the nature of this content into two conceptions for purposes of understanding how to regulate it: the deliberate action and the culture around it.

The deliberate action essentially refers to disinformation: the act of intentionally and knowingly sharing false or inaccurate content. For example, the Macedonian teens spinning fake articles for advertising revenue were involved in a disinformation campaign. In Kenya, Madung and Obilo identified groups of bloggers who were paid to push trends with false content that maligned certain political actors, such as those who filed a petition to oppose the Building Bridges Initiative. These disinformation campaigns are often well coordinated and targeted at a particular outcome. Due to the potency of such campaigns in electoral contexts, they have previously been referred to as “distributed-denial-of-democracy attacks”.

These disinformation campaigns are often successful because of the second categorization – the culture of misinformation; in other words, the growing tendency of individuals to share false or inaccurate content unintentionally or inadvertently. Misinformation can range from misleading or alarmist headlines to demonstrably false claims passed on by people with a good-faith belief in their accuracy. For example, a few years ago, the Kenya Bureau of Standards had to issue a statement denying the existence of “plastic rice” in Kenya following the circulation of a WhatsApp video implying that there was. WhatsApp is a particularly notorious avenue through which misinformation is shared locally. Even mainstream media is sometimes susceptible to sharing misinformation, as was seen most recently when several newsrooms, relying on a poorly edited clip circulating on social media, reported that a Kenyan Senator had dialled into a parliamentary debate session from a bar. They later had to recant upon discovering the clip had been altered.

Unlike coordinated disinformation campaigns, which may often be linked to a central source, misinformation entails the public playing an active role in both creating and amplifying narratives. As a result, Renée DiResta has referred to misinformation as “ampliganda” – amplified propaganda. This culture of misinformation has been enabled by several things. First, the use of social media as a source of news content has led to a decline in the gatekeeping or fact-checking of content. Second, the nature of social media is such that it amplifies users’ biases and exposes them to content that often confirms their worldview. This in turn makes them more likely to consume false or inaccurate content unthinkingly. Lastly, the existence of disinformation campaigns, and the discrediting of claims as false by politicians, further muddies the waters, making people unsure of what is “objectively true”. All this has made addressing the problem of fake news difficult.

Regulating truth 

Conceivably due to a focus on disinformation, regulation seeking to rein in false or inaccurate content has often been quick to criminalize the spread of fake news. As noted in the report of the Kofi Annan Commission on Elections and Democracy in the Digital Age (KACEDDA), there is insufficient data regarding the individuals, motives and means behind the spread of fake news, possibly hampering regulatory efforts. Across the world, governments seeking to rein in fake news have either targeted the individuals involved in spreading such content with penal sanctions, or the platforms hosting the content with financial liability. Both approaches are wanting for various reasons. For one, both entirely ignore the broader conception of fake news as a culture enabled by several factors. Second, both pose a threat to the freedom of expression, which is vital in relation to political speech.

Take, for example, Kenya’s CMCA, mentioned above. It criminalizes disinformation, identified in the Act as the intentional publication of false, misleading, or fictitious information that is disguised as authentic. Those found guilty of this offence are liable to a fine not exceeding KSh5 million (approximately US$45,000), to a term of imprisonment not exceeding two years, or to both. More severely, the CMCA also makes it an offence to knowingly publish false information through any media in a manner calculated to cause disorder or to harm the reputation of a person. While the fine for this offence remains the same as for the previous one, the potential prison sentence is a term not exceeding ten years. In no way should the intentional spread of false information be condoned. However, such laws may be open to abuse by governments seeking to suppress political activism and allowable expression. For example, Mutemi Kiama was arrested over claims that he had violated the CMCA when he shared a poster bearing President Kenyatta’s image, identification number and a statement, ostensibly from Kenyans to the rest of the world, renouncing him as Kenya’s representative for purposes of seeking financial loans. This occurred in the context of a broader discourse on Kenya’s debt burden and a section of Kenyans’ displeasure with the economic trajectory of the country.

That the law is selectively applied is further evidence of this abuse. In 2020, a Member of Parliament, John Kiarie, posted a Twitter thread in which he raised the alarm at the number of people the government had in quarantine following the first confirmed COVID case in the country. His posts directly contradicted the Ministry of Health’s official position, indicating that the situation could be much worse than was officially reported. He was neither arrested nor charged, and the Twitter thread is still available online. The existence of a law that limits the freedom of expression in a subjective manner, with the risk of stiff financial and penal sanctions, is likely to stifle free expression even without such abuse – more so with respect to political discourse, which is crucial in campaigns and elections.

Aside from targeting individuals, some governments have sought to shift the burden of regulating fake news to platforms such as social media by imposing liability on them for their users’ behaviour in certain instances. Arguably the most notable example of this is Singapore’s anti-fake news law, which enables the government to order platforms to take down false statements that are against the public interest. Where platforms are at risk of incurring liability for user conduct, they are more likely to pre-emptively censor content they deem problematic. The net effect of fake news laws aimed at platforms would therefore be the suppression of protected speech by private entities without due process. At the same time, the core issue of fake news would not be addressed.

These attempts at regulation, which focus primarily on disinformation campaigns, while well-intentioned, seem to have missed the mark. Jeff Kosseff, an Assistant Professor at the US Naval Academy, recently remarked that the discourse around fake news has not sufficiently focused on the reasons behind people’s susceptibility to such content. Instead, attempts at regulating fake news tend to focus on the individuals and platforms involved, and the mechanics through which it is spread. However, to address both the disinformation and the broader culture of misinformation that enables it, one must go beyond such regulation.

A layered approach 

Outlawing disinformation alone will not address the spread of fake news; it may in fact cause more harm than good. This does not mean that disinformation should be tolerated in the name of respecting the freedom of expression, particularly in contexts likely to lead to widespread violence due to long-standing tensions (e.g., deep-running economic or ethnic tensions). Any attempts to outlaw certain speech ought to be contextual, measured, and proportionate to the ends sought. Beyond this, however, they ought to be supplemented by policy interventions aimed at reforming the culture that enables disinformation to take root.

Most of these policy interventions would involve education of one form or another. Crucially, governments should engage in both civic education and media literacy campaigns. Empowering the citizenry to both identify accurate sources of information and understand the role of different institutions in a democracy would contribute significantly to stemming the inadvertent spread or consumption of misinformation. This, coupled with collaborative fact-checking initiatives between the government and mainstream media, would enable voters to discern fact from falsehood.

Considering the centrality of social media to everyday news consumption, it would also be prudent to engage these platforms in such fact-checking initiatives. For example, in Mexico, the National Electoral Institute (INE) collaborated with social media companies in support of Verificado 2018, a fact-checking initiative that saw the Mexican presidential debates livestreamed on social media from INE headquarters. The collaboration also supported the development of Certeza 2018, an online fact-checking system which—through a combination of human and machine review—monitored online activity, assessed instances of misinformation, and took action by disseminating the relevant notices. In recognition that fact-checking may come only after false information has already spread, it is also worth mainstream media exploring pre-bunking initiatives. These would involve identifying the common tropes around false narratives and priming audiences to receive them critically. Such efforts have been proposed as solutions to the current spread of misinformation around COVID vaccines. Sander van der Linden likens pre-bunking efforts to inoculation against disinformation and misinformation.

Even where pre-bunking efforts are not adopted, entities involved in the fact-checking initiatives proposed above may collaboratively engage in debunking campaigns by developing counter-messaging once misinformation is disseminated. Here, Indonesia’s example is instructive, as noted in the KACEDDA report. In Indonesia, the electoral bodies collaborated with civil society and the Ministry of Communication and Information Technology to monitor social media activity and to spread counter-messaging in instances of misinformation, among other things.

While it is indeed necessary to curb the spread of false or inaccurate content, attempting to do so may pose several risks. Governments, social media platforms, and mainstream media ought to collaborate and make use of a combination of legal and policy-based initiatives to stem the culture of misinformation. Fortunately, the IEBC has several examples to draw from on how, as an electoral body, it can coordinate efforts to address the culture of misinformation around elections. In all, when seeking to curb the spread of fake news (both deliberate and inadvertent), it is important for governments to consider why their citizens are susceptible to false information, as opposed to how and by whom that information is spread.

This is the third of a five-part op-ed series exploring the use of personal data in campaigns, the spread of misinformation and disinformation, social media censorship, incitement to violence and hate speech, and the practical measures various stakeholders can adopt to safeguard Kenya’s electoral integrity in the digital age ahead of the 2022 elections. This op-ed series is in partnership with the Kofi Annan Foundation and is made possible through the support of the United Nations Democracy Fund.