
I have been engaged by The Elephant to provide its readers with a series of pieces on surveys related to the forthcoming election. Each piece will review and comment on surveys published since the previous one, and will highlight events likely to influence the results of subsequent polls, which will be examined in the same way over the coming months. The aim is to give readers a set of analytical tools with which to make their own assessments of such surveys, both of their credibility and of their potential impact, the latter judged mainly from the expressed reactions of key political actors and analysts. Given such ambitious aims, the author welcomes readers’ comments and suggestions.

More broadly, though not always explicitly, the series also seeks to address the question of what contribution, positive or otherwise, such surveys make to Kenya’s still very much in-progress “democracy”, while largely leaving aside the profound and never-ending scholarly debate as to just what “democracy” means and how its criteria can be empirically measured. Such an evaluation must unfortunately cover both scientifically sound, bias-free polls and those that fail to meet such standards, whether or not the failure is obvious to those who consume them.

For transparency, let me begin with a brief personal note. As is widely known, I have been associated with several market research firms in Kenya as a research analyst, beginning with The Steadman Group in 2005. While we conducted a number of surveys on a variety of governance topics for individual and, in some cases, confidential clients, most of my work was connected with the periodic surveys conducted on various public issues. In the “early years” these included the 2005 constitutional referendum, for which we conducted two polls, and the 2007 election, both of which attracted considerable attention from the general public and the political class, and quite prominent, if not always accurate, media coverage.

I continued in this capacity after Steadman was sold to Synovate, a UK firm, in 2008, and then from 2012, when Ipsos, a French company, bought Synovate. In March 2019, the management in Paris decided (for reasons that remain unclear, to me at least) that the Kenya office would no longer conduct and release survey results “of a political nature”, and therefore had no further need for a political scientist/analyst like me. Reverting to my earlier role as an independent consultant, I began part-time work with Ms. Maggie Ireri, who, after resigning as Ipsos CEO in 2015, had launched her own market research firm, Trends and Insights for Africa (TIFA). In the last two years we have conducted six national surveys on public issues (among other work) and plan to continue doing so, with an increasing focus on the 2022 elections.

In brief, therefore, while I shall seek to be objective in my assessments of the current polling environment in Kenya in general and of the survey products of particular firms, including TIFA’s, I cannot guarantee that my analyses will be completely free of unconscious bias; that will be for my readers to judge. At the same time, I am confident that I now know much more about survey research than I did twenty years ago, and recognize that I, along with my colleagues, could have done, and could still do, certain things better.

In subsequent articles, I will address the technical requirements of methodologically sound surveys, but for now, let us assume that all such requirements are met and that their results are accurate, within the limitations of survey science. Adopting this assumption, we can ask: of what use are election-related surveys?


Consumers of such surveys can be roughly divided into several overlapping categories, reflecting their needs and interests: candidates for elective office together with their strategists; the public at large and especially voters; the media; actual and potential campaign donor-supporters and investors; and regional and more distant governments.

While perhaps obvious, the specific ways in which such information may be put to use vary, largely depending on the user. Political parties and coalitions, for example, need to establish the popularity of potential candidates ahead of nominations. Subsequently, nominated (and independent) candidates may want to know how viable their campaigns are, and the potential impact of particular campaign strategies.

For their part, the media may use polls to decide how much coverage to give particular candidates and issues based on their popular appeal, in order to focus scrutiny on those deemed more likely to occupy important public offices, while at the same time aiming to attract a wider audience for purely commercial purposes.

Similarly, those prepared to invest in campaigns would want an accurate measure of the popularity of parties and candidates in order to better calculate their potential for success, especially if their primary motivation is to benefit from the eventual victors, whether directly in terms of contracts, for example, or indirectly in terms of policies that support their material and/or ideological interests.

Yet another category of potential poll consumers comprises conflict prevention and mitigation actors, who could use polls to assess the likelihood of violence, based (at least in part) on the assumption that the closer the result, the more likely there will be post-election contestation (especially if the official results are less than universally acknowledged as “true”). In addition, a more accurate understanding of the issues that divide the country, and of the intensity of such divisions, can inform the development of strategies early enough to at least mitigate more serious conflict outcomes.

Such pre-election assessment benefits also apply to investors whose decisions may depend on the likely future policy environment, that is, on which parties and candidates, with what agendas, will be in power. Similar concerns apply to foreign governments and NGOs whose operations and interests in the country are likely to be affected by the make-up and orientation of the next government (in some cases, at both the county and the national level).

Three additional categories of poll-users can be identified.  First, there are academics and other researchers seeking to test hypotheses about campaign activity, political party processes, voter motivation, turnout levels, and the salience of particular political parties’ or candidates’ policies and identities in terms of attracting votes. Such analyses can be country-specific or part of wider, cross-national studies.  Second are the survey firms themselves.  They may use elections to test various methods of data collection and analysis both for internal purposes and to publicly demonstrate their ability to gather reliable information as a way to attract future business.

Finally, we have the voters – at least those whose votes are not “set in stone” due to embedded patronage relations or any of the various forms of automatic “demographic support” including but not limited to common ethnic identity or religious affiliation. They may wish to know the viability of particular political parties and candidates to ensure they don’t “waste” their votes, especially if the race(s) in question appear close.  (One challenging area of post-election research is to discover whether any voters actually changed their ballot choices based on an awareness of polls, since many respondents are not prepared to admit this.)

The obvious focus for all of these various entities is on the election’s outcome. However, beyond the “horse races”, such surveys can be used to reveal just how deeply divided any political community is, as well as the levels of confidence in election integrity among particular sections of the electorate, and how much faith they have in the utility of elections in terms of actually making a difference in their lives. The latter would be partly reflected in the level of participation in various aspects of the electoral process such as registering to vote, attending campaign rallies and other meetings and engaging in party nominations — in addition to turning out to vote on election day.

But all of these uses of election-related polls depend on one crucial factor: their credibility. Unless poll consumers can be sure that the results are reasonably accurate, it would be folly to rely upon them for anything more than “entertainment”. Here, a key source of such confidence is having largely similar results from a number of reputable firms.

We can thus speak of “safety in numbers”, the “numbers” here being not the survey results themselves but the number of firms undertaking such election-related polls. That is, the more firms that undertake such surveys, the easier it is to spot “outliers”, and the more feasible it becomes to calculate average results that minimize the effect of inevitable, but hopefully minor, variations in methodology.
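By way of illustration only, and using entirely hypothetical figures rather than any firm’s actual results, the minimal sketch below (in Python) shows how comparing each poll against the cross-poll median makes an outlier easier to spot; the margin-of-error formula is the textbook one for simple random samples, which field conditions in Kenya only approximate.

```python
import math
import statistics

# Entirely hypothetical shares (%) for one candidate, as reported by
# four imaginary firms, together with each poll's sample size.
polls = {
    "Firm A": (39.0, 1500),
    "Firm B": (41.0, 2000),
    "Firm C": (40.0, 1600),
    "Firm D": (29.0, 1500),  # the apparent outlier
}

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# The median is used rather than the mean: a single outlier drags the mean
# towards itself, making the other polls look more deviant than they are.
centre = statistics.median(share for share, _ in polls.values())
print(f"Cross-poll median: {centre:.1f}%")

for firm, (share, n) in polls.items():
    moe = margin_of_error(n)
    verdict = "worth a closer look" if abs(share - centre) > moe else "consistent"
    print(f"{firm}: {share:.1f}% (±{moe:.1f} points) -> {verdict}")
```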

Unfortunately, with less than six months to the election, far fewer “political” polls have been released or published than before any of the last three general elections. This is mainly a consequence of the reduced number of firms engaged in such survey work for public release. Over that period the most prominent among them was Ipsos, which has, however, been largely silent since late 2018, its few releases on public issues completely avoiding “politics”. In this regard it has joined Strategic Africa (formerly Strategic Public Relations) and Consumer Insight, both of which undertook and released voting-intention polls before the 2007 and 2013 elections but have been “silent” since before the 2017 election. Their “disappearance” has only been partly compensated for by the arrival of TIFA, which has undertaken several national and county-level polls, and more recently of RealField, a British firm that released the results of its first Kenya survey last January. Most visible throughout this period, on the other hand, have been Radio Africa, with its now monthly polls (whose results it broadcasts on its various radio stations and publishes in its newspaper, The Star), and Infotrak, even if, over the years, the latter’s results, as will be shown in subsequent pieces, have often been somewhat at odds with those of other “established” firms.


Before reviewing their results, and setting aside any insinuations of deliberate falsification, it is important to note the main factors that could explain differences between two or more surveys that exceed standard margins of error. Most relevant are the following: the samples are of considerably different sizes and/or do not match in terms of relevant demographics (a mismatch that any post-survey data-weighting has failed to rectify); the data collection dates differ, so that at least some respondents in the more recent survey have been influenced by intervening events; questions on the same topics are worded differently (in whatever interview languages are used), or are placed in a different order, so that what has been asked previously differentially influences responses to the questions being compared; and the interviewers are not equally qualified and/or fastidious in accurately recording responses, coupled with different levels of quality control in data-capture and analysis among the firms involved.
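On the first of these factors, it may help to recall how the “standard” margin of error behaves: it shrinks only with the square root of the sample size, and when the same question is compared across two independent polls, the relevant yardstick is the combined margin of the pair rather than either one alone. A minimal sketch, again with hypothetical sample sizes roughly in the range Kenyan firms report:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (percentage points) for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical sample sizes; note how slowly the margin shrinks as n grows.
for n in (1000, 1500, 2000, 22000):
    print(f"n = {n:>6}: ±{margin_of_error(n):.1f} points")

# When comparing the same question across two independent polls, a gap
# only needs explaining once it exceeds the combined margin of the pair.
combined = math.sqrt(margin_of_error(1500) ** 2 + margin_of_error(1600) ** 2)
print(f"Combined margin for two polls (n=1500 and n=1600): ±{combined:.1f} points")
```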

Keeping the above factors in mind, we may conclude this first election poll piece by trying to answer the question: how congruent have the presidential contest poll results released by the currently active firms been? Based on the figures, several points can be made.

[Table: Recent Survey Presidential Contest Results (rounded figures, in per cent)]

First, what might explain the significant reversal of position between Deputy President William Ruto and Raila Odinga shown in Radio Africa’s most recent survey: an increase of 20 percentage points for the latter against a gain of just 5 for the former, even as the proportion who declined to name any candidate fell by 24 points (a drop that thus largely accounts for the two main candidates’ combined gains of 25 points)? Such a major change awaits confirmation by future polls.

Second, comparing Radio Africa’s previous poll with TIFA’s most recent one: even if Ruto’s lead over Odinga was nearly identical (just over 10 percentage points), why were the combined figures for “undecided” and “no response” on respondents’ preferred presidential candidate so different (30 per cent for TIFA against only 13 per cent for Radio Africa)? Further, given these two firms’ results, how could Infotrak’s poll of late December (conducted, that is, a few weeks before the Radio Africa and TIFA polls) put the two main candidates in a statistical tie? And even if the gap between Ruto and Raila reported by RealField (5 percentage points) sits at a mid-point between those of Radio Africa/TIFA and Infotrak, how could its combined figure for those who declined to mention any preferred candidate (7 per cent) be so much lower than that of any of the other three firms?

Reported differences in methodology do not provide a sufficient basis for answering such questions, even if, for example, Infotrak indicated that its 1,600 respondents represented only 26 counties – which seems strange, given that TIFA’s slightly smaller sample included respondents from all 47. Regarding RealField, among its methodological details was the declaration that interviews were conducted by 500 “fielders”, who were somehow able to complete nearly 22,000 interviews in just four days — meaning an average of eleven interviews per day by each of them, a very ambitious “completion rate” for a household-based survey, even for interviews of shorter duration.
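For readers who wish to check that arithmetic, the calculation is a one-liner; the inputs are the figures RealField itself reported:

```python
# Figures as reported by RealField: ~22,000 interviews, 500 fielders, 4 days.
interviews, fielders, days = 22_000, 500, 4
print(f"{interviews / fielders / days:.0f} interviews per fielder per day")  # -> 11
```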

One thing is clear, however: the variation in sample sizes cannot explain the variation in findings. An additional note about the RealField survey: given its unusually large sample, data-collection alone would likely cost at least US$10,000, and with professional fees and company profit added, at least twice that. Further, several questions were raised by The Star, the only mainstream media outlet that gave the survey any coverage. Among these was the source of funding for the survey, which a representative of the firm identified (as required by Kenyan law for all voter-intention surveys conducted during the twelve months preceding an election) as the Kenya National Muslim Advisory Council. This largely unknown entity was described to me by a senior official of another, much better known, Muslim organization as a “one-man show”, with the “one man” in question not known for personal wealth and thus not in a position to afford such a massive survey. According to several media sources, he was also a quite vocal supporter of the BBI amendment bill last year.

Finally, something should be said about the confusion, or deliberate “spinning”, of the question as to whether Ruto’s lead, as reported by Radio Africa and TIFA in their February polls, is sufficient for an outright victory in the first round of the presidential election. When Radio Africa and TIFA released the results of their earlier November surveys (as shown above), the Deputy President and several of his political associates attacked them (without naming either firm) at one or more campaign rallies in western Kenya. According to the Deputy President, as reported in several TV newscasts, these polls were false “because we’ve done our own survey that shows us at 56 per cent”.

It is hard to determine how the Ruto “campaign” figure was arrived at. Yet if the figures for the Radio Africa and TIFA polls are re-calculated after removing those who stated they were “undecided”, as well as any other respondents who failed to identify a preferred candidate (“will not vote”; “no response”), the results are both nearly identical to each other and mirror the DP’s claim, as shown in the chart below:

[Chart 1: Re-calculated poll figures excluding respondents who named no candidate]

In both cases, they suggest a first-round win for Ruto, even if claiming this so far ahead of the election, with so many uncertainties remaining (including the choice of running mates), would be highly misleading. Moreover, it cannot be assumed that all, or even a significant proportion, of those who failed to name a preferred candidate will not in fact vote. One analytical challenge, therefore, is to try to discern from the data which way such respondents are “leaning”, and thus how those of them who do turn out are likely to vote.
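To make the re-basing arithmetic concrete, the sketch below uses entirely hypothetical figures, not those of any firm named above, to show how a candidate polling well under 50 per cent of all respondents can appear to clear the 50 per cent mark once the undecided and non-responding are excluded:

```python
# Entirely hypothetical raw shares (%), summing to 100 with the undecided included.
raw = {
    "Candidate X": 39.0,
    "Candidate Y": 28.0,
    "Others": 3.0,
    "Undecided / no response": 30.0,
}

# Re-base on those respondents who actually named a candidate.
decided = sum(v for k, v in raw.items() if k != "Undecided / no response")

for name, share in raw.items():
    if name == "Undecided / no response":
        continue
    rebased = share / decided * 100
    print(f"{name}: {share:.0f}% of all respondents -> {rebased:.1f}% of the decided")
```

The arithmetic itself is sound as far as it goes; the misleading step is the implicit assumption that the excluded respondents will either stay at home or split in the same proportions as the decided.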

Just how such an analysis can be undertaken, and how reliably the media report such findings, are subjects to be considered in the next piece in this series.