
A NIGERIAN STORY: How Healthcare is the Offspring of Imperialism and Corruption

Porous Institutions

As a Nigerian, the greatest scorn often finds you when you argue for Nigeria. Other Nigerians will mock you, denounce you as impractical or a dreamer, when you say that Nigeria is where your future lies. But why?

Nigeria as a heritage that separates the Nigerian from the Black American is awarded a loud (though false) superiority. The Nigeria that is evoked in jollof rice debates is praised. Even the Nigeria that must beat Ghana in the football match is supported. Yet, it remains that the Nigeria that will gain a Nigerian’s abuse is the real Nigeria – with its abusive civil servants, its police haggling for bribes and its megachurches auctioning salvation. This real Nigeria is the child of a mean parent called corruption. It’s useful to trace the family tree of this corruption but also useful to think about the way corruption earns Nigeria scorn to the degree that anyone who argues for that Nigeria is unworthy in some way—or should we say, she who argues for Nigeria is worthy of its corruption?

The Nigeria-corruption association has been repeated so often that it has long since become the small talk of world leaders; David Cameron’s aside to Queen Elizabeth II about “fantastically corrupt” Nigeria is but one example. That corruption touches every facet of life in Nigeria is a banality. As Michael Ogbeidi, a history professor at the University of Lagos, put it so accurately in his article, Political Leadership and Corruption in Nigeria Since 1960, “Indeed, it is difficult to think of any social ill in [Nigeria] that is not traceable to the embezzlement and misappropriation of public funds, particularly as a direct or indirect consequence of the corruption perpetrated by the callous political leadership class since independence”.

Bureaucratic corruption affects healthcare and this is a very old problem both in Nigeria and throughout the formerly colonized world. When Nigeria was incorporated by Imperial Britain, it was conceived of as a repository of natural minerals and riches that could be exported for the benefit of the master race and country. The profits of colonial exploitation are so large they inspire disbelief. For instance, the British Ministry of Food made profits of 11 million pounds sterling in some years, according to Walter Rodney. As Rodney’s seminal text, How Europe Underdeveloped Africa, so clearly explains: this obscene figure of 11 million pounds sterling per annum was the result of artificially low prices set by private capitalist investors in Britain. The British government allowed dummy organizations, like the West African Cocoa Control Board (est. 1938), to lie to and bully African farmers, while pretending to advocate for them. Moreover, farmers were mandated to sell their crops no matter what price they were given. The farmers did not have the might to stand up against the military and political power of the British government. They did not have a choice. They were not economic players in the game, just chess pieces to be thrown around the board. At any rate, 11 million pounds accounts for the profits of just one body, the British Ministry of Food, so we can only imagine the cumulative profits enjoyed by the British Empire.

Whatever the final profits, the people of Nigeria didn’t share in the wealth generated from such exports. The people were simply the machinery of the capitalist endeavor. They were machinery in the sense that the colonial political and economic administration had absolutely no consideration for their physical well-being. Instead, by allowing missionaries to overrun the landmass, it rid the country of traditional doctors and what is now referred to as homeopathic medicine. For all the superstition and abuse that occasionally accompanied it, traditional medicine functioned as a rudimentary healthcare infrastructure across the African continent. Aspects of these so-called primitive practices have real and proven benefits.

For instance, West African medical practice is the foundation for inoculation and vaccination. In fact, when inoculation was introduced in colonial Boston during the 1721 smallpox epidemic, the origins of inoculation were so widely known that it was derided as “African” medicine and “Negroish thinking” in the press. Cotton Mather, who is credited with introducing inoculation into North America, wrote extensively about how a West African-born slave, Onesimus, told him about inoculation practices. After learning from Onesimus, Mather began interviewing other enslaved Africans, who backed up Onesimus’ testimony of being inoculated as children. Mather then tested inoculation on slaves born outside of Africa and, when it proved successful, introduced it to the white population. But as the practice of inoculation became widespread throughout colonial America, and the rest of the West, its origins were conveniently forgotten.

Once the traditional healer was undermined by new religious concepts, Imperial Britain continued to loot the land and exploit the people. Never was there any real investment in an alternative healthcare infrastructure. There are those who quote the 19th century European lie: they brought us civilization; they brought us religion and railways and doctors! But the numbers don’t bear that out. Rodney notes that in the 1930s, the British colonial government maintained a 34-bed hospital for Ibadan when the city had a population of 500,000 people! The colonial government later expanded its medical facilities, but only after pressure from nationalist movements set up by people tired of economic and political exploitation.

It’s obvious that the dearth of medical and healthcare infrastructure was inherited by the national government in the 1960s. With this history in mind, it can be easy to excuse Nigeria and the Nigerian elite. In fact, this is precisely the hope of the Nigerian political and economic elite.

But we can’t let this excuse win the day since the post-1960 era hasn’t seen a marked, continual commitment to the healthcare infrastructure. The initial investment in healthcare wasn’t bad. In fact, as AO Malu, of Benue State Teaching Hospital, points out, when the Ashby Commission on Higher Education recommended the expansion of educational facilities in 1960, the year of Nigeria’s independence, the Medical Faculty at University College Ibadan (then affiliated with the University of London, now the University of Ibadan) was expanded and new medical schools were established in Lagos and in Northern Nigeria. The newly independent government continued to found and support teaching hospitals, particularly in the southwestern and northwestern regions of Nigeria (Malu).

These teaching hospitals were instrumental in educating the vast majority of licensed nurses and doctors in Nigeria. Up until the late 1980s, they were known for their professional teaching quality, their rigor, cleanliness and commitment to medically appropriate technology. There is many a “middle class” Nigerian who can testify to their own birth or treatment in a Nigerian teaching hospital. Graduates in this 25-year span, from 1960 to 1985, also willingly testify to the maintenance of the facilities, which is no small thing since it both reflects and demands pride from the facilities’ users. It also reflects real material investment and demands it as well. But all of these testimonies are historical. The testimonies are about what the teaching hospitals used to be. Neglected by federal and state governments, the hospitals are today decrepit artifacts stuck with decades-old technology. I know one doctor who cried when she visited her alma mater in Rivers State, such was the state of the place, strewn with debris and overrun by rats. Another physician I know refused to discuss her medical school; she stammered, shook her head in anger and walked away. When she returned to the subject, she said only, “It was never, never like that before. The standard has really fallen.”

But these “historical” hospitals are still hospitals. They still admit patients and attempt to treat them; they still admit students and attempt to educate them. Their treatment is curtailed by the lack of technological investment, the deteriorating facilities and the stagnated curriculum that Nigerian medical students are afforded. This is not the doing of some late 19th century Briton. It is the result of the rampant and insidious corruption executed by the political elite and their counterparts in the financial sector. As Professor Ogbeidi notes in his article, citing a 2004 Reuters interview with then anti-graft chief Nuhu Ribadu, “Incontrovertibly, corruption became endemic in the 1990s during the military regimes of Babangida and Abacha, but a culture of impunity spread throughout the political class when democracy returned to Nigeria in 1999. In fact, corruption took over as an engine of the Nigerian society and replaced the rule of law”. In other words, the neglect of healthcare infrastructure is a product of recent and present-day choices that continually disregard the health of the people who are the machinery of the nation.

The teaching hospital model was never capable of, nor adequate for, caring for Africa’s most populous nation. It was a step in the correct direction, but a step that has been halted. As Professor Ogbeidi puts it: “As a consequence of unparalleled and unrivalled corruption in Nigeria, the healthcare delivery system… [has] become comatose and [is] nearing total collapse.”

So what are Nigerians left with? The vast majority of Nigerians who were never able to access teaching hospitals must rely on book doctors and unlicensed and unregulated pharmacies. A book doctor is a person who has learned about the practice of Western medicine solely from books. This book doctor never attended medical school, never sat for a medical certification or license exam and never completed a residency or rotation under the supervision of a more experienced medical practitioner. Book doctors are common in areas outside of the major Nigerian cities. Having been to one myself, I can attest that they are not clandestine operations, but clearly marked, publicly operating enterprises. Neither the federal nor state governments make any attempt to investigate them in the interest of the people.

My experience with the book doctor was fine. He was affable. All the materials I observed were clean and unused. His nurses were well-trained and products of nursing schools. Yet the facility had no electricity from the Nigerian energy grid, no running water and no toilet. (Outside of major Nigerian cities, it is not rare to go two or more months without electricity from the grid, despite the fact that Nigeria sells energy to Togo, Benin, and Niger.) The book doctor instead powered his facility with a generator, and bathroom functions were undertaken in a darkened room at the back of the property. The patients brought their own water.

Despite my benign experience, Nigerians die daily from the inadequate care of book doctors, just as they die from the inadequate healthcare system throughout Nigeria. Death is the fruit of corruption.

The other fruit of corruption is the bankruptcy of Nigeria’s national wealth.

In making adequate healthcare difficult or impossible to access, the political class is making it an absolute necessity for people to seek medical help outside of Nigeria’s borders. This drives those who can afford it to go to African countries like Ghana and South Africa, or even further afield to Europe, India, the Middle East or the Americas for medical care. This is an insane situation for a citizen of an oil-rich country.

The Nigerian government acknowledges that sending medical tourists abroad is a real problem that has cost the country at least US$1 billion – the equivalent of 690 million pounds sterling. This is money that was made in Nigeria but spent elsewhere; money that should be circulating in the Nigerian economy. But a real investment of capital into the construction and maintenance of medical infrastructure would not only stem this but also enrich the country, especially if the construction materials were purchased from Nigerian companies and Nigerians were employed in the labor.

But the same government that is legislating against “medical tourism” is led by President Muhammadu Buhari, who has become the “face of medical tourism.” President Buhari spent 7 weeks, from January to March, in London before offering up a vague explanation about his health. The lack of specificity was an insinuation meant to be understood in the mind of the Nigerian citizen as you know we no get oyibo (white man) medicine na. Buhari left Nigeria for London again in May. When the Nigerian populace, aided by journalists, demanded that the President return and govern after an absence of more than 3 months, the president reluctantly returned. He has refused to say how much money the Nigerian government spent on his almost 5-month stay in London. No matter. The failure of the Nigerian healthcare system is implicit in the president’s long stay in high-priced London, and the unstated, exorbitant price tag is yet another example of political corruption.

This drama, of course, comes after the 2010 death of President Umaru Musa Yar’adua whose 3-month medical stay in Saudi Arabia ended when the Nigerian government sent a delegation to “check on his health.” Yar’adua’s absence was explained to the Nigerian people as medical treatment, but during those 3 months, he was not seen in public and this fueled both rumor and a real leadership crisis in the federal government.

The travels of Yar’adua and Buhari demonstrate in a practical, evidentiary manner that the Nigerian healthcare system has been abandoned by its political elites. They seek their health and medical care elsewhere and as a result, they have left the funding and maintenance of the healthcare infrastructure to the birds.

Yet still the middle class takes the political and financial elite as “leaders” and follows them abroad. They are not leaders; they are elites by virtue of being on top of the capitalistic structure, and they are elitist, believing that only those at the top should have access to what are now called “basic human necessities,” including electricity and running water. If they were not elitist, they wouldn’t rob the country to the detriment of the health and very life of the people.

In going abroad, middle-class Nigerians are increasingly identifying service sectors and medical acumen with the West. This is dangerous because such identification alleviates the pressure to improve the facilities within Nigeria. The determination to go abroad should instead be replaced by the determination to improve the healthcare infrastructure at home.

The portion of the Nigerian middle class that does utilize the healthcare system has little encouragement. Added to the corruption that robs the system is the dearth of physicians who might otherwise provide superior care and demand attention from the political and financial elites. It is not that Nigeria isn’t training medics, but the problems already noted drive them to ply their trade abroad.

A 2013 article by the Foundation for the Advancement of International Medical Education and Research (FAIMER) is titled “Nigerian Medical School Graduates and the US Physician Workforce” and the title says it all. Despite the corruption and deteriorating conditions, Nigerian-educated medical professionals are skilled physicians who are able to practice throughout the world. This is good for them but bad for Nigeria.

According to the statistics of the Educational Commission for Foreign Medical Graduates, at least 4,300 Nigerian medical graduates were certified to practice in the United States between 1980 and 2012. That is 4,300 doctors who are not practicing in Nigeria. What would Nigeria be like with 4,300 more doctors? Before answering, consider that this is only one certification program for doctors in the United States and Canada; it does not account for the medical graduates who have emigrated to mainland Europe, the UK, Australia, the Caribbean nations, India or the increasingly alluring South American republic of Brazil. Now consider that the President of the Healthcare Federation of Nigeria thinks that the correct estimate of Nigerian doctors practicing abroad is closer to 37,000. This is a real exodus with dangerous ramifications.

With the flight of medical graduates, Nigeria must educate another person to become part of the healthcare infrastructure. With the flight of medical graduates, Nigeria loses another bloc of people capable of putting pressure on the political class to fix the healthcare infrastructure. With the flight of medical graduates, Nigeria loses people who might create real national wealth by buying Nigerian-made goods and supporting local industry, rather than the cheaply made imports – the shine shine – that litter the market stalls of the subsistence worker and the Instagram pages of the so-called middle class. With the flight of the medical graduate, Nigeria is left stagnant.

It is this stagnant Nigeria that earns a Nigerian the ridicule of his countrymen. At home, everyone (or so it seems) wants to travel abroad. Abroad, home is just a green-and-white outfit, a party theme on October 1st. Healthcare in Nigeria is a fatal casualty of continued political corruption. Medical tourism will cease only after the government has demonstrated sustained and responsible investment and maintenance of healthcare schools and facilities. Until then, the middle class will follow its political and economic elites in seeking medical treatment abroad; they will spend their hard-earned money in other countries and continue to wonder why death and bankruptcy follow them home to Nigeria.

Maurine Ogbaa is a Nigerian writer based in the USA.

THE TIES THAT MAY NEVER BIND: Chasing the mirage of SPLM reunification

The Sudan People’s Liberation Movement/Army (SPLM/A), a southern Sudan-based national liberation movement, sprouted in 1983 in the Sudanese and regional political theatre at the height of the Cold War that witnessed ideological and superpower rivalry in the Horn of Africa and the Middle East. Many South Sudanese and people on the political left received its declared objective of constructing a united socialist “new Sudan” with a pinch of salt. A handful of highly educated individuals formed its officer corps but the bulk of the army, the SPLA, was drawn not from an industrial working class but from sedentary and agro-pastoral communities – unlikely material for building socialism.

However, the united socialist new Sudan disappeared imperceptibly from the SPLM/A written and oral literature with the collapse of the Soviet Union and the world socialist system before the turn of the century. This led to an ideological shift in the SPLM/A system. This shift coincided with the demand by the people of South Sudan to exercise their inalienable right to self-determination.

The war of national liberation ended in a political compromise: the comprehensive peace agreement (CPA), which the SPLM and National Congress Party (NCP), representing the government of Sudan, spent eleven years negotiating in Nairobi, Machakos and finally Naivasha under the auspices of two successive Kenyan presidents. Dr. John Garang de Mabior and Sudan’s Vice President Ustaz Ali Osman Mohammed Taha signed the peace agreement in Nairobi on 9 January 2005 in a colourful ceremony presided over by President Mwai Kibaki of Kenya and witnessed by President Yoweri Museveni of Uganda, Prime Minister Meles Zenawi of Ethiopia, President Omar al Bashir of Sudan and Colin Powell, the US Secretary of State, among other African and world leaders.

In the second edition of “The politics of liberation in South Sudan: An insider’s view”, I posed the question: “What is the SPLM and where is it?” I was trying to provoke a debate in the SPLM/A, which had since 1983 evolved like Siamese twins who are conjoined at the head and who cannot be separated surgically because it would lead to their death. There was no clear separation of functions between the SPLA, the military organ of the liberation movement, and the SPLM, its political organ. The two subsumed and eclipsed each other’s respective functions, blurring and indeed distorting internal political and democratic development to prevent the emergence of a genuine and authentic national liberation movement.

The lack of an ideology and the absence of organisation and institutions in a national liberation movement can negatively influence its development and the relationship between its members and the masses of the people, as well as the nature of the resultant state. The state in South Sudan, in its current disposition regardless of the international recognition it obtains, is a façade. The lack of political organisation and the absence of democratic institutions and instruments of public power resulted in the personalisation of the SPLM/A’s power and public authority. These were the principal drivers of the internal contradictions, splits and factionalism within the SPLM/A.

The SPLM/A was so informal an organisation that only Garang could manage it and keep it moving. His sudden demise in 2005 released the negative forces hitherto kept under a tight lid by military authoritarianism. The power transfer to Commander Salva Kiir Mayardit went off without a glitch. Nevertheless, Kiir’s leadership style, unlike that of Garang, enabled the emergence of “power-centres” around his presidency of the Government of South Sudan. The interim period, before the carrying out of the referendum on self-determination, witnessed internal power struggles among the SPLM’s first- and second-line leaders characterised by intrigues, short-changing and an upsurge in ethnic nationalism, as well as the emergence of ethnic associations and caucuses in the executive and legislative branches of government, widespread corruption in government and society, and insecurity in the form of ubiquitous ethnic conflicts and localised civil wars.

The independence of South Sudan found the SPLM (South Sudan’s governing party) in a state of acute dysfunctionality due to internal power wrangles. The leaders miserably failed to separate and transform the SPLM into a mass political party guided by democratic principles, a constitution and a political programme. Its internal situation was toxic and ready to implode. The pressure lid that tightly compressed its internal contradictions had suddenly ruptured with the death of Garang. It was only the general concern about secession from the Sudan among the majority of the Southern Sudanese that sustained the unstable calm, enabling the orderly conduct of the referendum on self-determination.

The structural drivers of SPLM/A internal splits

The internal and external socio-political conditions under which the SPLM/A formed in July 1983 laid the foundations of its perpetual internal instability. Without going into details: the failure to unify, through the agency of the Derg, the remnants of the mutinies of elements of the Sudan Armed Forces (SAF) in Bor (16 May) and Ayod (6 June) with the Anya-nya 2 (formed by former Anya-nya officers and soldiers who had been absorbed into the SAF following the 1972 Addis Ababa Agreement and who rebelled in Akobo in February 1976) defined the militarist character of the nascent movement. When the Anya-nya 2 returned to the liberation movement in 1988, no structural changes had occurred within the SPLM/A, particularly at the leadership level. Like a dinosaur, the SPLM had a tiny head resting on a huge body that it carried with immense difficulty. The suffocating military environment resulted in the 1991 Nasir Declaration that split the movement, leading to internecine fighting along ethnic contours. By the end of 2003, when Dr. Riek Machar and Dr. Lam Akol, who had authored the declaration, returned to the fold, the SPLM/A remained structurally unchanged.

The institutions created by the SPLM First National Convention in 1994, like the National Liberation Council (NLC) that was established to perform legislative functions and the National Executive Committee (NEC) that was to exercise executive functions of the SPLM/A, had disappeared into oblivion. The SPLM/A’s power and public authority had begun to be centralised, concentrated and personified in Garang, its Chairman and Commander-in-Chief. The return to the SPLM/A of Machar and Akol on the eve of the peace agreement with Khartoum, coupled with Machar’s ambition to become Number One in the SPLM/A hierarchy, heightened rumour-mongering in the SPLM/A aimed at ousting Salva Kiir as deputy Chairman and the SPLA’s Chief of General Staff. Kiir, who had stayed loyal to Garang throughout the turbulent years, would not take the rumours lying down. This triggered what came to be known in the SPLM/A as the Yei Crisis, which in November 2004 pitted Kiir against his boss.

Although the Yei crisis was an internal, structurally driven SPLM/A matter, its ethnic overtones and provincial contours were prominent, feeding into a general dissatisfaction with Garang in Bahr el Ghazal (where he had in the course of time differed with, split from and executed several leaders), spearheaded by prominent individuals linked to the National Islamic Front regime in Khartoum. A conference called in Rumbek to resolve the crisis addressed only its symptoms, not its structural underpinnings. The conference was typical of SPLM/A meetings, which always ended up fudging the substantive issues under the canopy of “opening a new page”. As a result, the attempts to resolve the crisis were frustrated, creating conditions for the resurgence or eruption of another crisis along the same lines.

The splits in the SPLM/A have always been more political and personal than ideological, hence they transcended and permeated into the ethnic and provincial domains, acquiring different dimensions and dynamics. The splits in 1983/4 and 1991 quickly acquired ethnic dimensions because of the lack of an ideologically-driven agenda, although the commanders in Nasir had raised the right of the people of southern Sudan to exercise self-determination. However, the question of power and who wielded it was the common denominator in all these splits. It was the perception of power as a personal birthright rather than an institutional assignment that set the patterns for achieving it. In a militarist environment like the SPLM/A, the pattern for capturing and holding onto power was inevitably violent.

The SPLM split and the civil war

In the absence of democratic institutions and instruments of power and public authority, the SPLM/A became a huge informal patrimonial network of political patronage. This system became more pronounced after Garang’s death, the rise of Kiir within the SPLM/A and the independence of South Sudan. The lack of a political programme to manage the social and economic development of the new state of South Sudan rendered the interim period (2005-2011) what the SPLM leaders cynically called “payback time”: they indulged in self-aggrandisement, thanks to the easy availability of oil revenues. The nexus between personal power and wealth, accumulated in a primitive fashion without consideration for law and order, produced a life-and-death struggle.

The patrimonial political patronage system that the SPLM leaders controlled accentuated and amplified the SPLM’s internal contradictions. The personalised power struggle became a fireball in December 2013, barely three years into the independence and birth of the Republic of South Sudan. The resultant civil war was initially viewed by many people as a war between Kiir and Machar (and by extension a war between the Dinka and the Nuer) but it was in fact a reflection of the SPLM’s failure to address its structurally-driven internal political contradictions.

The SPLM reunification

In all these SPLM/A disruptions, eruptions or implosions, the underlying contradictions have always been buried under talk of a “return to the fold” or “reconciliation and peace”, which has left them intact and ready to rekindle. In December 2013, the eruption of violence, and its scale and ferocity, caught the IGAD region and the whole world unawares. South Sudan had not completely emerged from the effects of the 21-year war of liberation and from the border war with the Sudan (2012), and so nobody could understand why a people who had endured suffering for that long would go to war again. Thus, the interventions to help resolve the conflict were frenetic but superficial. Nobody cared to seek a scientific understanding of the conflict’s causes.

The extraordinary summit of IGAD Heads of State and Government, held in Nairobi on 27 December 2013, resolved to bring the warring parties, namely the Government of the Republic of South Sudan and the rebel movement christened the Sudan People’s Liberation Movement/Army in Opposition [SPLM/A (IO)], to the negotiating table to thrash out their differences and reach a peace agreement. The United Nations Mission in South Sudan (UNMISS) became the contact between Machar and the IGAD Special Envoys to South Sudan. The negotiations began in Addis Ababa.

The ruling parties in Ethiopia (EPRDF) and South Africa (ANC) came up with a joint initiative, which aimed at resolving the SPLM’s internal contradictions that triggered and drove the civil war. It is worth mentioning that the ANC and the Norwegian Labour Party had earlier, before the eruption of the violence, tried to help the SPLM leadership to overcome its differences, which had been triggered by rumours that Salva Kiir had decided not to contest for the presidency come 2015. President Kiir reacted to the rumours in a manner similar to somebody who sets his house on fire to treat bug-infested pieces of furniture.

As if not sure that the SPLM’s 3rd National Convention, scheduled for May 2013, would return him as the Party Chairman and hence the SPLM’s flag bearer for the presidential elections in April 2015, Kiir blocked the democratic process of SPLM state congresses and the National Convention, suspended the SPLM Secretary General and paralysed all SPLM political functions. These actions halted the political process towards the presidential and general elections for national, state and county governments. He also brushed aside any reconciliatory talks with Machar, Pagan Amun Okiech or Mama Rebecca Nyandeng Garang, who had shown interest in contesting the position of SPLM Chairman.

The ANC-EPRDF initiative was the right approach. These were the SPLM’s first-row leaders and it was absolutely imperative to reconcile and unify their ranks to alleviate the suffering of the people. But the eruption of violence and the ethnicisation of the conflict had rendered the task of reconciliation impossible. The grassroots opinion solicited in 2012, before the war, had indicated widespread disenchantment among the masses with the SPLM as a ruling party. (Later, people would quip that when the SPLM leaders split they killed the people, and when they united they stole the people’s money.)

However, Machar turned down the initiative in favour of a full-blown peace negotiation under IGAD mediation, suggesting that the conflict and war were no longer an affair of the SPLM. In September 2014, on the sidelines of the UN General Assembly, President Kiir met the Tanzanian President, Jakaya Kikwete, and requested his indulgence and assistance to reunite the feuding SPLM factions, namely, the SPLM in government (SPLM-IG), the SPLM in opposition (SPLM-IO) and the SPLM former political detainees (FPDs). President Kikwete obliged and the process kicked off in November 2014 under the auspices of Chama Cha Mapinduzi (CCM). On 21 January 2015, the three factional heads – Kiir [SPLM (IG)], Machar [SPLM/A (IO)] and Okiech [SPLM (FPDs)] – signed the SPLM Reunification Agreement in a ceremony in Arusha witnessed by President Kikwete, President Yoweri Museveni and President Uhuru Kenyatta, as well as then Deputy President of South Africa, Cyril Ramaphosa.

The impact of the SPLM reunification agreement on the IGAD peace process in South Sudan was not immediately obvious, given that the civil war still raged throughout South Sudan and that the people had become weary of the SPLM as a ruling party. The SPLM reunification agreement was supposed to moderate and ease the tension between the SPLM leaders in order to accelerate and facilitate the sealing of a peace agreement and return the country to normalcy. But the motivations of the SPLM leaders crossed rather than aligned with each other. The SPLM/A (IO) fell off the reunification process. The guarantors of the reunification agreement, CCM and ANC, proceeded with the two remaining factions to implement the Arusha agreement on SPLM reunification. They eventually consummated the process with the reinstatement of the comrades to their respective positions: Okiech as the SPLM Secretary General, and Deng Alor, John Luk and Kosti Manibe to the SPLM Political Bureau.

However, once disrupted, relations based on social considerations rather than principles of politics and ideology rarely mend. It did not take long before the four former political detainees stormed out of Juba and did not return till after the signing of the Agreement on the Resolution of the Conflict in South Sudan (ARCISS) in August 2015. The SPLM reunification process had flopped.

The Entebbe and Cairo meetings

I headed the SPLM/A (IO) delegation to the reunification talks in Arusha. In a report to the SPLM/A (IO) NLC meeting in Pagak in December 2014, I said that the SPLM reunification was like chasing a mirage. I still believe it will never take place, given the political dynamics since the fighting in J1, the presidential compound in Juba, which rekindled the war in 2016.

The IGAD-sponsored High-level Revitalisation Forum (HLRF) process has outpaced the SPLM reunification in a manner that confirms my statement above that the SPLM factions will never unite; the ties will never bind. The former political detainees who were enthusiastic about reunification seem to have had second thoughts when they pursued the project of a UN trusteeship over South Sudan, which they later changed to a proposal to exclude Kiir and Machar from participating in a Transitional Government of National Unity (TGoNU) made up of technocrats. The failure of the HLRF to achieve the desired peace agreement prompted the IGAD Council of Ministers to propose a face-to-face meeting between Kiir and his principal nemesis, Machar, under the auspices of the Ethiopian Prime Minister, Dr. Abiy Ahmed. This face-to-face meeting was modelled on the “handshake” between President Uhuru Kenyatta and opposition leader Raila Odinga that had eased the political standoff in Kenya following the disputed 2017 elections.

The Kiir-Machar face-to-face meeting took place on the sidelines of the 32nd Extra-Ordinary Assembly of the IGAD Heads of State and Government. President Kiir categorically rejected the idea of working with Machar, who was flown in from Pretoria in South Africa where he had been kept under house arrest since November 2016. Reflecting the level of distrust between the two leaders, the failure of the meeting prompted IGAD to mandate the Sudanese Head of State, President Omer Hassan Ahmed al Bashir, to facilitate a second round.

This mandate was given ostensibly in the belief that Bashir might prevail on the two antagonists, given their relations in the not-too-distant past. The aim of this round was to herald a discussion between the South Sudanese leaders to resolve outstanding issues of governance and security arrangements, taking into consideration the measures proposed in the revised IGAD Council of Ministers’ Bridging Proposal on the Revitalisation of ARCISS, and to rehabilitate South Sudan’s economy through bilateral cooperation between the Republic of South Sudan and the Republic of the Sudan. President Museveni was conspicuously absent from the Addis Ababa summit. Many people believed this was a loud register of his disapproval of the Kiir-Machar face-to-face meeting. Museveni has never disguised his contempt for Machar, nor his support for Kiir. On the eve of Kiir’s travel to Addis Ababa, Museveni sent his Deputy Prime Minister, Moses Ali, to Juba with a letter to Kiir; perhaps that was his desperate last attempt to torpedo the talks.

In a surprising twist in this intricate diplomatic and political maze, the transfer of the process to Khartoum triggered regional kinetic energy. Museveni flew to Khartoum on 25 June to witness the Kiir-Machar face-to-face meeting, now under the auspices of President Bashir. This unexpected convergence in Khartoum of Museveni and Kiir was not so much about the face-to-face meeting as about the rehabilitation of South Sudan’s oil fields and the Sudanese involvement in their protection, as echoed in the Khartoum Declaration of Agreement (KDA) signed in Khartoum on 26 June between Kiir, Machar, Gabriel Changson (SSOA), Deng Alor (FPDs) and Peter Manyen (Other Political Parties). Only one thing – the prospect of renewed flow of South Sudan’s oil to international markets – motivated both Bashir and Museveni in the scheme to rehabilitate South Sudan’s economy. This explains the Bashir-Museveni rapprochement and the new-found friendship between the two erstwhile hostile leaders.

Thereafter, the South Sudan government and the opposition groups signed the Agreement on Outstanding Issues of Security Arrangements in Khartoum on Friday, 6 July 2018. The process moved to Kampala on Saturday, 7 July, where Salva Kiir, Riek Machar and the other political opposition signed the agreement on governance. On 10 July, the two agreements were presented to President Kenyatta, marking the consummation of the peace agreement and the end of the South Sudan conflict. Indeed, the HLRF had outpaced and overtaken the SPLM reunification.

The intervention of President Omer al Bashir, on account of Sudan’s national security and economic interests, rescued the IGAD peace process from collapse and embarrassment. The clever involvement of President Museveni was necessary to allay Kiir’s fears and build confidence in Sudan’s mediation, although Khartoum still has an axe to grind with South Sudan over the Abyei border demarcation and many other issues that have not been resolved in the post-referendum process. The success of the IGAD process and the failure of the SPLM reunification is a diplomatic slap in the face of CCM and ANC, the two parties that had laboured to bring the SPLM factions together.

However, the agenda for the people of South Sudan is not SPLM reunification but the political process of socio-economic rehabilitation to translate the signed agreements, which are essentially political compromises, into practical plans and programmes. South Sudan’s leaders have to act strategically looking into the future rather than tactically to win elections at the end of the transitional period.

NAMIBIA’S BIG CAMPAIGN: Why direct cash transfers can still change the world

In 2008, the Namibian government launched a pilot universal basic income project known as the Basic Income Grant (BIG). The results were remarkable: crime rates dropped by more than one-third and the number of malnourished children almost halved. Within just 12 months of its launch, the BIG project proved more than able to contribute actively to achieving the Millennium Development Goals set by the United Nations (since succeeded by the Sustainable Development Goals). It was a tremendous opportunity to set the foundation for a new age of prosperity for the entire African continent, and it served as a paradigm around which other similarly successful programmes have been modeled.

Sadly, despite its initial success, the BIG campaign was never implemented on a national scale, and the project was eventually discontinued, never to be heard of again. Since then, however, many things have changed, not just in Namibia and in Africa, but in the entire world. The latest advancements in technology (namely, the remarkable leaps forward in automation and artificial intelligence) are forcing many governments to face a new issue – that machines are quickly becoming better than humans at performing many jobs. Artificial intelligence (AI) will soon replace many human workers, leading to a widespread fear that massive unemployment could bring many highly industrialised countries to their knees.

Universal basic income (UBI) is regarded by many as a potential solution, and the leaders of the most developed nations are looking at past practical examples of such policies. In this regard, the Namibian BIG project might represent an archetype that could spearhead humanity towards the next step of its evolution. Although the chances of seeing it implemented again in Namibia on a larger scale are very slim, it can still be a fundamental lesson for other countries that look at UBI as a fundamental weapon in the war against poverty.

BIG: A brief history

According to the World Bank, in 1991, whites, who comprised about 5% of the total population in Namibia, controlled over 70% of the country’s wealth. Today, more than 25 years after independence, Namibia is still a country plagued with deep social, ethnic and economic inequalities and extreme poverty. Much of the country’s political agenda has focused on reducing income inequalities and poverty levels, and, in truth, much has been done in the last two decades. In 2016, Namibia’s Gini coefficient (a globally accepted standard for measuring inequality in wealth distribution) stood at 0.572, a relatively bad figure, as a coefficient of 0 represents a perfectly equitable society while a coefficient of 1 represents a completely unequal one.
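For readers who want the measure in concrete terms, here is a minimal sketch of how a Gini coefficient can be computed, assuming the standard textbook formulation rather than the World Bank’s exact estimation procedure; the function and the toy numbers are illustrative only.

```python
# Illustrative sketch (not from the article): computing a Gini coefficient
# from individual wealth figures, using the standard sorted-values formula.
def gini(values):
    """Return the Gini coefficient of a list of non-negative values.

    0.0 means perfect equality; values approaching 1.0 mean that
    one person holds nearly all the wealth.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        raise ValueError("need at least one positive value")
    # G = sum_i (2i - n - 1) * x_i / (n * total), with x sorted and i = 1..n
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

print(gini([1, 1, 1, 1]))    # 0.0  -> perfectly equitable
print(gini([0, 0, 0, 100]))  # 0.75 -> extreme concentration (tends to 1.0 as n grows)
```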

However, back in 2002, Namibia’s Gini coefficient was even higher, reaching 0.633. The Namibia Tax Consortium (NAMTAX) was appointed by the government to find a sustainable solution to fuel the nation’s economic growth. Too many African countries, in fact, lean far too heavily on the help of more developed countries or on non-governmental organisations (NGOs), but it is common knowledge that their policies do not always help to achieve development goals in the long term. Even worse, many bona fide offers of aid often contribute to widening the already unbridgeable gap between Western societies and the poorest countries.

Eventually, the Consortium published a report stating that “by far the best method of addressing poverty and inequality would be a universal income grant.” The idea was eventually put into practice as the Basic Income Grant (BIG), the first universal cash-transfer pilot project in the world. In 2005, a coalition of churches, trade unions, and NGOs joined forces to provide each Namibian with a cash grant of N$100 (approx. US$7) to be paid monthly as a right. The fund would cover all Namibians, regardless of their socio-economic status, from the day of their birth until they became eligible for the existing universal State Old Age Pension of N$450. According to the Consortium, the new tax system would make the BIG affordable, amounting to just 3% of the country’s GDP. Debate and lobbying went on for another two years until a pilot project was finally approved to test the programme in practice. In January 2008, the BIG pilot programme was finally launched in the small village of Otjivero.
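To make the “3% of GDP” claim concrete, here is a back-of-envelope sketch of the gross cost of such a grant. The population, pensioner and GDP figures below are rough assumptions for illustration, not figures from the NAMTAX report.

```python
# Back-of-envelope check (not from the article) of the affordability claim.
MONTHLY_GRANT_NAD = 100        # N$100 per person per month, as proposed
POPULATION = 2_100_000         # approximate 2007 population of Namibia (assumption)
PENSIONERS = 140_000           # roughly those already on the old-age pension (assumption)
GDP_NAD = 62e9                 # approximate 2007 GDP in Namibia dollars (assumption)

recipients = POPULATION - PENSIONERS
annual_cost = recipients * MONTHLY_GRANT_NAD * 12
print(f"Gross annual cost: N${annual_cost / 1e9:.1f} billion, "
      f"or {annual_cost / GDP_NAD:.1%} of GDP")
# Prints roughly "N$2.4 billion, or 3.8% of GDP"; the Consortium's ~3% net
# figure assumes part of the gross cost is recovered from higher earners
# through the reformed tax system.
```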

The amazing positive effects of the Otjivero experiment

About 1,200 people resided in Otjivero, a small settlement of retrenched former farm workers who lived in abject poverty. The Namibian government chose this rural settlement to monitor the impact of the BIG project over a two-year period until December 2009, and appointed a team of local and international researchers to document the situation prior to and after the implementation of the programme.

After less than one year, the population of Otjivero reaped the benefits of this project with amazing results. Both children and adults enjoyed a substantial improvement in their quality of life. Child malnutrition levels in the village dropped in just six months from 42% to 17%. Parents finally had enough money to pay school fees as well as the equipment needed by their kids, such as stationery and school uniforms. Schools had more money to purchase teaching material for the students, and dropout rates fell from between 30% and 40% to a mere 5%.

The introduction of the BIG grants helped the community grow and thrive, and allowed people to focus on more productive jobs. Many young women became financially independent without having to engage in transactional sex. A substantial amount of money was spent on starting new small enterprises and engaging in more productive activities that fostered local economic development. As a direct consequence, economic and poverty-related crimes fell by over 60%.

The sanitary conditions of the local population improved significantly, with five times more people being able to afford treatment at the settlement’s health clinic and, even more importantly, to buy food. Before the introduction of the BIG, most HIV-positive residents faced numerous difficulties in accessing antiretroviral (ARV) therapy due to poverty and the lack of proper means of transportation. The project helped them to afford better nutrition and more reliable transport to get their medications. Even critics who argued that free money would lead to more alcoholism were proved wrong; the community itself established a committee to curb alcoholism.

Some years later, during the 2012-2013 summer months, Namibia was struck by one of the worst droughts on record, leaving over 755,000 people (36% of the population) exposed to starvation in the subsequent years. After the President declared a state of emergency, the three Lutheran Churches in Namibia implemented a cash grant programme modeled on the BIG pilot in Otjivero. The grant provided approximately 6,000 people with enough money to buy the food they needed to survive. The Namibians reached by the grant spent about 60% of the money received to ensure food security for their families. However, it is interesting to note that people used the remaining 40% to meet their other fundamental needs, such as covering healthcare expenses, paying for their children’s schooling and even investing in their farming equipment. Once again, the basic income project brought direct positive changes to the quality of life of those who received it, and to the local economies as well.

The initial findings vastly exceeded the expectations of the BIG coalition, and were encouraging enough to suggest that the introduction of the project on a national scale was possible. Some critics tried to depict these results as unscientific and unreliable, casting a shadow of doubt over the whole project. However, the critical analysis, published by the now defunct Namibia Economic Policy Research Unit (NEPRU), was itself later found to be methodologically flawed. Wrong and grossly inflated figures about the projected costs of implementing the programme at the national level started circulating and, even after NEPRU retracted its statements, they kept circulating in the media. Some local politicians joined this (rather questionable) wave of criticism and argued that the BIG was a less effective strategy than other extremely generic attempts at “creating more jobs”, ignoring the fundamental strength of the project – its ability to emancipate the poor financially.

Eventually, after the Namibian president, Hifikepunye Pohamba, officially took a position against the grant in 2010, the programme was discontinued, if not forgotten. In 2015, the Minister of Poverty Eradication and Social Welfare, Zephania Kameeta, stated that the government was once again evaluating the implementation of the BIG as one of the key elements of its strategy in the war against poverty. Sadly, the efforts of the former bishop and relentless advocate of UBI were swept away just one year later, when the BIG project was set aside and replaced by a much more traditional, growth-based economic programme known as the “Harambee Prosperity Plan”.

Despite some recent talk about the potential positive effects of the BIG, universal income doesn’t seem to be part of Namibia’s foreseeable future. However, it has already proved to be an unexpectedly efficient tool for bringing prosperity to the Namibian population. Many other countries around the world can still learn from the remarkable results it brought about.

Lessons for other countries

The industrialised world is facing its own share of problems, and poverty has recently resurfaced even in the richest countries, where its existence had long been forgotten. A “fourth world” made up of vast numbers of immigrants, refugees, and homeless people is swelling the ranks of the invisible new poor who are systematically exploited even in the most highly industrialised Western democracies. Today, one-third of American families struggle to afford food, shelter or medical care, and in some European countries, such as Bulgaria, Romania, and Greece, more than one-third of the population is at risk of poverty or social exclusion.

And things are about to get even nastier. Automation, robotics and the never-ending technological race are raising serious issues, such as the ethical consequences of replacing some human professions with AI. A recent research study estimated that upcoming technological advancements are putting a huge proportion of jobs at risk. The numbers are absurdly high – up to 50% in the United States, 69% in India, 77% in China, 80% in Nepal, and 88% in Ethiopia. Installing a robot in place of a human worker is becoming ever cheaper, and the current AI revolution is making machines better than humans at almost everything (including thinking). If even the strongest economies are already on the verge of social failure, how can we brace ourselves for a future where machines strip a huge proportion of the population of their jobs?

Some, such as Elon Musk, Mark Zuckerberg, Richard Branson and Bill Gates, have become advocates of UBI as a solution to guarantee social stability. If fewer humans are needed to do the same jobs, it doesn’t mean that fewer humans have the right to live a quality life they can truly enjoy. The Namibian BIG project eventually failed, but not for lack of merit. It was ended by those who were too short-sighted to understand its full potential. It was a great idea, but perhaps just ahead of its time. However, the apparently small experiment started ten years ago in a small African village could be the first step towards a better world.

Namibia taught us one simple yet extremely important lesson – that UBI is not just viable and absolutely doable, it is one of the most cost-effective ways to stave off poverty at all levels. It can help people become more productive, more creative, more able to focus on the things that matter, exactly as in the case of Otjivero’s residents. It is an extraordinary force that could drive humanity forward into a new era of equality and social sustainability.

JOBS, SKILLS AND INDUSTRY 4.0: Rethinking the Value Proposition of University Education

In my last feature, I wrote on the six capacity challenges facing African universities: institutional supply, resources, faculty, research, outputs, and leadership. In this essay, I focus on one critical aspect of the outputs of our universities, namely, the employability of our graduates. To be sure, universities do not exist simply for economic reasons, for return on investment, or as vocational enterprises. They also serve as powerful centers for contemplation and the generation of new knowledges, for the cultivation of enlightened citizenship, as crucibles for forging inclusive, integrated, and innovative societies, and as purveyors, at their best, of cultures of civility, ethical values, and shared well-being.

Nevertheless, the fact remains that higher education is prized for its capacity to provide its beneficiaries with jobs and professional careers. Thus, employability is at the heart of the value proposition of university education; it is its most compelling promise and its most unforgiving performance indicator. The evidence across Africa, indeed in many parts of the world, is quite troubling: mismatches between the quality of graduates and the needs of the economy persist, and in some cases appear to be growing. This often results in graduate underemployment and unemployment.

The Employability Challenge

There are two powerful mega trends that will determine Africa’s development trajectory in the 21st century. The first is the continent’s youth bulge, and the second the changing nature of work. Employability is the nexus between the two, the thread that will weave or unravel the fabric of the continent’s future, enabling it to achieve or abort the enduring historic and humanistic project for development, democracy, and self-determination.

As we all know, Africa’s youth population is exploding. This promises to propel the continent either towards the demographic dividend of hosting the world’s largest and most dynamic labor force or towards the demographic disaster of rampant insecurity and instability fueled by hordes of ill-educated and unemployable youths. According to United Nations data, in 2017 the continent held 16.64% (1.26 billion) of the world’s population, a share slated to rise, on current trends, to 19.93% (1.70 billion) in 2030, 25.87% (2.53 billion) in 2050, and 39.95% (4.47 billion) in 2100.
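As a quick sanity check on these figures (my illustration, not part of the UN data), dividing each African population figure by its share of the world total recovers the implied world population for that year: roughly 7.6 billion in 2017, rising to about 11.2 billion in 2100, in line with the UN’s own world projections. A minimal Python sketch of the arithmetic:

# Sanity check: Africa's projected population divided by its share of the
# world total gives the implied world population for each year.
shares = {
    2017: (0.1664, 1.26e9),
    2030: (0.1993, 1.70e9),
    2050: (0.2587, 2.53e9),
    2100: (0.3995, 4.47e9),
}
for year, (share, africa_pop) in shares.items():
    world_pop = africa_pop / share  # implied world total
    print(f"{year}: Africa {africa_pop / 1e9:.2f}bn of ~{world_pop / 1e9:.2f}bn worldwide")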

The African Development Bank succinctly captures the challenge and opportunity facing the continent: “Youth are Africa’s greatest asset, but this asset remains untapped due to high unemployment. Africa’s youth population is rapidly growing and expected to double to over 850 million by 2050. The potential benefits of Africa’s youth population are unrealized as two-thirds of non-student youth are unemployed, discouraged, or only vulnerably employed despite gains in education access over the past several decades.”

Thus, the youth bulge will turn out to be a blessing or a curse depending on the employability skills imparted to young people by our educational institutions, including universities. Across Africa in 2017, children under the age of 15 accounted for 41% of the population and those aged 15 to 24 for another 19%. While African economies have been growing, the rate of growth is not fast enough to absorb the masses of young people seeking gainful employment. Since 2000, employment has grown at an average rate of 3% a year. Africa needs to at least double this rate to significantly reduce poverty and raise general standards of living for its working people.
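The arithmetic of compounding shows what is at stake in doubling that rate: at 3% a year, employment doubles only about every 23 years, while the youth population alone is projected to double by 2050; at 6%, the doubling time falls to roughly 12 years. A minimal sketch of the calculation (an illustration of the compounding, not a projection from the sources cited above):

import math

def doubling_time(rate):
    # Years for a quantity to double at a compound annual growth rate.
    return math.log(2) / math.log(1 + rate)

# The 3% figure is the post-2000 average cited above; 6% is the doubled rate.
for rate in (0.03, 0.06):
    print(f"At {rate:.0%} a year, employment doubles in ~{doubling_time(rate):.0f} years")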

Not surprisingly, despite some improvements over the past two decades, the employment indicators for Africa remain comparatively unsatisfactory. For example, International Labor Organization data shows that in 2017 the unemployment rate in Africa was 7.9%, against a world average of 5.6%; the vulnerable employment rate was 66.0% against 42.5%; the extreme working poverty rate was 31.9% against 11.2%; and the moderate working poverty rate was 23.6% against 16.0%.

This data underscores the fact that much of the growth in employment in many African countries is in the informal sector, where incomes tend to be low and working conditions poor. In sectoral terms, there appears to be a structural decline in agricultural and manufacturing employment and a rise in service-sector jobs. Yet in many African countries both the declining and the rising sectors are characterised by a high incidence of vulnerable, informal, and part-time jobs.

The structural shifts in employment dynamics across much of Africa differ considerably from the historical path traversed by the developed countries. But the latter, too, are experiencing challenges of their own as the so-called fourth industrial revolution unleashes its massive and unpredictable transformations. In fact, the issue of graduate employability, as discussed in the next section, is not a monopoly of universities in Africa and other parts of the Global South. It is also exercising the minds of educators, governments, and employers in the Global North.

The reason is simple: the world economy is undergoing major structural changes, which are evident everywhere even if their manifestations and intensity vary across regions and countries. Deeply integrated as Africa is into the globalized world economy, the continent’s economies face double jeopardy. They are simultaneously confronting and navigating both the asymmetrical legacies of the previous industrial revolutions and the unfolding revolution of digital automation, artificial intelligence, the internet of things, biotechnology, nanotechnology, robotics, and so on, in which the old boundaries of work, production, social life, and even the meaning of being human are rapidly eroding.

The analysis above should make it clear that employability cannot be reduced to employment. Employability entails the acquisition of knowledge, skills, and attributes, in short, capabilities to pursue a productive and meaningful life. To quote an influential report by the British Council: “Employability requires technical skills, job-specific and generic cognitive attributes, but also a range of other qualities including communication, empathy, intercultural awareness and so forth…. Such a perspective guards against a reductive ‘skills gap’ diagnosis of the problems of graduate unemployment.” The challenge for universities, then, is the extent to which they are providing an education that is holistic, one that provides subject and technical knowledges, experiential learning opportunities, liberal arts competencies, and soft and lifelong learning skills.

Deeply integrated as Africa is into the globalized world economy, the continent’s economies face double jeopardy. They are simultaneously confronting and navigating both the asymmetrical legacies of the previous industrial revolutions and the unfolding revolution of digital automation, artificial intelligence, the internet of things, biotechnology, nanotechnology, robotics, and so on, in which the old boundaries of work, production, social life, and even the meaning of being human are rapidly eroding.

But in addition to the attributes, values, and social networks acquired and developed by an individual in a university, employability depends on the wider socio-economic and political context. Employability thrives in societies committed to the pursuit of inclusive development. This entails, to quote the report again, “a fair distribution of the benefits of development (economic and otherwise) across the population, and allows equitable access to valued opportunities. Second, while upholding equality of all before the law and in terms of social welfare, it also recognizes and values social diversity. Third, it engages individuals and communities in the task of deciding the shape that society will take, through the democratic participation of all segments of society.”

In short, employability refers to the provision and acquisition, in the words of an employability study undertaken at my university, USIU-Africa in 2017, “of skills necessary to undertake self-employment opportunities, creation of innovative opportunities as well as acquiring and maintaining salaried employment. It is the capacity to function successfully in a role and be able to move between occupations…. employability skills can be gained in and out of the classroom and depend also on the quality of education gained by the individuals before entry into the university. As such the role of the university is to provide a conducive environment and undertake deliberate measures to ensure that students acquire these skills within their period of study.”

Universities and Employability

The African media is full of stories about the skills mismatch between the quality of graduates and the needs of employers and the economy. Many graduates end up “tarmacking” for years, unemployed or underemployed. In the meantime, employers complain bitterly that, to quote a story in University World News, “unprepared graduates are raising our costs.” The story paints a gloomy picture: “The Federation of Kenya Employers (FKE) – a lobby group for all major corporate organizations – says in its latest survey that at least 70% of entry-level recruits require a refresher course in order to start to deliver in their new jobs. As a result, they take longer than expected to become productive, nearly doubling staff costs in a majority of organizations.”

[E]mployability cannot be reduced to employment. Employability entails the acquisition of knowledge, skills, and attributes, in short, capabilities to pursue a productive and meaningful life

The situation is no better in the rest of the region. The story continues, noting that a study of the Inter-University Council for East Africa, “shows that Uganda has the worst record, with at least 63% of graduates found to lack job market skills. It is followed closely by Tanzania, where 61% of graduates were ill prepared. In Burundi and Rwanda, 55% and 52% of graduates respectively were perceived to not be competent. In Kenya, 51% of graduates were believed to be unfit for jobs.” The situation in Kenya and East Africa clearly applies elsewhere across Africa.

But the problem of employability afflicts universities and economies in the developed countries as well. Studies from the USA and UK are quite instructive. One is a 2014 Gallup survey of business leaders in the United States. To the statement “higher education institutions in this country are graduating students with the skills and competencies that my business needs,” only 11% strongly agreed and another 22% agreed, while 17% strongly disagreed and another 17% disagreed, and the rest were in the middle. In contrast, in another Gallup survey, also conducted in 2014, 96% of the provosts interviewed believed they were preparing their students for success in the workforce. Another survey, by the Association of American Colleges and Universities, highlighted the discrepancy between students’ and employers’ views on graduates’ preparedness. “For example, while 59 percent of students said they were well prepared to analyze and solve complex problems, just 24 percent of employers said they had found that to be true of recent college graduates.”

In Britain, research commissioned by the Edge Foundation in 2011 underscored the same discrepancies. The project encompassed 26 higher education institutions and 9 employers. The report concluded, “While there are numerous examples of employers and HEIs working to promote graduate employability in the literature and in our research, there are still issues and barriers between employers and many of those responsible for HEI policy, particularly in terms of differences in mindset, expectations and priorities. There are concerns from some academics about employability measures in their universities diminishing the academic integrity of higher education provision. There is also frustration from employers about courses not meeting their needs.”

Specifically, the report noted, “Employers expect graduates to have the technical and discipline competences from their degrees but require graduates to demonstrate a range of broader skills and attributes that include team-working, communication, leadership, critical thinking, problem solving and often managerial abilities or potential.” One could argue that this is indeed a widespread expectation among employers, whether in the developed or the developing countries.

Predictably, in a world increasingly addicted to rankings as a tool of market differentiation and competition, national and international employability rankings have emerged. One of the best known is that of Times Higher Education, whose 2017 edition lists 150 universities from 33 countries. As with the general global rankings of universities, these are dominated by American institutions, with 7 in the top 10 and 35 overall, followed by British universities with 3 in the top 20 and 9 overall. Africa has only one university in the league: the University of the Witwatersrand, listed in last place at 150.

What, then, are some of the most effective interventions to enhance the employability of university graduates? There is no shortage of studies and suggestions. Clearly, it is critical to embed employability across the institution from the strategic plan, to curriculum design, to the provision of support services such as internships and career counseling. The importance of carefully crafted student placements and experiential and work-related learning cannot be overemphasized. We can all borrow from each other’s best practices duly adapted to fit our specific institutional and local contexts.

Cooperative education, which combines classroom study and practical work, has long been touted for its capacity to impart employability skills and prepare young people for the transition from higher education to employment. Work-integrated learning and experiential learning encompass various features and practices, including internships, placements, and service learning. In the United States and Canada, several universities adopted cooperative education and work-integrated learning in the first decades of the 20th century. The movement has since spread to many parts of the world. The World Council of Cooperative Education, founded in 1983, currently has 913 institutions in 52 countries.

What, then, are some of the most effective interventions to enhance the employability of university graduates?… Clearly, it is critical to embed employability across the institution from the strategic plan, to curriculum design, to the provision of support services such as internships and career counseling. The importance of carefully crafted student placements and experiential and work-related learning cannot be overemphasized. We can all borrow from each other’s best practices duly adapted to fit our specific institutional and local contexts.

The Developing Employability Initiative (DEI), a collaboration comprising 30 higher education institutions and over 700 scholars internationally, defines employability as “the ability to create and sustain meaningful work across the career lifespan. This is a developmental process which students need to learn before they graduate.” It urges higher education institutions to embed employability thinking in their teaching and learning by incorporating what is termed basic literacy, rhetorical literacy, personal and critical literacy, emotional literacy, occupational literacy, and ethical, social and cultural literacy.

The DEI has developed a suggestive framework of what it calls essential employability qualities (EEQ). These qualities “are not specific to any discipline, field, or industry, but are applicable to most work-based, professional environments; they represent the knowledge, skills, abilities, and experiences that help ensure that graduates are not only ready for their first or next job, but also support learners’ foundation for a lifetime of engaged employment and participation in the rapidly changing workplace of the 21st century.” Graduates with the EEQ profile are expected to be communicators, thinkers and problem solvers, inquirers and researchers, collaborators, adaptable, principled and ethical, responsible and professional, and continuous learners.

Equipping students with employability skills and capacities is a continuous process in the context of rapidly changing occupational landscapes. I referred earlier to the disruptions caused by the fourth industrial revolution, which will only accelerate as the 21st century unfolds. Automation will lead to the disappearance of many occupations—think of the transport industry with the spread of driverless cars, sales jobs with cashier-less shops, or medical careers with the spread of machine and digital diagnoses. But new occupations will also emerge, many of which we can’t even predict, a prospect that makes the skills of liberal arts education and lifelong learning even more crucial.

We should not be preparing students for this brave new world in the same manner as many of us were educated for the world of the late 20th century. To quote Joseph Aoun, President of Northeastern University in the USA, which is renowned for its cooperative education, let us provide a robot-proof higher education, one that “is not concerned solely with topping up students’ minds with high-octane facts. Rather, it calibrates them with a creative mindset and the mental elasticity to invent, discover, or create something valuable to society.” The new literacies of the new education include data literacy, technological literacy, and human literacy, encompassing the humanities, communication and design.

Achieving the ambitious agenda of equipping university students with employability skills, attributes, experiences, and mindsets for the present and future requires the development of effective and mutually beneficial, multifaceted and sustained engagements and partnerships between universities, employers, governments and civil society. Within the universities themselves there is need for institutional commitment at all levels and a compact of accountability between administrators, faculty, and students.

This entails developing robust systems of learning assessment, including verification of employability skills, utilization of external information and reviews, and integration of career services, and cultivating strong cultures of student, alumni and employer engagement, representation and partnership in assuring program relevance and quality. Pursuing these goals is fraught with challenges, from striking a balance between the cherished traditions of institutional autonomy and academic freedom to engaging employers without importing the insidious cultures of what I call the 5Cs of the neo-liberal academy: corporatization of management, consumerization of students, casualization of faculty, commercialization of learning, and commodification of knowledge.

The challenges of developing and fostering employability skills among students in our universities are real and daunting. But as educators we have no choice but to continue striving, with the full support and engagement of governments, intergovernmental agencies, the private sector, non-governmental organisations, and civil society organisations, to provide the best experiential and work-integrated learning we can without compromising the enduring and cherished traditions and values of higher education. The consequences of inaction or complacency, of conducting business as usual, are too ghastly to contemplate: it is to condemn the hundreds of millions of contemporary African youth, and the youths yet to be born, to unemployable and unlivable lives. That would be an economic, ethical, and existential tragedy of monumental proportions, for which history would never forgive us.

This is an abridged version of a keynote address delivered at Malawi’s First International Conference on Higher Education, June 27, 2018.
