
In recent days, there have been a lot of discussions about the advent of ‘artificial intelligence’ and its potential as a game changer in the way we go about our business. This discussion has come to the fore since OpenAI, the company behind ChatGPT, signed a long-term collaboration agreement with Microsoft worth several billion dollars. In Kenya, the mainstream ‘Sunday Nation’ newspaper on 29th January was equally excited about the development, running a feature on it with the tagline: “ChatGPT: Tool that helps even the lazy write magnificently”. But despite all the hype generated, the hullabaloo about ChatGPT or machine learning is at least as much an indication of our intellectual decline as it is of any technological advance on our part, and here is why.

I have personally been hesitant to explore what AI actually does, steering clear of what seems to be a rabbit hole without any logical destination. Recently, though, I had an in-depth discussion about developing a project proposal with a colleague better versed in the application, and he was kind enough to take me on a ‘tour’. So, I ‘asked’ the app several questions relevant to African wildlife studies (my area of expertise). In my line of questioning, I ‘posed’ as a student seeking to develop a conservation research proposal focusing on human-wildlife conflict. The app dutifully typed out problem statements, research questions, methods, and even a table of contents. The results, however, contained nothing that looked even remotely new or original. They were just clichéd excerpts extracted from the myriad of conservation literature available online. This was baffling, because intelligence is defined as “the ability to apply knowledge, to manipulate one’s environment, or to think abstractly”, and nothing fitting that definition appeared anywhere in the (limited) experiment I conducted. What I perceived was just a search engine retrieving specified information online and ‘typing’ out the results rather than presenting a ready manuscript all at once, thereby creating the impression of some kind of ‘thought process’ at play.

The other cosmetic illustration of intelligence, of course, is the absence of attribution to any source – an attractive feature that tempts the user to present this ‘work’ elsewhere as his or her own. To the casual observer, the document looks like a complete research proposal, but closer examination by someone familiar with the topic reveals the lack of a logical framework that can be followed from the problem statement through the hypothesis and methods to a result, were it all implemented on the ground in a ‘real world’ situation. There is something scary about this.

Our society shouldn’t be frightened of the apparent threats posed by the capabilities of AI, which after all are only as powerful as those of its human programmers. What should scare us is the manner in which people we rely upon as ‘experts’ perceive it as some kind of breakthrough in human development. The fact that a newspaper, whose editors are supposed to be purveyors of writing excellence, can call the thoughtless regurgitations of ChatGPT “magnificent” is what should worry us, because it implies that they wouldn’t be averse to using it as an editing tool. This strange fascination isn’t limited to Kenyan media; major US publications have spoken about it in similar tones, with the ‘New York Times’ seemingly enthralled with the bot’s ability to write essays. We live in an age when individual opinions are routinely suppressed by states, mobs, propaganda, religion, formal education systems, and a myriad of other powerful forces. This suppression has led to the mass acceptance of ideas like carbon trading and cryptocurrencies that falter under any logical examination. Consequently, when highly educated, highly regarded, or otherwise capable humans start paying homage to an idea, we court the threat of an indolent majority treating those opinions as paradigms.

An example of this is the excitement elicited by a January 2023 paper by Prof. Jonathan Choi of the University of Minnesota Law School and others, entitled “ChatGPT Goes to Law School”. It reported that the now famous bot passed a law school exam consisting of multiple-choice and essay questions, attaining a grade of C+. This was met with the customary excitement amongst the public, despite consternation amongst teachers who saw it as a plagiarism threat. What was invisible to the public is that this “pass” merely illustrates the well-known fact that the study of law necessarily involves a large component of reference to existing laws and precedents, all of which are archived on the internet. The distinctly average grade attained indicates that the bot hit the skids when faced with questions of interpretation. That a “passing” grade in a specialized field can be achieved out of empty ‘competence’, without any logical thought, should be sobering for scholars and philosophers in all fields of study around the world. If this continues, where will we find the philosophical and ethical direction to guide the progress of human societies? Philosophy, logic, and ethics are necessarily experiential, human attributes that cannot be engineered, but rote learning has reduced our thinking to a level where engineering can be mistaken for intelligence.

Our (mis)education through this method has trained us to believe that our own thinking and experiences count for nothing. My lasting memory of this came from a writing class I took as a first-year undergraduate student. For an elective assignment, I chose to write an essay on livestock keeping, something I was very familiar with, and I wrote extensively about sheep, probably my favourite livestock species. Amongst the details I included were the average weights of the sheep at different ages, and the lecturer gave me a demerit on that point with the comment “what’s your source?” Apparently, it was unacceptable to state such information without a “source” or reference, even in an essay about sheep I had reared and weighed myself. Any source I might have quoted would be unlikely to cover Red Maasai sheep (which ours were), an ‘unimproved’ breed generally frowned upon by animal production experts at the time. And even if such a source existed, my essay was about OUR sheep, not sheep in general. The key lesson I learned from this class is how vehemently contemporary ‘education’ is opposed to the thoughts, and indeed the very existence, of the individual.

Nearly three decades later, this violence has gotten far worse. If a student today chose to generate an essay for such a class using AI, it would certainly score very high marks, because all of the ideas offered therein would have external sources, complete with references to existing literature and data. Nothing would be original. In the years since then, I have taught and supervised students at a number of universities around the world, at undergraduate and postgraduate levels. My tests and assignments never asked students to find and regurgitate material; I would source the material myself and present it to them, seeking their philosophies and justifications for it. The work I expected from them could never be created by AI, so educators worried about AI use amongst students today should re-examine their methods rather than seek to impose ‘bans’ that are impossible to enforce.

Still, there are concerns that we should address urgently. Universities are supposed to be centres of education and innovation, so if merit can be scored merely by reference, without innovation, where will societal progress emerge from? Advanced-level studies are increasingly defined not by the quality of their outputs, but by how small a silo they can compartmentalize a subject into. This increases the quantitative rather than the qualitative output of education. In practice, we have numerous publications whose findings cannot stand on their own or find relevance outside a certain circle of authors and reviewers. This isn’t regarded as a problem, because publication has become an objective in itself, and academic prowess has for the last few decades been measured by the quantity rather than the content of publications. In the same vein, contemporary human progress has seemed ‘rapid’ because we now perceive it quantitatively through technological advances, even as other spheres remain stagnant or even regress. History tells us that boundaries didn’t really exist between the arts, science, communication, and other fields. In earlier times, famous scholars like Leonardo da Vinci were outstanding artists and scientists. In contemporary times, we have examples like Cheikh Anta Diop, the Senegalese polymath, and Jonathan Kingdon, the naturalist whose artistic skills made him an authority amongst mammalogists, among many others.

The most recent peak of the ‘noise’ around ChatGPT was probably a stunt by Massachusetts Rep. Jake Auchincloss, a member of the US Congress, who gave an AI-generated speech to the House on 26th January 2023; members didn’t realise it until he said so. Rep. Auchincloss’s speech didn’t elicit much attention for its content because it was empty. AI doesn’t generate logic that can be supported or opposed on its own merit, and in that respect it closely mirrors the majority of speeches typically given in legislatures around the world. The episode is more a reflection of the worldwide decline in the quality of our socio-political discourse than of the quality of the artificially compiled (not generated) content. Many members probably detected words and phrases that they’d heard before. ‘The Verge’ (an online publication) described the speech as “dull and anodyne as you might expect for a political speech filtered through an AI system based on probabilistic averages”. This is not the sort of material that can engender genuine human progress.

The jury is still out on AI, and the excitement is still at fever pitch, with global tech giants Baidu and Google rushing to develop apps to rival the apparent success of ChatGPT. However, in the long run, artificial intelligence may help us in ways we never envisioned. It may turn out to be the best tool at our disposal for identifying AI (actual indolence). Our youth are typically adept at using digital tools in ways not even envisioned by their developers. If we do the same with AI and let our natural intelligence guide us accordingly, then Africa will be just fine.

This article was first published by The Pan African Review.