
1. Introduction

Global technological advancements are entering a transformative phase with the emergence of Artificial Intelligence (AI), which presents both unprecedented opportunities and challenges. As major world powers strive to harness AI, different approaches reflecting different ideological and strategic orientations have emerged. China takes a state-first stance, using AI to advance state security and ideological alignment.1 Examples of this include the establishment of a social credit system and the widespread use of predictive policing.2 The United States, on the other hand, takes a market-based strategy, relying on market forces to regulate AI applications rather than enacting federal legislation. Meanwhile, the European Union promotes a rights-based strategy that emphasizes the protection of individual rights in the digital domain, typified by the General Data Protection Regulation (GDPR) and the soon-to-be-effectuated EU AI Act. Despite these defined approaches elsewhere, Africa is left to grapple with the implications of emerging technologies like AI without a clear philosophy and without a well-thought-out plan for integrating AI into social frameworks, exposing the Third World to problematic outcomes.

2. Overview of major approaches

    2.1. China’s state-led, state-protective approach

    China places a strong focus on state control and intervention in determining the direction of technological development within its borders, as evidenced by its state-first approach to AI. The strategy takes the form of several mechanisms meant to guarantee alignment with state priorities and to consolidate state authority over AI applications. Using social credit and predictive policing as instruments to impose social control and strengthen state security is one well-known example.

    China’s state-centric orientation is also reflected in its AI regulatory framework. Its generative AI law lays out the conditions for AI governance, placing a strong emphasis on the interests of the state. For instance, the requirement that generative AI systems function within constraints consistent with socialist core values reflects the state’s aim to instill ideological conformity in AI technologies.3 In addition, the law highlights the state’s primary role in regulating AI applications by placing duties on service providers, technical supporters, and users to guarantee that AI-generated content complies with national and social security imperatives.4

    Though China’s state-first approach facilitates centralised control and alignment with national objectives, it also raises questions about accountability, transparency, and individual rights, which are mentioned in the law,5 but rarely enforced when they run contrary to state interests. The regulatory environment, marked by a high degree of censorship and state intervention, raises concerns about the inhibition of innovation and freedom of expression. The emphasis placed on ideological conformity in AI development may stifle innovation and diversity of viewpoints, preventing AI from reaching its full potential as a tool for advancing society. Moreover, the use of AI for social control and surveillance raises ethical conundrums around privacy invasion and individual autonomy, drawing criticism from human rights advocates and international observers.

    2.2. America: The market-led approach

    The market-based orientation of US AI governance is typified by its emphasis on minimizing government intervention and promoting innovation through free-market dynamics. The strategy is in line with the nation’s long-standing commitment to economic liberalism and its faith in the ability of market forces to propel societal progress and technological advancement.6 The absence of comprehensive federal legislation designed specifically for AI technologies is one of the defining characteristics of the US market-based approach to AI governance. The US has chosen to handle AI-related issues piecemeal, depending on self-regulatory organizations and avoiding federal legislation on AI. This regulatory minimalism stems from the idea that overzealous government intervention may hinder innovation and reduce American businesses’ ability to compete in the global AI market.

    In this regard, the US government has primarily released guidelines, principles, and voluntary standards to direct the development and application of AI technologies, as opposed to strict regulations. The White House, for instance, has released several executive orders and policy documents that outline general guidelines for AI development, including encouraging innovation, safeguarding American values, and guaranteeing public trust and confidence in AI systems. Similarly, organizations like the National Institute of Standards and Technology (NIST) have released voluntary standards and guidelines for AI ethics to encourage responsible AI development and application without imposing onerous regulatory requirements. Where enforcement action has been apparent in the US, it has generally proceeded from a market-based perspective, such as the current Federal Trade Commission (FTC) investigation into OpenAI.

    The US market-based approach to AI governance has potential disadvantages as well as challenges, despite providing benefits like flexibility, innovation, and industry competitiveness. The absence of strong privacy, data protection, and algorithmic accountability safeguards is a major worry, since it may erode public confidence and heighten the risks of deploying AI, such as bias, discrimination, and civil rights violations. Furthermore, over-reliance on voluntary standards and self-regulation may lead to uneven oversight and enforcement, creating gaps in addressing new issues related to AI and in guaranteeing fair access to its benefits.

    2.3. The EU rights-based approach

    The EU’s rights-based approach to AI governance holds that the development and application of AI technologies should respect democratic values, ethical standards, and fundamental rights.7 The strategy reflects the EU’s commitment to protecting human rights, democracy, privacy, non-discrimination, human dignity, and openness while fostering economic competitiveness and innovation within a framework for responsible AI development. To protect individual rights, promote trust, and ensure accountability in the AI ecosystem, the EU has enshrined its rights-based approach in several legislative instruments, policy frameworks, and regulatory initiatives.

    Furthermore, the EU’s rights-based approach to AI governance is supported by its dedication to global norm-building, collaboration, and cooperation. The EU actively engages with international partners, stakeholders, and organizations to promote shared values, principles, and standards for AI governance, acknowledging the global nature of AI development and deployment. This involves taking part in gatherings like the G7, G20, and OECD, where the EU promotes the creation of common standards and guidelines as well as the adoption of human rights-based approaches to AI governance.

    2.4. African Perspective: Wait, copy and (partially) paste without a defined ideological basis

    Africa’s approach to AI governance is characterized by a reactive, “wait-and-see” attitude to the adoption and regulation of AI technologies. Many African countries still lack comprehensive strategies or regulatory frameworks for AI governance, while the African Union (AU) Continental AI Strategy lacks bite because it has no legislative force. Rather, Africa waits to see what the three powers do, then reacts to, or copies, their approaches. In data privacy, for example, virtually all African data privacy and protection laws came after the passage of the GDPR and use language that is eerily similar to the GDPR, while still not capturing the GDPR in all its essence.8

    Africa’s “wait-and-see” attitude to AI governance, however, presents serious obstacles and has ramifications for the advancement of technology as well as the welfare of society. The absence of a regulatory framework to control the application of AI is a significant obstacle that creates ambiguity about the moral and legal ramifications of AI technologies. The absence of well-defined guidelines and standards may result in regulatory gaps, inconsistent application of the law, and insufficient safeguarding of individuals’ rights and liberties.

    Additionally, the use of AI in Africa exposes populations to possible risks to their privacy and human rights. In authoritarian regimes or environments with lax rule of law, there is an increased risk of AI systems being used for social control, surveillance, and discriminatory practices in the absence of strong regulations and safeguards. This could exacerbate already-existing inequalities and vulnerabilities within society by leading to violations of freedom of expression, privacy, and other fundamental rights.

    Africa’s passive approach to AI governance legitimately raises concerns about its lack of initiative and influence in establishing international norms and standards for emerging technologies. While major actors like the US and China have ideologically informed responses to AI development, and the EU has actively negotiated and created AI laws and policies, African nations run the risk of being left out of the mainstream and becoming dependent on external frameworks that may not adequately address their needs and concerns.

    African countries may also face existential threats because of their limited involvement in AI governance initiatives and lack of domestic AI development. As AI technologies become ever more essential to social development and economic competitiveness, the lack of domestic AI capabilities exposes Africa to economic exploitation, technological dependency, and marginalization in the global AI ecosystem. Moreover, Africa may already be facing difficulties related to job displacement and growing inequality because of the potential socio-economic effects of AI, which underscores the need for proactive involvement and strategic planning in AI governance. These factors illustrate the need for an African approach to AI regulation that considers the unique challenges of the Third World.

    3. What should a norm-setting African approach to AI regulation look like?

    3.1. Human rights as the basis for AI policy

    An African AI regulation would need to be crafted with careful consideration of the unique challenges and priorities faced by developing countries, while also ensuring the protection of human rights, democracy, and marginalized groups. The first box to tick for an African AI system is the protection of human rights and democracy. An interdisciplinary approach is necessary to protect democracy and human rights in the context of AI. The protection of fundamental rights like privacy, freedom of speech, and nondiscrimination must be given top priority in all areas of AI development and application. To that end, it is crucial to set clear principles and guidelines. To prevent possible power abuses and algorithmic biases, this calls for the development of strong mechanisms for accountability and transparency within AI systems. These mechanisms should include stringent auditing procedures and oversight mechanisms. Furthermore, AI governance frameworks must be deeply ingrained with democratic principles, encouraging stakeholder engagement, public participation, and inclusive decision-making. By incorporating these foundational principles, AI regulations can effectively uphold human rights and democratic principles, ensuring that the benefits of AI innovation are realized while minimizing risks to individual freedoms and societal values.

    3.2. Promotion of investment and innovation, and the role of policies in protecting the marginalized

    Nonetheless, because AI is anticipated to have a significant impact on many facets of social, political, and economic life, it is crucial to combine the promotion of investment and innovation with the defense of democracy and human rights. To strike this difficult balance, Third World nations need to take the initiative in establishing incentives and support systems that encourage investment and innovation in AI research, development, and adoption. Powering AI ecosystems locally requires more than just luring in foreign investment; it requires developing local talent and capabilities. Cooperation between government, business, and academia is essential to sustainable AI innovation, as such collaboration makes it easier to share resources, knowledge, and skills. In addition, nurturing technology transfer and knowledge-sharing programs is crucial to closing the digital gap and keeping developing nations competitive in the race for AI capability. African nations can harness the transformative potential of AI while preserving basic rights and democratic ideals by adopting inclusive and cooperative approaches to investment and innovation.

    In this context, the defense of marginalized and underrepresented populations becomes a crucial factor that is closely related to promoting investment and innovation in AI as well as upholding democracy and human rights. Targeted policies and interventions that address the unique needs and vulnerabilities of marginalized communities—such as women, children, people with disabilities, and indigenous populations—must be developed. Through this approach, African nations can guarantee inclusive and equitable AI development and implementation. Furthermore, it is crucial to lessen the possible harm that AI technologies could do to marginalized groups, including job loss, social marginalization, and discrimination. To promote a more inclusive and resilient society, it is necessary to take proactive steps to protect the social and economic well-being of marginalized groups.

    3.3. What is the place of African values in this context?

    Moreover, for AI governance frameworks to be effective, culturally sensitive, and contextually relevant, it is imperative to prioritize positive values from the continent. The laws and policies governing AI in Third World nations must be based on their own customs, values, and cultural norms rather than on a one-size-fits-all approach borrowed from Western or Eastern models. This requires acknowledging the variety of viewpoints and life experiences found in Africa and prioritizing solutions that are in line with African realities.

    Furthermore, AI governance frameworks must incorporate the principles of solidarity, cooperation, and self-determination to enable developing nations to take charge of their technological destiny. This entails promoting intra-African cooperation, knowledge-sharing, and capacity-building initiatives so that developing nations can leverage their combined resources and strengths to navigate the complexities of AI governance effectively. AI regulations that prioritize positive values from the Third World can encourage not only cultural diversity and tolerance but also a more just and inclusive global AI ecosystem.

    3.4. Rejection of (meaningless and counterproductive) imposed norms

    Lastly, Africa must reject imposed norms that are counterproductive and meaningless when developing its AI governance frameworks. Asserting Africa’s right to establish its own AI norms and standards is crucial to ensuring that laws are not imposed by outside sources but are instead tailored to African priorities and circumstances.

    Furthermore, protecting the rights and interests of developing nations in the field of AI requires promoting just and equitable international agreements and trade policies. African nations can guarantee that AI regulations preserve the values of justice, fairness, and respect for sovereignty by encouraging inclusive and transparent negotiations. In addition, to promote cooperation and unity among African nations in navigating the global AI landscape, it is imperative to fortify regional and intra-African cooperation initiatives. African nations can enhance their presence and impact in international forums and encourage a more equitable and comprehensive approach to AI governance by combining resources, exchanging expertise, and coordinating tactics.

    4. Conclusion

      The emergence of AI ushers in transformative possibilities and challenges for global governance. Major world powers have adopted distinct approaches to AI governance, reflecting their ideological and strategic orientations. China’s state-centric approach prioritizes national security and control, while the US adopts a market-led strategy and the EU emphasizes rights-based principles. Africa, however, lacks a coherent philosophy, having adopted a reactive “wait-and-see” approach. This passive stance risks leaving developing countries vulnerable to economic exploitation and marginalization. African-centric AI legislation must therefore prioritize human rights, foster innovation, protect marginalized groups, uphold positive African values, and reject imposed norms. By doing so, Africa can shape its own AI future in alignment with its unique needs and aspirations.