Building A.I. to serve the Global Majority
The lack of inclusion in global AI governance and development carries profound implications for socioeconomic growth and global power dynamics.

COMMENTARY BY Chinasa T. Okolo
Artificial intelligence development is concentrated primarily in China and the United States and controlled largely by Big Tech companies such as Google, Amazon, OpenAI, Alibaba, Microsoft, Baidu and Anthropic. This creates a technological divide where a small group of actors has disproportionate power to shape the future of AI according to their worldviews, experiences and priorities. As a result, AI systems trained primarily on Western datasets struggle to understand cultural nuances from other regions and could embed Western assumptions about concepts like privacy, family structures or social norms.
This power imbalance has significant economic implications: as AI becomes more integral to economic competitiveness, nations excluded from AI development risk falling further behind. The Industrial Revolution created lasting disparities between industrialized and non-industrialized nations, and a similar dynamic is unfolding with AI, but at an even faster pace. Promising researchers and developers from the Global Majority often must relocate to major AI hubs like San Francisco, Beijing, New York, Toronto and London, further concentrating technical knowledge, capabilities and the economic benefits of AI in already-dominant regions.
Growing disparities in AI development and governance also carry security implications that must be considered. When AI development is concentrated in certain countries or companies, it can create strategic vulnerabilities for excluded nations that become dependent on external AI technologies. This could lead to new forms of technological colonialism, where access to advanced AI capabilities becomes a tool of geopolitical influence. For example, countries with limited access to computing power may rely on wealthier countries to distribute access to these vital resources.
Additionally, when AI policies and regulations are crafted primarily by a select group of nations, they may not adequately address the needs and concerns of excluded communities. For instance, data privacy frameworks developed for high-income countries, such as the EU General Data Protection Regulation, might not translate well to contexts with different technological infrastructure or social practices. However, several Global Majority economies are investing significantly in localizing AI development. There is also growing recognition of the need for more inclusive governance frameworks, with multistakeholder initiatives from the United Nations, international development agencies and the World Bank emerging to broaden participation in AI governance and to support countries in developing AI regulatory frameworks.
Deploying AI systems in Global Majority contexts often produces harmful effects that are amplified by existing inequalities and limited local control over these technologies. Bias in AI systems adds another layer of discrimination against already marginalized communities in the Global Majority. These biases manifest in multiple ways, from image-generation systems that poorly represent non-Western cultures to language models that struggle with non-Western languages or dialects. When these biased systems are deployed in critical applications like lending, healthcare or law enforcement, they can actively harm communities that already face marginalization.
Disinformation also poses a grave threat in Global Majority contexts. AI systems now enable the rapid creation of convincing false content that can be precisely targeted to exploit local tensions and vulnerabilities. In countries with recent histories of ethnic or religious conflict, for example, AI-generated disinformation can quickly inflame tensions by creating and spreading false narratives that appear authentic and authoritative. This capability becomes even more dangerous during elections, as evidenced by a growing body of literature and news reporting on the impact of AI-powered disinformation campaigns. Labor market impacts likewise reveal how AI can reinforce existing power imbalances: the expanding AI-driven gig economy often creates precarious working conditions for data labelers and content moderators in Global Majority countries.
Climate impacts compound existing environmental and socioeconomic injustices for Global Majority communities that already bear the brunt of climate change and now face additional pressures from AI infrastructure development. For example, the placement of data centers is of particular concern, as these facilities consume enormous amounts of water and energy, generate noise and air pollution, and compete with local communities for scarce resources.
The continued development of AI systems without meaningful consideration of Global Majority realities threatens to entrench socioeconomic inequalities with cascading effects across multiple domains of human life. As AI becomes increasingly embedded in critical infrastructure systems, including healthcare, education, financial services, and governance mechanisms, the systematic exclusion of Global Majority perspectives risks amplifying harms to marginalized communities. This exclusionary trajectory could result in the emergence of parallel technological ecosystems, where Global Majority populations are relegated to consuming foreign AI technologies, never achieving the capacity to develop contextually appropriate solutions for their unique challenges.
The compounding nature of these disparities means that early exclusion from AI development creates self-reinforcing cycles: limited technological sovereignty reduces economic competitiveness, which diminishes the resources available for local capacity building and further entrenches dependence on external systems that may not align with local values, needs or priorities.
The systematic exclusion of Global Majority countries from AI governance and development processes creates far-reaching consequences. Without meaningful participation in AI development, Global Majority countries risk falling further behind in technological capability. This creates a compounding effect: limited participation, driven by a lack of local AI expertise and insufficient digital infrastructure, leads to systems that serve local needs poorly and exacerbate existing disparities, which in turn makes it harder for local AI development to thrive. Economic competitiveness also suffers significantly when countries cannot shape AI development to support their economic goals. For example, dependence on imported AI systems designed for different cultural and socioeconomic contexts may limit the ability of companies in Global Majority contexts to compete effectively in global markets.
Recent initiatives at the United Nations represent promising developments toward addressing these systemic inequalities and creating more inclusive pathways for Global Majority participation in AI governance. The establishment of the International Scientific Panel on AI provides a multilateral framework for generating evidence-based policy recommendations that incorporate diverse global perspectives on AI development and regulation. Complementing this scientific framework, the UN’s Global Dialogue on AI serves as a diplomatic platform for facilitating conversations between nations, civil society, and private-sector actors, creating space for Global Majority voices to influence international AI policy discussions.
These UN-led initiatives represent a shift toward recognizing AI governance as a global public good that requires collective stewardship rather than domination by a few powerful actors. The success of these initiatives will largely depend on their ability to translate inclusive dialogue into concrete actions that redistribute resources, knowledge, and decision-making power in AI.
Power asymmetries in AI development make it significantly harder for Global Majority countries to participate effectively in AI governance, both through direct negative impacts and through broader systemic effects. To address these challenges, inclusion must be understood not just in terms of geographic diversity but also in terms of stakeholder diversity, so that groups such as civil society organizations, Indigenous communities and professionals from an array of sectors have an active say in governance development. Efforts to broaden global inclusion should focus not only on bringing more voices to the table but also on ensuring that AI development and governance truly reflect and serve the full spectrum of humanity's needs.
Chinasa T. Okolo, Ph.D., is the founder of Technēcultură, a former fellow at The Brookings Institution, and a recent computer science Ph.D. graduate from Cornell University. Her research focuses on AI governance and safety for the Global Majority, datafication and algorithmic marginalization, and the geopolitical impacts of AI. Dr. Okolo has been recognized as one of the world’s most influential people in AI by TIME, honored in the inaugural Forbes 30 Under 30 AI list, and advises numerous multilateral institutions, national governments, corporations, and nonprofits.