The Global Race for AI Dominance and the Perpetuation of Oppressive Systems

Stock Photo Illustration (Credit: Getty Images for Unsplash+/https://tinyurl.com/bdcm57kp)

A race to achieve global dominance in artificial intelligence is underway. Whichever nation can construct the largest AI ecosystem will be able to dictate global standards and reap broad economic and military benefits.

This race has created a new geopolitical divide. Historic understandings of “East vs West” or “Global South vs Former Colonizers” are becoming obsolete. The new divide is digital, between nations with the computing power to build cutting-edge AI systems and those without.

We are currently witnessing a reordering of global spheres of influence, in which former superpowers (e.g., Russia) are being excluded because they lack multi-billion-dollar AI data centers. As of this writing, only 33 countries have such advanced facilities; 150, including Russia, have none.

Jockeying for global AI leadership are China, with 22 advanced facilities; the European Union, with 28; and the United States, with 26. Even so, the U.S. and China combined operate 90% of all advanced data centers, concentrating AI power in two ideologically opposed nations.

Nations lacking data centers are experiencing a brain drain, leaving them beholden to AI-powered countries that will control access to critical resources and splitting the world into two camps. One is China, which has explicitly articulated a national strategy to become the world leader in AI by 2030. The other is the United States.

Thus far, according to Mustafa Suleyman of Microsoft AI, China has led: since 2010 it has published AI research papers at four and a half times the rate of the U.S., more than the combined total of the U.S., U.K., Germany, and India. By 2018, China had filed twice as many quantum technology patents as the U.S.

Safety Concerns

This race for AI strategic advantage between China and the U.S. is making the world less secure. Nations already deploy autonomous and semi-autonomous AI weaponry capable of selecting targets and attacking without human authorization.

According to former Secretary of State Henry Kissinger, introducing nonhuman logic into military systems and processes makes it more difficult to predict rivals’ actions. This increases the risk of miscalculation, complicating efforts to pursue and maintain international security. Smaller nations lacking nuclear, or even conventional, weapons add to the insecurity: by investing their resources in leading-edge AI and cyber arsenals, they can create as much havoc internationally as China or the U.S. can.

The threat is no longer limited to rogue nuclear nations like North Korea: multiple rogue cyber nations, or terrorist cells working for failed states, can stealthily arise at minimal cost and launch untraceable cyber-attacks. Thus far, more than seventy countries have been found running disinformation campaigns.

One can even imagine such rogue players, without any specialized training, using AI to guide them in creating viruses, whether computer malware or synthetic biological agents. One can also imagine them engineering a scenario in which superpowers compete in a disinformation face-off.

Liability

But AI misused by bad actors is not the only scenario that can cause international havoc. Good people wishing to improve humanity can unleash unintended consequences with equally devastating results. Because AI lacks common sense and the capacity for reflective self-evaluation, both necessary for ethical responsibility, human programmers and those employing AI remain ethically liable.

I argue that humans cannot delegate their moral agency or responsibility to AI. If humans cannot control the outcomes and/or unintended consequences of a military action, then such an action would be unethical.

I agree with moral philosopher Mariarosaria Taddeo’s framework for AI in military defense. She insists that AI must be fully reliable but never autonomous: it must remain under human control, with humans ultimately responsible, accountability aligned with moral and legal norms, and its operations transparent.

These principles are applicable beyond military use. The government’s deployment of AI in welfare, surveillance, and healthcare must be justified, subject to override, and never a functional replacement for human judgment, especially in contexts that already marginalize people.

Look to Liberation

I believe Taddeo’s framework is more likely to be achieved through the liberationist concept of radical solidarity. Such solidarity demands national AI strategies that redistribute gains, through tax revenues or service improvements, to repair the historic inequities caused by colonization. Although mainstream AI ethics frameworks address transparency, accountability, and bias, they often overlook the equitable distribution of resources to vulnerable populations.

The European Parliament has reported gaps in benefit-sharing, as well as worker exploitation, in AI’s rollout. Such redistribution aligns with liberation ethics: Policy should correct structural oppressions, not replicate them in digital form. Any ethics rooted in the worldview of the marginalized requires reparative measures to restore their dignity.

Solidarity is central at the international scale. UNESCO’s 2021 Recommendation on the Ethics of AI was adopted by 193 countries precisely to foster shared principles on human dignity, equity, and harm prevention. Global, participatory frameworks can prevent digital neocolonialism, in which wealthy nations export exploitative datasets or biased systems to vulnerable regions.

AI abstinence, while tempting, is impractical. Practicing AI veganism by forsaking modern technology will be no more feasible than the previous century’s attempts to refrain from the combustion engine. While AI could have been a means to improve the lives of humanity, I fear that the pervasive neoliberal global economy, rooted in a savage capitalism in which the winner takes all, has left me hopeless.

While regulation might slow the worst impulses in the use of AI, I fear it will fall short. Because AI is being normalized and legitimized in the name of progress, resistance seems futile.

Eurocentric ethics is not the answer, as it has a history of spiritually justifying repressive and oppressive structures. AI is poised to expand these abuses because the biases that produced racism, classism, sexism, and heterosexism have gone unexamined during AI’s development.
