Can AI be ‘democratic’? Race is on for who will define the technology’s future.

In January, OpenAI launched Stargate, a $500 billion investment in artificial intelligence infrastructure for the United States. On Wednesday, it announced a plan to bring this type of investment to other countries, too.

OpenAI, the company behind ChatGPT, says that by partnering with interested governments, it can spread what it calls “democratic AI” around the world. It views this as an antidote to the development of AI in authoritarian countries that might use it for surveillance or cyberattacks.

Yet the meaning of “democratic AI” is elusive. Artificial intelligence is being used for everything from personal assistants to national security, and experts say the models behind it are neither democratic nor authoritarian by nature. They merely reflect the data they are trained on. How AI affects politics worldwide will depend on who has a say in controlling the data and rules behind these tools, and how they are used.

Why We Wrote This

The U.S. and China want to lead the way in artificial intelligence development. But governments could use the technology more for their own ends than for the public good. Whose values will AI ultimately reflect?

OpenAI wants as many people as possible using AI, says Scott Timcke, a research associate at the University of Johannesburg’s Centre for Social Change. But “I don’t necessarily get the sense [they are] thinking about mass participation at the level of design, or coding, or decision-making.”

Those sorts of decisions are shaping how AI permeates society, from the social media algorithms that can influence political races to the chatbots transforming how students learn.

He says people should consider, “What is our collective control over how these big scientific instruments are used in everyday life?”
