Test for humans: How to make artificial intelligence safe

The drumbeat of warnings over the dangers of artificial intelligence is reaching a new level of intensity – even as new AI tools have raised hopes of rising productivity and faster human progress. 

Last month, hundreds of AI researchers and others signed onto a statement suggesting humanity should approach the “risk of extinction” from the technology with the same priority it now gives to nuclear war and pandemics.

Why We Wrote This

As tools based on artificial intelligence spread, calls for regulating the technology are rising. A core question is whether we can trust AI – and whether we can trust ourselves to use it responsibly.

It’s not that Terminator-type robots are a near-term risk. But scientists point to the possibility of the technology allowing bad actors to create bioweapons, or being used to disseminate disinformation so effectively that a nation’s social cohesion breaks down. 

Legislators on both sides of the Atlantic are eager to set up guardrails for the burgeoning technology, such as by creating a new regulatory agency.

“We as a society are neglecting all of these risks,” Jacy Reese Anthis, a doctoral student at the University of Chicago and co-founder of the Sentience Institute, writes in an email. “We use training and reinforcement to grow a system that is extremely powerful but still a ‘black box’ to even its designers. That means we can’t reliably align it with our goals, whether that’s the goal of fairness in criminal justice or of not causing extinction.”

While AI researchers have long worried that AI could push people out of jobs, manipulate them with fake video, and help hackers steal money and data, some are increasingly warning that the technology could take over humanity itself.

In April, leading tech figures published an open letter urging all AI labs to stop training their most powerful systems for at least six months.

“The idea that this stuff will get smarter than us and might actually replace us, I only got worried about a few months ago,” AI pioneer Geoffrey Hinton told CNN’s Fareed Zakaria on June 11. “I assumed the brain is better and that we were just trying to sort of catch up with the brain. I suddenly realized maybe the algorithm we’ve got is actually better than the brain already. And when we scale it up, we’ll get things smarter than us.”

Mr. Hinton quit his job at Google in May, he says, so he could talk freely about such dangers. 

Noah Berger/AP/File

Geoffrey Hinton, known as the “godfather of artificial intelligence,” poses at Google in Mountain View, California, in 2015. He resigned from Google in 2023 to warn about unchecked AI.

Other scientists pooh-pooh such doomsday talk. The real danger, they say, is not that humanity accidentally builds machines that are too smart, but that it begins to trust computers that aren’t smart enough. Despite the big advances the technology has made and potential benefits it offers, it still makes too many mistakes to trust implicitly, they add. 

Yet the lines between these scenarios are blurry – especially as AI-driven computers grow rapidly more capable without gaining the moral-reasoning abilities of humans. The common denominator is a question of trust: How much of it do machines deserve? And how vulnerable are humans to misplaced trust in them?
