Washington rushing to put guardrails on AI – fast enough?

Washington is increasingly heeding warnings that the emerging powers of artificial intelligence are so consequential that they require an entirely new governance regime – similar to that instituted in response to nuclear weapons.

While AI could be tremendously helpful in a number of areas when constructively harnessed, it also poses serious risks – and the White House and Congress have taken initial steps to develop guardrails.

A Senate hearing last week underscored the seriousness with which both parties are approaching the issue, with lawmakers putting aside partisan sniping.

“What you see here is not all that common, which is bipartisan unanimity,” said Democratic Sen. Richard Blumenthal during the event.

Computer science professor Stuart Russell had been thinking about the massive potential benefits as well as the risks of artificial intelligence long before AI became a buzzy acronym with the rise of the ChatGPT app this year. 

“It’s as if an alien civilization warned us by email of its impending arrival, and we replied, ‘Humanity is currently out of the office,’” said Professor Russell of the University of California, Berkeley, at a congressional hearing last week. But he gave a nod to the growing awareness among the public, as well as policymakers in Washington, that this emerging technology requires oversight. “Fortunately, humanity is now back in the office and has read the email.”

Of course, it’s a long jump from registering the warning to preparing for the arrival of a potent new force, but waking up to its risks is an important first step. And over the past year, Washington has made initial efforts to size up the challenge and strategize about how to install some guardrails – before AI races past them. However, this represents perhaps the fastest-moving scientific challenge the slow-moving creature of Washington has ever grappled with, requiring it to streamline its typically bureaucratic approach to problem-solving. 

Why We Wrote This

With artificial intelligence advancing at lightning speed, many experts, and increasingly policymakers, say that Washington needs to move faster than usual on regulation and oversight.

“We don’t have a lot of time,” CEO Dario Amodei of Anthropic, a San Francisco-based firm that aims to create “reliable, beneficial” AI systems, told senators last week. “Whatever we do, we have to do it fast.”

The reason for urgency? Experts say that, with AI advancing at an exponential pace, efforts to control how it is used – or to prevent unintended harm to society – may only get harder over time.

In AI-related discussions around Washington over the past year, several key ideas have gained currency: (1) creating a regulatory agency to oversee the fast-growing field and ensure that the goal of protecting humanity is not mixed with the goal of making money, as it would be in a private company, (2) establishing liability so that AI developers know they will be held responsible if their systems are used for nefarious ends, and (3) requiring transparency in AI models and clear identification of AI-generated material, such as with a watermark or a red frame around a political ad.
