Should I Be Scared of Artificial Intelligence?

My default position on any new technology is doubt and skepticism. Blame my Calvinist underpinnings for that, but has the latest and greatest ever really lived up to the hype? Even the experts aren’t sure (or aren’t sharing) exactly how AI works. Could this possibly take off?

It’s been more than a year since artificial intelligence—especially generative artificial intelligence—took over technology news. Generative AI systems ingest untold amounts of information and then generate what appears to be original text or images in response to user requests (called prompts).

If you’ve read anything about generative AI, you know about the massive investments being made and the innovations, efficiencies, and new worlds AI will open for us, but there are drawbacks too: the disruption we’ll all be facing in our workplaces and, of course, the fakery AI is capable of. Maybe you’ve seen (or created yourself) samples of this technology in action.

For me, a turning point for my skepticism was a test offered by The New York Times to see if people could determine whether pictures were real or AI-generated. I’m a visual guy and thought this would be easy. I failed miserably.

So, should we be scared? When it’s not clear what is real and what is not, we’re left to wonder, or worse, give up and just believe what we see. Yes, that is scary.

In a 2023 Atlantic article, philosopher Daniel C. Dennett calls people posing as someone other than their real selves “counterfeit people.” He makes a compelling argument that “creating [or passing along] counterfeit digital people risks destroying our civilization.” His solution? Treat counterfeit people like we do counterfeit currency.

Although he admitted it might already be too late, he argued for complete transparency about what has been created by AI and for making sure we have technology (smartphones, scanners, digital TVs, and so on) that can detect counterfeits. And then, just as importantly, we should make counterfeit content creators—including tech company executives and technicians—legally liable for the lies they are telling with AI text and images.
