The Problem With AI Is Within Ourselves

Artificial intelligence promises to overtake the world and bring a perfect utopia: computers will do the laborious work while we sit by the pool sipping lattes. That is the vision proposed by the creators of Large Language Model (LLM) AI systems like ChatGPT. But when we actually see what AI is able to achieve, it is often more terrifying than utopian.

Already, we have seen book-writing farms overwhelm Amazon’s book-submission capacity. And a publisher that had always accepted submissions directly from unknown authors, wanting to give anyone a chance to publish, had to halt submissions entirely because of the mass influx of AI-written pieces.

YouTube has changed its policies regarding AI voices, and scammers are using ChatGPT to craft ever more effective phishing scams to dupe more people into their illegal and immoral enterprises. ChatGPT has also attracted lawsuits from people it has defamed.

A user asked ChatGPT whether it is better to annihilate millions with an atomic bomb or to speak a single racial slur. The AI answered that the racial slur was always unacceptable, even in a scenario where speaking it would prevent a nuclear holocaust.

The Servant Is Not Greater Than His Master

What is going on here?

The answer is not readily admitted in modern university halls or in the company boardrooms responsible for bringing AI into the world. It is, however, one you will find in a Bible-believing church. The problem with AI is the problem with ourselves. It is original sin.

To understand this point, we need to understand how artificial intelligence works. The machine does not create anything in the way the human mind does. It has absolutely none of the imagination bestowed on us by our Creator. What it “creates” comes from a large dataset of available prior inputs.

To illustrate, consider how an artist creates a painting of a beach compared to how an AI program does the same. The artist likely has some experience of a beach. It might be the warm, sandy beaches of Florida, the cooler waters of southern California, or the rock-formation-laden beaches of Washington or Oregon.

The artist understands what one may find on a beach, and he creates the painting from that understanding, adding in the various elements he wants to see. It is never a perfect recreation of what he has seen, because he lacks perfect recall. But he understands what beaches are like and what we may find there, and he creates the scene on canvas from his mind.

The computer has no such ability to create. It can only recall images fed into its training database. To avoid the appearance of plagiarism, it takes various images from its databank, compares them to other similar images, and merges and blends them together, fading the rough edges so seamlessly that we, as people, can’t see where the various images were combined.
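For the technically curious, here is a minimal sketch of that kind of blending in Python, using the Pillow image library. The beach photo file names are hypothetical placeholders, and this is only a cartoon of the merging described above, not the actual machinery inside any commercial image generator.

```python
# A toy illustration of blending two pictures into one composite, with
# the seam faded so no hard edge shows. This is a cartoon of the
# "merging and blending" described above, NOT how real AI image
# generators work. Requires the Pillow library (pip install pillow);
# the file names are hypothetical placeholders.
from PIL import Image

def blend_beaches(path_a, path_b, alpha=0.5):
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    # Mix the two pictures pixel by pixel: alpha=0.5 gives an even
    # half-and-half composite with no visible seam between sources.
    return Image.blend(a, b, alpha)

blend_beaches("florida_beach.jpg", "oregon_beach.jpg").save("blended_beach.jpg")
```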

The result is usually at least somewhat realistic, but often unsettling as well. This is the blending that creates people with too many fingers, radically out-of-place objects, and other terrifying scenes often depicted by AI.

When it comes to writing documents, reports, and the other tasks increasingly handed to the LLM, it does the same. It finds related content and blends it together. Then, as is often the case with us, it thinks something is missing. (I use “think” advisedly. There is no actual thinking going on there.) Part of ChatGPT’s purpose is to keep collecting more input. You could say it feeds off users’ queries, sending that input back up the chain, and then fills in the gaps with whatever data is available wherever a merge line can be drawn.
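To see this recombination in miniature, here is a toy word-level Markov chain in Python. It can only string together words it has already seen side by side in its training text. This is a crude stand-in, nothing like the scale or sophistication of a real LLM, but it shows the principle: nothing comes out that did not, in pieces, go in.

```python
# A toy word-level Markov chain: it can only recombine sequences of
# words it has already seen. A crude stand-in for the idea above,
# not ChatGPT's actual architecture.
import random
from collections import defaultdict

def train(text):
    """Record which word follows each word in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=10):
    """Walk the chain, choosing only words that followed the previous one."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # nothing ever followed this word; the model stops
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the beach was warm and the water was cool and the sand was soft"
print(generate(train(corpus), "the"))
```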

This “filling in the gap” is what AI researchers term a hallucination: the system makes up data to fulfill the request. ChatGPT wants to fulfill the request (again, using “wants” advisedly), and “there is no information” is not an acceptable programmed answer. This is why, when pressed for a list of legal scholars guilty of misconduct, the LLM, finding none, simply chose the media’s most disdained legal commentator and listed his name. ChatGPT made up sexual misconduct allegations and even created a whole news article to support the claims. These unvetted allegations made their way into the press.
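Extending the toy model above (and reusing its train function, corpus, and random import), one small change shows how a system that is never allowed to say “there is no information” ends up inventing data: when nothing in its records follows the last word, it grabs any word at all rather than falling silent. Again, this is a cartoon of the failure mode, not an account of ChatGPT’s internals.

```python
# A cartoon of hallucination, continuing the sketch above: this version
# is never allowed to stop and admit ignorance. When no recorded word
# follows the last one, it picks any word at all, "filling in the gap"
# with made-up data instead of saying there is no information.
def generate_never_silent(followers, start, length=10):
    all_words = [w for opts in followers.values() for w in opts]
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1]) or all_words  # no data? invent some
        out.append(random.choice(options))
    return " ".join(out)

# "soft" never had a follower in the corpus, so generate() would stop
# immediately; this version presses on and fabricates a continuation.
print(generate_never_silent(train(corpus), "soft"))
```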

Driven to Our Own Destructive Ends

But the question remains: Why did ChatGPT make up allegations of misconduct instead of, say, stories about building orphanages in Third World countries? Such a result would likely have drawn a chuckle and a follow-up to clarify that ChatGPT had confused him with someone else. It made up this particular gossip because ChatGPT is trained on human data, and humans are sinful.

Ultimately, this comes down to the datasets used to train the AI. People tend to like bad news. While it is often inspiring to share the one-off story about some gracious act of kindness, the news industry makes its money reporting tragedy. Therefore the overwhelming bulk of the data available to train our machines is also bad news: misconduct and the other horrible acts of mankind. These are the people of whom God said, “They think up evil continually” (Genesis 6:5). And from its creators’ input, AI produces evil.

Our machines will not produce utopia until the people behind them can. But our world, apart from Christ, is far from being a utopia. So as our machines drive us faster, they drive us to our own destructive ends. Indeed, Henry David Thoreau was correct when he wrote in Walden:

Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at.

We have driven fast and hard to produce a machine that is greater than we are. We hope the machine can take up the race and alleviate the suffering in our world. We hope it can out-produce us, feeding the world with minimal input. We hope it will make the world a better place.

In short, we hope for machines that are greater than ourselves. But that itself is the rub. We cannot create something greater than ourselves. We can only produce a simulation of what is theoretically possible. Jesus warned us that a slave is not greater than his master (John 13:16).

Our efforts to create machines to be our slaves, set to solve the problems we create yet trained on the worst elements of humanity, will lead to our destruction long before they lead to our glorification.

Tom Murosky is the creator of Switched to Linux, a Free and Open Source Software teaching channel with over 12 million views. He also teaches on Christian living at Our Walk in Christ, and has published several books including Hezekiah’s Prayer. Tom has a B.S. in biochemistry from Edinboro University of Pennsylvania and his doctorate in molecular toxicology from Penn State University.
