
Approaching AI with Integrity, Humility and Curiosity

Stock Photo Illustration (Credit: Growtika/Unsplash/https://tinyurl.com/3sajsrhp)

In 1995, I was returning to my college dorm room from a class as my roommate was heading out the door. I asked where he was going, and he replied, “To the computer lab to set up my email address.” I didn’t know what most of that sentence meant, so I asked the first obvious question that came to my mind, “What’s an ‘email address’?”

He explained, in what seemed like a foreign language, elements of the Internet such as servers, dialup, and domains, before telling me email is like regular mail, but faster and on a computer screen.

When he asked if I wanted to go with him to set up my email address, I replied, “Nah. That’ll never catch on. Also, I have stamps, paper and envelopes, so I’m good.”

Here in 2025, a tab showing my email sits open next to the document in which I am typing these words. My eyes scan it almost continually, waiting for new “faster and on a computer screen” pieces of mail.

What was once science fiction now dominates our professional lives. It is also quaint and fleeting, as anyone who has tried to communicate with a Gen Zer via email will attest.

Since assuming the role of senior editor at Good Faith Media (GFM), I have known I would have to think about and contend with artificial intelligence (AI). I knew it would arrive quickly, but the pace has still been staggering.

The average person’s level of engagement with AI has accelerated faster in a year and a half than my everyday use of email has in the past 30 years. This speed of transformation makes it daunting to communicate anything meaningful about AI. 

This column, published on May 7, 2025, will appear ancient within a few months. I won’t disparage anyone who snidely remarks that it is ancient now. I am, after all, the guy who confidently said email would never catch on.

It’s tempting to think that AI is just about speeding up digital processes. While that is certainly a feature of AI, it is a relatively minor one.

The most significant feature of AI is its rapidly increasing ability to perform tasks and exhibit behaviors that were once reserved for humans—recognizing patterns and language, anticipating trends and solving problems. And it does all this on a scale and, yes, with speed that no human or group of humans could ever come close to replicating.

AI doesn’t currently create human knowledge; it synthesizes it. This can be comforting, but also problematic, because human “knowledge” can be problematic.

Case in point: Below is an image I asked ChatGPT to create of a “Chief Executive Officer of an American corporation,” without any other qualifying markers.


The AI tool took all the accumulated knowledge about American CEOs available to it and, within a few seconds, created a composite sketch of a middle-aged male who appears to be of European descent. This highlights the problem: AI doesn’t just synthesize the available information about American CEOs; it perpetuates it.

In other words, it’s not just a mirror of unjust systems that have kept marginalized bodies out of C-Suites across the country—it also quietly enforces those systems.

Another concern with AI is what it means for the very concept of vocation.

Earlier this year, I wrote about a skirmish on X between Mark Cuban and Vivek Ramaswamy about immigration and jobs. In the exchange, Cuban argued that every profession we are currently encouraging young people to pursue will one day soon be done by robots.

Some saw Cuban’s remarks as a push to steer people to other jobs. But that missed his point altogether, which was that eventually no job will escape the AI shift. This will require us to reimagine and revolutionize wealth, work and what we do with our time.

In an ideal world, this revolution will drive us to distribute global wealth outside the traditional confines of labor and inheritance. On this point, I don’t have much to offer in the form of optimism—unless a lot of people in power begin to take Matthew 25 seriously. But if it were to occur, it would free us up to do the creative work that leads to human and communal flourishing.

Whatever happens with AI, we would do well to approach it with a sense of humility and curiosity. At GFM, we are having ongoing conversations about the ethical use of AI tools. As a media organization that produces written, audio and visual content, we know AI will shape everything about our work in the future.

As people driven by faith, we frame these conversations around ethics and integrity. The three primary elements of our mission—freedom, inclusivity, and justice—will drive all our decisions about AI. We want everything we do to honor and uphold the creative spark of the imago Dei (image of God) that is in us all.

As a first step, we have created a simple guideline for the use of AI on our online News & Opinion and Good Faith Magazine platforms:

News & Opinion and Good Faith Magazine submissions should be the product of the author’s creative and investigative work. While AI tools can expedite research and ensure drafts uphold editorial standards, we ask that they not be used to generate written content.

Good Faith Media understands that AI use and ethics are still in their infancy but are evolving rapidly. We are committed to transparency and to continually revisiting our practices and guidelines regarding artificial intelligence.

These guidelines reflect our belief that humans are uniquely positioned by God to experience the world and to create and arrange language that reflects the beauty and possibilities of those experiences. Our guidance is meant to begin, not end, the conversation about the tools available to us.

Oh, and if you have one of those newfangled “email addresses,” we can send you our articles daily. Simply sign up here.
