
Journalists and AI: Is the newsroom big enough for both?

Many news outlets – and their consumers – are reorienting themselves to a media landscape where artificial intelligence plays an increasingly large role. Some publications are using sophisticated AI tools like ChatGPT to supplement articles, or even create full ones.

Yet the very nature of chatbots – which predict what belongs in a sentence based on previous information they have “learned” – seems to conflict with the fundamental purpose of journalism: to provide citizens with accurate information about world events. Experience has shown that AIs are prone to error and bias, parroting mistakes picked up during their training.

Why We Wrote This

News media are beginning to experiment with artificial intelligence to supplement and even write articles. But AI doesn’t know if what it writes is true. How can it be used for responsible journalism?

“The kind of tone of a lot of these generative AI tools is very authoritative,” says researcher Jenna Burrell. “It will give you answers that sound very confident, but it’s in fact statistical prediction. It’s not actually knowledge generation.”

As a result, AI-produced articles must still be reviewed by people, experts say, to ensure that any journalism AI contributes is responsible and accurate.

“If the basis of journalism is accuracy and trust is critical, then we need to be very careful about how we use [AI],” says researcher Nic Newman. “The whole question of transparency and labeling is also going to be really critical over the next few years.”

European media giant Axel Springer – owner of newspapers Bild and Die Welt in Germany, Politico in the United States, and various other publications – splashily announced in February that it was preparing to lay off staff, go digital only, and reemphasize creation of original and investigative content. Perhaps most significantly, it said it was doing so in anticipation of an information future dominated by tools such as artificial intelligence chatbot ChatGPT.

“Artificial intelligence,” wrote Axel Springer CEO Mathias Döpfner in an internal memo, “has the potential to make independent journalism better than it ever was – or simply replace it.”

Axel Springer is just one of many news outlets – and news consumers – reorienting themselves to a media landscape where AI plays an increasingly large role. With the popularity of increasingly sophisticated AI tools from OpenAI, the Silicon Valley company and Microsoft partner, and with Google and Meta engineers hot on their heels, it has become quick and easy to generate AI-produced text with a minimum of prompting. That ease is leading some publications to experiment with using such tools to supplement articles, or even create full ones.


Yet the very nature of chatbots like ChatGPT – which don’t actually understand what they’re writing about, but only predict what belongs in a sentence based on previous information they have “learned” – seems to conflict with the fundamental purpose of journalism: to provide citizens with accurate information about world events. What role can, and should, AI play in the media landscape if it is unable to discern the difference between what is true and what is not? With public trust in what journalists produce already at an all-time low, the answer to such questions may be critical in determining whether AI enhances journalism or diminishes it.
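To make concrete what “predicting what belongs in a sentence” means, here is a deliberately tiny sketch – a toy word-pair (bigram) counter in Python, nothing like ChatGPT’s actual architecture – that continues text with whichever word most often followed the previous one in its training data, with no notion of whether the result is true:

```python
# Toy illustration only (not how ChatGPT is built): a bigram model that
# "predicts what belongs in a sentence" purely from word-pair statistics.
from collections import Counter, defaultdict

training_text = (
    "the senate passed the bill and the senate adjourned "
    "the bill passed after the debate ended"
)

# Count how often each word follows another in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word – no fact-checking involved."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The model simply continues with whatever was most frequent, true or not.
print(predict_next("the"))     # "senate" – the most common follower in this toy corpus
print(predict_next("passed"))  # "the"
```

Real systems replace the word-pair counts with a large neural network trained on vast text collections, but the underlying operation – picking statistically likely continuations – is the same, which is why their fluent output can still be wrong.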

“AI is not for thinking, but for making lazy, rapid, intuitive decisions about the world, and for optimizing our understanding of the world not for truth, but for information that confirms our views and attitudes,” says Tomas Chamorro-Premuzic, author of “I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique” and a professor of business psychology at University College London. “Journalists still have a potentially really important role to educate people. They can use tools like ChatGPT to really discover or identify biases that exist in how people think, and then take this intermediate role as a filter between these tools and millions of users so that people become aware of these threats. Journalists can step in and say, ‘Hey, what’s up? I’m using it as well, and here are some things it does that are inaccurate or are contributing to misinformation.’”

Michael Dwyer/AP

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston.

“It’s not actually knowledge generation”

ChatGPT, like other AI-driven chatbots that have followed it, is deceptively simple to use. Type in a question or make a request for a specific sort of prose, and the application quickly produces neatly written text in response, trying as best it can to fulfill the user’s directive. Trained on databases of text, ChatGPT can produce everything from college essays to lists of birthday party ideas to poems about scanning groceries in a self-checkout line.
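For readers curious what that interaction looks like when a publication automates it, the sketch below shows roughly how a prompt could be sent to OpenAI’s chat API from Python. It assumes the `openai` Python package (as it existed around 2023) and an API key; the model name and prompt are illustrative:

```python
# Minimal sketch of prompting a chat model programmatically.
# Assumes the `openai` package (pre-1.0 interface) and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Write a short poem about scanning groceries in a self-checkout line.",
        }
    ],
)

# The reply is fluent generated text – produced with no check on factual accuracy.
print(response.choices[0].message.content)
```

The response arrives in seconds, which is precisely what makes the technology attractive to publishers – and why, as the experts quoted here stress, a person still has to judge whether what comes back is accurate.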
