Artificial Intelligence, Artificial Wisdom: What Manner of Harms are We Creating?

Richard Stevens’ May 11 Stream article, “AI Legal Theories,” suggests we consider making Artificial Intelligence companies legally responsible for the harms they cause. We do that already with consumer products, so in principle it should be possible to do the same with AI. Enforcement would be by civil law. Injured parties would presumably be given standing to sue the source of the harm without having to prove negligence.

That gets us somewhere, but not far enough. It settles the question of who is legally responsible. But responsible for what? Specifically, what will we call harm? Who will decide? Based on what standard of wisdom?

Stevens gives this example of harm, citing an earlier Stream article by Robert J. Marks: “The Snapchat ChatGPT-powered AI feature ‘told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.’” Yes, that happened.

A Moral Mess

Is that harm? Yes, absolutely, if you ask me. Probably most Stream readers would agree. Not everyone in this crazy world we live in now would, though. And what if the advice were how to lose her virginity to another 13-year-old? Even fewer Americans would call that harm. For some it would be “exploring her sexuality,” or “discovering the fullness of her physicality.”

One more example, not at all fanciful: What if AI invented a story about a girl teddy bear in a boy teddy bear’s body, and read it to kindergartners? I would absolutely call that harm. But if we develop a legal theory that says we could sue for that, just watch where else it could take us.

Not that I entirely dislike the idea of holding people responsible for such things, but a lawsuit would be contentious and unwieldy at best, leading us into a moral and strategic morass. Are we willing as a society to call serious moral harm what it is, and make its perpetrators liable for it? Could we even begin to decide what that might mean?

AI isn’t the first potential source of such harm, but it’s new on the block, and it’s bound to raise these questions. And it brings with it special problems of its own, chiefly that it is absolutely devoid of understanding.

An Adviser Who Understands Nothing: What Could Go Wrong?

AI has literally no idea what it’s doing. AI produces pixels on screens. Human interpreters can translate those images into thoughts, ideas, and advice, but the computer itself has no more capacity to assess matters of moral wisdom and responsibility than the small part that breaks off a toy has to warn a toddler not to put it in her mouth.

Even there we see an important difference, though. Everyone knows what it means for a child to choke, and if for some odd reason we weren’t sure, we could always turn to a universally acknowledged expert class, physicians, to explain it — in court if need be.

Experts don’t always agree, but the courts have ways to settle their differences within agreed boundaries of standard legal and professional practice.

We have no such agreed boundaries for deciding questions of moral wisdom, however, and no legally or socially recognized expert class to pronounce on them. We have in fact tossed such wisdom out the window. We’ve filled that gap (or so we think) with “expertise,” and we rightfully rely on it in court. But expertise has in our day supplanted wisdom, so how could there possibly be expertise on wisdom?

We Would Have to Legally Define Moral Harm in a World of Moral Disagreement

Even if there could be expertise of that sort, I worry about the thought of developing a new class of experts propounding moral wisdom. It would be tantamount to establishing a religion, in religion’s moral aspects, at least. It could easily lead to case law and/or legislation defining morality according to experts’ view of “harm.”

Further, it’s hard to imagine such laws being applied only to AI. Once moral harm is legally defined, anyone who disagrees with the definition could become, by definition, an agent of harm, vulnerable to civil actions just for stating a contrary position. That would be no different from what we’d be suing an AI for, would it?

In a world rife with moral disagreement, I would call this cause for worry. We’re touching on that territory already in jurisdictions that outlaw “non-affirming” counseling for homosexuals and gender-confused persons. We don’t need such intrusive controls carried further, into broader regions of belief and morality.

But the creation of legal standards for AI-generated moral harm would push us hard toward doing the same for human moral harm. We don’t need that kind of shove.

Apparent Intelligence

Yet I remain convinced that the real problem with AI is in fact AI, where the A stands not only for “artificial,” but also for “apparent.” The whole point of it is to produce something that either (a) is intelligent or (b) at least looks like it. ChatGPT meets that goal: It looks and acts as if it knows and understands. Naturally, therefore, when we read its output, we think we’re receiving knowledge and understanding. It’s fast, it can access superhuman stores of knowledge, and it speaks as if it has authority. By nature we are inclined to trust such a thing — because it looks for all the world as if it knows and understands.

It does what it does well enough to reinforce that impression. It is, in fact, quite apparently intelligent. But it is still artificial: Not just in the sense that it is man-made, but in the sense that its “intelligence” isn’t real.

The human race is not trained for this, and we are not ready to interact with a machine that looks so intelligent and wise, yet actually knows and understands the world no better than the screws holding its case together do. We are extremely unprepared to hand it over to children and teens. Lacking their own wisdom, they will too easily treat AI’s output as wise advice.

And that takes us to perhaps the greatest harm AI can cause, one that no lawsuit could ever remedy. We have already separated wisdom from godliness. That’s bad enough.

But AI will lead people to think wisdom can be separated from humanness. If that happens, it isn’t just the end of wisdom. For those who fall for it, it’s the end of humanness.

Tom Gilson (@TomGilsonAuthor) is a senior editor with The Stream and the author or editor of six books, including the recently released Too Good To Be False: How Jesus’ Incomparable Character Reveals His Reality.
