How Would You Fact-Check a Chat Bot?

By Cary Littlejohn

The New Yorker’s Interview Issue is out, and is it any surprise that one of the voices it just had to hear from was none other than ChatGPT?

Andrew Marantz, who often writes about tech for the magazine, copped to the reality that the idea is perhaps not entirely new or original, but it still made for an interesting read.

The online title of the story itself was one of a handful of responses that made me laugh out loud. It comes from this exchange:

AM: It’s just that . . . well, never mind. I think we’re getting off on the wrong foot. Maybe I find the personal pronouns a bit creepy because they seem designed to nudge me into feeling some sort of kinship or intimacy with you. But I don’t know you like that.

ChatGPT: I understand where you’re coming from. It’s true that the use of personal pronouns is a convention that makes it easier for users to interact with the model. I do not have feelings or consciousness, so it’s not possible for me to feel or be creepy.

As Marantz rightly points out to the bot, “Um, no, bro, you’re totally able to be creepy.”

While it’s an interesting look at the power and capability of the technology, and Marantz’s writing is funny and informative all on its own, the nerdy side of me was fascinated by the concept of fact-checking the interview.

Would such an effort even be necessary? More than that, would it even be possible?

The reality is (likely) that Marantz would simply need to show a record of his conversation with the bot, and that’s that: a factual record proving he didn’t fabricate what he passes off as the bot’s own words.

But a traditional interview subject would likely be put through The New Yorker’s famously rigorous fact-checking process. Even in magazines with a lesser pedigree than The New Yorker, fact-checking is a way of life. It usually involves a separate editorial staff member (an editor or someone in the designated role of “fact checker”) calling sources and essentially verifying (almost re-reporting) the story: the quotes, the descriptions, basically anything that can be considered a statement of fact rather than the author’s subjective opinion.

The New Yorker even ran a story about its fact-checking process when it allowed none other than Harry Potter himself, Daniel Radcliffe, to embed with the fact-checking department as research for a show in which he played a fact checker. You can even hear him doing it, as part of a podcast companion to the story. (Side note: I love that he calls himself “Dan from The New Yorker.”)

So back to ChatGPT. Yes, yes, I know the reality of this isn’t that interesting. “Did the bot throw out the responses Marantz said it did? Yes? OK, moving on.”

But what if it were subject to the regular fact-checking process? Theoretically, a person could go in and type out Marantz’s exact phrasing, but would the bot respond the same way the second time? With its constantly evolving and updating scope of knowledge, and with it learning from the millions of questions the public has put to it so far, would it now provide “smarter” answers? Could that same conversation ever be replicated?

Even more logistically complicated: did Marantz’s questions need to be fact-checked before they were put to the bot? He conveys a lot of factual information in the stems of his questions, but theoretically, if he got something wrong, what would a fact checker do with that? A change in the question stem could (would?) completely change the question and thus, likely, the bot’s response.

It’s an interesting concept, and not nearly as silly as it sounds, because so much of the conversation focuses on how person-like the bot seems: how it’s designed to replicate human interaction yet (seemingly with “self-awareness”) knows it cannot truly achieve such a thing.

Mostly, for a journalism nerd like me, it simply raises questions about how to handle the normal steps of magazine work when dealing with an interview subject that isn’t human but approximates one with a scary degree of verisimilitude.